Behind a Royal Wedding

Zara Phillips enters Canongate Kirk.

The marriage of the Queen’s granddaughter, Zara Phillips, to rugby player Mike Tindall has been widely reported, especially by the celebrity press. It has been referred to as “the other” royal wedding, for its stark contrast with the marriage of William and Kate (the Duke and Duchess of Cambridge) a few months before.

That contrast isn’t just in the status of those getting married – Zara being 13th in line to the British throne, William 2nd. William and Kate’s wedding was a public spectacle, with all the pomp and ceremony of state, while Mike and Zara’s was a “quiet” family affair. Unfortunately, the later wedding still generated significant public interest, and the result was a bizarre clash of family and celebrity, privacy and publicity. Read More

Simon Kirby: The Language Organism

Language is a method of sharing thoughts. It is uniquely human: Many species communicate using pre-specified techniques, such as markings on a flower to direct bees, or gestures between mammals – but only humans have the flexibility of language. Language is, perhaps, the key evolutionary advantage the human race has over everything else on planet Earth.

So how have we come to develop this trait?

That’s the question Simon Kirby has spent the last 21 years trying to answer, now assisted by one of the world’s leading research groups on the topic. Their research suggests that Darwin’s model of natural selection is not a terribly good explanation. Indeed, our culture actually shields us from natural selection, making our genes progressively less important to language as we develop. Simon goes on to speculate that domestication (being buffered from purely survival instincts) is a key condition of the emergence of language.

Kirby’s evidence is especially interesting because, unlike Chomsky, he does not propose an innate underlying structure for the development of language. Such a dominance of unbounded cultural transmission would be both liberating and terrifying: Liberating because it suggests unrealised flexibility in language, especially forms enabled by future technology. Terrifying because (certainly from a relativist perspective, but arguably more widely) shared thought through language is what defines our very being.

This article is based on Simon’s well-attended inaugural lecture to the University of Edinburgh, presented on 22 March 2011. Read More

Thoughts on the Resolution of Nothing

I ponder nothing. Endlessly. Nothing in the intangible sense – the increasing dominance of things without physical form in society and economy. Nothing in the sceptical nihilistic sense – the “meaninglessness of existence”. Even the nothing inherent in the stupidity required for cleverness.

Nothing isn’t new. The problem baffled thinkers for much of the 20th century. In the 21st we may finally be being overwhelmed by it. Possibly without realising. How society resolves a potentially uncomfortable relationship with nothing is important. And intriguing. It’s possibly the most difficult problem to resolve, yet underpins many contemporary issues.

This article introduces 3 approaches to resolving nothing. They are an attempt to summarise various articles I’ve written over the past year. Broadly:

  • Tangible Renaissance: Physical representations of nothing. Idols to communicate abstract values. Belief in certainty.
  • Virtual Illusion: Virtual consumerism. An economy based on nothing, happily sustained in denial of its meaninglessness. Belief in who cares?
  • Post-Existential Scepticism: Understanding built from nothing. Presumption of illusion. Belief in uncertainty.

This text is poorly researched, incomplete, and, well, uncertain. But it might be an interesting summary of the extent of my current confusion. This is written from a Western, especially British-American perspective. Keep these quotes in mind: Read More

Turning the Health World Upside Down

There’s a growing acceptance of the links between health, wealth and wider society. Not just the impact of wealth inequalities on measures like life expectancy. But the importance of fixing the underlying social causes of medical problems, rather than just administering the medicine and wondering why the patient doesn’t get better.

It’s convenient to frame this as a Third World problem. And while it is, it’s also a problem within and between developed countries. For example, people from one area of Glasgow (in Scotland) live a decade longer than people residing in another area of the same city, in spite of (theoretically) having access to precisely the same medical expertise.

A most basic analysis of Great Britain (and much of the developed world) reveals an organisational chasm, which most people are not prepared to cross: For example, medical services and social care provision are completely different activities – separate funding, differing structures, responsibilities, professional bodies. Even though individual “patients” shift seamlessly between them. It’s an organisational situation made worse by the difficulty both groups seem to have integrating with anything – in my experience (largely failing to integrate public transport into health and social services), a combination of:

  • The intrinsic (internal) complexity of the service itself, which leaves little mental capacity for also dealing with “external” factors.
  • The tendency to be staffed by those with people-orientated skills, who are often less able to think strategically or in abstract.
  • The dominance of the government, with a natural tendency towards bureaucracy and politicised (irrational) decision making.

Complexity is the biggest problem, because it keeps getting worse: More (medical) conditions and treatments to know about, higher public expectations, greater interdependence between different cultures and areas of the world. Inability to manage growing complexity ultimately threatens modern civilisation – it will probably be one of the defining problems of the current age. So adding even further complexity in the form of understanding about “fringe issues” is far from straightforward.

Beyond these practicalities lurk difficult moral debates – literally, buying life. Public policy doesn’t come much harder than this.

Into this arena steps Nigel Crisp. Former holder of various senior positions within health administration, now a member of the UK’s House of Lords. Lord Crisp’s ideas try to “kill 2 birds with one stone”: For the developed world to adopt some of the simple, but more holistic approaches to health/society found in the less developed world, rather than merely exporting the less-than-perfect approach developed in countries like Britain.

To understand Crisp’s argument requires several sacred cows to be sacrificed: That institutions like the National Health Service (which in Britain is increasingly synonymous with nationhood, and so beyond criticism) are not perfect. That places like Africa aren’t solely populated by people who “need aid” (the unfortunate, but popular image that emerged from the famines of the 1980s). That the highest level of training and attainment isn’t necessarily the optimum solution (counter to most capitalist cultures). If you’ve managed to get that far, the political and organisational changes implied are still genuinely revolutionary: To paraphrase one commenter, “government simply doesn’t turn itself upside down”.

While it is very easy to decry Nigel Crisp’s approach as idealistic, even naively impractical, he is addressing a serious contemporary problem. And his broad thinking exposes a lot of unpleasant truths. This article is based on a lecture Crisp gave to a (mostly) medical audience at the University of Edinburgh. And the response of his audience. The lecture was based on his book, Turning the World Upside Down: the search for global health in the 21st Century (which I have not read). Read More

Alex van Someren’s Lucky Acorns

Alex van Someren is one of those rare people, without whom our modern world would probably be a little bit different. From writing the first book about programming the ARM architecture, the computer processor which now sits at the core of almost every mobile phone on the planet. To providing the technology that made Secure Socket Layer (SSL) more commercially viable, and helped enable the ecommerce internet revolution of the late 1990s.

Yet his story is fascinating because it is a definitive study in luck: Not just pure chance. But the type of luck that comes from a combination of unusual personal interests, social circumstance, and the active pursuit of something different.

It’s a reality that few “successful” entrepreneurial people acknowledge, because it’s an uncomfortable reality: It doesn’t fit neatly into a 5-point plan for instant fame and fortune [also see box below]. And it leaves a nagging doubt that the outcome could easily have been unsuccessful. And while I suspect that Alex isn’t comfortable with pure chance, he provides ample examples of how other elements of luck can be biased. How the odds can be improved. The dice loaded more favourably.

Those examples make Alex van Someren worth understanding. This article is based on a talk he gave to the Edinburgh Informatics Forum. Read More

Difference and the Same

Blogosphere luminary, Larísa, thinks I’m SMART. In capitals, because the word itself evidently lacks sufficient emphasis. Her implication: that this is a good thing.

Yet it’s driving me mad.

This article tries to explain why. It defines aspects of intelligence as difference from average, and then quantifies this as degrees of shared reality. The article provides a model where genius and stupidity are almost identical, where the closer someone is to the join, the closer they come to insanity – the “reality of one”.

It explains why wider human society continues to believe extremes of intelligence can be a positive attribute, in spite of the social disconnection associated with this. The article shows how perception-based, consumerist social structures have built reward structures upon this delusion. The nature of illusion is then considered, with particular reference to aesthetics, and the role of empathy in maintaining illusion among humans.

The article lastly introduces the concept of social gravity – the tendency of humans to the same – and then challenges the idea that everyone should be dragged back towards that single point of gravity: Rather, by maintaining multiple illusions, a social structure emerges where multiple extremes of difference can be maintained, while still averaging to the same.

Like some of my more abstract writing, this isn’t terribly well researched. Equally, the topic is so broad that it isn’t practical to consider every counter-argument or divergence of thought within the text, and still maintain some form of readability. It may be helpful to first read Michael Gazzaniga’s Science of Mind Constraining Matter, which provides the rationale for some of the statements made in this article. Read More

Michael Gazzaniga on the Science of Mind Constraining Matter

Can neuroscience explain it? You know – consciousness, being, the number 42. And if everything you thought you were turned out to be nothing more than an easily deceived heap of neurons, would that trouble “you”?

During October 2009, Michael Gazzaniga gave a fascinating series of Gifford lectures exploring how our brains process the information that gives us our sense of “I”. Gazzaniga drew extensively from neuropsychological studies of people with “split brains” (explained later) to develop the notion of a single “interpreter” within the brain – a part of the brain that analyses all the data available for meaning.

Michael Gazzaniga then attempted to rationalise the interpreter, concluding that our focus should be on the interactions of people, not the brain itself. This logic was then expanded to wider society – social structure, interaction, and law. Those later thoughts raised many more questions than were answered.

This article attempts to summarise the key themes in a non-technical manner, with a few naive attempts to interrogate the theories developed. This is my interpretation of 6 hours of lectures. Interpretation, because I tend to recreate Gazzaniga’s conclusions by re-analysing the information presented. With a complex topic such as this, it is likely that some of my interpretations will differ from his. Sections titled “Interlude” are entirely my analysis. Read More

Valuing Nothing

In 2007 I wrote some introductory Thoughts on a Socio-Economic Environment based on Nothing. This article continues to explore the value of things in a highly intangible, knowledge-based economy. It wanders through internet-based payment systems, economic structure, role of government, organisation of information, community, and society, before disappearing into the realms of philosophy. It contains no answers, but may prove thought-provoking. Read More

Infecting the Ad Pool

Malicious Advertising (Malvertising) is becoming a problem. This is the practice of purchasing advertising space on unsuspecting websites, then using that space to run adverts which automatically redirect the user’s browser to a malware site – a site that distributes viruses, spyware, and other computer nasties.

The practice first emerged in 2006. Already 2008 has seen many large publishers (website operators) attacked, including Classmates, USA Today, Photobucket, and MySpace.

Late last night I visited one of my own websites and got immediately redirected off to a domain already blacklisted by Google, which in turn redirected to another site that was intent on installing a scareware “virus checker”. ZAM (a gaming network), already plagued by “XP Online Scanner” adverts earlier this year, had again been hit by malicious adverts. The timing, just after midnight UTC Saturday, was impeccable: Advertising networks tend to work sensible business hours, ensuring 48 hours of infestation before anyone starts to investigate it. [Although I should add that in this case I did get a positive resolution within 24 hours.]

My response was to temporarily abandon the advertising network that had delivered the “malvert”, and switch to affiliate advertising I control.

This article explains why publishers have a very low tolerance of malverts, and consequently why it is in the best interests of advertising networks to deal with malvertising before it becomes widespread.

Valuing Users

The cost to a malware writer of placing a single malvert is in the order of $0.001, with the publisher receiving somewhat less than that. The pricing model assumes a high volume of advertising is ignored by users: An advertiser might need to screen thousands of adverts to get any referrals (click-throughs). It does not assume that the adverts will immediately refer every user to the advertiser’s site, without user interaction.
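The asymmetry can be made concrete with some back-of-envelope arithmetic. The $0.001 cost per impression is the figure quoted above; the click-through rate for a legitimate advert is an illustrative assumption, not a measured value:

```python
# Back-of-envelope comparison: legitimate advertiser vs malvertiser.
# COST_PER_IMPRESSION is the ~$0.001 figure quoted above; the 0.1%
# click-through rate is an illustrative assumption.

COST_PER_IMPRESSION = 0.001  # dollars per advert shown

def cost_per_referral(click_through_rate):
    """Impressions needed per referral, times the cost per impression."""
    return COST_PER_IMPRESSION / click_through_rate

# A legitimate advert relies on users choosing to click (say 1 in 1,000).
legit = cost_per_referral(0.001)

# A malvert forcibly redirects every user who merely loads the page.
malvert = cost_per_referral(1.0)

print(f"Legitimate advertiser: ${legit:.2f} per visitor referred")
print(f"Malvertiser:           ${malvert:.4f} per visitor redirected")
```

Under these assumptions the malware writer pays three orders of magnitude less per victim than a legitimate advertiser pays per click, which is the economics driving the problem.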

For malware writers this is both cheap and highly effective: Quantcast and Compete suggest that the domain behind a recent case of malicious advertising attracted 1-2% of all US internet users in May – a reach achieved by fewer than 500 other sites worldwide. Something advertising agencies can only dream about. Quantcast’s demographic analysis also indicates that the old, poor or poorly educated are more likely than other internet users to be caught by malware.

The publisher got a fraction of a cent, and may have lost one or more customers forever:

New visitors essentially bounce straight into “virus hell”. They are never coming back; not after “what you did to their computers”. Regular visitors assume your site was “hacked” (a security breach on your servers), and lose confidence. Even if they stay, they’ll think twice about typing their credit card number in again. If the site relies on viral traffic, they will be sure to tell their friends not to visit as well.

So Block the Advert!

Unless the publisher has a very strong community, they might never realise why their users are leaving: Malverts may be targeted by location or time of day, such that the publisher never sees them.

Assuming the publisher knows about the malvertising, finding the source turns out to be exceptionally hard. Malicious adverts may be embedded in an advert that looks perfectly normal, but only triggers an automatic redirect under certain circumstances. So even in simple cases, where the publisher has a direct relationship to advertisers, finding malware requires the advert to be tested.
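As a rough illustration of why “testing the advert” is non-trivial, a naive static check might scan a creative’s markup for auto-redirect patterns. This is a hypothetical sketch, not a production tool: as noted above, real malverts can withhold the payload except under certain circumstances, or bury it in obfuscated script or Flash, so a static scan catches only the crudest cases:

```python
import re

# Naive static scan of an ad creative's HTML for auto-redirect patterns.
# Hypothetical sketch only: it cannot see conditionally-triggered or
# obfuscated payloads, which is exactly what makes malverts hard to find.

REDIRECT_PATTERNS = [
    re.compile(r'<meta[^>]+http-equiv=["\']?refresh', re.I),        # meta refresh
    re.compile(r'(window|document|top|self)\.location\s*=', re.I),  # JS redirect
    re.compile(r'location\.(replace|assign)\s*\(', re.I),           # JS redirect
]

def looks_like_auto_redirect(creative_html: str) -> bool:
    """Return True if the markup contains an obvious automatic redirect."""
    return any(p.search(creative_html) for p in REDIRECT_PATTERNS)

benign = '<a href="https://example.com"><img src="banner.gif"></a>'
nasty = '<script>window.location = "https://malware.example";</script>'

print(looks_like_auto_redirect(benign))  # False
print(looks_like_auto_redirect(nasty))   # True
```

A check like this only flags an advert that redirects unconditionally; the conditional cases described above would have to be caught by actually rendering the advert under many different simulated visitors.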

But adverts are increasingly run via networks, who increasingly rely on advertising exchanges. So a large publisher could be running practically any advertising campaign in existence. I was running over 2,000 different campaigns (many of which have multiple adverts), and my site is small fry.

So once a malicious advert enters the system, it can spread like a virus throughout online advertising networks, almost unchecked.


Publishers who care about their customers (and consequently also tend to have the most valuable advertising inventory) are likely to avoid any advertising network that delivers malvertising:

  • They might establish direct relationships with reputable advertisers, which cuts the networks out of the loop completely. Only viable for large publishers or those in specific niches.
  • Or perhaps they will change to text or non-interactive adverts? Advertisers that rely on being able to communicate using imagery will have problems: The only publishers to remain with malware-infested networks will be those that do not care about their users. Precisely the sites that were probably not good places to advertise anyway.

Users will gradually grow more paranoid. Pop-up advertising is a perfect example: Browsers gave too much control to scripts, and not enough control to the user. The result was that pop-up blocking features became commonplace, and pop-ups became a redundant technology.

What are users’ “solutions” to malvertising? Completely blocking all adverts and disabling all scripting. How does that help advertisers, networks or publishers? It doesn’t.

Sadly users’ solutions will not include disabling Flash, the poor design of which seems to be at the heart of malicious advertising (something countered by Adobe). Flash is so critical for online video that most users cannot browse the internet without it.


There still seems to be a lack of appreciation of the damage potential of malicious advertising. But there are solutions available to the industry collectively.

Bill Joos on Pitching

Bill Joos (or William Wallace Joos, as he prefers to be called in Scotland) spoke at an Edinburgh Entrepreneurship Club/Edinburgh-Stanford Link event on 11 March 2008. Bill experienced plenty of pitches while with Garage Technology Ventures, and shared the top ten mistakes for early stage/startup company business plans and pitches. While his focus was on pitching to venture capitalists, much of what he said is applicable to any business planning process. This article summarises his talk. Read More

Notes from Disneyland

Tim at the HP garage, 367 Addison Avenue, Palo Alto. Credit: John Lee. I was finally talked into visiting Silicon Valley, the region of California at the heart of many of the technological innovations of the last 50 years. This is what I came back with.

“It’s the little differences. I mean they got the same shit over there that they got here, but it’s just – it’s just there it’s a little different.” – Quentin Tarantino

Everything is bigger, of course. The exit ramp from the aircraft, the portions of food, the hotel rooms, the sprawl of the city. That might go without saying, but it hits you like the cars should when you forget to look the right way before crossing the street. Actually, drivers are remarkably careful.

Technology is deeply embedded in the local economy. From the local food delivery service’s pickup truck emblazoned with the domain name “”, to the head offices of businesses most people will only ever experience via a website. The results are obvious too. Ramshackle houses occupy land worth millions of dollars, while local commercial centres seem to consist primarily of restaurants and bars. An alien might struggle to understand what everyone did to earn a dime.

So why liken it to Disneyland? It isn’t just the inherent unreality of the place. Or the fact that it makes me feel about 25 years younger. (That’s almost a negative age.)

For an explanation, take a trip up Judah Street on the San Francisco tram. At each stop the doors open and the mass of humanity that didn’t make it hobbles on board. Inequality isn’t an American phenomenon, but it is far more extreme than I expected. Yet the society seems to function strangely oblivious to how the “other half” live.

There was just one moment when I felt a real pulse. Enough to convince me that Disney magic wasn’t complete. Paul Saffo commented that the biotech revolution would ultimately lead to a divergence of the species, as the wealthy became able to extend their lives. That was enough to silence the room.

Paul Saffo on The Revolution After Electronics

Paul Saffo spoke to Stanford’s Media X conference on the art of predicting the future. Specifically predicting which technology will come to dominate the next decade. Paul’s talk may at first seem somewhat contradictory in nature: Demonstrating how to do it, while simultaneously showing it can’t be done. This article summarises the talk.

30 Year Cycle

Every 30-50 years a new science turns into a technology. With approximate dates:

  • 1900: Chemistry
  • 1930: Physics
  • 1960: Electronics
  • 2000: Biology

We are now on the cusp of a revolution from electronics to biology. The precise inflection point, the point of change, may not yet be clear.

Paul noted that Thomas Watson’s famous misquote, “I think there is a world market for maybe 5 computers”, was made in 1953, right on the cusp of the electronics revolution: Aside from the fact that he was talking about a specific machine, and not all computers, the quote is a good example of how it is difficult to predict the future at such points of radical change.

Forecasting the Future

The goal is not to be right, but “to be wrong and rich”: It is easy to take the view that one cannot forecast. If you do attempt to forecast you will still mostly be wrong, but the very act of trying will increase your chance of success over those that do not try.

The further into the future you predict, the greater the uncertainty. The difficulty in forecasting is finding a balance between being too narrow and too broad. Forecasting might use wildcards. The “hard part” is to be wild enough.

Typically forecasts for a new product or technology’s introduction are linear: Usage of the technology is forecast to grow steadily with time.

Reality tends to be represented as an S-shaped curve: In the early stages the magnitude of use is below the expectation generated by the linear forecast. Usage then rapidly grows, such that the actual usage rises above the prediction in the later stages. The result is that in the first part, forecasters tend to over-estimate performance, while latterly they under-estimate performance. Venture capitalists tend to have linear expectations, and so are disappointed in the early stages, while failing to see the later potential.
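The linear-versus-S-curve point can be illustrated numerically. Here a logistic curve stands in for “reality” and a straight line for the naive forecast; both curves and all their parameters are illustrative choices, not figures from the talk:

```python
import math

def logistic(t, ceiling=100.0, midpoint=10.0, rate=0.6):
    """S-shaped adoption: slow start, rapid growth, then saturation."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def linear_forecast(t, slope=5.0):
    """A naive straight-line forecast of adoption over time."""
    return slope * t

# Early period (t=2): the line runs well above the S-curve.
# Late period (t=18): the S-curve has overtaken the line.
for t in (2, 18):
    actual, forecast = logistic(t), linear_forecast(t)
    verdict = "over-estimates" if forecast > actual else "under-estimates"
    print(f"t={t:2d}: forecast {forecast:5.1f} vs actual {actual:5.1f} "
          f"-> linear forecast {verdict}")
```

The crossover is the source of the venture-capital behaviour described above: calibrated to mid-period growth, the straight line disappoints early and misses the late surge entirely.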

Robots and Inflection Points

Stanley, winner of the 2005 DARPA Grand Challenge. Paul Saffo used the example of DARPA’s annual competition for robot-driven cars. In the first year only a handful of competing robot drivers made it out of the starting gate. No car completed the challenge. The next year 22 out of 25 robots got further than the leader in the first race.

The example gives a quantifiable measure of how the technology is developing, year to year.

Spotting the inflection point, the place at which real, dramatic change starts to occur, can still be hard. Sometimes it can be spotted using data which has been ignored or hidden. Sometimes it is a case of looking for what does not fit. The anonymous quote, “history doesn’t repeat itself, but sometimes it rhymes”, is apt. Look back in time as far as you look forward.

The good news is that if you miss an indicator, you still have lots of time to spot another.


Paul contended that the last three decades had been characterised by a dramatic cheapening of a component technology, which in turn had led to the widespread use of a product:

  • 1980s: Cheap processors led to the processing age. The result, widespread use of the PC.
  • 1990s: Cheap communications lasers led to the access age. The result was the network infrastructure to support the World Wide Web.
  • 2000s: Cheap sensors are leading to the interaction age. Applications are currently missing, but widespread use of robots appears to be the future.

Biology and Electronics

Electronics is building biology, and Paul expects that eventually biology will rebuild electronics: These technologies are far from isolated.

An example of developments in electronics progressing biology can clearly be seen from work on the human genome. A well-funded government-backed project was beaten by a far smaller project. The smaller project was able to successfully deploy robots, with the result that the cost of the work dropped by a factor of 10 each year. The government project had been funded based on the cost of technology at the outset, and initially failed to respond to the changing cost structure.

The creation of the first artificial genome in January 2008 may yet prove to be the inflection point.

Trust Instincts at Your Peril

“Assume you are wrong**” (** and forecast often)

Paul used the example of the sinking of a US naval fleet near Honda Point, on the west coast of the United States, on 8 September 1923. The fleet had been navigating using a forecasting technique called “dead reckoning”. The coastline had a (then) new technology available to assist navigation – radio direction finding. This allowed a bearing to be given between a land station and the fleet.

The radio direction finding gave an unexpected result that did not match the forecasted position. The lead boat in the fleet concluded that their position was more favourable than anticipated (closer to their destination), and turned sharply… straight into the rocks they had been trying to avoid. The 11th boat in the fleet did not trust the judgement of the lead boat, and when the fleet turned, it hedged its bets, slowing and waiting to see what happened. It was one of only 5 ships from the fleet not to run aground.

The moral of the tale: Hedge your bets, but embrace uncertainty. Or as written once on a tipping jar:

“If you fear change, leave it in here.”

Divergence of the Species

The question was asked, will biotech lead to a further aggregation of wealth? Yes. The electronics revolution had itself deepened inequality. Biotech raises a particularly ugly spectre which extends beyond wealth, to life itself. The wealthy would be likely to use their wealth to extend their lives. The ultimate outcome – species divergence. Currently the rich tend to benefit from better health care, and so extend life. But biotech is likely to create a lot more options.

Michael Malone on The Protean Corporation

Michael S. Malone is perhaps best known for his work defining the “Virtual Corporation” in the early 1990s. At Stanford’s Media X conference he proposed the next iteration of organisational development – the Protean Corporation. The topic forms the basis of Malone’s next book. This article is based on his talk.


The total number of consumers is growing exponentially. Wireless broadband covers an ever-increasing amount of territory. The US may become the first truly “entrepreneurial society”, with skill-based work that never lasts more than a few years, where people never plan to do the same work forever: A mixture of creativity and volatility. The increasing size of the customer base will lead to larger organisations. Simultaneously, competitive threats can appear from anywhere, particularly in fast-moving technology sectors.

The result is two contradictory forces:

  1. Centrifugal: Technology enables workers to be spread out.
  2. Centripetal: Humans still need a sense of legacy and wider purpose; and are inherently social creatures. The “fatal flaw” of the Virtual Corporation was that once everything has been pulled apart, nothing is left.

Throughout history, from pre-corporations (such as early modern trading companies and guilds), through Taylorism to the virtual/adaptive/wired organisations, two trends can be seen:

  • Increased autonomy of employees, with greater communication between them.
  • Reduced management control.

The Protean Corporation

The paradox is simple: How to build an enterprise that lasts, while still being flexible and adaptive?

Michael used the quantum model of the atom to demonstrate the shape of things to come: An organic form, in constant flux, which retains its core. The design attempts to recreate the structures within Hewlett Packard, where a group of long-term employees remained at the core, with the traditional enterprise formed around them.

The Protean Corporation has three parts:

  1. Core
  2. Inner Ring
  3. Cloud

The Core are the permanent staff – people who have been with the business for as long as anyone can remember – “the immortals”. Their role is to protect the culture of the company, which they do somewhat informally. For example, they might be highly regarded by other employees for their experience or ability to get a result out of the organisation. Largely unseen, they are the people that make the organisation run smoothly. They may not be immediately apparent to the senior management, and as such they need to be protected from a new CEO – they are likely to be accidentally culled along with the rest of the workforce.

The Inner Ring are the traditional full time employees. They manage and operate the business. Their job is to recruit the Cloud.

The Cloud are 90% of the organisation. Their employment might last a matter of hours or days. They might work remotely, never having met their employer. The Cloud is so transient that its members might make errors before they have time to learn. It is critical that the Core is able to watch over the Cloud, and maintain the company’s culture and standards.

The role of the company’s board is merely to adjudicate and not to manage – to act like the company’s Supreme Court.

Competence Aggregator

The Protean Corporation will be fixed in perpetual motion. The most important role in such a corporation will be “competence aggregators”. These people pull individuals together for specific projects, much like creating start-up companies within the corporation. Competence Aggregators exist within the Cloud, but are still governed by the Core. The Competence Aggregators will be the new superstars of the economy.

Private and Public

The shape-shifting Protean Corporation can exist in both public and private sectors of the economy. Achieving this is the zenith of the concept.

A key problem remains: There is no way to accurately value the Protean Corporation. Its assets are intangible, and not reflected in conventional accountancy-based corporate market valuation. It is a similar issue to that which limits social entrepreneurs – there is no way to measure the performance of non-profit organisations. The ultimate limitation on the whole process is the lack of a market for intellectual capital.

The Protean Corporation in Practice

Wikipedia was cited as an example of a protean-like corporation.

I personally recognise the existence of the Core, Inner Ring and Cloud from the Open Directory Project. The Core was part-formalised as “Editalls” – floating editors that had no fixed role, but which were always trusted and experienced veterans of the project. The Inner Ring consisted of “Meta” editors (and later Admins), who appointed everyone else, and took the formal leadership role. However, neither group entirely matched these roles. The Cloud, the regular editors, were just as described by Malone: The majority of the organisation, often with very limited ties to the project, many moving on after a short period of work.

The Core is also commonly found within British local government: In most long-established authorities there are a handful of people who both provide a sense of stability, and can simply get things done that nobody else can (usually through some combination of contacts and experience). Without these people I suspect that much of local government would be rendered totally dysfunctional (as close to collapse as a public body can become).

It was noted that Intel had originally shunned the concept of the Virtual Organisation, yet had subsequently developed into one “by walking backwards into it”. For example, only 20% of its “employees” are now traditional permanent staff. Far more contribute “virtually” or as suppliers. Yet all need access to company data and systems, so have to be trusted. A fifth have never met their boss face-to-face, and half of those never expect to: Such an organisation is logically already facing the challenges that the Protean Corporation seeks to answer.

Michael S. Malone’s book is called The Future Arrived Yesterday: The Rise of the Protean Corporation and What It Means for You.

David Law on Design as a Competitive Advantage

David Law has successfully launched and run a number of influential design businesses, including Speck Design. His work ranges from iPod skins to “camera armor” to video conferencing environments. David spoke to a small group at the University of Edinburgh on 26th March 2008. He proposed that design should be at the core of a modern business, as a competitive advantage to differentiate a business from others. This article summarises David’s argument, describing why there is a need to differentiate, his approach to design, targeting of niches, and how to stay ahead.

Design to Differentiate

Things are getting easier to make. There has never been a more informed consumer. Markets for consumer products are highly competitive, with little barrier to entry. All this means that popular designs are likely to be emulated, eroding prices downward. The aim of most manufacturing is simply to reduce cost to remain competitive.

The solution? Differentiation. A small company cannot differentiate products through marketing, but it can differentiate through good design.

Approach to Design

David sees design as “supercharged problem solving”. The aim is to satisfy a user need.

How is that need found? Observe users. Don’t ask them, watch them. Find where they get mad, and design a product that takes away their pain.

Then create lots of prototypes quickly. For real. CAD is too slow and lacks realism. Better to create a paper mock-up, which can be seen and handled. Keep on iterating until the design is right.

Development Triangle

The development process behind a new product can be weighted between three objectives:

  • Speed
  • Innovation
  • Cost

For example, a project might be orientated towards speed, with a new product developed in a few weeks. Other projects might be highly cost sensitive. David believes that most companies never consider the balance of objectives, and so tend to end up “somewhere in the middle”.

The Niche

The mantra “always start in a niche” goes against the instinct of many entrepreneurs, who tend to gravitate towards the biggest problem or market, since the rewards from success are greater there.

However, niches have a number of distinct advantages:

  • Higher margins
  • Lower competition
  • Easier to “get in to” and find needs within
  • Appreciative audience.

David used the example of Camera Armor: Products that protect SLR equipment while in use. SLRs are a niche within a larger camera market. From this niche it was possible to develop into the larger market for smaller digital cameras – creating innovative cases and a rather dinky little tripod that snaps out from the bottom of the camera when needed.

Staying Ahead

David Law’s team consists of a small number of designers. All their products are manufactured elsewhere (in China). The manufacturing process is simple – the real value of what they do is in design.

Could China produce good design? David argued that design needs proximity to the market. However, he did cite the example of Samsung: Historically a manufacturer competing on price alone, they have successfully developed a design-orientated approach, and are increasingly producing genuinely good designs in the vein of companies such as Sony or Apple. [It is possible that eventually Chinese manufacturers will follow this path, and become more design-orientated themselves.]

But if it is easy to copy products, how can value be maintained in good design? It depends on the product:

  • Where a key part of the design can be patented, a successful design will pay a long term dividend.
  • Where a design cannot be patented (most common), the method is simple: Keep on innovating, and always keep a step ahead.

Change of Domain

I have combined my personal writing and profile onto one domain. If everything works as planned, all the old links and content will redirect to this new location: You won’t notice the change.

Why change?

When I started writing ‘blog content 6 months ago, I wasn’t entirely sure what I would write about, or whether I would keep posting enough content to make the site worth reading. I am still writing my thoughts and ideas – the only glue that ties the content together is me. So hosting them – quite literally – under my name seems appropriate.

I have had some form of personal internet homepage since the last century: Originally the page was hosted on Geocities (we all start somewhere), and for the last 7 years at my own address. In the past unusual names were a distinct disadvantage: Few know how to spell or say my surname. But on the internet an unusual name makes one easier to find. It should be time to exploit that advantage; not that I need to: I already hold the top position on Google’s search results for my surname, Howgego.

Dave McClure on Social Networking and Web 2.0

Dave McClure addressed an Edinburgh Entrepreneurship Club/Edinburgh-Stanford Link event on 29 January 2008. He outlined some of the advantages of “Web 2.0”, talked extensively on the use of real-time metrics to evolve web services, developed a history of social networking websites, and highlighted the interesting aspects of Facebook. This article summarises Dave’s talk, with some additional commentary from myself.

Advantages of Web 2.0

Web 2.0 is characterised by the:

  • low cost of acquiring large numbers of users,
  • ability to generate revenue through advertising/e-commerce,
  • use of online metrics as feedback loops in product development,
  • sustainable long term profitability (at least for some).

Dave McClure did not actually try and define the term, which was probably wise. Generally the term is applied to websites and services where users collaborate or share content.

Web 2.0 has a number of advantages (although it could be argued that some of these apply to earlier iterations of the internet too):

  • APIs – the ability to act as a web-based service, rather than just a “website”.
  • PC-like interface, albeit still 5 years behind contemporary PC interfaces.
  • RSS feeds (for data sharing) and widgets (user interfaces embedded elsewhere).
  • Use of email mailing lists for retaining traffic. While email certainly isn’t a “web 2.0” technology, his argument is that email is increasingly overlooked as a means of retaining website visitors.
  • Groups of people acting as a trusted filter for information over the internet.
  • Tags (to give information structure) and ratings (to make better content stand out).
  • Real-time measurement systems rapidly giving feedback. Key is the immediacy of the information, and the ability to evolve the web service to reflect that.
  • Ability to make money from advertising, leads and e-commerce. While true since about 1995, the web user-base is now far larger, so the potential to leverage revenue also greater.

Metrics for Startups

I believe the ability to very accurately analyse website usage, implement changes, and then analyse the results, is a key advantage of web-based services. It is an advantage often overlooked by information technology professionals and programmers. I’m not sure why – possibly because web service developers:

  • don’t appreciate how hard/expensive gathering equivalent information is in other sectors of the economy, or
  • are scared to make changes in case they lose business, and/or believe their initial perception of what “works” to be optimum, or
  • just lack the prerequisite analytical curiosity to investigate?

Or perhaps Web 2.0 just isn’t mature enough yet for developers to have to worry too much about optimisation: A new concept for a site will probably either fail horribly or generate super-normal profits. The sector isn’t yet competing on very tight margins, where subtle optimisation can make or break profitability. Of course, optimisation of websites can deliver substantial changes in user behaviour. For example, I have found that a relatively subtle change to the position of an advert can alter the revenue generated by over 20%.
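As a toy illustration of the kind of measurement involved, comparing mean daily revenue before and after moving an advert might look like the sketch below. The figures are invented for illustration, not the actual data behind the 20% observation above.

```python
# Hypothetical daily revenue samples for two advert positions:
# A = original placement, B = advert moved. Figures are illustrative only.
revenue_a = [10.2, 9.8, 11.0, 10.5, 9.9]
revenue_b = [12.6, 12.1, 13.0, 12.4, 12.9]

def mean(xs):
    return sum(xs) / len(xs)

# Relative change in mean daily revenue between the two placements.
uplift = (mean(revenue_b) - mean(revenue_a)) / mean(revenue_a)
print(f"Revenue uplift from moving the advert: {uplift:.1%}")
```

In practice the difficult part is not the arithmetic but collecting enough days of data to separate a genuine change from natural fluctuation.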

Dave McClure developed the AARRR model. AARRR segments the five stages of building a profitable user-base for a website:

  1. Acquisition – gaining new users from channels such as search or advertising.
  2. Activation – users’ first experience of the site: do they progress beyond the “landing page” they first see?
  3. Retention – do users come back?
  4. Referral – do users invite their friends to visit?
  5. Revenue – do all those users create a revenue stream?

For each stage, the site operator should analyse at least one metric. The table below gives some possible metrics for each stage, with a sample target conversion ratio (the proportion that reach that stage).

Category | User Status (Test) | Conversion Target %
Acquisition | Visit Site – or landing page or external widget | 100%
Acquisition | Doesn’t Abandon: Views 2+ pages, stays 10+ seconds, 2+ clicks | 70%
Activation | Happy 1st Visit: Views x pages, stays y seconds, z clicks | 30%
Activation | Email/Blog/RSS/Widget Signup – anything that could lead to a repeat visit | 5%
Activation | Account Signup – includes profile data | 2%
Retention | Email or RSS leading to clickthrough | 3%
Retention | Repeat Visitor: 3+ visits in first 30 days | 2%
Referral | Refer 1+ users who visit the site | 2%
Referral | Refer 1+ users who activate | 1%
Revenue | User generates minimum revenue | 2%
Revenue | User generates break-even revenue | 1%

These metrics become critical to the design of the product. Poor activation conversion ratio? Work on the landing page(s): Guess at an improvement, test it out on the site, analyse the feedback, and iterate improvements. Gradually you’ll optimise performance of the site.
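A minimal sketch of computing such funnel conversion ratios from raw user counts follows. The stage names follow the AARRR model; the counts themselves are hypothetical.

```python
# Hypothetical user counts at each AARRR stage, top of funnel first.
funnel = [
    ("Acquisition", 10000),
    ("Activation", 3000),
    ("Retention", 200),
    ("Referral", 150),
    ("Revenue", 120),
]

def conversion_ratios(stages):
    """Each stage's count as a share of the top-of-funnel total."""
    total = stages[0][1]
    return {name: count / total for name, count in stages}

for name, ratio in conversion_ratios(funnel).items():
    print(f"{name}: {ratio:.1%}")
```

Comparing these ratios against the targets in the table shows at a glance which stage of the funnel to work on next.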

I find this attempt to structure analysis and relate it back to core business performance very interesting. However, the sample metrics can be improved on a lot, depending on the nature of the site. For example, to track virality (referral), I might watch the monthly number of adds, or monitor the number of new links posted on forums (Google’s Webmaster tools allow that). Tracking users all the way through the tree from arrival to revenue generation needs to be done pragmatically where revenue is generated from very infrequent “big-ticket” sales: With minimal day-to-day data, it can take a long time to determine whether a change genuinely has improved long-term revenue, or whether natural fluctuations in day-to-day earnings just contrived to make it a “good day/week/month”.

Now I know this approach works, but why it works is less clear. We might like to think that we are genuinely improving the user experience, and maybe we are. However, it could be argued that merely the act of change is perceived by users as an improvement – a variation of the Hawthorne effect. The counter argument to the Hawthorne effect can be seen on sites with low proportions of repeat visitors: The majority of those experiencing the improvement will not know what was implemented before.

History of Social Networking

Dave McClure’s interpretation of the timeline of the development of social networking sites is as interesting for what it includes, as for what it omits: No Geocities; no usenet; no forums; no MUDs… The following timeline shows key services in chronological order, except without dates – all the services shown were created within the last ten years:

  • Email lists (Yahoo Groups)
  • 1.0 Social Networks (Friendster) – these early networks established the importance of up-time (service reliability) and the ability of users to manipulate pages.
  • Blogs – links between weblogs acting as networks.
  • Photos and video (Flickr, YouTube) – created a sense of community, and allowed tagging/grouping of content.
  • 2.0 Social Networks (LinkedIn)
  • Feeds and shared social information ( event planner)
  • Applications and widgets – the ability to embed data about a user’s friends in applications is probably “the most powerful change on the internet in the last ten years”.
  • Hosted platforms (OpenSocial, Facebook) – most services are likely to allow 3rd-party developers to provide applications on their platforms.
  • Vertical communities (Ning) – ultimately this may develop such that a service like Facebook acts as a repository for a user’s online identity, while specific groups of people gather on other networks.
  • Availability of information – a single sign-on, with automatic data transfer between services.

The future may be “Social Prediction Networks”. This is a variation on the theme of using trusted networks to filter content: Instead of Blogging meets Search, I characterise Social Prediction Networks as Digg meets Facebook. Shrewd observers will note Facebook has already implemented Digg-like features, while simultaneously topic-specific, community-orientated Digg-clones are being launched. People gather into interest groups around a topic, and then through use of tagging and rating, the community filters content. The system effectively predicts what other people in the group will find useful. This may be an optimum approach for groups above the Dunbar number (or an equivalent number representing the maximum number of people a person can form stable relationships with).
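The filtering mechanism described above can be sketched very simply, assuming nothing about any real service: members of an interest group rate items, and an item’s predicted usefulness to the rest of the group is just its mean rating so far. All names and figures below are illustrative.

```python
# Hypothetical ratings (1-5) given by members of one interest group.
ratings = {
    "article-1": [5, 4, 5],
    "article-2": [2, 1],
    "article-3": [4],
}

def predicted_usefulness(item_ratings):
    """Naive prediction: the mean of the ratings seen so far."""
    return sum(item_ratings) / len(item_ratings)

# Rank content so the community's best-rated items surface first.
ranked = sorted(ratings, key=lambda item: predicted_usefulness(ratings[item]),
                reverse=True)
print(ranked)
```

A real service would weight raters by trust and discount items with few ratings, but the principle – the group’s past judgements predicting what the group will find useful – is the same.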

Interesting Aspects of Facebook

Three were discussed:

  1. Social graph (friend list) – email and SMS (mobile phone) service providers have rich data on the frequency of communication between people, yet aren’t using this information to form social networks. Dave noted that two major email service providers, Yahoo and AOL, are currently struggling to thrive – this could be an avenue for their future development.
  2. Shared social activity streams – knowledge of what your friends think is important. Friends are more likely to influence you than people you do not know.
  3. API/Platform – dynamic behaviour and links across your social network.

Further Observations

Will growth in social networks continue? Yes – the friend list adds value to the content.

Will others compete? Probably, as a “long-tail” of networks, likely topic-specific.

Can social networks be monetised better? Currently social networking services generate far less revenue than search services. The challenge for social networking sites is to move towards the wealthy territory of search services. At the same time, search services are moving towards becoming more like social networking sites.

How can traditional companies engage with social networking sites? Social networking sites work best for sales where a product has a strong aspect of peer pressure in the decision to buy. The most important advice is not to create a copy of a website: Instead provide less complex content that uses social networks to draw users to a website.

Applications for social networks tend to be over-complicated, normally because programmers attempt to implement functions found in software they have previously written for other platforms or websites. Generally the successful applications are very simple. Some developers have opted to break complex applications into a series of smaller applications, and use the virality of social networking sites to build traffic for one application from another.

Social network applications are exceptionally viral. They can gain users very rapidly, yet also lose users just as fast. Much of this virality comes from feeds, which typically alert friends when a user installs an application. Within a few years the feed is likely to be based on actual usage of an application.

Facebook now allows applications to be added to “fan pages” (or product pages) – so individual users need not now be forced to install an application to use it.

Those using email lists for retention are best to focus on the title of the email, and not the content. Merely make it easy to find a URL in the content. The key decision for the reader is whether to open the email. What the email says is almost irrelevant – they’ve already decided to visit the site based on the title.

Gravatars and Identity

Gravatars are “globally recognised avatars”. Here, an avatar is a simple image representing the author of a ‘blog or forum comment. The name is derived from Hindu philosophy, although the blog/forum avatars are the direct descendants of the avatars found in video games, specifically role-play titles. This article discusses the limitations of Gravatars, and hints at a future based on game-like automated customisation for forum avatars.

Be warned that this is another inadequately researched “thoughts” article, that covers a lot of rather well-discussed territory superficially, and perhaps needs to be developed further.

Gravatars in Practice

The idea is simple: Instead of uploading your image to every website you interact with, upload it centrally, and allow each website you use to retrieve your avatar from the central source. Gravatars are linked to your email address, which already uniquely identifies you on the internet. Gravatars are currently still the preserve of hardcore bloggers. And no, they are not installed on this site yet either (comments are infrequent here). While implementing the code to support Gravatars is straightforward, it is still rarely done on ‘blogs, and almost never added to internet forums. Like OpenID, it is the sort of idea that needs to attain a critical mass of widespread use before it will become truly useful.
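The retrieval itself is simple: a Gravatar image URL is built from an MD5 hash of the trimmed, lowercased email address, with query parameters for size (`s`) and a fallback image (`d`) – both part of Gravatar’s documented scheme. A minimal sketch:

```python
import hashlib

def gravatar_url(email, size=80, default="identicon"):
    """Gravatar image URL: MD5 of the trimmed, lowercased email address."""
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}&d={default}"

print(gravatar_url(" Someone@Example.com "))
```

Because the hash is deterministic, any site the user visits can derive the same URL from their email address without Gravatar ever being told where the image will appear.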

I opted to try using Gravatars at El’s Extreme Anglin’ forums. Partly because (by design) BBPress has no avatar features by default, yet users still expect to be able to personalise their posts by using avatars. Partly because not allowing image uploads or remote image hosting removes a potential avenue of attack by hackers. Partly because it seems logical.

However, already some issues are emerging:

  1. Where users attempt to create a Gravatar account, they invariably fail to get Gravatars working, with the result that the default image shows.
  2. The majority of users don’t already have, or don’t wish to use Gravatars.

In my opinion, the first problem is a design failing of Gravatar’s website: After uploading an image, Gravatar needs to be told to use the image that has just been uploaded. This final step in the process is not sufficiently clear to most users because it should not be necessary – “I just gave you an image to use, why aren’t you using it?”

Multiple Identities and Avatars

The second problem in part reflects the tendency of ordinary internet users (that is, not the people that post a lot of blog comments) not to have Gravatars associated with their email addresses. That may change in time, particularly in tech-savvy areas such as gaming.

But one specific reason for not using Gravatars is the fact that a user may want to display a different image depending on the type of site they are posting on. Gravatar’s service allows multiple images to be uploaded, but only one image can be used at a time. The only way I know to attach different images to different websites is to use different email addresses. Sure, there is no shortage of free email services… but doesn’t that merely trade one administrative saving (an avatar that follows you) for a new administrative burden (a need to create and monitor another email account)?

At the root of the problem is the premise that one person = one email = one identity = one avatar. In the sphere of online gaming, at least, that is a very contentious, and consequently dangerous, assumption to make.

It is worth analysing our perceptions on this.

Some people have a desire for separate visual identities, yet all managed from the same email address. Deep philosophical debate can ensue. Does that mean our emails are closer to us as physical entities than our avatars? Or is it just a purely pragmatic visual thing? A lolcat might look great on a casual discussion forum, but would be less convincing (or socially acceptable) against a formal piece of academic writing.

Sometimes it is very practical: On a service such as Facebook, I find it useful to see a picture of what a person physically looks like, because most of the people I have befriended there are people that I am likely to meet and talk to physically. (And I’m terrible at remembering names, so am frequently confused by friend requests from cute animals or blurry-looking groups of drunk people.) In contrast, on a gaming discussion forum, seeing an image of the actual person posting is not especially relevant, and can even be somewhat distracting.

Every online game that introduces something akin to Tabula Rasa‘s surname (where the surname is linked to the player, and shows on all their alts), seems to upset people that want to separate out characters/avatars from any link to other characters/avatars. Yet in Live Action Role-Play (like a Massively Multiplayer Online Game RolePlay-Player-vs-Player server, but without the computers), it was often said that most players end up playing themselves: While you can attempt to change your visual identity, your behaviour ultimately reflects who you are. Clay Shirky draws an interesting conclusion from the case of Kaycee Nicole, a famous internet hoax involving false identity:

“When the community understands that you’ve been doing it and you’re faking, that is seen as a huge and violent transgression. And they will expend an astonishing amount of energy to find you and punish you. So identity is much less slippery than the early literature would lead us to believe.”

Avatars of the Future

Are these perceptions changing over time? Personally I’ve found that over the last ten years my real and virtual identities have merged: I no longer actively try and isolate one from the other, and pretend that one is a different person. But that may simply reflect my growing personal acceptance of who I am, and not be related to physical-vs-virtual identity. At the other end of the scale there are the social networking virgins: Young adults who continue to refuse to engage in any form of internet networking with their peers, because they fear that they will no longer be able to hide the truth about what they really do from polite society, potential employers, or anyone else that might “use the web against them”. Will they change with time?

The key question remains, will multiple avatars always be a requirement of an online presence, or is this merely a transitional phase while people experiment with the concept? It might be argued that in either case Gravatar is the wrong approach, since currently there is a need for multiple visual identities – a mainstream need, not the need of a quirky few – yet the system struggles to accommodate that need. It follows that linking a visual internet identity to an email address is flawed.

A solution would be to add a further sub-classification of avatar after the email address: [email protected]:work would somehow determine that the site displaying the avatar was a work-related one, and display a sensible work-related avatar.
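The context suffix above is purely hypothetical, but its mechanics are easy to sketch: derive the avatar key from the email address plus a context label, so that one address maps to several images. Nothing below is a real Gravatar feature; the function name and scheme are invented for illustration.

```python
import hashlib

def contextual_avatar_key(email, context="default"):
    """Hypothetical scheme: hash the email plus a context label
    ('work', 'gaming', ...) so one address carries several avatars."""
    identity = f"{email.strip().lower()}:{context}"
    return hashlib.md5(identity.encode("utf-8")).hexdigest()

# The same address yields a different avatar key in each context.
print(contextual_avatar_key("user@example.com", "work"))
print(contextual_avatar_key("user@example.com", "gaming"))
```

The open question would remain how a displaying site reliably declares which context it belongs to.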

But avatars are still incredibly basic. On some forums, you will now find a line below the avatar that says “I’m feeling a bit tired”, yet the avatar still shows a happy smiling face. Or the poster is on holiday in Florida… yet there is still snow in the background of their picture. Better to alter the face in the image to reflect the mood, or alter the background of the avatar to reflect the place. (With appropriate alt and title tags, of course!)

The historic link between forum and game avatars is already coming full circle, with avatar generators for “games” like World of Warcraft and Gaia Online that allow the creation of forum avatars based on virtual-world appearance. It isn’t a huge step forward to make avatars a lot more “realistic” than they traditionally have been.

With all those customisation options, perhaps the old method of site-specific avatars wasn’t so bad after all?

Mike Masnick on Techdirt, Information and Consultancy

These are notes from a talk given by Mike Masnick, CEO of Techdirt, a “technology information company”. Mike addressed a small Edinburgh Entrepreneurship Club/Edinburgh-Stanford Link gathering on 22 January 2008. He outlined the company’s history and philosophy – “use what’s abundant to solve what’s scarce” – and outlined an interesting approach to the delivery of expert/consultancy business services. Read More

Networks of Trust in Personal Information Management

In an earlier article, I mused on the role of “thought leaders” in indirectly influencing the popularity of websites. These are further rough thoughts on the topic. Caveat: this text is not well researched.

My basic premise is this: ‘Web authors and ‘bloggers are creating trust-based filters for information. Many online writers are looking to evoke discussion and change. But most readers, most of the time, are just trying to get through the day, and aren’t too interested in discussion and change. For them, the author “sounds like they know what they’re talking about”. That creates a sense of trust, and validates the author as a reliable filter for information on that topic. Even the most objective and discerning people don’t have time to review everything themselves. They merely spend more time determining which source to trust. Likely, others will trust them, which makes the source they trust a very important actor indeed.

So we create a network of trust for information. If I want to know something about topic x I might follow the recommendation of author y, because I trust their depth of reading on that topic.

Of course, that doesn’t mean that all webmasters and bloggers are automatically trusted. Far from it. The internet or “blogosphere” is so easy to publish to, it fills up with low-grade content faster than any other media in history. Authors have to earn trust, at least from their early readers. Subsequent readers may be more prepared to trust because others are already trusting (a herd or celebrity mentality).

Why is this happening? Take Herbert Simon‘s statement that, “the rapid growth of information causes scarcity of attention.” The sentiment is repeated in Davenport and Beck’s “The Attention Economy“. We simply can’t manage all the available information any more.

Is that really a new problem? It probably hasn’t been possible to know everything there is to know since the early Victorian era. In some cases there are now technical barriers to knowledge: Simply being well educated isn’t enough to allow one to understand most cutting edge scientific developments in depth. In most cases the prime problem is volume of information: In our World of Warcraft example, more information is written than is possible for a human to read. Finding the important or useful information within can be immensely time-consuming.

Trusting people one barely knows to filter information does not automatically turn these authors into celebrities. In a few cases it may do – some readers will feel the need to trust only those who appeal to many. However, if there is a trend towards writing in narrow niches with in-depth content, rather than content with mass-appeal, an individual author may never be known to millions of people, because the topics they write about aren’t sufficiently mainstream.

Those narrow niches will similarly prevent most authors from emulating the role of pre-internet mass media, notably newspapers. They do, none the less, retain the same duty to their readers: Their readers may be inclined to trust them, but that trust will be eroded if abused. Of course, much like modern mass media, readers can still be subtly manipulated…

As I noted, Google biases sources by the number and strength of the links pointing to them. This automated approach fails to value who is creating links, so has become less valuable as the internet has become more mainstream and prone to abuse. There does not yet seem to be an effective automated equivalent of personalised networks of trust – perhaps because emulating humans is hard to do?

Bill Urschel on Internet Advertising Innovation

Bill Urschel is the CEO of the internet advertising exchange, AdECN. William spoke to an Edinburgh Entrepreneurship Club/Edinburgh-Stanford Link gathering on 14 November 2007, about the development of AdECN, its role as an exchange market for internet advertising space, and the future of internet advertising. This article is based on Bill’s talk, which he gave in a personal capacity.

Development of AdECN

William Urschel first realised the market potential for computer/internet ventures when writing computer books. He has started a number of software/internet businesses since, and looks for three things in a new venture:

  1. Market: Something of a manageable size to address, with an overall growth trend (“the rising tide lifts all boats”).
  2. People: 1-5 people with either technology or business backgrounds, and the correct attitude and work ethic.
  3. Product: Address a need… and it is nice if it works.

Historically, advertisers would pay an advertising network, who would then display adverts using the advertising inventory on publishers’ websites. It was common for the network the advertiser dealt with to run adverts across multiple networks. Often business flowed from network to network to network, before an advert actually appeared on a publisher’s site. This resulted in reduced revenue for the publisher, as each network “middleman” took their share: Perhaps for every $1 of advertiser’s money spent, just $0.18 would reach publishers. Waste still existed in the market: Half of the display advertising market was either going unsold or “under-sold” (sold for a significantly lower value than it could attain, simply to fill the space).

How AdECN Works

AdECN was launched in 2002, but didn’t “get moving” until 2004. Its role is to act as a stock exchange for network-to-network advertising deals. The ECN part of the name, meaning Electronic Communication Network, is derived from financial stock markets.

Networks continue to deal directly with their own advertisers and their own publishers. The process will first try and match an advertiser’s demand to a publisher’s inventory within the same network. When advertising demand and publisher inventory within the first network are mismatched, AdECN steps in to broker a deal between different networks. The result is that advertisers get their adverts published, and publishers fill their inventory with paying adverts. The whole auction process takes place in 6-7ms, at the time the publisher’s page is viewed.
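AdECN’s actual matching rules weren’t described in detail. As a hedged illustration only, a toy exchange-style auction for a single impression might look like the sketch below; a second-price design is common on exchanges, though not necessarily AdECN’s.

```python
def run_auction(impression, bids):
    """Toy second-price auction: the highest bidder wins and pays the
    runner-up's bid. `bids` is a list of (network, price) pairs.
    Illustrative only - not AdECN's actual mechanism."""
    if not bids:
        return None  # inventory goes unsold
    ranked = sorted(bids, key=lambda bid: bid[1], reverse=True)
    winner, top_price = ranked[0]
    clearing = ranked[1][1] if len(ranked) > 1 else top_price
    return {"impression": impression, "winner": winner, "price": clearing}

result = run_auction("pageview-123",
                     [("NetA", 1.20), ("NetB", 0.90), ("NetC", 1.05)])
print(result)
```

The striking engineering constraint is the time budget: the real auction completes in 6–7ms, while the publisher’s page is loading.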

AdECN has been careful to make itself an ally of the networks, not a competitor to them:

  • It does not deal directly with advertisers or publishers – it has a distinct role in providing the infrastructure for the exchange.
  • Networks split the commission on the deals between them, just like stock brokers.
  • AdECN levies a flat fee, so is neutral as to whoever wins or loses the auction.

The neutrality of AdECN is seen as their main competitive advantage over Yahoo and Google: AdECN isn’t an advertising network in its own right. [Although as described later, AdECN may simply be becoming the new breed of advertising network, in a marketplace where advertisers will increasingly deal directly with publishers. I did not get the chance to query this apparent contradiction.]

Contextual and Behavioral Data

Adverts can be targeted contextually or behaviorally:

  • Context considers simple variables such as time of day or location (typically the country the viewer is resident in).
  • Behavior (or “profile”) considers variables such as the age of the viewer and their search patterns.

Currently 95% of all targeting is contextual, because it has historically been difficult to match behavioral information in a fast and ethical manner. In the next “3–5 years”, behavioral advertising is expected to grow to dominate 80% of online [display?] advertising.

AdECN captures a lot of data, which is increasingly the added value it can offer networks. By design it does not store that data: it is used only in the (near-instant) auction process. Individual networks/advertisers can bolt their own “black boxes” onto AdECN – bespoke software they design to use auction data to optimise their advertising spend. The most common use of black boxes is to handle the split between Cost per Click (CPC – the advertiser pays when someone clicks the advert) and Cost per Action (CPA – the advertiser pays when an action is completed, such as an enquiry form being submitted or a product sold).
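
One common way a “black box” could let CPC and CPA bids compete in the same per-impression auction is to convert everything to an expected CPM (eCPM) using estimated click and conversion rates. This is a standard industry technique offered here as an illustration, not a description of any specific AdECN black box; all the rates and prices below are invented.

```python
# Normalising CPC and CPA bids to an expected price per thousand
# impressions (eCPM), so they can be ranked against plain CPM bids.

def ecpm_from_cpc(cpc: float, ctr: float) -> float:
    """eCPM = price per click * expected clicks per 1000 impressions."""
    return cpc * ctr * 1000

def ecpm_from_cpa(cpa: float, conversion_rate: float) -> float:
    """eCPM = price per action * expected actions per 1000 impressions."""
    return cpa * conversion_rate * 1000

print(ecpm_from_cpc(0.50, ctr=0.004))                # $0.50 CPC at 0.4% CTR -> 2.0
print(ecpm_from_cpa(20.0, conversion_rate=0.0002))   # $20 CPA at 0.02% CVR -> 4.0
```

The quality of the rate estimates is the whole game here, which is why networks would guard these models as their competitive secret.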

Privacy remains a key issue. Self-regulation is seen as the way forward, based on not keeping personal data and instead focusing on core questions like “what is the consumer going to buy?” The history of Gator (spyware which monitored browsing habits) suggests that consumer pressure will eventually win out over advertising networks which don’t stick to reputable privacy practices.

In Hindsight

For the first two years of the venture, AdECN did not perform well – and for an internet startup, two years is a long time. In those early years, AdECN’s team were “too abstract and too technical”, and the software was eventually rewritten. Fortunately the venture’s backers were able to see the long-term potential. The lack of barriers to entry into the exchange allowed many networks to trial it, and business slowly built.

By 2004 they were “in the right place, at the right time”, and were subsequently bought by Microsoft. Bill Urschel couldn’t reveal specifics, but stated that there was “no b” in the price paid (that is, less than a billion dollars). His final round of investors received a 9.7× return over four months, so nobody was complaining. They sold “too early”, but in practice they had to sell: similar competitors Right Media and DoubleClick (although Bill claims these are not actually exchanges) sold to Yahoo and Google respectively. It became inevitable that Microsoft had to buy an exchange.

The Future

The underlying market is expanding, and forecast to continue to grow. Critically:

  • Online advertising accounts for only 7% of total advertising spend, yet occupies more than 7% of consumers’ time: advertisers are behind the trend, and will logically seek to catch up.
  • Display advertising (on publishers’ sites) is growing faster than search advertising (on sites such as Google search results).
  • With exchanges such as AdECN, display advertising now has the same data/targeting advantages search had 6–7 years ago: real-time auctions and targeting have simply taken much longer to reach display.

The industry itself will likely change, particularly what is meant by the term “ad network”: advertising agencies can now deal with publishers directly, and use the exchange to handle excess supply or demand – there is no need for the old middlemen, the advertising networks.

Average CPM (Cost per Mille – price per thousand advert impressions) rates are likely to remain the same where already high (for example, rates around $25 will see little change). However, targeting will allow undersold inventory to be utilised much more effectively, so space currently sold closer to $0.25 will increase in value. As noted earlier, behavioral/profile targeting is likely to develop such that it dominates within 3–5 years.
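
To make the scale of that gap concrete, here is the basic CPM arithmetic. The impression count is an arbitrary round number for illustration; only the $25 and $0.25 rates come from the text.

```python
# CPM is the price per thousand impressions, so
# revenue = (impressions / 1000) * CPM.

def revenue(impressions: int, cpm: float) -> float:
    """Publisher revenue for a block of impressions at a given CPM."""
    return impressions / 1000 * cpm

# The same million impressions at premium vs undersold rates:
print(revenue(1_000_000, 25.00))  # -> 25000.0
print(revenue(1_000_000, 0.25))   # -> 250.0
```

A hundredfold spread per impression is why even modest targeting gains on the undersold end move real money.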

Could exchanges move into the television and print advertising arena? Current systems could be improved, but the exchange really needs real-time auctions to flourish.