In 2007 I wrote some introductory Thoughts on a Socio-Economic Environment based on Nothing. This article continues to explore the value of things in a highly intangible, knowledge-based economy. It wanders through internet-based payment systems, economic structure, role of government, organisation of information, community, and society, before disappearing into the realms of philosophy. It contains no answers, but may prove thought-provoking. Read More
Exploration is Dead. Long Live Exploration!
Something happened at the start of July 2008 that only happens once every 2 years. For a brief period, not everything about the world was public knowledge. A handful of people worked day and night to fill this chasm of information, documenting everything that was suddenly new and uncertain. Meanwhile, the world filled up with hardened veterans, many of whom seemed to struggle with, well, everything:
“How do I get to Northrend?” – Well, perhaps that new harbour or zeppelin tower that’s been built might give you a clue?
“Where’s Dalaran?” – Did you try riding to the end of the road and then looking up to see what’s blocking out the sun? (Dalaran is pictured right.)
The world is, of course, the World of Warcraft. And the 2-yearly occasion is the start of public testing of the latest expansion, Wrath of the Lich King: The only time a significant proportion of the game world changes.
What’s alarming is that these questions are not from new, inexperienced players. They are from people who have already played the existing game for months or years. They clearly want to know, but seem to have lost the basic ability to explore the game world themselves.
This article explores the concept of “exploration”, and tries to explain how one of the most complex virtual worlds ever created has become popular among players that are not natural explorers. Read More
Social Reconstruction of Public Transportation Information
The UK‘s local public transport data is effectively a closed dataset. The situation in the US seems similar: In spite of the benefits, only a handful of agencies have released raw data freely (such as BART and TriMet on the west coast of America).
That hasn’t stopped “screen-scraping” of data or simply typing in paper timetables (from Urban Mapping to many listed here). Unfortunately, the legal basis for scraping is complex, which creates significant risks for anyone building a business. For example, earlier this year, airline Ryanair requested the removal of all their data from Skyscanner, a flight price comparison site that gathers data by scraping airlines’ websites. How many airlines would need to object to their data being scraped before a “price comparison” service becomes unusable?
User-generated mapping content is evolving, often to circumvent restrictive distribution of national mapping. Services include OpenStreetMap and the recently announced Google Map Maker.
Micro-blogging, primarily through Twitter, has started to show the potential of individual travellers to report information about their journeys: Ron Whitman‘s Commuter Feed is a good example. Tom Morris has also experimented with London Twitter feeds.
This article outlines why the “social web”/tech-entrepreneur sector may wish to stop trying to use official sources of data, and instead apply the technology it understands best: People. Read More
Mike Masnick on Techdirt, Information and Consultancy
These are notes from a talk given by Mike Masnick, CEO of Techdirt, a “technology information company”. Mike addressed a small Edinburgh Entrepreneurship Club/Edinburgh-Stanford Link gathering on 22 January 2008. He outlined the company’s history and philosophy – “use what’s abundant to solve what’s scarce” – and outlined an interesting approach to the delivery of expert/consultancy business services. Read More
Networks of Trust in Personal Information Management
In an earlier article, I mused on the role of “thought leaders” in indirectly influencing the popularity of websites. These are further rough thoughts on the topic. Caveat: this text is not well researched.
My basic premise is this: ‘Web authors and ‘bloggers are creating trust-based filters for information. Many online writers are looking to evoke discussion and change. But most readers, most of the time, are just trying to get through the day, and aren’t too interested in discussion and change. For them, the author “sounds like they know what they’re talking about”. That creates a sense of trust, and validates the author as a reliable filter for information on that topic. Even the most objective and discerning people don’t have time to review everything themselves; they merely spend more time determining which sources to trust. Others, in turn, are likely to trust them, which makes the sources they trust very important actors indeed.
So we create a network of trust for information. If I want to know something about topic x I might follow the recommendation of author y, because I trust their depth of reading on that topic.
Of course, that doesn’t mean that all webmasters and bloggers are automatically trusted. Far from it. The internet or “blogosphere” is so easy to publish to that it fills up with low-grade content faster than any other medium in history. Authors have to earn trust, at least from their early readers. Subsequent readers may be more prepared to trust because others are already trusting (a herd or celebrity mentality).
Why is this happening? Take Herbert Simon‘s observation that the rapid growth of information causes a scarcity of attention. The sentiment is repeated in Davenport and Beck’s “The Attention Economy“. We simply can’t manage all the available information any more.
Is that really a new problem? It probably hasn’t been possible to know everything there is to know since the early Victorian era. In some cases there are now technical barriers to knowledge: Simply being well educated isn’t enough to allow one to understand most cutting edge scientific developments in depth. In most cases the prime problem is volume of information: In our World of Warcraft example, more information is written than is possible for a human to read. Finding the important or useful information within can be immensely time-consuming.
Trusting people one barely knows to filter information does not automatically turn these authors into celebrities. In a few cases it may do – some readers will feel the need to trust only those who appeal to many. However, if there is a trend towards writing in narrow niches with in-depth content, rather than content with mass-appeal, an individual author may never be known to millions of people, because the topics they write about aren’t sufficiently mainstream.
Those narrow niches will similarly prevent most authors from emulating the role of pre-internet mass media, notably newspapers. They do, none the less, retain the same duty to their readers: Their readers may be inclined to trust them, but that trust will be eroded if abused. Of course, much like modern mass media, readers can still be subtly manipulated…
As I noted, Google biases sources by the number and strength of links to them. This automated approach fails to value who is creating the links, so it has become less valuable as the internet has become more mainstream and prone to abuse. There does not yet seem to be an effective automated equivalent of personalised networks of trust – perhaps because emulating humans is hard to do?
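The contrast between counting every link equally and a personalised network of trust can be sketched as a graph computation. Below is a minimal, hypothetical illustration (all authors, links, and weights are invented): a PageRank-style score where random jumps land anywhere, versus a personalised variant where jumps return only to sources a particular reader already trusts, so that mutually-linking low-grade authors carry little weight.

```python
# Sketch: uniform link-based ranking vs. a personalised "network of trust".
# All authors and links below are invented for illustration.

links = {  # author -> authors they link to
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
    "spammer": ["spammer2"],   # two authors linking only to each other
    "spammer2": ["spammer"],
}

def pagerank(links, seeds=None, damping=0.85, iterations=50):
    """Power-iteration PageRank. If `seeds` is given, random jumps
    return only to those trusted sources (personalised PageRank)."""
    nodes = list(links)
    seeds = seeds or nodes
    base = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(base)
    for _ in range(iterations):
        new = {n: (1 - damping) * base[n] for n in nodes}
        for n, outs in links.items():
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
            else:  # dangling node: spread its weight evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Counting all links equally, the mutually-linking authors score well;
# seeding trust at a single known-good author suppresses them.
uniform = pagerank(links)
trusted = pagerank(links, seeds=["alice"])
```

The design choice is simply where the "random jump" mass goes: spread over everyone, it rewards any well-linked cluster; concentrated on a reader's trusted sources, rank flows outward only along links those sources (directly or indirectly) endorse.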