Blogosphere luminary Larísa thinks I'm SMART. In capitals, because the word itself evidently lacks sufficient emphasis. Her implication: that this is a good thing.

Yet it's driving me mad.

This article tries to explain why. It defines aspects of intelligence as difference from average, and then quantifies this as degrees of shared reality. The article provides a model where genius and stupidity are almost identical, where the closer someone is to the join, the closer they come to insanity - the "reality of one".

It explains why wider human society continues to believe extremes of intelligence can be a positive attribute, in spite of the social disconnection associated with this. The article shows how perception-based, consumerist social structures have built reward structures upon this delusion. The nature of illusion is then considered, with particular reference to aesthetics, and the role of empathy in maintaining illusion among humans.

The article lastly introduces the concept of social gravity - the tendency of humans towards the same - and then challenges the idea that everyone should be dragged back towards that single point of gravity: Rather, by maintaining multiple illusions, a social structure emerges where multiple extremes of difference can be maintained, while still averaging to the same.

Like some of my more abstract writing, this isn't terribly well researched. Equally, the topic is so broad that it isn't practical to consider every counter-argument or divergence of thought within the text and still maintain some form of readability. It may be helpful to first read Michael Gazzaniga's Science of Mind Constraining Matter, which provides the rationale for some of the statements made in this article.


Smart means applicable intelligence. Ingenuity. Where intelligence is an ability to understand and organise information. Smart uses that intelligence to adapt to the environment around us. Popular perceptions of intelligence tend to reflect simplistic analysis like "IQ" tests. The higher the number, the further along The Bell Curve, the more likely you are to be living the American Dream. It's a necessarily simple illusion.

Unfortunately, intelligence transpires to be multifaceted. Elements like speed, perception, and memory can all vary. The concept underpins a lot of cognitive psychology, although the ideas of people like Howard Gardner remain far from mainstream Western education.

Someone with strong perceptual reasoning and slow cognitive ability might demonstrate "genius" when writing a book, yet be a "dunce" in school tests. Simultaneously both intelligent and stupid. On average, they are average. But in truth, the only thing they are is difference (sic). Anything but average.

Latin implies genius comes, literally, from birth. Its modern-day usage tends to indicate exceptional intelligence or ability.

Ingenuity is arguably a subset of genius: Applied genius. A curious concept, as Arthur Schopenhauer hints in the distinction, "talent hits a target no one else can hit; genius hits a target no one else can see." If no one else can see it, how applicable is it? Fortunately there are sufficient overlaps between people's abilities for the ideas created by "genius" to make the lives of others better.

Genius itself is an inherently lonely place, since, almost by definition, there is nobody quite like you.

Consider Grigori Perelman, the Russian mathematician who solved the Poincaré conjecture. In effect, defining what shapes are possible in 3-dimensional space. He's since become almost as famous for being a recluse as for his proof. Not wanting money, "success", people to be interested in him, or, it seems, anything else that mainstream society expects him to want.

This doesn't just exemplify extremes of "difference" away from the social norms (in all directions). Perelman has become famous for not wanting to be famous. And that circular outcome is far more indicative of the underlying pattern:


David Hume is attributed with the observation that society perceives genius and ignorance the same. Both types of people are merely disconnected from everyone else. This disconnection is key to understanding what at first seems nonsense: That absolute genius and absolute stupidity can be precisely the same thing.

Using a "flat earth" logic, genius and stupidity are at opposite ends of the same "line", with most people tending towards the mid-point of the line. Most observers can see neither "end" of the line, but logically deduce that those ends will never meet. Yet the earth transpires to be round: If you travel east, you will eventually reach the same destination as if you travel west. So, instead consider "Intelligence" as a spherical object. Skewed and multi-dimensional, so rather more difficult to visualise than the earth. But definitely joined up at each end.

I will quantify intelligence in terms of "reality". Specifically the degree to which a reality is shared by people. The fewer people share a reality, the more disconnected they are from mainstream society.

This assumption stems from the observation that the only apparent agreement in philosophical thought is that we don't agree. We are what defines what is. In my corruption of Biblical creation, Adam and Eve validate the reality of each other. In effect, create one another. (And for the curious - the notion of "God" would only emerge to temporarily resolve the patterns they cannot collectively understand - like, why are we standing in this garden?) The interchangeable use of the words "reality" and "intelligence" stems from the idea of reality as what can be understood, rather than what is. Those states differ because what is is merely a set of possibilities, which can be understood by different people in different ways, largely depending on their capacity to understand and organise information. That is, their intelligence.

Reality Circle.

Assume a simplistic (IQ-style) model of intelligence, where people tend towards the average. The greatest amount of shared reality occurs at this average, because of the tendency towards average among the population. The further away from the average, the more likely you are to be embracing elements of reality that aren't so widely shared.
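This peak-at-the-average claim can be made concrete as a minimal sketch, assuming an IQ-style normal distribution. Mean 100 and standard deviation 15 are the conventional test parameters, not anything measured here, and the density is only a crude proxy for "how many people share your reality".

```python
import math

def shared_reality(iq, mean=100.0, sd=15.0):
    """Normal probability density at a given IQ score: a crude proxy
    for the fraction of the population occupying the same position,
    and hence sharing the same reality."""
    return math.exp(-((iq - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

# Shared reality peaks at the average and falls away symmetrically
# in both directions - towards "genius" and "stupidity" alike.
assert shared_reality(100) > shared_reality(130) > shared_reality(160)
assert abs(shared_reality(70) - shared_reality(130)) < 1e-15
```

The symmetry is the point: in this model, equal distances from the average carry an equal loss of shared reality, whichever direction you travelled.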

This is why it is desirable to be average, not difference: "The same" grants the richest sense of shared reality, and, logically, for a social animal, the most fulfilling life.

As one approaches the extremes of genius/stupidity, one approaches a "reality of one". A unique understanding. A reality not shared with anyone else.

Of course, by (my) definition, you can't have a reality with which just one person agrees. This would be pure madness.

Indeed, as we approach a reality of one, we progressively lose sanity. Lose context. Every step forward requires an ever-more complex set of checks that the world behind is still there. Patterns of thought become less and less certain. Without a structure to constrain thought, eventually one's head explodes: Overwhelmed by doubt about the things that everyone else subconsciously knows to be.

This is what drives me mad.

I appear to have contradicted myself: A reality of one is not obtainable, so how can it be the ultimate destination for anything? Surely this unattainability implies that intelligence is not spherical? That there is no point at which genius meets stupidity?


Human evolutionary advantage seems predicated on adaptability. Specifically the plasticity of a human's brain, compared to other animals. Brain size transpires to be something of a myth: Human brains are evolving to be smaller, because a smaller size makes it easier for information to transfer around the brain. So human evolution allows progressively more complex patterns to be solved, and associated realities to be explored, and ultimately shared.

This means the system is not in equilibrium - even the simplistic notion of "average intelligence" is constantly changing. And absolute extremes of intelligence are altogether less absolute than the simple circle implies.

It also explains why, at a basic evolutionary level, above-average intelligence is considered preferable to average (or below) intelligence: Not only does the Baldwin effect favour those who can learn new skills; but by later life, the typical intelligence level of humanity as a whole will have improved, so someone who is slightly above-average at birth is much more average in later life.

But here lies the root of the confusion:

The near-averages regard the "smart" differences favourably, because the differences are a source of useful ideas to be imitated, while still allowing the near-averages to maintain their rich sense of shared reality. However, the differences see few of these benefits. Their lives are punctuated by failure. Their realities forever on the edge of madness.


In the final analysis, being difference is only desirable if you're not. So, if you are, why bother?

Well, some don't. Indeed, many of the most inspired people end up prematurely over-dosing on sleeping pills or hanging in their own wardrobe.

Part of the explanation lies in social reward structures. Perelman's reclusiveness transpires to stem from rejection by his "peers" (the Steklov). In effect, isolation from the one "social" group that might have relevance to his reality. In contrast, fame and fortune are the currency of popular average, and all but useless to anyone that exists near the extreme.

But these reward structures are more confused than that example suggests:

Since Gilgamesh, written narrative and storytelling have made it easier for humans to empathise with people they have never met. The more communication, the greater the sense of shared reality. In an age of widespread dissemination of "knowledge" and hyper-communication between individuals, shared reality tends to dominate. Difference requires an illusion to maintain. That may take the form of topic, ability or interest-based groups of people - such as obscure academic schools of thought. Such things are part of a bigger illusion - fortunately one at which elitist universities already excel: Maintaining a very positive reputation among a wider population that generally doesn't understand their academics.

Consumerist values are inherently shared values: Changing how the people around the owner of a consumerist good regard that owner. The more people consider such a good to be desirable, the greater its value. This logically motivates people to seek the point of greatest shared value. A distinctly average position, that appears to disincentivise difference. However, consumerism is premised on the inherent contradiction of everyone wanting something that hardly anyone has - if everyone had it, nobody would want it. This consumerist contradiction is commonly resolved through a form of dual-identity:

Not all difference is associated with such tokens of value. Indeed, some combinations or extremes of difference are far less rewarding. But, it demonstrates that the lack of equilibrium in human society - that allows difference and the same - is maintained on a web of illusion. An illusion built heavily upon irrationality.

Such dominance of perception is perhaps only a problem for those that seek to apply rationality where there is none. Those that are totally reliant on structure to organise chaos.


After claiming I was "smart", Larísa noted that I was, "as far as you can come from the old cliché [of players of online games] about the fat, stupid no-lifer living in his mothers basement". Removing the words "as far as you can come from" reveals how I actually am. At least, how I see me. With the caveat that I'm only living in my mother's basement in a metaphorical sense - which is possibly worse.

How can we disagree, so fundamentally, about me?

Easily! Everything we think we know about one another comes from what we have written in articles such as this. In my case, these are rather poor representations of my no-life. And not just the inherent remoteness and weakness of reality associated with difference. For example, this article contains a day's worth of curious (and hopefully interesting) thought, but has taken almost a month to write. The overall process is dominated by my stupidity, I merely don't write those parts down. Your impression, as a reader, is a very selective illusion.

This notion of illusion mirrors thought on aesthetics.

One person can (subjectively) see beauty in (for example) a painting, that another person does not see. Yet by stating that it is beautiful, they implicitly demand that everyone else agree with them - what Immanuel Kant called "universal validity". It's a contradiction that transpires to be rather hard to resolve. I can assume all facts are, in fact, "normative" - social judgements - and effectively remove empirical judgements from the problem. Such a cheap solution hints at the underlying issue: The structure through which thought is applied.

Established early-enlightenment Western thinking regarded beauty as rational (for example, the idea that beauty could be quantified using mathematics) and egoistic (for example, the idea that beauty must be self-serving). Roughly the opposite of my logic, which stresses irrational, us-centric behaviour. My hypothesis is that thought is gradually evolving from one extreme to the other. That evolution parallels the shift of wider scientific thought from determinism (fixed natural laws) and reductionism (explaining a thing by the sum of its parts), to chaos (divergent dynamic systems) and emergence (isolated components do not explain the whole system).

The issue can also be considered from the opposite extreme. For example, if we can have 2 words for the same thing, why is it so hard to have 2 things for the same word?

Well, we can: The most complex - that is, beautiful - forms of the English language do just that. Consider my earlier line, "This is what drives me mad." This is left undefined. You probably read it as referring to the previous paragraph, but it could also refer to the next paragraph. Or both. Or maybe I was using the sentence to convey uncertainty?


Underlying this social illusion is empathy - the human ability to perceive how other humans are feeling. Without empathy, illusion loses context, meaning, plausibility, belief. That hints at why empathy occurs across the spectrum:

Neurologically, the same brain activity occurs regardless of whether a person is experiencing something themselves, or is empathising with another person. Mirror neurons might challenge the very notion of an independent (rather than collective) mind. In the meantime, they surely blur the boundary between illusion and what is:

If your view (mental feeling) about me is derived from your view of similar conditions, then your view about me isn't about me at all. Your view reflects upon you. If we are both broadly similar, then we might agree on a shared sense of truth. But if we are separated by difference, illusion dominates.

So, illusion allows us to disagree without disagreeing. For limited difference to co-exist with the same.

Empathy doesn't come "built-in" at birth. It develops in the early years of life. The most popular example is the inability of very young children (roughly under age 4) to understand false belief - they cannot acknowledge that others may hold a belief that is wrong.

Perhaps reality is primarily a social construct after all? And most interestingly, one that can be manipulated as the human develops.


The uncanny notion that the sky is down (not up) is madness. Likely a "reality of one", even though up and down are inherently arbitrary concepts. Yet we all agree that the sky is up, and (mostly) think nothing more of it.

This social gravity - the tendency towards the same - dominates humanity. While illusion allows some flexibility, it doesn't allow the full range of possibilities to be explored. And so we seem to be trapped on one side of the uncanny valley. We just don't perceive ourselves to be trapped, because everyone is trapped in more-or-less the same place.

Social gravity dominates justice and law. Systems of jury trial (the judgement of average), the very laws that underpin them. Unfortunately, the more we understand about the difference, the more the system is challenged: If my (let's assume) autism means I can't comprehend a particular socially-agreed notion of "wrong", should that be a mitigating factor when I commit that wrong? Using the model of difference presented here, if the legal system is prepared to accept a plea of insanity, it should at least consider a plea of autism. Such pleas of difference are the logical evolution of legal systems that are prepared to moderate justice depending on circumstances. Yet, in the final analysis, every crime can be argued to be the result of some kind of non-average human. An alternative, absolute notion of law ("all murderers should be hanged") is certainly easier to rationally manage, but goes against common human instinct ("all murderers should be hanged... unless the murder was unintended").

Social gravity is apparent in the "medicalisation of existence". The idea that every divergence of humanity from the same is some sort of medical condition, implicitly in need of a cure. A broken bone is both painful and not terribly useful, so most medical ethics would agree that the bone should be fixed. A broken mind is far more contentious. Extremes of difference may be socially unpleasant for everyone involved (both afflicted and affected), but may yield the very "moments of genius" that wider society needs to evolve. It would be ironic if, for example, in the rush to "cure autism" humanity damned itself to mediocrity.

Fortunately, social gravity is biased by optimism. A tendency towards the slightly above-average. The popular view that "smart is good" may not eliminate attempts to normalise society, but will ensure that such normalisation is actually a little bit better than normal.

But that's still a bleak view, likely to suppress the scale of evolutionary change, rather than - well, what?


What if there wasn't a single social gravity? Instead, many different points, to which different communities of humans gravitated.

Conventional logic would view that as a divergence of the species: Communities would be unable to share anything outside their community. Including copying new methods discovered by other communities. Evolution falters. Certainly becomes inconsistent. And we'd probably end up slaughtering one another, in some kind of dystopian hell.

But that logic overlooks the apparent ease with which the human mind can learn of, and exist around, illusions.

Rather than introduce the mind to one illusion, introduce it to several. Each of these illusions is offset from the centre of social gravity, but balanced around that centre for each person. For (a simple) example, 3 different illusionary points forming a triangle, where the centre of the triangle is the centre of social gravity. On balance, everyone is the same, even though nobody actually occupies the centre point of social gravity. Both difference and the same.
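The triangle example can be sketched numerically. The vertex coordinates below are arbitrary illustrative values; the only property that matters is that they balance around the centre of social gravity.

```python
# Three hypothetical "illusionary points", offset from a shared centre
# of social gravity at the origin. One person holds all three at once.
centre = (0.0, 0.0)
illusions = [(1.0, 0.0), (-0.5, 0.866), (-0.5, -0.866)]  # triangle vertices

# The person's average position across their illusions.
mean = (sum(x for x, _ in illusions) / 3,
        sum(y for _, y in illusions) / 3)

# No single illusion coincides with the centre of social gravity...
assert all(p != centre for p in illusions)
# ...yet, on balance, the person is "the same": their average position
# is the centre itself.
assert abs(mean[0] - centre[0]) < 1e-9 and abs(mean[1] - centre[1]) < 1e-9
```

Both difference and the same: each vertex is a genuine offset from the centre, but the offsets cancel exactly.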

In the simplest form, each illusion is shared by a different community of people. But unlike the earlier conventional logic, balance is now maintained by each person existing in several different communities, allowing any one person to cross-reference their respective illusions.

In a more complex form, each illusion is unique to the individual. This more complex form would require mutually opposing "realities of one" to average out at the centre of social gravity - which makes about as much sense as saying, the difference between nothing and nothing is everything.

The shape of thought is also unclear. My logic assumes it is unbiased - a thought in one direction is just as likely as a thought in the other - and hence (in a 2-point model) the 2 thoughts average out in the middle. Of course conventional "academic" structure tends to be rather linear - sequences of arguments, where consensus is built over time, and divergence tends to only occur towards the end. In contrast, my logic implies the reconciliation of several different sequences, each of which is built upon fundamentally different initial assumptions.

This doesn't just allow vastly more different combinations of possibilities to be considered across all human thought. It parallels the creative thought process (for example, James Austin's 4 kinds of luck), and reflects the emergent structure of the brain itself (neurons may take one of hundreds of thousands of possible paths, not a predictable route). And it does all this while maintaining the same sense of common ground between humans, that more structured forms of thought currently tend to ensure.

It's easy to argue that this is nonsense. And to many people alive now it probably always will be. But it might make more sense to those born into a world where it is quite normal to have relationships with people solely through a form of digital communication, in addition to actual physical relationships. The adaptation of the mind to manage these multiple illusions is really no adaptation at all. But such logic appears to offer the opportunity of an intellectual and social structure that mimics a chaotic system. Potentially, immensely more beneficial to humanity than conventional structured thought, although a huge challenge to just about everything modern society was established upon.

Read More

Similar writings: Future, Philosophy, Psychology, Society, Thoughts.
