A Typology of Post-Singularity Minds

What might the far future look like? That’s a question that intrigues me quite a lot. Science fiction movies and TV series might present some interesting ideas and concepts, but there’s certainly more to the future than such media can show. Some science fiction novels do present plausible and fascinating scenarios. I’m most interested in visions that involve artificial general intelligence (strong AI) or vastly augmented human minds. Some of the most thought-provoking novels are:

  • Accelerando by Charles Stross in which post-humanity is threatened by artificial intelligences adhering to a rather aggressive economic system.
  • Permutation City by Greg Egan in which the protagonists manage to create their own reality – and recreate themselves.
  • Diaspora by Greg Egan, which starts by depicting the respective cultures of acorporeal and corporeal post-humans, and their difficult diplomatic relations. The novel ends as a totally mind-boggling voyage into new dimensions.
  • Schild’s Ladder by Greg Egan which features post-human drama caused by a science experiment that accidentally created a new Big Bang.
  • The Golden Age (trilogy) by John C. Wright, which presents a rather positive libertarian / anarcho-capitalistic future in which wealthy individuals are supported by powerful artificial intelligences.

The Golden Age actually introduces different mind-types that are based on reconfigurations of human neurobiology. The ingenious classification of post-human minds alone is a compelling reason to read that trilogy.

Anyway, I have a more general typology of post-Singularity minds in mind. In 2010 I presented my first classification on my old blog. In that post I differentiated between four types of minds:

  • Bliss minds, which are just totally blissed out.
  • Universal minds, which are generalists and most similar to our contemporary mental configuration.
  • Utility minds, which love working more than anything else.
  • Focus minds, which have a highly complex internal structure and are super-effective specialists.

More recent reflections have made it clear to me that this classification is too simple. My updated typology uses two different dimensions in which minds can greatly differ from each other: hedonic type and functional type. While the hedonic type defines what a mind’s internal motivation structure looks like, the functional type determines which tasks that mind is optimized for.

A Framework for Thinking about Motivation

In order to understand how post-human minds might be motivated, I’ve devised a framework inspired by a relatively new finding in psychology: wanting something and liking it are separate things. You can want something without liking it (addictive drugs, for example) and like something without wanting it (e.g. food when you are full).

An interesting insight is quoted in the Psychology Today article When We Want Something More Although We Like it Less:

“liking is mediated by opioid systems and primary sensory and valuation regions, whereas wanting is encoded by midbrain dopamine activity in efferent regions such as nucleus accumbens”.

Consequently, it would be possible to manipulate liking and wanting independently by changing the respective neurotransmitter concentrations in those brain regions. And even though liking and wanting are mainly emotional phenomena, they are also effective on the subconscious level. Keep that detail in mind; I will nevertheless refer to liking and wanting as emotional systems.

Reflecting on this psychological fact, I wondered what we actually mean when we talk about “preferences”. This is anything but clear. When we prefer A, do we want A, like A, or think A would be good for us? Usually we don’t make such fine distinctions when we think about preferences. For the sake of simplicity, I just define preferences as “thinking it would be good”. Preferences are mental phenomena and not necessarily loaded with emotions. That’s not an official definition, but only my private terminology for talking about motivation.

So, we have at least three different motivational subsystems: liking, wanting (which I will also refer to as “desire”), and preferring. The liking system only has the role of providing hedonic (“happiness”) feedback about a specific perception, and is thus a rather passive subsystem. Desire, on the other hand, actually motivates us to get active. Finally, preferences behave passively again, because the mere thought that something is good doesn’t automatically translate into working towards it (otherwise we would all exercise much more). So far, we have two emotional phenomena, one passive and one active, and a single passive mental phenomenon. Is there an active mental phenomenon?

Yes, there is. For lack of a better term, I will just call it “will”. After all, there’s the concept of willpower, which is basically the ability to follow through on mentally preferred actions in spite of emotional resistance. Does our will encompass pursuing all our preferences? Usually not, because it’s quite possible to prefer being the president of the United States sometime in the future without working towards actually becoming the next president. The fulfillment of some preferences is just too unlikely, or not important enough, to justify a genuine will.

In the end, our actions are motivated by our will and our desires. Of course, what our will and our desires look like in reality is strongly influenced by our preferences and “liking”. The latter phenomena are usually the origin of the former: we desire something because we liked it in the past; we pursue goals because we once preferred the results of reaching those goals. But once will, goals, and desires are in place, their original motivation is not necessary anymore (even though it might be quite helpful). Will and desires can become habitual and detached from their actual utility for us. Of course, there are good habits, so that’s not necessarily a bad thing.
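
To make this two-by-two structure explicit, here is a minimal sketch in Python. The names and the representation are my own invention, chosen only to mirror the framework described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subsystem:
    name: str
    kind: str  # "emotional" or "mental"
    mode: str  # "passive" (only evaluates) or "active" (drives action)

# The four motivational subsystems, arranged along the two axes above.
LIKING     = Subsystem("liking",     kind="emotional", mode="passive")
DESIRE     = Subsystem("desire",     kind="emotional", mode="active")
PREFERENCE = Subsystem("preference", kind="mental",    mode="passive")
WILL       = Subsystem("will",       kind="mental",    mode="active")

# Only the active subsystems directly produce action; the passive ones
# evaluate, and over time shape, the active ones.
ACTION_DRIVERS = [s for s in (LIKING, DESIRE, PREFERENCE, WILL)
                  if s.mode == "active"]
```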

Hedonic Type

Now, to get back to post-human minds and their hedonic type, let’s consider what would happen if we modified the “liking” subsystem somehow. It is reasonable to expect that post-human beings can change their liking subsystem at will. But why would they do that? After all, manipulating your own motivation system is really dangerous, and a rather unusual step anyway. Yet even without considering whether we actually desire having a different hedonic setup, it is clear that we can have preferences about what we “should” like and how much. Often it would be quite convenient to like work, writing, sports, or learning much more (and “time-wasting” activities much less) than we actually do.

You can bet that once the technology exists to safely and effectively change your own hedonic configuration, many people would in fact make use of it. How far would they go? Surely, many of us would prefer to be happier in general. But how much happier? Perhaps there’s even the technology to reach superhuman levels of happiness. Probably, a modification of the hedonic evaluation system could be made in such a way that the other motivational subsystems are not directly affected. Even though you like something much more, you won’t necessarily desire or prefer it much more. That’s especially true if you just raise your hedonic set-point without making more detailed changes. You would be happier without necessarily becoming reckless, crazy, or simply less productive.

What goes for liking could also go for disliking. You could just stop disliking something without changing your attitude towards it. Is it really so great to dislike “bad weather”? What’s the actual use of that? In The Hedonistic Imperative the philosopher David Pearce goes much further and suggests that beings could be motivated solely by gradients of bliss. It would certainly be a huge challenge to design such a motivational system so that sane behavior results from it, but if the hedonic system becomes more or less decoupled from the other motivational subsystems, it looks like a definite possibility.

And if that really works out well, why not go to the absolute limit and simply pin your happiness level at the maximum, uniformly and all the time? In such a case, the hedonic subsystem is of no use whatsoever for determining your motivation. If everything looks equally great, you can’t take “liking” as a criterion for determining your actions. Only preferences, desires, and your will would remain as effective components of your motivation.
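
As a toy illustration (entirely my own construction, not anything from the literature), these options can be pictured as different transforms applied to a raw valence signal in [-1, 1]:

```python
def raise_set_point(valence: float, shift: float = 0.5) -> float:
    """Shift the whole hedonic range upward while keeping its shape."""
    return min(1.0, valence + shift)

def gradients_of_bliss(valence: float, floor: float = 0.6) -> float:
    """Compress raw valence from [-1, 1] into [floor, 1]: everything
    feels good, but better and worse remain distinguishable."""
    return floor + (valence + 1.0) * (1.0 - floor) / 2.0

def uniform_maximum(valence: float) -> float:
    """Constant maximal bliss: the signal carries no information."""
    return 1.0
```

In the last case the output is independent of the input, which is exactly why “liking” drops out as a criterion for action.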

One of my first objections to such an extreme setup was that your preferences, desires, and your will could become “fixed” and never change over time. Hedonic feedback is quite important for such updates, but after a while I realized that there are certainly other meaningful criteria you can use to update your preferences (and consequently, through technological means, your desires). For example, you could set up some core principles for yourself, and adapt your preferences so that they stay as close to those principles as possible. Meaningful core principles could be altruism, the pursuit of truth, simplicity (keeping everything as simple as possible, but as complex as necessary), or minimizing internal conflicts.
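
A crude sketch of such a principle-based update rule (the representation and the scoring convention are placeholders of my own invention):

```python
from typing import Callable

# A core principle is modeled as a function scoring how well a given
# preference aligns with it, from 0.0 (conflicts) to 1.0 (consistent).
Principle = Callable[[str], float]

def update_preferences(preferences: list[str],
                       principles: list[Principle],
                       threshold: float = 0.5) -> list[str]:
    """Keep only preferences whose average alignment with the core
    principles stays above the threshold -- no hedonic feedback needed."""
    def alignment(pref: str) -> float:
        return sum(p(pref) for p in principles) / len(principles)
    return [pref for pref in preferences if alignment(pref) >= threshold]
```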

What could become a problem, though, are things like empathy, aesthetics, or moral intuition. If the hedonic system does not return meaningful feedback, it is probably quite difficult to make aesthetic judgments or to interpret the emotions of others correctly. You would need to compensate for the lack of hedonic feedback with some other kind of evaluation system. Let me call such an ersatz evaluation system a hedonic emulator. How well would hedonic emulators work? Of course, one can argue that they wouldn’t work as well as real hedonic feedback. In any case, they would require additional resources, so minds using hedonic emulators might be less “effective”. There might also be cases in which a hedonic emulator actually “outperforms” normal hedonic feedback, but it’s unreasonable to expect that it would do so in every possible situation – especially since post-human minds could use real hedonic feedback and hedonic emulators at the same time and take the best out of both systems.
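
One way to picture a hedonic emulator (again purely illustrative; the class and function names are invented) is as a learned predictor standing in for the liking system, possibly blended with whatever real feedback is still available:

```python
class HedonicEmulator:
    """Predicts the valence the liking system *would* have reported,
    e.g. via a model trained on past (situation, valence) pairs."""

    def predict(self, situation: dict) -> float:
        # Placeholder: a real emulator would run a learned model here.
        return 0.0

def evaluate(situation: dict,
             real_valence: float | None,
             emulator: HedonicEmulator) -> float:
    """Use real hedonic feedback when available, fall back on the
    emulator otherwise -- or blend both to get the best of both systems."""
    predicted = emulator.predict(situation)
    if real_valence is None:
        return predicted
    return (real_valence + predicted) / 2.0
```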

Interestingly, this insight suggests that the setup of a hedonic system is ultimately a question of efficiency. Using different gradients of bliss would be more efficient than staying at maximum bliss (or simply shutting down the “liking” system altogether). Unfortunately, these considerations also make it quite reasonable to expect that beings which are additionally motivated by negative hedonic feedback can do a lot of tasks more efficiently than beings without negative feelings.

With all of this background, each of the following hedonic types would make sense in some way:

  • Original entities (Origents): These are beings whose hedonic configuration corresponds to that of animals which evolved in a lethally competitive environment. Such an environment breeds negative emotions that are strong and plentiful, and positive emotions that are rather limited in intensity.
  • Alternative entities (Alterents): These are entities designed to experience a vast spectrum of both intense positive and negative emotions. In that respect they are like Origents, but usually more efficient at a great variety of tasks because of their enhanced emotional spectrum.
  • Hedonic entities (Hedonents): For motivating themselves, they only use gradients of extremely positive emotions. They usually don’t experience negative emotions by design.
  • Bliss entities (Blissents): They are even more extreme than hedonents, because they only experience constant maximal bliss (but possibly different flavors of maximal bliss) – they are as happy as they can possibly be, all the time. Only their desires and preferences (which do not depend on the hedonic phenomena of “liking” and “disliking”) motivate blissents.
  • Neutral entities (Neutralents): As the name suggests, they only experience neutral emotions. They also have a constant intensity of hedonic qualia: Zero. Like blissents they are only motivated by desires and preferences. For hedonic consequentialists these beings actually have no direct ethical relevance, so it is not uncommon that they are created for certain tasks and then deleted afterwards.

Note that this classification is not complete; the types I listed are just the ones that seem “reasonable”.
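
For concreteness, these five types can be written down as a small data structure. The numeric valence ranges below are invented placeholders, chosen only to mirror the qualitative descriptions above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HedonicType:
    name: str
    short: str              # short form, used later in compound type names
    min_valence: float      # most negative feeling the design allows
    max_valence: float      # most positive feeling the design allows
    constant: bool = False  # True if valence is pinned to max_valence

ORIGENT    = HedonicType("original entity",    "origent",    -1.0, 0.3)
ALTERENT   = HedonicType("alternative entity", "alterent",   -1.0, 1.0)
HEDONENT   = HedonicType("hedonic entity",     "hedonent",    0.6, 1.0)
BLISSENT   = HedonicType("bliss entity",       "blissent",    1.0, 1.0,
                         constant=True)
NEUTRALENT = HedonicType("neutral entity",     "neutralent",  0.0, 0.0,
                         constant=True)
```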

Functional Type

If minds can be reprogrammed in various ways, this opens up possibilities for rather extreme specialization. And higher levels of specialization enable higher levels of efficiency. Here, the kinds of specialization are rather general in nature: they are more connected with the question “what function do I have in society?” than “what kind of job do I have?”. So, here are my ideas for a sensible classification of functional types.

  • Utility minds: These are specialized minds which do rather technical work that doesn’t require vast amounts of creativity. Nevertheless, they totally love what they (are configured to) do.
  • Universal minds: As generalists they can do more or less everything, but don’t excel in most areas. Their main purpose is to be creative and recombine old stuff into interesting new stuff. Creatures which were spawned by blind evolution are universal minds.
  • Integrators: Coordination, management, representation, aggregation, curation, mediation, and synthesis are the main tasks which integrators are made for. They are basically the centers of group- or hive-minds.

This might actually look like a caste system, but I really just see it as a “societal function” system. Changing your functional type might very well be possible. Perhaps you want to try all of those different “lifestyles” sequentially. Or you could try to be everything at the same time, though by definition that would rather put you into the category of universal minds again.

Also, this classification doesn’t necessarily imply hierarchical relations between the different types. Every type simply has different tasks. A situation in which integrators have the most power and utility minds the least might look plausible, and I wouldn’t really disagree. But since the minds of the far future could vastly differ in size, it is also not hard to imagine extremely powerful utility and universal minds, as well as almost irrelevant integrators.

Type in General

In principle, the two type dimensions are independent. Every type combination exists, though some may be more common than others. The hedonic type of a mind can be modified without changing the functional type, and vice versa. A practical way of writing down a compound type is the functional type (without the word “mind”) followed by the short form of the hedonic type, so a mind can be, for example, a:

  • Universal origent
  • Integrator alterent
  • Utility blissent
  • Universal hedonent
  • Utility neutralent

… and so on.
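
The naming convention is mechanical enough to express in a few lines (a sketch; the lists simply restate the types defined above):

```python
FUNCTIONAL_TYPES = ["utility mind", "universal mind", "integrator"]
HEDONIC_SHORT_FORMS = ["origent", "alterent", "hedonent",
                       "blissent", "neutralent"]

def compound_type(functional: str, hedonic_short: str) -> str:
    """Functional type (dropping the word 'mind') followed by the
    short form of the hedonic type."""
    base = functional.removesuffix(" mind")
    return f"{base} {hedonic_short}"

# All fifteen combinations exist in principle:
all_types = [compound_type(f, h)
             for f in FUNCTIONAL_TYPES
             for h in HEDONIC_SHORT_FORMS]
# e.g. "universal origent", "integrator alterent", "utility blissent", ...
```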

Even if my current typology is far off the mark and in reality future minds would look completely different, these ideas are certainly worth thinking about. And they might provide an interesting basis for fascinating science-fiction stories.

Any thoughts? What are your favorite types? Which ones would you like to try out for an extended period of time? Do you perhaps have a completely different typology in mind which you would like to propose as an alternative?
