Paradigms And Classification Of Upgraded Minds

Minds Of The Future

When thinking about the minds of a society which fully embraces technology, it soon becomes clear that these minds will leverage technology to improve themselves. In the past, the improvement was achieved with the use of external tools like writing and computers which allowed for distributed cognition that not only encompassed thinking processes within a group of people (facilitated by language), but also included artefacts like notes, books and smartphones into various forms of cognition. In the future, these previously external tools will become integral parts of human minds via the use of neural interfaces and neural implants or nanomachines interacting with the nervous system.

On the other hand, there will be artificial minds (“artificial intelligences”) created by humans. There will be an extremely wide range of different types of AIs, a whole ecology of them with lots of different architectures, niches, and capabilities. Why? Because different AIs are faced with widely different tasks, and it is unlikely that a very sophisticated AI highly optimized for one task will look very similar to one that is highly optimized for a very different task.

In comparison, animal minds are faced with a range of basic tasks that are pretty much the same for all of them: find food, maintain homeostasis, don’t get eaten by predators, navigate a complex and sometimes chaotic 3D environment, operate a body composed of a myriad of different cells, avoid hazards that can hurt or even kill you, spread your genes. For that reason, different animal brains have core architectures that are relatively similar to one another. Of course, there’s still a huge range of different animal brains, because they inhabit different ecological niches and control quite different bodies. However, it won’t be necessary for various different AIs to be designed for such a set of “common tasks”. They just need to be good at the tasks for which they are designed, which can range from proving mathematical theorems to analysing social networks to building structures on the moon. So, it is to be expected that the diversity of artificial minds will exceed the already impressive diversity of natural minds.

The Associative Paradigm vs. The Formal Paradigm

Today’s computers, which are essentially based on the von Neumann architecture, function very differently from animal brains. Computers operate with formal instructions that are typically defined by programmers, which is why I say that they are based on what I call the formal paradigm.

Roughly simplified, animal brains are composed of neurons that form and rewire connections between them via synapses. This rewiring depends on whether incoming stimuli are correlated in time. Animals learn by making associations, so their brains are based on what I call the associative paradigm.
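To make the contrast with formal instructions concrete, here is a minimal sketch of correlation-driven rewiring in the spirit of the classic Hebbian learning rule; the numbers and the tiny two-unit network are purely illustrative, not a model of real neurons:

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.01, decay=0.001):
    """One associative learning step: connections between units that are
    active at the same time get stronger, unused connections slowly decay."""
    weights += learning_rate * np.outer(post, pre)  # correlated activity strengthens a "synapse"
    weights -= decay * weights                      # gentle forgetting keeps weights bounded
    return weights

# Two stimuli that always occur together become associated over time.
weights = np.zeros((2, 2))
stimulus = np.array([1.0, 0.0])  # e.g. "bell"
response = np.array([0.0, 1.0])  # e.g. "food"
for _ in range(100):
    weights = hebbian_update(weights, stimulus, response)
print(weights)  # only the bell-to-food connection has grown
```

No programmer spelled out the association; it emerged from repeated co-occurrence, which is the essential difference from the formal paradigm.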

Both paradigms have advantages and disadvantages. Formal programs can be analysed in depth, so that their behaviour is well understood – unless “messy” techniques like evolutionary computation are used. Programs can be run rather quickly and with perfect precision by computers, while humans have tremendous problems with formal tasks like logical reasoning, execution of algorithms in their heads, or the application of correct statistical thinking. On the other hand, the associative paradigm allows for the solution of very complex tasks, even in a changing environment. Minds based on the associative paradigm can learn and adapt, while conventional programs need to explicitly address all the complexity relevant to the desired task, which means that a lot of hard thinking and programming is required for such a program to run properly.

Now, computers can somewhat work with the associative paradigm by using machine learning techniques like support vector machines or artificial neural networks, but it’s relatively inefficient to use conventional general purpose processors for associative tasks. That problem can be solved by using hardware that is specifically designed for performing associative tasks, for example the latest “neurosynaptic” chip by IBM. But this step represents a clear departure from the formal paradigm, so it could become debatable whether machines using such brain-inspired hardware should be called “computers”. It might be more fitting to call them “associators”. Or it might even be justified to call them “minds”, at least once they reach a really high level of complexity.

One notion is clear though: we will use different hardware for different tasks, simply because that’s much more efficient than using a single type of hardware for everything. Trying to emulate a human brain with a von Neumann computer is about as silly as using neural networks for computing vast amounts of arithmetic operations, unless you have no other options. While machines will become more human by embracing the associative paradigm at the hardware level, humans will become better equipped to apply the formal paradigm by interfacing more intimately with computers via neural interfaces that link our thoughts with programs executed in the cloud, or in “formal”/“conventional” computing cores implanted in our brains.

The Symbiotic Paradigm

While the formal and the associative paradigms are naturally at odds with each other, it is possible to transcend them both by shifting to a new paradigm that I call the symbiotic paradigm. The name of the paradigm is inspired by symbiotic relations found in nature, for example fungi and algae entering a symbiotic relationship with each other to form lichens. By definition, in a symbiotic relationship both involved parties profit from each other. Who are the parties in this special paradigm? They are associative processes running on appropriate “associative”/”neural” hardware and formal processes running on “formal”/”conventional” hardware.

The symbiotic paradigm entails two core ideas. One is that software should ideally run on hardware that is optimized for and respects the structural paradigm of that software (be it “associative” or “formal”). The other is that complex problems are best split up into sub-problems that are purely associative or purely formal. These sub-problems are then solved on matching hardware: “associative” hardware handles the associative ones, and “formal” hardware handles the formal ones. Therefore, in the symbiotic paradigm a complex problem gets solved by a mixed or symbiotic hardware configuration, possibly resembling the structure of lichens, or the relationship between mitochondria and their eukaryotic host cells.
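As a toy illustration of that division of labour, the following sketch composes a deliberately crude stand-in for an associative component (a similarity-based classifier, not a real neural model) with a formal component that does exact arithmetic on the resulting symbols; all names and data here are made up:

```python
def associative_read_digit(pixel_blob):
    """Stand-in for an associative process on 'neural' hardware: maps a fuzzy
    perceptual input to a discrete symbol by similarity to learned prototypes."""
    prototypes = {0: [1, 1, 1, 1], 1: [0, 1, 0, 1]}  # toy "learned" prototypes
    def similarity(a, b):
        return -sum((x - y) ** 2 for x, y in zip(a, b))
    return max(prototypes, key=lambda digit: similarity(pixel_blob, prototypes[digit]))

def formal_sum(digits):
    """Stand-in for a formal process on conventional hardware:
    exact, rule-based arithmetic on the symbols."""
    return sum(digits)

# Symbiotic configuration: associative front end, formal back end.
blobs = [[1, 1, 0.9, 1], [0, 0.8, 0.1, 1]]           # noisy "images" of a 0 and a 1
digits = [associative_read_digit(b) for b in blobs]  # the associative sub-problem
print(formal_sum(digits))                            # the formal sub-problem -> 1
```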

Under the symbiotic paradigm, it is natural for advanced AIs to use brain-like hardware in addition to more conventional hardware. On the other hand, it’s natural for upgraded humans to interlink with computers on a neuronal level. This mutual approximation, with AIs becoming more “human” and humans gaining “computer powers”, is an example of convergence. Its logical conclusion is the eventual merger of AIs and humans into entities I call upgraded minds. These upgraded minds will run on symbiotic hardware that combines the advantages of computers and brains.

What About Motivation?

An important contemporary difference between humans and computers is that humans are motivated by desires and preferences, while computers only do what they are programmed to do – and only when they are told to run those programs. A computer doesn’t become proactive and spontaneously decide to do something because it “feels like doing it”, or because it’s a “good idea” that fits the general purpose of the computer (whatever that may be). Some researchers who want to create an AI that is capable of general intelligence, a so-called Artificial General Intelligence (AGI), think that an AGI should have a so-called utility function. Such a utility function would equip an AGI with preferences which would motivate it to act purposefully on its own. However, a neatly defined utility function is still rooted in the formal paradigm. So, it’s at least questionable whether formal utility functions have their place in the symbiotic paradigm, or in upgraded minds. It might turn out to be prohibitively complicated to make AGIs behave reasonably with formal utility functions. Instead, they may need fuzzy preferences or even desires.
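To make the contrast a bit more tangible, here is a toy sketch of a crisp, formal utility function next to fuzzy, graded preferences; the outcomes, numbers, and “mood weights” are invented purely for illustration:

```python
# A formal utility function: every outcome gets exactly one number,
# and the agent simply picks the argmax.
def formal_utility(outcome):
    table = {"prove_theorem": 10.0, "idle": 0.0}
    return table.get(outcome, 0.0)

# Fuzzy preferences: each outcome satisfies several vague desires to some
# degree in [0, 1]; what counts as "best" depends on how the desires are weighed.
fuzzy_preferences = {
    "prove_theorem": {"curiosity": 0.9, "comfort": 0.2},
    "idle":          {"curiosity": 0.1, "comfort": 0.8},
}

def fuzzy_score(outcome, mood_weights):
    degrees = fuzzy_preferences.get(outcome, {})
    return sum(mood_weights.get(desire, 0.0) * degree for desire, degree in degrees.items())

options = ["prove_theorem", "idle"]
print(max(options, key=formal_utility))                                # always the same answer
print(max(options, key=lambda o: fuzzy_score(o, {"curiosity": 1.0})))  # depends on the current mood
print(max(options, key=lambda o: fuzzy_score(o, {"comfort": 1.0})))
```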

Consciousness Of Upgraded Minds

Being an upgraded mind would be similar to contemporary human existence in the sense that there will be both conscious and unconscious mental processes. Why? As humans incorporate computers into their minds, these would most likely fulfil formal tasks like solving mathematical equations you feed to them via some kind of interface. For example, this interface might work via augmented reality where you see an equation and then an overlay appears that displays its solution. A more advanced possibility would be that you simply “know” the answer – it simply appears in your mind out of “nowhere”. The formal computation of the mathematical equation is done on a simple computer chip, so it wouldn’t create conscious experiences. Only the final answer is “forwarded” to the consciousness.

So, if you incorporated computers into your mind, then you would expand your unconscious mind and grant it improved capabilities. If you actually wanted to extend your consciousness, you could do so by interlinking your brain with sufficiently advanced and complex “associative”/”neural” hardware.

On the other hand, as AIs use ever more complex associative processes and finally reach general intelligence, they would gain more and more attributes that are common to humans, simply because their architecture uses more and more design features shared with the human brain. Conscious experience is one of those features. Since the previous argument is not really watertight, I will give another one: consciousness is most likely an emergent phenomenon of complex systems, as claimed for example by integrated information theory. Systems gain more consciousness as they increase their complexity. At the very least, these considerations should sound plausible.

In actuality, we don’t know whether any system outside of ourselves has conscious experiences or not, until we develop and prove a sufficiently good theory of consciousness. And if you want to split hairs, then we will never know for sure, since we cannot scientifically prove theories but only disprove them, at least if you buy into the reasoning of Karl Popper.

Classification Of Upgraded Minds

One of the most intriguing questions that come up rather naturally is: What would upgraded minds be like? The broadest possible answer to that question is: “It depends!” On what would it depend? Mainly on the tasks, goals, preferences and desires such an upgraded mind is equipped with. Different tasks require different capabilities and possibly different hardware configurations. My classification is therefore primarily based on preferences and essential capabilities of different upgraded minds.

Like any proper classification, the upgraded mind classification has several different classes. I have chosen to name the classes after Greek capital letters, so that they have a common theme. To keep it simple, a mind either belongs to a class or not, even though in reality it would be more appropriate to think in continua rather than binary class membership relations. A mind can belong to several different classes at once, so these classes do overlap. Therefore, it is necessary to specify all the classes that a mind belongs to in order to get the full classification of that mind. For example, a mind could be in the classes Alpha, Pi, and Phi at once, so it would be an “Alpha Pi Phi mind”.
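A minimal way to represent such overlapping memberships is simply a set of class labels per mind; the helper below is only a sketch using the class names introduced in this post:

```python
GREEK_CLASSES = ["Alpha", "Delta", "Epsilon", "Eta", "Iota", "Lambda",
                 "Mu", "Pi", "Rho", "Phi", "Psi"]

def classify(*classes):
    """A mind's classification is just the set of classes it belongs to."""
    unknown = set(classes) - set(GREEK_CLASSES)
    if unknown:
        raise ValueError(f"Unknown classes: {unknown}")
    return frozenset(classes)

def label(classification):
    """Human-readable label such as 'Alpha Pi Phi mind'."""
    return " ".join(c for c in GREEK_CLASSES if c in classification) + " mind"

mind = classify("Alpha", "Pi", "Phi")
print("Pi" in mind)  # True: membership checks are trivial
print(label(mind))   # -> Alpha Pi Phi mind
```

In reality, as noted above, graded membership degrees rather than a binary set would be more appropriate, but a set keeps the idea visible.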

Note that out of grammatical convenience I generally refer to minds with the pronoun “it”. This does not mean that I don’t see them as persons. The question of when a mind should be considered a person is quite a difficult one and is outside of the scope of this post.

The Upgraded Mind Classes

Alpha / Anima

An anima (Latin for soul) animates or represents a specific entity, and basically brings it to “life”. A proper subclass of the Alpha class is the class of controlling Alphas, who actually have more or less full control over the entity they represent. For example, these entities can be physical objects like cars, houses, or bodies (biological or robotic).

While not actually “upgraded”, today’s animal and robot minds are controlling Alphas. In theory, any concrete or abstract entity could be represented by an Alpha, for example a rock, a network, the biosphere, the planet, mathematics, art, justice, poetry, and so on (the more abstract an entity is, the less it can be controlled).

Why would that be a good idea? It would make the world much more alive to have such an exhaustive plethora of Alphas. You could go ahead and talk to the Alpha of each object or idea you encounter. The world would literally become self-explanatory. Rather than consulting an expert who knows a lot about an object or subject, you would deal with the object or subject in question directly, represented by its Alpha, probably the best expert that you could ever get.

In general, it’s not necessary that an entity should only have a single Alpha. However, if an entity has multiple Alphas, their responsibilities should not overlap too much, because that would cause nasty conflicts. Having a controlling world Alpha could be a real nightmare, but it would be quite reasonable to have one or more purely representative world Alphas who are deeply aware of the state of the world and provide guidance. An example of the latter is “Webmind”, the spontaneously emerging internet Alpha from the “Wake, Watch, Wonder” trilogy by Robert J. Sawyer.

Delta

A Delta mind is capable of atomic self-modification. This means that it can change itself completely, down to the atomic level – this change is realized with nanomachines operating on the hardware of the Delta, or by creating a modified copy of that hardware via atomically precise manufacturing. The name “Delta” is inspired by the use of the capital Delta in mathematics and physics as a symbol for a difference. A Delta can basically make itself “different”.

Singularitarians typically assume that the first real AGI will sooner or later become a Delta and improve itself extremely rapidly, basically leaving all other minds behind. That is not necessarily the case. Whether it happens is primarily a design decision: the creators of that AGI choose whether they want a Delta mind or a non-Delta mind. Nevertheless, it’s easy to see that a Delta is potentially extremely dangerous both to itself and to its environment. During self-modification things can go horribly wrong, with all possible kinds of nightmarish consequences.

Thus, it would seem reasonable to disallow the creation of Deltas. But such a law would most likely have the consequence that sooner or later a Delta is created illegally in the underground by a possibly very unpleasant faction. That Delta would either give that faction a significant advantage, or it would escape and behave rather uncontrollably, or it would simply mess itself up completely by doing the wrong self-modifications. So, the saner approach is to create “guardian Deltas” who protect us from Deltas created by groups that are hostile, or simply incompetent. Eventually, it might even make sense to grant the right to become a Delta to all upgraded minds, or at least to those who are sufficiently “enlightened”.

Epsilon / Evanescent Minds / Transient Minds

Animals have a natural desire for self-preservation. Upgraded minds with a specific task might be expected to adopt the goal of self-preservation as an indirect sub-goal, or rather as a boundary condition, because their destruction would typically make it impossible for them to fulfil their task. Simple AIs, however, are indifferent about their own existence, if only because they are not aware of the concept of their own survival or death and its consequences.

For more complex AGIs or upgraded minds it would still be possible to make them indifferent about their own survival, but then that indifference would have to be more of an integral design feature. One possibility to create such an Epsilon mind would be to give it an “expiry date”: once that date is reached, its highest desire would be set to self-termination. To prevent unnecessary suffering, the Epsilon would also need to be made indifferent to, or even happy about, having such an expiry date. This should be complemented with making its death an exceedingly blissful experience.

Shutting down an Epsilon mind at its expiry date would not necessarily mean its real end. Rather than being deleted, the mind could be stored in an inactive state, so that it could be reactivated in the future when required. In that case, the mind would get a new expiry date, or none at all, if the mind gets converted to a non-Epsilon mind.
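A toy sketch of an expiry date that ends in archival rather than deletion might look as follows; all class and field names here are hypothetical:

```python
from datetime import datetime, timedelta

class EpsilonMind:
    """A transient mind with an expiry date; at expiry it is archived, not deleted."""

    def __init__(self, name, lifetime_days):
        self.name = name
        self.expiry = datetime.now() + timedelta(days=lifetime_days)
        self.active = True

    def tick(self, now=None):
        """Called periodically; triggers archival once the expiry date is reached."""
        now = now or datetime.now()
        if self.active and now >= self.expiry:
            self.archive()

    def archive(self):
        # Stored in an inactive state rather than erased; a later reactivation
        # could assign a new expiry date or convert it into a non-Epsilon mind.
        self.active = False
        print(f"{self.name}: expiry reached, stored inactive for possible reactivation")

probe = EpsilonMind("experimental self-copy", lifetime_days=30)
probe.tick(now=probe.expiry)  # simulate the expiry date being reached
```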

But why would one want to create Epsilons anyway? Apart from helping to prevent mind overpopulation (even if storage space will become extremely cheap, digitalized minds can be copied and multiplied at ridiculously high speeds, so storage space will still remain a significantly scarce resource), the creation of Epsilons would allow for the rapid and relatively safe creation of experimental minds. Delta minds might want to create transient Epsilon copies of themselves in order to safely test a self-modification they plan on doing. Better to have a defective copy that does not claim the right to a perpetuated existence than to make yourself kaput. Still, to a non-Epsilon mind the idea of creating Epsilons naturally sounds rather cruel, so it will probably be quite a controversial issue whether to allow the creation of Epsilons or not. Again, this is a delicate ethical issue which is simply too complex to be discussed in this post.

Eta (H) / Hedonic Minds

Hedonic minds are optimized for experiencing extremely intense positive feelings. It might even be their sole purpose to feel as happy as possible, which would put them into the Eta_1 subclass (the 1 standing for a single purpose). Etas who are not in the Eta_1 subclass might have other goals than simply feeling ecstatic all the time. It might seem difficult to construct Etas so that their extreme intensity of pleasure doesn’t interfere too negatively with their general functioning, but that might simply be a design challenge.

That challenge might be hardest for the Eta_max subclass consisting of minds who are at a constant and maximal level of happiness. In contrast to them, more conservative Eta minds would be motivated by gradients of bliss, as proposed by the British philosopher David Pearce.

Iota / Integrators

Integrators have the capability to inspect other minds and to integrate their knowledge and skills into themselves very effectively. For some, it may be surprising that I don’t expect this capability to be a universal feature of upgraded minds. After all, we are talking about the far future in which people are supposedly able to download karate skills from the internet instantly, just as the character Neo does in the Matrix movie. Sure, it might be trivial for all minds to download all kinds of skills once they are extracted from a capable mind and saved in some kind of data format. The really difficult part is unpacking that skill file and installing it properly in your mind so that it works for you just as well as for the mind the skill has originally been extracted from. That would require massive “rewiring” in your associative hardware components! Do it wrong, and you possibly overwrite already existing skills, or trigger weird side-effects. And even if the skill file was installed properly, it might use up way more mental resources than if you learned that skill naturally. Due to all of these factors, it is to be expected that Integrators will be rather special minds who are optimized for extracting, absorbing, and installing skill files with great efficiency.

Their sophisticated mind “rewiring” capabilities also make integrators inherently suited for the task of modifying and upgrading other minds. They could be called “connectome doctors”.

Lambda

Lambda minds must follow specific laws and rules that are deeply ingrained in their mind architecture. They must check all their actions for compatibility with these basic Lambda laws. If they notice that a planned action is incompatible with such a law, then the Lambda mind won’t execute that action. For example, the Three Laws of Robotics devised by science fiction author Isaac Asimov are such basic laws that his fictional robots must follow, so these robots are Lambda minds. Lambda laws have been proposed as safety measures for AGIs to make them “human friendly”.
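In pseudocode terms, a Lambda architecture amounts to a mandatory filter between planning and acting. In the sketch below, the law predicate and the world model are trivial placeholders; in a real Lambda mind they would be exactly the hard, interpretation-laden parts discussed next:

```python
def would_harm_a_human(predicted_outcome):
    # Placeholder law predicate; interpreting "harm" is the genuinely hard part.
    return predicted_outcome.get("humans_harmed", 0) > 0

LAMBDA_LAWS = [would_harm_a_human]  # an action is forbidden if any law predicate fires

def predict_consequences(action):
    # Placeholder world model; in reality this prediction step is the expensive part.
    return {"humans_harmed": 1 if action == "reroute_power_from_hospital" else 0}

def execute_if_lawful(action):
    outcome = predict_consequences(action)
    if any(law(outcome) for law in LAMBDA_LAWS):
        return f"refused: {action}"
    return f"executed: {action}"

print(execute_if_lawful("recharge_batteries"))
print(execute_if_lawful("reroute_power_from_hospital"))
```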

One basic problem with the Lambda law approach is that the effectiveness of any law depends on how that law is interpreted. Effectively, any Lambda mind would have to be its own judge interpreting the meaning of its laws according to its own ontological model of the world. If that ontological model of the world deviates too much from the model that normal humans use, then the latter should not expect the Lambda mind to behave according to their own interpretation of the Lambda laws. Even humans don’t clearly agree with each other on what rather fuzzy concepts like “friendliness” or “justice” actually mean. It would be necessary to devise some kind of standard ontology that a Lambda mind had to use in order to resolve this issue. Unfortunately, it is hard to conceive how ontologies could be standardized effectively without formalizing them. As ontologies, which partition the complex world around us, naturally emerge out of associative processes, it is quite unnatural and fundamentally difficult to formalize them. Extremely intelligent upgraded minds might be up to that task, whereas unupgraded humans would probably give up from utter frustration.

Note that Lambda laws would interfere with the general performance of Lambda minds, because Lambda law conformity checks would require Lambda minds to predict the consequences of their actions in more or less great detail – and that is a really resource-intensive task. Requiring such a conformity check for each action that a Lambda mind plans on doing would significantly slow it down – at least compared to unrestricted minds. While it would be possible to reduce the performance losses inflicted by conformity checks by using heuristics and only checking certain classes of actions, these modifications would increase the risk that a Lambda mind actually breaks one of its Lambda laws. Therefore, it seems unlikely that Lambda laws would suffice as the sole safety measure for minds. What set of safety measures would actually be a good idea is outside of the scope of this post.

Mu

Natural sentient minds can suffer quite a lot. Artificial sentient minds would usually be prone to suffering, too. But it might also be possible to design them such that they are actually unable to suffer. I call such minds Mu minds. Rather than experiencing pain and other negative feelings, they would experience neutral “urgency alerts” if something bad happens to them. Such urgency alerts would just shift their attention to the problem at hand, rather than causing unpleasant subjective experiences. If the problem is more serious, the alert might be harder to ignore, or it might dampen the positive feelings the Mu mind is experiencing at that time.
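Sketched as a data structure, an urgency alert carries a severity and an attentional priority but no negative valence; the fields and numbers below are purely hypothetical:

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class UrgencyAlert:
    priority: int                                                    # lower number = harder to ignore
    problem: str = field(compare=False)
    damp_positive_affect: float = field(compare=False, default=0.0)  # 0..1, dampening instead of pain

attention = PriorityQueue()
attention.put(UrgencyAlert(priority=1, problem="coolant leak in module 3", damp_positive_affect=0.4))
attention.put(UrgencyAlert(priority=5, problem="minor scratch on chassis"))

# The Mu mind simply attends to the most urgent problem first; nothing here "hurts".
print(attention.get().problem)  # -> coolant leak in module 3
```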

It is a priori not clear how much it would affect the functioning of a mind to replace negative subjective experiences with neutral urgency alerts. The latter might actually work better, at least in some cases. Nevertheless, it is reasonable to expect that negative subjective experiences are quite useful for certain purposes. How much of their functional use can be replicated with neutral urgency alerts is an open question.

Rather than being actual Mu minds, I expect that the typical upgraded mind would have the capability to suffer, but also the ability to switch off negative feelings at will, once they are experienced as too troubling.

The choice of “Mu” as the name for Mu minds is inspired by the word mu, which is a key concept in Buddhism and basically means “nothing” or “without”.

Pi / Parallel Minds

Humans can do some multitasking, but they are not very good at it. We can keep only a handful of objects in our conscious minds at once – depending on how complex those objects are. For example, we can keep numbers with six digits in our conscious minds, but listening to and understanding six different conversations at the same time would rightly be called a superhuman feat.

Pi minds can do proper multitasking by definition. How would that work? It’s about higher parallelization of mental activity. Imagine having multiple streams of consciousness at once, so that you could for example have multiple streams of thought at once – but each consciousness stream would only have one single stream of thought. These “parallel” streams of consciousness would only be weakly connected. When trying to listen to two songs at once, humans mostly experience disharmonic garbage. But if you had two parallel streams of consciousness, you could enjoy each song separately, even if you listen to them both at the same time.

How much those different streams interact with each other is mainly a design feature of the Pi mind architecture. If they were completely separated, they would basically constitute separate minds, none of which would know what the other minds were doing. And if they were integrated too strongly, you would get the typical problems, interferences and disharmonies that are characteristic of a “normal” mind trying to do too many things at once.

With multiple parallel streams of consciousness, Pi minds would be able to have meaningful conversations with multiple persons at once. Or just do different mental activities at once without sacrificing performance on any single one of those activities. What would it mean to demand “full attention” from such a mind? It could mean the full attention of one single stream of consciousness. Or the full attention of all parallel streams. Maybe it could even mean that all streams should be integrated into one single larger consciousness stream.

There may be different setups for a Pi mind. One possibility is to have several mostly equivalent consciousness streams. The alternative would be having one main stream of consciousness monitoring and moderating subordinate sub-streams. Even complex hierarchies of consciousness streams might be possible.

How would Pi minds work on a technical level? A rather crude option would be the multiplication of different regions of the mind responsible for certain tasks. For example, if the region responsible for perceiving visual information was duplicated, the resulting mind could have two separate “fields of view”. Perhaps one for perceiving the “real”/material world and another for seeing a virtual world. Both visual streams would be neatly separated, unlike the overlays in augmented reality, for example.
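A very rough sketch of that crude option: two weakly connected streams, each processing its own input and only posting short summaries to a shared bulletin. Threads merely stand in for duplicated mental regions here; nothing about real consciousness is implied:

```python
import queue
import threading

def consciousness_stream(name, inputs, bulletin):
    """One stream processes its own input channel and only posts brief
    summaries to the shared bulletin, keeping the streams weakly coupled."""
    for item in inputs:
        bulletin.put(f"{name} experienced: {item}")

bulletin = queue.Queue()
song_a = ["verse 1 of song A", "chorus of song A"]
song_b = ["verse 1 of song B", "chorus of song B"]

streams = [
    threading.Thread(target=consciousness_stream, args=("stream-1", song_a, bulletin)),
    threading.Thread(target=consciousness_stream, args=("stream-2", song_b, bulletin)),
]
for s in streams:
    s.start()
for s in streams:
    s.join()

while not bulletin.empty():  # each song was processed separately, without interference
    print(bulletin.get())
```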

More sophisticated and elegant designs might be possible, but they would certainly be more complex than the design of a non-Pi mind. That’s why it seems likely that most upgraded minds will only have rather limited “Pi capabilities”, if any. Which minds would be real Pi minds however? Most probably those whose attention is demanded frequently by many different other minds. In other words: Popular minds would have good reasons to be or become Pi minds.

Rho

The definition of a Rho mind is based on its motivation. If its main motivation is to serve some other entity, then it is a Rho mind. So, Rhos are basically servants or “robots”. It may feel natural to compare them to slaves, but the distinction between slaves and Rho minds is that slaves usually do not want to be slaves and are not intrinsically motivated to serve their masters. In contrast, it’s the main purpose and a great pleasure for Rho minds to serve their masters – by definition! If a mind doesn’t get great pleasure from serving a master, then it’s not a real Rho mind. Even if it may seem ethically questionable to create Rho minds to serve us, it is quite possible to make ethical arguments in favour of creating them, as Steve Peterson has done.

Phi

What happens if you mix the motivation of a workaholic and the knowledge of an expert system with the best mind upgrade technology of the future? You would essentially get a Phi mind that is heavily optimized for excelling at one or more disciplines. Phi minds do not care about small talk or popular culture (unless they specialize in exactly that, of course). Their special discipline is their calling and fulfilment. They are simply not interested in anything else and want to minimize the time they spend on anything other than their real purpose.

Vernor Vinge’s science fiction novel “A Deepness in the Sky” features so-called Focus technology used to turn normal humans into slaves focusing monomaniacally on a single subject. They become so obsessed with their task that they can drive themselves to death through exhaustion and neglect of basic bodily needs. These “zipheads” might technically be Phi minds, but they are a rather bad and negative example.

In a more positive future, Phi minds would certainly care about their survival, unless they are transient Epsilon minds. They would naturally try to keep themselves in good shape in order to function optimally. Also, they would be highly motivated to improve themselves even further, so as to excel even more at their special task or discipline. If they didn’t want to be actual Delta minds, they would task other minds (preferably Integrators) with installing performance upgrades in them. Compared to typical humans, Phi minds would derive much greater pleasure from engaging with their special task, if only to keep themselves on track. The idea of being a Phi mind might feel weird to other minds, but it wouldn’t have to feel bad.

Psi

Psi minds have the deeply psychological ability to inspect and analyse their own intricate, usually unconscious, cognitive processes – in real time or later on. Humans have the disadvantage that they cannot directly inspect how their minds work on the level of the unconscious/subconscious. That is a dissatisfying state of affairs, because humans might actually want to know why exactly they act like they do – especially after they have done something that they regret. Artificial minds, however, could be designed so that they can inspect their own cognitive processes on the elementary level by a special kind of introspection. This might work the following way: mental processes are copied on an elementary level into a kind of Psi log that stores them for a certain time (probably not indefinitely, because that would eat up too much storage space). This Psi log can then be read and analysed by the Psi mind, or by other minds if they are granted access to that log.
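A Psi log could be as simple as a bounded buffer of elementary mental events, so that old entries are discarded instead of being stored forever; this is only a sketch, not a proposal for the actual data format:

```python
from collections import deque
from time import time

class PsiLog:
    """Bounded introspection log: keeps only the most recent mental events."""

    def __init__(self, max_events=1_000_000):
        self.events = deque(maxlen=max_events)  # oldest entries fall off automatically

    def record(self, process, detail):
        self.events.append((time(), process, detail))

    def replay(self, process=None):
        """Read back the stored trace, optionally filtered by process name;
        other minds could be granted access to this same method."""
        return [event for event in self.events if process is None or event[1] == process]

log = PsiLog(max_events=3)
log.record("decision", "weighed option A against option B")
log.record("decision", "discarded option B due to predicted risk")
log.record("perception", "noticed an inspector requesting the log")
print(log.replay("decision"))
```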

Obviously, the requirement to include a Psi log that monitors and stores basic mental processes would make the Psi mind architecture more complicated than that of a more generic mind. This is probably the main reason why it is to be expected that not all future upgraded minds will be Psi minds. It would be reasonable, however, to equip certain Delta minds and minds who make very important decisions with “Psi” abilities. While the advantage of Psi abilities for Delta minds is that they can understand and therefore improve themselves better, one major advantage of having decision makers with Psi abilities is that external inspectors could judge whether those decision makers reasoned fully rationally or not – and see how exactly they came to their final conclusions.

Other Characteristics Of Upgraded Minds

Apart from its class membership, a mind has other defining characteristics. Different minds come in different sizes and they may have different degrees of consciousness.

Size

The size of an upgraded mind might be defined as the number of bits that is required to store the whole mind in some kind of data storage. In the case of minds stored mostly on biological data storage devices (yes, I mean brains), it is not easy to say how many bits would be required to store that mind “truthfully” on a different data storage device. Necessarily, this question depends on what you define as a “truthful” copy of a mind. And this, in turn, essentially depends on the extent of what you define as your “identity”. Are you only your connectome or are you your whole body (possibly including your microbiome)? Or are you just a set of ideas, memories, preferences, and emotional reaction patterns?

In any case, it should be relatively easy to measure the size of a digitalized mind. Just count the bits. Because the size of upgraded minds would span multiple orders of magnitude, it would be useful to use a logarithmic scale to get a simple measure of the size of a mind. We are counting bits, so taking the logarithm to the base 2 would be natural. A logarithmic mind size of 60 would mean that the mind in question has 2 to the power of 60 bits, which is about 10 to the power of 18 bits, or 1 exabit (roughly 125 petabytes) (keep in mind that 2^10 = 1024 is approximately 10^3 = 1000). A mind of logarithmic size 100 would have about 10 to the power of 30 bits and would contain more information than all of mankind combined. Very advanced planet Alpha minds might even reach logarithmic sizes in excess of 120 – a million “mankinds” or more! Bigger is not always better, since huge minds would require huge amounts of hardware that also needs to be powered somehow. Smaller minds would have advantages like the ability to be copied and transmitted more quickly and the ability to run faster.
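The bookkeeping behind these numbers is easy to check directly, assuming the “count the bits” definition above:

```python
import math

def logarithmic_mind_size(size_in_bits):
    """Logarithmic mind size = log2 of the number of bits needed to store the mind."""
    return math.log2(size_in_bits)

def bits_for_log_size(log_size):
    return 2 ** log_size

bits_60 = bits_for_log_size(60)
print(f"log size 60  -> {bits_60:.2e} bits "    # ~1.15e18 bits, about 1 exabit
      f"= {bits_60 / 8 / 1e15:.0f} petabytes")  # ~144 PB (exactly 10^18 bits would be 125 PB)
print(f"log size 100 -> {bits_for_log_size(100):.2e} bits")                      # ~1.27e30 bits
print(f"1 exabit corresponds to log size {logarithmic_mind_size(1e18):.1f}")     # ~59.8
```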

Consciousness

Lately, a number of different methods to quantify consciousness have been proposed. Most prominently, integrated information theory features formulas for the effective information, integration (“Phi”), and qualia of a mind. The physicist and futurist Michio Kaku suggests quantifying consciousness by counting “feedback loops” in the mind. There may be many other ways to quantify consciousness, perhaps each telling you about a different aspect of consciousness. It may be difficult to actually calculate the “size” of the consciousness of a mind, but having at least rough orders of magnitude would already be quite helpful to get a better intuition of what kind of mind you are dealing with.

What Kind Of Mind Would You Want to Be?

If we are going to upgrade ourselves, what kind of upgraded minds will we turn into? What would we like to become? These questions may actually be rather premature, because our understanding of possible future minds is still extremely limited. If you ask children what they want to become as adults, you typically don’t expect these answers to be realistic. In the same manner, our aspirations to become a certain kind of upgraded mind might change radically over time. In spite of that caveat, it is still important to ask such questions, because their answers might inspire hope for a much better future.

What about me? Ideally, I would like to make copies of myself who test out different modes of existence. If copying turns out to be too problematic, I would try to experience different class properties serially, one after the other (with the exception of the Epsilon property, which I would only consider if there were at least two different copies of me), but with the clear priority of becoming an Eta mind first. In any case, I think that the freedom to choose is of paramount importance. I wouldn’t want to be forced to stay in one single class for the rest of my life. That’s one of the reasons why I propose the right of radical morphological freedom, the freedom to choose your own mind class, mental configuration, and body morphology. Another reason is that granting this right to each upgraded mind would counteract the abuse that is likely to happen in a world without class mobility.

So, what kind of mind would you want to be (first)?


Comments

  1. Stephen Kagan  August 21, 2014

    Michael,

    My first response on reading your description of the Associative, Formal and Symbiotic Paradigms is to simply say: Bravo!
    Well done.

    I’ve been thinking for some time that there will inevitably emerge a diversity of A.I. and A.G.I. rather than a single monolithic abstract intelligence. Different purposes and methods will inevitably result in different minds. The Upgraded Mind Classes are an excellent step in that direction though the more I read the more I began to think that some of these could not exist well on their own but as dominant and recessive types of mind working in conjunction. What do you think?

    Anyway, I gotta run but wanted to briefly touch base. Your thoughts here will certainly help me with my second novel as it is progressing now. I look forward to re-reading this article and some of your speculative fiction. If you are at all interested I would be willing to share with you my own hybrid science fiction novel.
    http://www.amazon.com/dp/B00CY2D298

    Until next time.
    Stephen

    • RadiVis  August 21, 2014

      Thanks for your reply, Stephen.

      It is great to read that you value my thoughts on the different paradigms.

      On the ecology of different minds: Yeah, the different types of minds will have rather complex interactions with each other. What can reasonably be expected is that large minds form the core of large organisations and have many different capabilities to fulfil this role optimally, for example Alpha, Delta, Iota, Pi, Phi, and Psi. Around these core minds would be smaller minds for more specific tasks, mostly medium sized Phi minds perhaps who may be partially interlinked with the core mind. At the periphery, more regular mediator minds would handle the communication between the organisation and outside parties, because Phi minds think rather weirdly and have a hard time dealing with non-Phi minds directly – and vice versa.

      Perhaps the word “class” suggests too much that these minds would form their own class communities and stay within their confines. I don’t think that would usually happen since minds of different classes would profit immensely from working/playing together.

      I would be glad if you shared your science fiction novel with me. Why do you call it “hybrid”?

      Read you later,
      Michael

  2. Stephen Kagan  August 25, 2014

    In a second read some of the classifications seem like you’ve derived them from different capacities of our own mind-brains.
    Alpha it seems could be either a witness or a director. Wake is a good example.
    Rho & Lambda both are designed for following orders. One set inscribed in their programming and another as servants to humans? These sound very similar. Could one be a subclass of the other?

    In further thought, some of these seem like they would be better as companions or enhancements to an existing mind while others might work better as full-fledged minds. The Mu (pain reducer) and the Eta (pleasure) minds don’t seem capable of being autonomous minds. To incorporate them into a Rho or Lambda for example makes sense. Some of the classes could work as A.G.I. while others as A.I. modules added to A.G.I. or our own brains.

    You might want to consider submitting this as an article to a couple of sites like:
    http://brighterbrains.org
    https://www.facebook.com/2045Initiative
    KurzweilAI
    Ieet.org

    Regarding my novel Augmented Dreams, it is a hybrid science fiction in that advanced technology is foundational but I take some liberties that lead to wild speculation of advanced quantum A.G.I. and neural implants and mythologically themed virtual worlds. Here is the link on Amazon.
    http://www.amazon.com/dp/B00CY2D298
    If you are interested I’d be happy to send you a PDF or ePub. I’m trying to figure out how to incorporate some of your ideas above into my second novel, if they work within the narrative that has already been established.

    One of the reasons I mentioned brighter brains is that they are hosting an A.I. conference in September and it might be a good place to get your ideas into circulation in some way.

    • RadiVis  August 27, 2014

      Your comments are really thoughtful. I really like them. Now that I think about it, the difference between Rho and Lambda minds is that Rho minds have a positive motivation to serve, while Lambda minds have “negative” boundary conditions (or motivations not to do something). Therefore, they are not subclasses of each other.

      It may seem that Mu and Eta minds are not good at acting autonomously, but that may be an error. The neural correlates for “wanting” and “liking” are not the same. What is different in Mu and Eta minds is the “liking” subsystem. This doesn’t need to strongly affect the motivating “wanting” subsystem. Thus, Eta and Mu minds should be compatible more or less with all other classes.

      Recently, I got the idea to bundle the classes into three groups:
      1. Motivation: Alpha, Epsilon, Phi, Rho, Lambda
      2. Hedonic setup: Eta, Mu
      3. Capabilities: Delta, Iota, Pi, Psi

      I think that kind of structure would help understand the different classes better. I do consider submitting this article after incorporating and explaining the new structure. Thanks for the site suggestions.

      Your novel seems very interesting. I would be honoured if you could send it to me as PDF to radivis@radivis.com. Perhaps I might help you to see how my ideas fit into the world of your novel.

      The AI conference looks interesting, but it’s on the other side of the planet for me.

  3. Stephen Kagan  August 30, 2014

    Your bundling of classes looks good and is helpful to understand your model better.
    Do you have some examples of the different classes of Intelligence?
    The monolith in 2001 I would imagine as a Lambda, Cmdr Data in Star Trek a mixture of Alpha, Pi and Iota at least.

    What would be the practical use of Hedonic Minds?
    Giving and feeling pleasure like Gigolo Joe in AI makes sense but to only focus on feeling pleasure baffles me.

    One thing I would caution is over simplifying animal intelligence.
    Hive mind insects I think would have a very distinct kind of intelligence as would higher mammals with complex social hierarchies such as dolphins, chimpanzees and wolves. Birds that flock might be closer to hive minds, at least temporarily. I’ve watched slugs and snails slide through the forest and they are very different than insects in attention and intelligence. I think environmental and social complexity shape intelligence and I suspect that we could learn a lot about those kinds of organization as we move forward.

    • RadiVis  August 30, 2014

      I can’t tell much about the monoliths in 2001 as their programming structure is not really clear.
      Commander Data is certainly an Alpha mind and relatively similar to the mind of a human apart from being able to think much faster and apparently not having emotions (until he gets the emotion chip). Without emotions, he classifies at least as Mu mind. He may or may not be classified as Iota and Psi, but I wouldn’t say he’s a Pi mind. Overall, I’d say he’s a rather conservative Alpha Mu mind.
      An example of a real Pi mind is the Andromeda ship AI in Gene Roddenberry’s Andromeda series. She’s able to use different avatars at once. So, she’s pretty much Alpha Pi.
      The AI from the story The Metamorphosis of Prime Intellect is a great example of a Delta mind, while the AI from Friendship is Optimal is a really good example of a Delta Lambda mind. I think both would classify as Pi minds, too.

      I currently can’t come up with popular examples of Iota or Psi minds. It seems to be commonly accepted that AIs and robots would have Iota capabilities, but I think that is a questionable assumption.

      So, what is the practical use of Hedonic Minds? Well, in Hedonic Utilitarianism pleasure is the central good in itself, so increasing it in a safe way would always be a good idea according to that ethical philosophy. David Pearce explains the need for hedonic augmentation in his Hedonistic Imperative manifesto. In “Paradise Engineering” he goes further with that idea. There’s also the idea of utilitronium that would basically consist of a very dense computational substrate optimized for simulating Eta_1 minds. It’s a rather radical idea that comes from taking Hedonic Utilitarianism to its absolute conclusion.

      Viewed from a more practical perspective, it’s typically a good idea to have a higher hedonic set point, because that typically creates higher emotional resilience and thus productivity. Depressed persons can sometimes be highly creative and productive, but that is rather the exception than the rule.

      In Greg Egan’s novel Schild’s Ladder there is a character who is an Eta_max mind. I forgot the name, though, and don’t feel like looking him up.

      Animal intelligence may really be worth studying. I assume that upgraded minds will have a lot of mental features that can be found in various different animal species – perhaps included as special mental modules. Hive minds are very interesting. I assume that the constituent individual minds of a hive mind would be Epsilon minds because they value the survival of the hive much higher than their own and deem themselves replaceable.

  4. Stephen Kagan  September 3, 2014

    I was thinking about the birds and the bees.
    Have you ever seen the movie Winged Migration?
    Some birds have extraordinary navigational skills, migrating thousands of miles across vast landscapes of varied terrain. And then there are, in some cases, their dramatic courtship rituals.
    Bees achieve navigation on a smaller scale across detailed terrain and have the added capacity to communicate direction.
    Dogs have evolved to understand significant parts of human language and carefully monitor and respond to human facial cues.

    I watched a documentary on the rainforest and it was extraordinary how chimpanzees formed tribes and gathered to wage war against another clan over territory. But what was most fascinating was how, on their careful search through the jungle, they quietly but effectively communicated with each other. And how, for that matter, do wolves communicate strategy amongst the pack when they are hunting?

    How can we qualify and deploy those types of intelligence?

    • RadiVis  September 3, 2014

      Yeah, animals do have quite interesting skills. My view on how to learn more about their special kinds of intelligences is that the best approach would probably be to establish a global telepathic and empathic network with nanomachine based neural implants. With that kind of technology it should be feasible to add non-human animals to the network. At least if we augment these animals, so that they can interact with the network in a meaningful way.

      The nanomachines in their brains would provide us with plenty of real-time data about how animals think. Because they are in the network, we would know what they think and feel. We could also ask them to help us out in exchange for rewards. By interfacing with our technology and the telepathic network they would also increase their own possibilities and capabilities enormously. So, they could opt to become upgraded minds, too.

