Last year, I didn’t work much on my stories. I was rather busy finding out what my problem was and how to become more effective. It turned out that the main problem was that I didn’t get enough sleep. Really, I feel rather silly for not having realized that much earlier.
Anyway, I have been thinking about the main problems we (humanity, life on Earth) face in the near and distant future. There are technical problems like the depletion of oil and other resources, global warming, how to generate enough clean energy, and so on. However, I don’t think those problems are very hard to solve; they mostly require further technical innovation. Very cheap solar power may solve the energy problem – and solar energy has been making remarkable progress lately. Once the energy problem is solved and really cheap, clean energy is available, it would become easier to use some of that energy to synthesize or recycle scarce resources. So, I don’t worry too much about these issues.
The really hard problems are the political ones. How do we solve the following:
- Poverty
- Human rights infringements
- The absence of animal rights
- War
- Corruption
- Short-sightedness in politics and the economy
- Violence
- Ostracism
- Indoctrination
- Ideologies in favor of stagnation (and hence decay)
- Intolerance and hate
Technical innovations? I don’t think so. Those are political and societal problems, ergo problems of human thinking and values. Technical innovation is relatively easy. Societal innovation is really hard, but desperately needed.
Some people argue that there are even more serious problems, such as:
- Involuntary death
- Existential risks
- The threat of nuclear, biological, chemical, cybernetic, nanotechnological warfare and terrorism
- Threats arising from possibly unfriendly artificial intelligence (AI)
- Similarly, threats arising from intelligence augmentation (IA), which has the potential to be very disruptive if access to those technologies is restricted to a small minority
First of all, involuntary death has technical components and political components. It wouldn’t be enough to make everyone medically immortal if they could still be killed by powerful entities. I guess it will be less of a challenge to defeat ageing than to establish world peace.
Existential risks arise mostly from the last three points. There is the unfortunate trend that further advances in science yield ever deadlier weapons. Additionally, with rising wealth and increasing access to information, it becomes easier and easier to gain access to weapons of mass destruction. With the power to destroy the world comes the responsibility to protect it from annihilation. Actually, we are really lucky that we haven’t bombed ourselves to oblivion yet.
Now, let’s consider the issues of artificial intelligence (AI) and intelligence augmentation (IA). Apart from making it easier to create or gain access to weapons of mass destruction, the real problem with artificial intelligence augmentation (AIA), as I would call it, is that it’s very likely to create an unprecedented concentration of power in the entity which has the differential advantage of radically augmented intelligence. This difference in intelligence and power tends to amplify itself recursively, because high levels of intelligence and power make it much easier to gain even more intelligence and power (at least once there are methods of improving intelligence which are much more effective than simple learning). The end result is an almost absolute concentration of power in the first entity which gains a significant advantage in intelligence or power (with enough power you can hire armies of researchers to work on AIA for you). In the generic case, such an end result would be bad. Very bad. Only a really sane, wise, and responsible entity should be allowed to possess an extremely high level of power.
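To make that feedback loop a bit more concrete, here is a minimal toy sketch (my own illustration, not part of the original argument; the growth parameters and starting values are arbitrary assumptions). It models two entities whose gain in power per step grows faster than linearly with the power they already have, so a small initial advantage doesn’t just persist – it keeps widening.

```python
# Toy model of recursively self-amplifying power.
# Illustrative assumption: the gain per step is k * power**exponent with
# exponent > 1, i.e. being more powerful also raises your rate of gaining
# further power. All numbers here are arbitrary, for illustration only.

def amplify(power: float, steps: int, k: float = 0.05, exponent: float = 1.5) -> float:
    """Iterate the self-amplifying growth rule for a number of steps."""
    for _ in range(steps):
        power += k * power ** exponent
    return power

leader, follower = 1.2, 1.0  # a small initial advantage, in arbitrary units

for steps in (0, 10, 20, 30):
    ratio = amplify(leader, steps) / amplify(follower, steps)
    print(f"after {steps:2d} steps: leader/follower power ratio = {ratio:.2f}")

# With exponent > 1 the ratio itself keeps growing, which is the runaway
# concentration of power described above; with exponent == 1 the ratio would
# merely stay constant while the absolute gap grows.
```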
If there were a great abundance of sanity, wisdom, and responsibility, this wouldn’t be an overwhelming issue. Unfortunately, it is not known whether there is an entity with sufficient sanity, wisdom, and responsibility to master the challenge of managing an incredible amount of power without going mad – or how to create such an entity. After all, the generic dictator is not a very nice person. Entities in possession of great power tend to shift their priorities towards securing and increasing that power – which leaves less time and leeway for a generally benevolent use of that power.
An intuitive approach to solving that problem is to not allow great concentrations of power at all: distribute power as evenly as possible. Sure, there may be ways to do that, but it still wouldn’t make the world a really safe place. After all, if everyone has a decent amount of power, it wouldn’t be terribly difficult for them to build or gain access to weapons of mass destruction. Once our level of technology and knowledge is so advanced that almost anyone can easily find out how to create deadly viruses or aggressive self-replicating nanomachines, you really want an entity which is capable of stopping any crazy person from releasing those weapons into the environment. But once there is such a powerful entity, there’s the danger that it will abuse its power.
This is quite a dilemma. I call it the Power Accumulation Problem: how do we deal with ever-increasing levels of power which are sufficient to cause enormous devastation?
Perhaps the most promising approach is to have a system of entities which are powerful enough to prevent rogue individuals from causing extreme harm. The entities of this system would also need to be able to keep each other in check, so that none of them abuses its powers. The first real-world approximation of such a system is the set of current nation states. They are reasonably able to stop individuals from doing great harm. However, power is distributed very unequally among nation states. Nevertheless, it’s still possible for any nation to be defeated by a sufficiently large alliance of other nations. Attacking a nuclear power might sound like madness, but it is not too unreasonable to expect that it won’t defend itself with nuclear weapons very enthusiastically. After all, nuclear strikes on the attacking nations would provoke nuclear retaliation (which means annihilation instead of mere defeat), and using nuclear weapons on one’s own territory is not a terribly attractive move.
When trying to solve the Power Accumulation Problem with a balance between nation states, there are a couple of problems:
- It’s hard to balance the radical increases in power caused by an intelligence explosion due to AIA. It might be feasible if all nations agreed on a protocol of mutual transparency and shared all their knowledge with each other, but that’s quite far from how nations typically behave.
- If nation states need to prevent individuals from using weapons of mass destruction, they need to increase their levels of surveillance and probably also sousveillance, as it becomes more and more trivial to gain access to such weapons. This is certainly possible, but not convenient. The mechanisms of surveillance can easily be used for corrupt purposes if they aren’t balanced by comparable levels of sousveillance, and in reality we are quite far from such a balance.
- Nation states have a long history of not caring very much about human rights (let alone animal rights). If the balance of power is achieved by a stalemate between different nation states, then some of them will probably be rather unpleasant regimes.
A much more elegant approach is to create a supranational system of entities of about equal power which is strong enough to prevent the abuse of power by individuals and even nation states. The different entities of that system would also need to keep each other in check. What I’m talking about is a kind of world police consisting of different units which all look alike and have no allegiance except to a common codex. This codex could be something like the charter of human rights, together with the mandate to actually enforce those rights. Of course, it wouldn’t be the responsibility of that world police to take over all the mundane tasks that regular police forces usually handle. Only issues of global security, and cases of human rights infringements that regular police forces are unable or unwilling to take care of, would fall within the responsibility of that world police. However, it would be of the utmost priority to maintain the balance between the different units and to correct any unit which shows signs of corruption.
That’s actually the best idea I can come up with. Perhaps it would be better to have a single, extremely sane, wise, and benevolent superintelligence do the job of the world police. But even that might end in a global dictatorship of that superintelligence. Maybe it would be a great, benevolent dictatorship, but as in every dictatorship, almost all political power would be concentrated in a single entity. If people wanted to change world politics, they would need to convince their superintelligent world dictator. That’s probably not impossible, but certainly not as easy as writing a new codex for a distributed world police.
In fact, both options – a world police system or a world police singleton – would be very stable. Alternative distributions of power would most likely be less suited to guaranteeing both stability and universal security. And in a world that becomes increasingly prone to devastation by aggressive individuals or groups, security is extremely important. The alternative is living with a high risk of extinction. You might wonder how that is connected with the observation that we seem to be pretty alone in this huge cosmos…
January 2012