Superintelligence
By Nick Bostrom
Date Read: Jan 6, 2020
The book itself is a dense read, packed with examples, but it's a good one. It has a lot of great snippets; these are the parts I enjoyed the most.
Bostrom claims there are a few paths to superintelligence, and achieving any one of them will likely lead to the others. One could use artificial intelligence, emulate the entire brain, enhance the human brain into a weak form of superintelligence (which could then create stronger forms), among a few other methods. Once a superintelligence is created, it could take several forms: one sped up with excess computational power, one composed of many smaller intelligent units that collectively make decisions, or one that runs at roughly human speed but makes qualitatively better decisions and calculations than humans can.
He explains that a single superintelligence that's friendly towards humans would be good for us, but if it ever turned against humanity or made a decision not in our favor, the outcome could be catastrophic. Bostrom discusses numerous potential safeguards against such a scenario.
Bostrom asks what would happen if the development of superintelligent systems were multipolar: many superintelligent systems could exist at once (perhaps developed by different countries or companies), so no single agent takes over the world. Through this he sketches a potential algorithmic economy, where the laborers are superintelligent human-like agents that may exist for only days at a time in the digital world. I think Bostrom is slightly optimistic about what would happen in this case: the rich who own significant capital could ride the effects of that economy to become even wealthier, while those without capital would likely depend on philanthropy from those who got rich (which I find unlikely; look at the billionaires today). Bostrom also discusses Malthusian conditions, where historically the growth of societies was limited by available resources, though we've never actually consumed our resources at the maximum possible rate. In theory, a society more or less managed by AI wouldn't need catastrophes and plagues to keep population in check. The trouble with a multipolar system is that outcomes are uncertain, since game-theoretic reasoning is needed to make the best decision given all the other agents acting at the same time; a toy example of that dynamic is sketched below.
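To make that last point concrete, here's a minimal sketch of the game-theoretic flavor of a multipolar race. This is my own illustration, not from the book, and the payoff numbers are invented purely for demonstration: two AI projects each choose to invest in safety or to race ahead, and the only equilibrium is mutual racing, even though mutual safety pays everyone more.

```python
# Toy illustration (not from the book) of a multipolar coordination
# failure: two AI projects each choose "safety" or "race".
# Payoff numbers are made up purely for illustration.
import itertools

ACTIONS = ["safety", "race"]

# payoffs[(a1, a2)] = (payoff to project 1, payoff to project 2)
payoffs = {
    ("safety", "safety"): (3, 3),  # both careful: good outcome for both
    ("safety", "race"):   (0, 4),  # the racer grabs the tech lead
    ("race",   "safety"): (4, 0),
    ("race",   "race"):   (1, 1),  # everyone cuts corners: risky for all
}

def best_response(opponent_action, player):
    """Return the action maximizing this player's payoff,
    holding the opponent's action fixed."""
    def payoff(my_action):
        profile = ((my_action, opponent_action) if player == 0
                   else (opponent_action, my_action))
        return payoffs[profile][player]
    return max(ACTIONS, key=payoff)

# A profile is a Nash equilibrium if each player's action is a
# best response to the other's.
for a1, a2 in itertools.product(ACTIONS, repeat=2):
    if best_response(a2, 0) == a1 and best_response(a1, 1) == a2:
        print(f"Nash equilibrium: {(a1, a2)}, payoffs {payoffs[(a1, a2)]}")
# Prints ('race', 'race') even though ('safety', 'safety') pays more,
# which is the kind of coordination failure Bostrom worries about.
```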
He also discusses the strategic development of AI for the greater good. His argument is essentially that developing a singleton is best, provided proper solutions to the control problem exist. He also considers collaborating to build one singleton rather than having many organizations build their own superintelligences, which would create the multipolar problem. At one point Bostrom becomes very optimistic, suggesting a rule whereby, beyond a certain amount of profit, any excess is distributed across all of humanity, to keep the rich from getting ever richer off full access to the computational power of AI. It raises an interesting point: how much of a superintelligence you own could define how much you are worth. As for Bostrom's idea of democratizing parts of a superintelligence to benefit humanity, as great as I find it, I feel it would never happen, given the greed of those who would own the superintelligence.
Bostrom gives a nice analogy for AI today: humans messing with AI are like a child playing with a bomb. Any sensible person would simply put the bomb down, but children being children, that's not always the case. And instead of one child there are many, and if any one of them sets the bomb off, everything is ruined for everyone.
AI ethics is a real issue, and many companies are slowly starting to look into it (except when companies like Google terminate members of their ethical AI teams). One question Bostrom raises is which we will solve first: the control problem or superintelligence itself. The fact is, we are going to race towards the latter as fast as possible, so the control problem may not be solved first, with potentially disastrous results. For this reason we must work on the control problem now.