Superintelligence: Paths, Dangers, Strategies by Nick Bostrom is a thought-provoking book. The author is a philosopher with a strong technical background in information technology and science. This work is devoted to the future development of machine intelligence and the potential dangers relating to that development.
First,
since this book goes into territory that might initially seem far-fetched, and
because “doomsday is coming” type books often lack credibility, I think it is
important to discuss some reasons why I believe that Bostrom’s ideas hold some authority.
This book is extremely well researched and the author has a strong technical grasp
of the relevant subjects. He does not make unsupported contentions. Though a
philosopher, Bostrom seems to have a scientific mind. He often speaks in terms
of probabilities, not predictions or certainties. He does speculate a good deal here, but he is careful to point out when he is speculating.
Furthermore,
I have done a little research around this book. This work, as well as some
other books and essays that have been published in the past several years, has
garnered serious attention from a host of scientific, technical and industry
experts. Stephen Hawking and Bill Gates are among the more famous and prominent people who treat these issues as credible. In the past several years, these concerns have sparked a conversation among those who understand the field. Some are planning for what may be momentous events related to the possible coming of superintelligence.
Parts
of this book are technical, and some of it was a bit difficult for me to follow.
It helped that I have a passing interest in this topic and in related issues
and that I have occasionally read articles on the subject. In addition, the
author references various writings and ideas relating to human consciousness
and brain function. I have also done a little reading on those subjects, which proved
helpful. Nevertheless, a patient lay reader without much background will understand most of it and get a lot out of this book.
In
order to comprehend what this work is about, it is important to understand several key concepts that Bostrom explores. First, this book
is mostly concerned with what Bostrom calls general or strong artificial intelligence,
or in layman’s terms, computers that will be able to think like humans in
multiple ways. This is as opposed to systems currently labeled as artificial intelligence, which are very good at accomplishing specific tasks, but only
specific tasks. This type of less
sophisticated technology is already in use in all sorts of applications,
including Internet search engines, navigational aids, medical applications,
etc.
General
artificial intelligence, by definition, will initially be roughly on par with,
or slightly superior to, human intelligence. As Bostrom points out, most of the
film and book depictions of machine intelligences fall into this category.
Artificial
intelligence is not synonymous with “Superintelligence.” Though Bostrom spends many pages explaining what he means by Superintelligence and offers several definitions, it is basically intelligence that is far in advance of human thinking in every important area.
The
author writes,
“The magnitudes of the advantages are such
as to suggest that rather than thinking of a superintelligent AI as smart in
the sense that a scientific genius is smart compared with the average human
being, it might be closer to the mark to think of such an AI as smart in the
sense that an average human being is smart compared with a beetle or a worm.”
Bostrom’s
contentions are as follows: sometime in the next fifteen to ninety years,
researchers will likely produce strong artificial intelligence. This artificial intelligence will be improved constantly, either through the efforts of its programmers or, more likely, through its own capacity for self-improvement. This improvement will eventually create what Bostrom describes as an “intelligence explosion,” a leap in intelligence of unimaginable magnitude. Superintelligence will result.
This
Superintelligence will have an overwhelming advantage over the whole of human civilization. Bostrom explains how it might gain easy access to enormous financial resources and manufacturing facilities. Such an entity might be powerful beyond human comprehension. It will likely be driven to expand its intelligence further and further. Such expansion efforts could possibly bring about human extinction. Even if the results are not as dire as the end of humanity, a Superintelligence will at least have an enormous impact on the future of our species and civilization.
Bostrom
speculates about many scenarios. Many of the most likely involve a Superintelligence with seemingly godlike powers. In some of these scenarios, in its drive to become smarter and bigger, the Superintelligence might begin changing the ecosystem of the Earth so as to make human life impossible. Of one of these
horrifying possibilities Bostrom writes,
“if the AI is sure of its invincibility to
human interference, our species may not be targeted directly. Our demise may
instead result from the habitat destruction that ensues when the AI begins
massive global construction projects using nanotech factories and assemblers—
construction projects which quickly, perhaps within days or weeks, tile all of
the Earth’s surface with solar panels, nuclear reactors, supercomputing
facilities with protruding cooling towers, space rocket launchers, or other
installations whereby the AI intends to maximize the long-term cumulative
realization of its values. Human brains, if they contain information relevant
to the AI’s goals, could be disassembled and scanned, and the extracted data
transferred to some more efficient and secure storage format.”
Though
the above seems fantastic, the author bases his speculations upon the educated guesses of modern scientific and technical minds as to what
technologies are likely to be developed in the future.
Bostrom
discusses many possibilities. Some involve human extinction. However, an entire range of eventualities is explored. Some involve less maleficent ills, such as a Superintelligence dominating humanity in more benign but still stifling ways. Other scenarios are bright, with a passive Superintelligence helping humanity to avoid extinction and promoting human improvement.
But
the author warns,
“Before the prospect of an intelligence
explosion, we humans are like small children playing with a bomb.”
The
author goes into a lot of detail about the current state of artificial intelligence, its technical evolution, the revolution that is likely to occur after strong artificial intelligence is developed, as well as post-Superintelligence scenarios.
Bostrom
does devote a lot of thought to solutions. He concedes that the development of Superintelligence is likely inevitable. However, he explores numerous possibilities as to how it can be developed in order to avoid pernicious outcomes. The author digs deep into both the technological and the philosophical issues involved in creating favorable outcomes.
There
is a lot more to this book than my summary can do justice to. Bostrom has a keen
mind and takes the reader down all sorts of interesting scientific,
technological and philosophic paths.
I
think that it is important to remember that those who have attempted to forecast
the future, even those who are knowledgeable, have often been proven wrong. However,
based upon the serious and hardheaded way that this topic is explored, and the fact that these concerns are being taken seriously by bright people who understand these subjects, these ideas need to be
carefully considered. There is much to ponder here.
I
found both the hard technology and the predictive and philosophical musings
contained within these pages fascinating. At the very least it is an excellent
primer on the state of artificial intelligence research and its future
development. I strongly recommend this book for anyone interested in this type
of technology, the future of humanity, or the state of the world in general.
Nick Bostrom’s website is full of
interesting thoughts on ethics, science, philosophy, the future of humanity,
and all sorts of other topics. It is here.
Alas this sounds like a book that could go way over my head. I'm sure that it would appeal to Mr T though.
Hi Tracy - If Mr. T did read it I would love it if he put up commentary on your blog.
Hi Brian. What an interesting book and a very good review on it. While reading your post I wondered if these "super-intelligent" beings would decide that they weren't created but simply "happened" and then evolved over several millennia?
Have a great week and thanks for letting me know about all these books out there!
Hi Sharon - Though I disagree with you on the origins of humanity, that was very funny :)
Brian Joseph, this sounds fascinating though complex. Excellent commentary!
Thanks Suko - At times this was a bit complex but it was almost always readable.
Fascinating post, Brian. But also scary. I studied psycholinguistics at the uni, so I might be able to follow, but I don't think this is for me. I'm glad I have a very good impression thanks to your post.
It sounds as if he was a very careful thinker.
This sounds like a fascinating book and your review of it is excellent. I appreciate your observation that those who attempt to predict the future "have often been proven wrong". I think of the "Limits to Growth" of 1972 and its complete failure to provide a scientific basis for predicting future growth; but citing that as an example is unfair. There are certainly good reasons to be cautious when dealing with the advent of AI and the potential of strong AI, much less "superintelligence".
Perhaps a better comparison would be to remember the critics of the development of the atomic bomb and the concomitant possibility of the self-destruction of humanity (which fortunately has not yet occurred). In our day it is reassuring to hear that there are serious scientists and technological leaders who are concerned about the potential dangers of strong AI (Elon Musk, an entrepreneur whom I admire, is among them). I would hope that with appropriate safeguards we can develop AI and beyond in a way that will benefit mankind.
Hi Caroline - I think that the technical aspects of this are not so bad. Even if some of it goes over a reader's head (as a little bit went over mine), the main points are fairly easy to grasp.
This book is concerning, but at least the author does suggest solutions.
I find him to be both a careful and a lively thinker.
Thanks James - Elon Musk is indeed another smart and prominent person who is very concerned about this issue.
One thing that I like about Bostrom is that he considers multiple possibilities. I could picture him writing a book at the beginning of the Nuclear Age where he maps out one possibility in which the world powers successfully avoid a nuclear war, while also describing the nightmare scenarios.
I need to read this. I think it's fairly inevitable that humans will be so stupid as to push the envelope far enough to allow superintelligent 'entities' to gain a foothold and then, maybe, take over. It seems to me that a lot of the 'probabilities' haven't happened - yet! -, but a lot of them have. Who in the early 20th century would have thought that we'd be using the internet and mobile phones and GPS controlled cars and taking pictures of Pluto and cloning animals? It's inevitable that science will push the envelope as far as it can, because that's what scientists do. I just hope that people are smart enough to listen to the warnings. When Hawking and Gates register their concern, I think we should all be concerned. Thanks for writing about the book. I'll have to try and track it down.
Hi Violet - Indeed anyone who knows anything about history knows about technology leading to great harm. On the bright side, sometimes we figure out ways to prevent it from destroying us, as evidenced by our not having used nuclear weapons since 1945.
I did not go too deeply into Bostrom's solutions as they are complex both philosophically and technically, but at least he offers them.
Hi Brian, this book sounds interesting and like food for thought. The idea of artificial intelligence turning into superintelligence is definitely a creepy one. This reminds me a bit of the film I, Robot.
In this digital era we really depend on things like our computers, smart phones, GPS, etc. on a daily basis. I no longer take printed-out map directions from MapQuest like I used to; now my phone has GPS that directs me if I need it. We depend on AI without even giving it a second thought, to the point where it can make us lazy.
Great review as usual.
I agree this book seems to provide a lot of food for thought and seems timely with all of the discussion (or panic) in the media about intelligent robots taking over human jobs. I haven't thought too much on the subject, but have always assumed that AI would eventually play a role because technology has so rapidly become integrated into our daily lives.
Hi CJ - Computers and robots are definitely having an impact on society.
Bostrom does point out however that a Superintelligence is likely to be something totally different from anything that we have ever seen before. He does discuss possible scenarios where a profusion of AIs take the vast majority of human jobs. Interestingly, these scenarios vary from catastrophic to very positive.
Hi Naida - I, Robot and lots of other films and television shows have explored the concept of Artificial Intelligence. Bostrom points out however that most of these are not depictions of Superintelligence.
Such an entity is actually very difficult to imagine.
Ouch. A robot or computer will be running my life. Don't think I would want that to happen. I'd rather stick with human frailty.
ReplyDeletehi brian
This sounds fascinating.
Hi Gary - It is fascinating from both a technical and a philosophical standpoint.
Hi Harvee - Not just that, but the book covers certain scenarios where even a "benign" AI might run the future of humanity in an undesirable way.
What a fascinating, yet totally frightening, concept this is, Brian! Your excellent (as always!) review of this book has given us, your readers, a broad scope on the subject, and it has been very enlightening, as well.
As you know, I'm an ardent science fiction fan, and a diehard Trekkie. Still, I must confess that some of the bleak future scenarios proposed by Bostrom make me nostalgically wish for simpler times..... Ironic, I know. But I remember that, even on "Star Trek", I disliked the idea of books being available only on computers, and lo and behold....that was incredibly prophetic! I also found HAL, the computer in the movie "2001: A Space Odyssey", chilling in the extreme.
So, while all this technology, and the very idea of Superintelligence, does indeed fascinate and intrigue me, it also frightens me. I find it highly ironic that there's even the POSSIBILITY that humanity may contribute to its own demise through its own brainchild -- an intelligence that surpasses its own. Although, according to one of the quotes from the book that you included with your review, the Superintelligence will be equivalent to that of an average human in relation to a beetle or a worm, this is hardly comforting news. After all, beetles and worms are very primitive forms of life. So this means that we humans might become like bugs to this Superintelligence, easily crushed beneath the weight of its superior capacities.
We definitely will need "The Terminator"! Lol.
I wonder if there's any sort of "underground movement" among scientists and philosophers to sabotage the development of this Superintelligence. If I were a scientist, or had some sort of technological expertise, I would most definitely join such a movement! I'm all for progress, but, when the survival of the human race is at stake, I draw a very hard and definite line.
Another issue that arises -- perhaps not even considered by the author -- is this: what becomes of human creativity, then? Would this Superintelligence have the ability to write sublime poetry, or create artistic and/or musical masterpieces?
And yet another consideration: what, then, becomes of the human soul? I firmly believe we all have one. But this artificial "creature" can't possibly have a SOUL. And thus, as a Christian, I would also be opposed to its development. To me, this would be highly unethical, and even blasphemous.
Of course, you have mentioned other, more benevolent scenarios in your review, but I don't think developing such a superior, monolithic, intelligence is worth the risks -- at all!
I'm adding this book to my Goodreads TBR. Now that I've "revived" my literary fiction/nonfiction blog, I will certainly be wanting to read and review it, in spite of my lack of knowledge of technological and scientific matters.
Thank you so much for such a thought-provoking review!! : )
Thanks Maria. Bostrom is a thoughtful philosopher and he does touch upon the issues involving human creativity and art.
There are some folks talking about not developing AI at all because of the dangers, but as Bostrom points out, trying not to develop a technology is tough to do. Also, if it does get developed, it might be better if ethical scientists and leaders are involved to guide the technology towards a safe outcome.
In terms of the human soul, I tend to believe that "consciousness" is really the only thing at play. On a related matter, Bostrom's "safe" outcomes involve developing an AI with a strong moral and ethical sense based on human consensus. Also important is endowing it with a strong commitment not to interfere with human moral development.
Hi Brian --
Not sure what to think about AI but seems scary. Wondering what the timeframe might be till they're everywhere. I know that companies are hoping to use robots to replace drivers at mines north of here around 2020. I'm sure replacing human jobs would be major. This topic reminds me a bit of the movie Ex-Machina. Have you seen it? You might like it. Enjoy your week.
Hi Susan - I saw Ex Machina. Like many other films, I think it was about artificial intelligence. As Bostrom points out, Superintelligence will be very different.
Throughout recent history machines have replaced human labor. The results have been painful displacement for some workers but an overall increase in employment and an overall increase in standards of living. With that, Bostrom makes an interesting argument in this book about how that might be different with the coming of Artificial Intelligence.
After reading your review, I read a news article about Gates and Hawking sounding their alarm. !!!!???? I'm not sure I would understand it all, but seriously would want some ethical inquiry into whether we would want to proceed with militarizing this technology.
Hi Heidi - If it is the article that I saw, it was about weapons systems and what Bostrom would label as weak artificial intelligence. There does need to be both ethics and common sense applied to this.
Bostrom does explain how a Superintelligence could easily become a threat to humanity even without human-designed weapons systems.
This sounds so intriguing. I'm afraid that some of it would be way over my head though LOL Thanks for sharing.
Thanks for stopping by, Diane.
Most of the book is accessible even if one is not technical. The main points are easy to get. The technical areas can be skimmed over.