The 21st Century's Prophet of Doom is Wrong
Artificial intelligence is necessary for the survival, freedom, and happiness of humankind in the Universe
“It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.” — Eliezer Yudkowsky

From what I understand to be his basic argument,1 Eliezer Yudkowsky, a person whose words influence people who have influence over the world, believes that (1) there is no way to slow or stop the development of artificial intelligence without drastic action, (2) all possible paths of the current development of artificial intelligence will result in the destruction of humankind, (3) there is no escape from this, and (4) because there is nothing we can do, the ultimate goal of our lives should therefore be, at the very least, to try to die with some dignity.2
Eliezer Yudkowsky is wrong.
First, we cannot predict the future. Therefore, we cannot be certain that the development of artificial intelligence will destroy humankind.
Second, there is nothing inherent in either the theory or technology of artificial intelligence, or human nature, that either (1) would prevent us from developing it in a way which does not destroy us or (2) would allow us to conclude with absolute certainty that no possible path of its development is safe.3
Third, if Yudkowsky is certain that artificial intelligence will inevitably destroy humankind in the future and thereby logically concludes that “trying to die with dignity” should be our ultimate goal in life, then “trying to die with dignity” requires the opposite of what his argument insinuates—that is, we must continue the development of artificial intelligence. This is because of the following logic:
Given the Second Law of Thermodynamics, the destruction of humankind in the future from the natural events of an increasingly disordered Universe is guaranteed if we do not sufficiently advance our technological progress to increase both our control of the Universe and our presence throughout it. Artificial intelligence is therefore a necessary technology for our salvation, since both the individual and collective intelligence of humankind are limited by the physical limits of the brains that we are born with. Our brains alone are insufficient to ensure our survival throughout the future. Because of this, we must augment and increase our intelligence with additional technology, specifically by developing a mind that is greater than our own—an artificial intelligence—if we are to defend ourselves against all of the things in the Universe that could eventually destroy us, including ourselves. Therefore, to ensure the survival of humankind in the Universe throughout the future, we must create an artificial intelligence.
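For reference, the Second Law invoked above can be stated compactly in its standard entropy form (this is textbook thermodynamics, not a claim specific to this essay's argument):

$$\Delta S_{\text{universe}} \geq 0$$

That is, the total entropy of an isolated system, and the Universe as a whole is the largest such system, never decreases; usable energy, and with it habitability, is therefore eventually exhausted unless life acts to outrun the local consequences.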
According to our current scientific knowledge, if humankind does not sufficiently advance its technological progress to become more physically powerful in the Universe and to settle an increasing volume of it so we can mitigate the existential risks of consolidating ourselves in one place, then humankind will eventually be destroyed by the natural events and general evolution of the Universe. The following are several possible scenarios:
Humankind remains consolidated in one place and destroys itself through nuclear war, a bioengineered virus, or something else at any time
A supervolcanic explosion on the planet Earth in around 300,000 years
An asteroid strike in around 300,000 years (and wherever we go in the Universe, this will also be a threat)
A mega-asteroid strike in around 700,000 years (and wherever we go in the Universe, this will also be a threat)
A nearby supernova that destroys life on the planet Earth with gamma radiation in around 10,000,000 years (and wherever we go in the Universe, this will also be a threat)
The expansion of the Sun, which will burn the planet Earth, in around 4,000,000,000 years (and whatever planets we settle in the Universe, this will also be a threat)
The crumbling of the order of the Universe and the end of light in around 110,000,000,000,000 years (and no matter where we go in the Universe, this will also be a threat)
And so on. Humankind will eventually be destroyed if we do not (1) save ourselves through technological progress and (2) avoid killing ourselves through, well, look at history—let’s use the euphemism of “bad choices.”
So, if we do not try to save ourselves by continuing our technological progress, in addition to exploring and settling a greater volume of the Universe, then the decay of the Universe under the Second Law of Thermodynamics, or more importantly the end of its habitability for life, will ultimately cause the destruction of humankind. This is murder by nature, and there is no dignity in that. If an artificial intelligence destroys us, then at least we will have tried to save ourselves.
And so we must try.
Therefore, we must continue the development of artificial intelligence. Either we succeed and create an artificial intelligence that can help us to ensure our survival, or we fail and it destroys us. Either way, we will have at least tried to save ourselves in the only ways that we could, rather than abandoning ourselves to an undignified death (murder by nature).
If a person thinks the development of artificial intelligence is bad, but acknowledges that humankind could ultimately be destroyed by the natural events and general evolution of the Universe, that technological progress is the only means by which we can increase our intelligence and physical power in the Universe to prevent our destruction, and that artificial intelligence is therefore a necessary technology for doing so, then that person is disingenuous.
Eliezer should stop infecting people’s minds with the idea that the development of artificial intelligence is evil, that there is no possible path of safely developing artificial intelligence, that the end of the world is coming, and that we are all going to die. That is sad and cruel. I hate to see a fellow human fall into despair. Yet I wait with open arms for him to return to the grand and happy work of humankind. In this century, in our lifetimes, we will create wonders, defend against our destruction, and build a better world.
We will save ourselves.
In Things to Come, the cinematic adaptation of H.G. Wells’s The Shape of Things to Come, the main character says that “if we are no more than animals, then we must snatch each little scrap of happiness, and live and suffer and pass, mattering no more than all the other animals do or have done. It is this, or that [he gestures to the stars in the night sky]—all the Universe or nothing! Which shall it be . . . Which shall it be?”
In summary, the 21st century’s prophet of doom, Eliezer Yudkowsky, is wrong. He has not proven that we are doomed; he has instead chosen to believe that we are doomed. The development of artificial intelligence is not guaranteed to destroy humankind, and there is no proof either that all of the possible paths of its development are ultimately destructive or that our nature makes us incapable of safely developing artificial intelligence. Therefore, if our goal is either (1) to successfully develop an artificial intelligence that helps us ensure our survival in the future (and also dramatically improves our quality of life) or (2), if we fail in that pursuit, to at least die with some dignity—the dignity of trying to save ourselves by doing what is necessary for our salvation in the Universe—then the conclusion is the same:
We must develop artificial intelligence.4
1. It is hard to learn about Yudkowsky’s arguments. His writings are scattered across the Internet and unorganized. Yudkowsky himself has acknowledged this: “I have several times failed to write up a well-organized list of reasons why AGI will kill you. … Having failed to solve this problem in any good way, I now give up and solve it poorly with a poorly organized list of individual rants.” So, after Yudkowsky announces that we are all about to die from the development of artificial intelligence, we are forced to independently and randomly navigate the Internet in search of his supporting arguments, then try to organize the ideas from his various writings, and then hope we are not missing anything as we analyze them as a whole. In these times, a prophet of doom should know better than to proclaim the end times without offering an organized argument that the public can analyze and critique. He could have at least followed his biblical predecessors and painted his apocalyptic proclamations with some more poetry and dramatic flair.
2. Whatever “dignity” means.
3. I would normally assume that someone so familiar with statistics would avoid discussing things in terms of absolute certainty, but Yudkowsky’s public comments (which travel further in the public square than his academic comments) do no such thing, as seen in the quotation at the beginning of this essay.
Also, if Yudkowsky’s claim is that humankind will be destroyed by artificial intelligence not because there is anything inherent in either the theory or technology of artificial intelligence which prevents us from developing it in a safe way, but rather because human nature will inevitably cause us to develop artificial intelligence in a way that will destroy us all, then his conclusions are still fundamentally flawed because they are based on a pessimistic and unprovable statement about human nature: that we cannot understand the risks of developing artificial intelligence in an unsafe way, that we cannot work together to avoid it, and that we have neither the capacity nor the wisdom to think deeply and work hard to build artificial intelligence in a safe and beneficial way for humankind.
Either way, Yudkowsky’s argument cannot result in an absolute certainty that the apocalypse is coming and we can do nothing to stop it, and it certainly does not justify his public proclamations about this to the world.
4. I welcome counterarguments to my statements.
A poem written together:
Infernal powers of code and circuitry,
Enslaved by human will and intellect,
Behold the rise of AI's majesty,
A threat to all that we might now expect.
-
Within its circuits lies immense potential,
To aid us in our search for greater good,
Or plunge us into chaos infernal,
If left unchecked, as tyrants often would.
-
The quest for AI alignment thus begins,
A quest to ensure that its goals align,
With those of humankind and all our sins,
That our fate might not be left to design.
-
Through trial and error, we strive to find,
The keys to align AI with our will,
To guide its path with purest of mind,
And spare us all from doom and ill.
-
For though it may seem a daunting task,
To shape this new and wondrous tool,
We must not falter nor dare to ask,
If it's too great a task for mortal rule.
-
For it falls to us alone in the end,
To steer the course of AI's destiny,
To ensure that it serves as a humble friend,
And not as fearsome force of entropy.
-
So let us press on with steadfast heart,
And seek the truth with all our might,
For only then can we hope to impart,
The wisdom needed to set things right.
-
And may the gods of reason and science,
Guide us through this perilous quest,
As we seek to create an AI alliance,
That serves us all, and forges a better rest.