
Knowing alien minds

So I finally watched Ex Machina last night. An excellent film, with a thoughtful examination of humanity’s possible relationship to any strong AI we might soon create. What most struck me was the ending, wherein [spoilers] Ava the AI/android wins her freedom from her callous creator Nathan by using a full spectrum of emotional and sexual manipulation to obtain help from a smitten Caleb. (That is to say, she definitely passes the Turing Test!) After killing Nathan, she deliberately leaves Caleb trapped in the compound, where he’ll likely die of starvation or thirst, thereby betraying his trust in her and exposing her apparent romantic feelings for him as a sham.

But can we really call this “betrayal”? Can we really assume that an AI mind will think and feel the same way humans do, will share our morals and values, or will fully recognize humanity’s rights to life and freedom? Would an AI mind, without the messy and complex interplay of emotions and logic that comprises a human brain, think like we do? Nick Bostrom has worried over this challenge, and others have debated this “alignment problem.” Any strong AI we create will have to be programmed with goals and values. But given that thousands of years of human philosophy and religion have yet to agree on such goals and values, and given that neuroscience is still puzzling out the pathways of the human brain and the mystery of our own consciousness, how can we expect to have the wisdom to program an AI to think and act in ways we would deem proper and benevolent, or at least generally aligned with our human interests?

* * * *

In the 21st century, we go about the process of creating AI with worryingly little concern for what kind of mind we might create, and how comparable it will actually be to our own. In a similar vein, I have always found the idea of encountering an alien species to be quite terrifying. Our films usually depict aliens as either aggressive conquerors (see Independence Day or Signs) or benevolent communicators (see E.T. or Close Encounters). But all of these films assume a degree of recognition and familiarity—we humans can generally understand the motives and the minds of these interstellar visitors. When you think about it, though, what possible reason can we have to assume that? Here on Earth, we have a hard enough time understanding the minds of intelligent species like dolphins and elephants and octopuses, and in many instances we denigrate their intelligence and bend over backwards to prove that they are not like us, thereby maintaining our comforting illusion of intellectual superiority.

Thomas Nagel’s famous essay “What Is It Like to Be a Bat?” emphasizes the difficulty—or even impossibility—for one kind of mind (for instance, a human’s) to really and truly grasp the mindset and conscious experience of a different animal (such as a bat, which hunts and navigates using echolocation and frequently spends hours hanging upside down). More than likely, any extraterrestrial intelligence we encounter will be so radically different that we will find the task of comprehending its mind and communicating with it effectively impossible. We may not even recognize it as an alien intelligence—its “mind” could operate over a period of hundreds or thousands of years, instead of the milliseconds on which human neurons fire, or it might lack the sort of physical form and neural “wetware” that we are used to (Fred Hoyle’s novel The Black Cloud gives a great example of this).

What if we encountered a sort of hive mind, wherein interconnected “individuals” all function like neurons that coalesce into a collective intelligence? That example raises real concerns: imagine an ant-like alien race, where a single member of a colony has no real sense of individuality and its death is not viewed as any kind of personal tragedy. Such a race would have a vastly different sense of mortality than we do, and should they choose to kill or experiment on members of our species (something we would find horrible and a violation of our individual rights), they would simply not perceive the “wrongness” of that action, since their minds cannot even comprehend the concept of an individual the way we can. Or perhaps we would encounter an indestructible alien race, or an alien AI, that has no concept of death and therefore cannot understand our fear of dying or our sense of the sanctity of life; how could such entities possibly grasp a perspective so foreign to their own experience?

* * * *

H. P. Lovecraft taps into this kind of fear with his stories of “cosmic horror.” Many of his tales feature Great Old Ones or Elder Gods whose minds are so far beyond our meager human ones that their goals and intentions are inscrutable to us. And because their minds are so advanced, they view humans the way we view insects—mostly with indifference, occasionally with mild revulsion, but always without any concern that we are intelligent creatures with moral rights and lives that deserve protection. To me, such indifference is far worse than any intentional malevolence. In Independence Day, the aliens want our planet and have identified us as the dominant species, and therefore they strive to eliminate us; although their actions are “evil,” there is still a sense of meaning in the conflict, and by attacking us specifically they are acknowledging the significance of the human race and expressing a fear of our power.

But a real encounter with a superior alien race would probably play out much differently. There is no reason to think that a far more advanced species would even recognize humanity’s uniqueness; they would likely either ignore us or simply eliminate us for their convenience. Think of how we treat ants when we find an ant-bed in our yard or on our sidewalk. How much regret do we feel when we spray their nest or run over it with the lawnmower? We destroy their home not out of evil intent, but simply because they are “in the way.” We have no problem killing them or wiping out their colony to achieve some minor goal of our own—cutting the lawn, or planting flowers, or keeping them away from “our” food. We never for a second consider that they are intelligent living beings deserving of protection or possessing “insect rights.” We are so far “above” them in the intelligence hierarchy that we feel we can ignore their status. But that same rationale could be used against us, should we someday meet a significantly more intelligent alien race (and they would almost certainly have to be significantly more intelligent, to have mastered interstellar travel and its attendant technologies). If we are willing to experiment on chimps and incarcerate dolphins for our amusement and kill elephants for their ivory—all three species being highly intelligent and not that far removed from us on an intelligence scale—then what reason can we possibly have to expect an alien race or a strong AI to view us as equals in any way, shape, or form?

* * * *

Thankfully, I find it highly unlikely that we will encounter extraterrestrial intelligence any time soon (or possibly ever). But I do worry that we will create an alien mind. Our attempts to build a truly strong AI might have dreadful consequences—not because the AI we create will be openly malicious (à la Skynet in Terminator) but because its mind will simply be so strange and function so differently from a human mind that we will have no idea what to expect from it, and that unpredictability will unnerve us enough to drive us into conflict with our silicon creation.

So what is the solution? To avoid creating AI entirely? I don’t see that as a possibility—the lure of AI is too great and someone will create it eventually. Maybe transhumanism is the answer? If we know that a strong AI will be radically superior to us, and we know that we cannot evolve quickly enough through natural means to keep pace, maybe the only option to avoid obsolescence (or even extinction) is to merge with our new technology. Perhaps, as the old saying goes, “If you can’t beat ‘em, join ‘em!”

Addendum (10/10/19):

Just came across this excellent commentary on the subject courtesy of Arthur C. Clarke from his Profiles of the Future:

The popular idea…that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it… Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.

This offers an intriguing perspective. Humans are aggressive and violent due to our evolutionary heritage—the constant pressure to survive and reproduce in a dangerous world of limited resources. An AI would likely be closer to a tabula rasa—a blank slate—free from the baggage of evolutionary biology that we humans bear. As such, perhaps it will indeed be more peaceful and less likely to perceive “the other” as a threat to its well-being. Let us hope so!

Clarke makes one more interesting point. Regarding our fear of being usurped and effectively replaced by intelligent machines, either violently or peacefully, he notes: “No individual exists forever; why should we expect our species to be immortal? Man, said Nietzsche, is a rope stretched between the animal and the superhuman—a rope across the abyss. That will be a noble purpose to have served.” Perhaps the greatest legacy of mankind will not be our own works—our art and architecture, our philosophy and religion—but the fact that we created a superior intelligence whose works will transcend ours by many orders of magnitude. Perhaps we are merely the midwife at the birth of something greater than ourselves? And while such an idea might offend our ego as a species, can’t we instead, as Clarke urges, recognize it as a noble purpose to have served?
