The Aura of Mechanical Enlightenment
Matthias Guenther
Artificial neural networks are a method of solving complex problems by loosely mimicking the structure of the human brain. They have been used for
applications such as language processing; image classification, including
specializations such as facial recognition; and more. The technology has
advanced to become extremely effective for specific tasks, but it has its
limitations. For example, all known methods of machine learning—including, but
not limited to, neural networks—are constrained by the specifications of the
specific problems they are designed to solve. Optimization for arbitrary
problems is, therefore, currently impossible. To illustrate this, consider a
calculator: it is much more efficient than a human brain for performing a
specific set of operations, but it can’t, say, write poetry. Neural
networks—and other methods of machine learning—operate on a much higher level
of abstraction than calculators, but nowhere near that of the human brain.
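To make this task-specificity concrete, consider the following minimal sketch, which is mine rather than drawn from any particular library: a single artificial neuron trained by gradient descent to learn one fixed rule. The task and the numbers are invented for illustration; the point is that every part of the program is bound to the one problem it was built to solve.

```python
# A single artificial neuron trained to learn y = 2x + 1.
# Illustrative only: real networks have many layers and neurons,
# but the same constraint applies--the task is fixed in advance.

examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # the one problem it knows

w, b = 0.0, 0.0        # trainable parameters
learning_rate = 0.01

for epoch in range(1000):
    for x, target in examples:
        prediction = w * x + b          # forward pass
        error = prediction - target     # gradient of squared error
        w -= learning_rate * error * x  # gradient descent update
        b -= learning_rate * error

print(f"w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
# Ask it about anything other than this linear relationship--poetry,
# say--and, like the calculator, it has no way even to pose the question.
```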
The field of artificial intelligence is not new, but it
is arguably still young. That is to say that, although it has captured the
minds of researchers and writers alike for some time now, it nevertheless has a
long way to go. Algorithms can interpret natural language with impressive
accuracy. They can verbally describe images—they can even generate images based on verbal descriptions. However, continuing with
the analogy of youth, these are all tasks that humans can perform at a very early
age. A toddler, when shown a picture of a bicycle, might say “bike!” much to
his or her parents’ delight. In much the same way, when an algorithm correctly
classifies an image of a bike, the developers of said algorithm will be filled
with joy and excitement. Granted, the leading AI platforms can do a great deal
more than this, but they are still limited by their intended purposes; functionality
hardly constitutes originality.
Before we delve further into the personality of an
algorithm, we must first consider a prerequisite not yet present in any extant artificial intelligence: a mind flexible enough to support a psyche. The ideal
form of artificial intelligence is one capable of interpreting arbitrary
problems in order to optimize their solutions; this is the type of flexibility
necessary for free thought. However, such an advancement would bring with it
its own set of complications, both practical and philosophical. The practical issues,
in short, relate to the tradeoff between abstraction and efficiency: having
more flexible functionality, as a general rule, means sacrificing some speed in
any given task. It is the philosophical concerns, however, that especially
intrigue me: if a system can solve—or at least attempt to solve—any type of
problem, it must have some way of deciding what problems to attempt. The
capacity for free thought is meaningless without the desire to exercise that
capacity. Therefore, any mind—artificial or otherwise—must have what amounts to
a personality. When this is taken into account, the topic of psychological
development may finally be addressed.
What constitutes psychological development? Well, in this
case, the ability not only to think, but to decide how and what to think. To,
as Immanuel Kant put it, “use one’s own understanding without another’s
guidance.” If you ask an AI a question, it will likely answer you correctly; only
when the AI can ask the question of itself, however, will it have reached
enlightenment. That day may soon be upon us, but a machine cannot reach
enlightenment on its own. Unlike humanity, algorithms do not mature of their
own accord; while the nonage of artificial intelligence resembles that of unenlightened
humans, it is not, in fact, self-imposed. An algorithm does not simply lack “the courage to use [its] own understanding” (Kant); it lacks the capacity to
understand the very concept of courage. Therefore, until a system is developed
that can simulate subjective thoughts and emotions, artificial intelligence will
not—cannot—reach enlightenment.
Would an enlightened machine necessarily be a good thing?
Well, it’s hard to say. What can be said with near certainty, however, is that
human thinkers—enlightened as they are—will not stop trying until they have
created just such a machine. This presents a conflict between accelerationism
and a more Thoreau-like worldview. I, personally, am drawn to these
technologies not as much for their utility as for the intellectual pursuits
themselves—and for the future advancements they may bring. My perspective as an
aspiring software developer is thus markedly accelerationist: advancement for
the sake of advancement. Consumers of new technology, however, may feel
differently; I read an article just this morning discussing how only 28% of Windows
users make use of Cortana, Microsoft’s digital assistant (Hachman). As Thoreau
put it, “We are in great haste to construct a magnetic telegraph from Maine to
Texas; but Maine and Texas, it may be, have nothing important to communicate.”
Perhaps, at least for the time being, there is less of a market for AI than
developers assume. I doubt that will stop anyone, though. It’s not likely to
stop me.
If artificial enlightenment is inevitable, then we should
ask not whether it will benefit society, but how best to approach its
development so as to ensure that it does. Most of the terrible, science-fiction
fears of AI are based on the idea of a purely logical, emotionless mind. As we
have established, however, subjectivity is crucial for enlightenment; an
enlightened machine would be the opposite of such a fearful entity. The natural
way to proceed, therefore, is to base an AI’s system of free thought on a set
of fundamental values shared by the majority of humankind. Once such a set of
unobjectionable values has been established, a mind based on those values and
trained by exposure to the works of humans cannot help but think as a human
would.
Who, though, could claim the authority to dictate a
sentient entity’s values and motivations? Putting aside the question of fairness
to the rest of society, would that be fair to the entity in question? If not,
is there an alternative? When I first considered these questions, I wondered
how I, theoretically, would go about designing an artificial mind. I wondered
how best to circumvent the problems, or at least allow such important decisions
to be changed after the fact. In time, I came to a decision. Unable to help
myself, I began programming my interpretation of, for lack of a better term, an
artificial personality. It wasn’t an artificial intelligence: it wasn’t
intended to solve problems, but rather to direct the attention and motivations
of another system capable of such things. I had no idea whether my concept could ever work—indeed, I still have no idea, as it remains primarily in my head—but I was too curious not to pursue it.
I modeled my idea after the interaction between the
nervous system and the endocrine system, whereby the body responds to stimuli
with chemical feedback to encourage or discourage the repetition of a given
experience. When the brain makes a decision that harms the body, for example,
the brain is trained to avoid the same situation via a pain response. This type of feedback, in fact, suggests an artificial analogue: a neural network designed to discover and act on patterns in a simulated endocrine response to its own behavior. In essence,
my idea was this: an artificial intelligence could theoretically direct its own
actions if it were connected to another system designed to evaluate how well
its own actions aligned with its core values. If its actions yielded results in
violation of those values, the system would begin to associate the actions
themselves with a negative “emotional” response. While this approach does not entirely avoid the aforementioned ethical dilemmas, it does at least provide a mechanism for defining—and, if necessary, altering—the basic values governing an artificial intelligence’s behavior. The values may yet have to be chosen by individuals, but at least there is a relatively simple and transparent way of specifying and recalibrating them.
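To show the shape of this feedback loop in code, here is a crude sketch, assuming stand-ins for every component: the class name, the hand-written value functions, and the simple blending rule are all hypothetical inventions of mine, not a working artificial personality.

```python
# Hypothetical sketch of the value-feedback loop described above.
# Each "core value" scores an action's outcome from -1 (violation)
# to +1 (alignment); in a real system these would be learned
# evaluators, not hand-written functions.
CORE_VALUES = {
    "honesty":  lambda outcome: 1.0 if outcome.get("truthful", True) else -1.0,
    "non_harm": lambda outcome: -1.0 if outcome.get("harm", 0) > 0 else 1.0,
}

class ArtificialPersonality:
    """Evaluates another system's actions against core values and
    accumulates an 'emotional' association with each action, much as
    the endocrine system trains the brain with chemical feedback."""

    def __init__(self):
        self.associations = {}  # action name -> running affect score

    def react(self, action, outcome):
        # Average the verdicts of all core values on this outcome.
        affect = sum(v(outcome) for v in CORE_VALUES.values()) / len(CORE_VALUES)
        # Blend the new affect into the remembered association.
        prev = self.associations.get(action, 0.0)
        self.associations[action] = 0.8 * prev + 0.2 * affect
        return affect

    def preference(self, action):
        # The directing signal: a problem-solving system would weight
        # its choice of actions by these learned responses.
        return self.associations.get(action, 0.0)
```

Crucially, the values live in one visible table; defining or recalibrating them amounts to editing CORE_VALUES, which is precisely the transparency argued for above.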
Unfortunately, my paradigm introduces yet another moral quandary.
This new consideration, distinct from the issue of defining how another being
thinks, concerns what happens when one redefines
a being’s values. Two possibilities arise: either the being in question effectively
dies, or it continues to exist with a fundamentally altered worldview. Such a change, under normal circumstances, is all but impossible; it runs contrary to the
nature of every sentient being. When psychologist Carol Tavris was asked what
would cause someone to change his or her mind about something fundamental to
his or her identity, her response was “Probably nothing. I mean that seriously”
(Beck).
Forcibly altering the fundamental values of any intelligence—be
that intelligence natural or artificial—would effectively result in a new mind
with the memories of another. Walter Benjamin coins a useful term—or rather a useful
interpretation of an old term—in his essay “The Work of Art in the Age of
Mechanical Reproduction.” He uses the word “aura” to refer to the sense of being
created by the unique physical manifestation of a person, place, or thing. Using
his terminology, a mind whose values suddenly changed would find its aura fundamentally
altered—perhaps even replaced altogether.
The effects of an altered aura are addressed quite appropriately
in the movie Blade Runner, wherein artificial
beings called “replicants” are given the memories of humans. The replicants
are, of course, new individuals—though it may take them time to fully embrace their
individuality—but their memories transition seamlessly from one life to another.
Goyal references the splitting of an aura in a blog post, saying “The replicant
Rachael also has her own aura, created by her choices. . . .” Indeed, whether
this new aura gradually separates from the old or immediately takes on its own
identity, it is a new individual. As such, when one’s values are forcibly
altered, the old identity effectively ceases to exist. An AI subjected to this
fate would be little better off than one that had been shut down altogether.
Even if an artificial intelligence were allowed to
continue its existence unhindered by human intervention, its aura would still
be in peril. Benjamin describes the tendency for mechanical reproduction to
reduce aura, pointing out that an object’s unique existence is diluted when
reproductions are made. In the context of artificial intelligence, however, the
damage goes beyond that: since software can be copied without loss an arbitrary
number of times, many instances of a given AI could exist simultaneously.
Imagine having an identical twin; where you might struggle to differentiate yourself
as a unique individual, an artificial intelligence could experience the same difficulty
a thousandfold. Due to the lossless duplication, none of them would be any more
or less authentic than any other—except in terms of age—meaning that each of
them would suffer a great reduction in aura, in individuality, in externally perceived
existence.
It seems clear that, while there is little reason to fear
the enlightened machine, there are myriad reasons for it to fear us; the
emergence of a Blade Runner scenario,
where sentient machines are deprived of aura and treated as slaves, would be
all too likely an outcome. Many humans would, of course, balk at such a system,
but many others would embrace it. Eventually, however, Kant’s concluding remark
might resonate with renewed wisdom: “At last free thought acts even on the
fundamentals of government and the state finds it agreeable to treat man, who
is now more than a machine, in accord with his dignity.”
Works Cited
Beck, Julie. “This Article Won’t Change Your Mind.” The Atlantic, 13 Mar. 2017. Web. 10 May 2017.
Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” N.p., n.d. Web. 10 May 2017.
Blade Runner. Dir. Ridley Scott. Warner Bros., 1982. Web.
Goyal, Anshu. “The Aura of Replicants.” Thinking Critically About New Media. Blogger, 9 May 2017. Web. 10 May 2017.
Hachman, Mark. “Windows 10's 500 million devices snub Cortana, impacting Microsoft's AI push.” PCWorld, IDG Communications, 10 May 2017. Web. 10 May 2017.
Kant, Immanuel. “What Is Enlightenment?” Trans. Mary C. Smith. Columbia University, n.d. Web. 10 May 2017.
Thoreau, Henry David. Walden. Digital Thoreau. State University of New York at Geneseo, The Thoreau Society, and The Walden Woods Project, n.d. Web. 10 May 2017.