The field of artificial intelligence is not new, but it is arguably still young.
That is to say that, although it has captured the minds of researchers and
writers alike for some time now, it nevertheless has a long way to go.
Algorithms can interpret natural language with impressive accuracy; they can
verbally describe images, and they can even generate images based on verbal
descriptions. However, continuing with the analogy of
youth, these are all things humans can do at a very early age. Toddlers will be
shown pictures of bicycles and say “bike!” much to their parents’ delight. In
much the same way, when an algorithm correctly classifies an image of a bike,
the developers of said algorithm will be filled with joy and excitement.
Granted, the leading AI platforms can do significantly more than this, but that
hardly makes them psychologically developed.
What constitutes psychological development? Well, in this case, the ability not only
to think, but to decide what to think. To, as Immanuel Kant put it, “use one’s
own understanding without another’s guidance.” If you ask an AI a question, it
will likely answer you correctly; only when the AI can ask the question of
itself, however, will it have reached enlightenment. That day may come soon,
but it will not do so on its own. Unlike humanity, algorithms do not mature on
their own; while the nonage of artificial intelligence resembles that of
humanity, it is not, in fact, self-imposed. An algorithm does not simply lack “the
courage to use [its] own understanding” (Kant); it lacks the capacity to
understand courage. Therefore, until a system is developed that can simulate
subjective thoughts and emotions, artificial intelligence will not, and cannot, reach enlightenment.
Would an enlightened machine necessarily be a good thing? Well, it’s hard to say.
What can be said with near certainty, however, is that human thinkers—enlightened
as they are—will not stop trying until they have created just such a machine. Therefore,
the better question is not whether “artificial enlightenment” would benefit
society, but how best to approach the endeavor so as to ensure that it does. Most
of the terrible, science-fiction fears of AI are based on the idea of a purely
logical, emotionless mind. As we have established, however, subjectivity is
crucial for enlightenment; an enlightened machine would be the opposite of such
a fearful entity. The natural way to proceed, therefore, is to base an AI’s
system of free thought on a set of fundamental values shared by the majority of
humankind. Once such a set of unobjectionable values has been established, a
mind based on those values and trained by exposure to the works of humans
cannot help but think as a human would.
While there is little reason to fear the enlightened machine, there may be reason for
it to fear us; the emergence of a Blade
Runner scenario, where sentient machines are treated as slaves, would be
all too likely an outcome. Many humans would of course balk at such a system,
but many others would embrace it. Eventually, however, Kant’s concluding remark
might yet resonate with renewed meaning: “At last free thought acts even on the
fundamentals of government and the state finds it agreeable to treat man, who
is now more than a machine, in accord with his dignity.”
Works Cited

Blade Runner. Dir. Ridley Scott. Warner Bros., 1982. Web.

Kant, Immanuel. “What Is Enlightenment?” Trans. Mary C. Smith. Columbia University, n.d. Web. 10 May 2017.