Artificial neural networks are a method of optimizing solutions to complex problems by loosely mimicking the structure of the human brain. They have been used for applications such as language processing;
image classification, including specializations such as facial recognition; and
more. The technology has advanced to become extremely effective for specific
tasks, but it has its limitations. For example, all known methods of machine learning, neural networks included, are constrained by the specifications of the problems they are designed to solve. Optimization for arbitrary
problems is, therefore, currently impossible. To illustrate this, consider a
calculator: it is much more efficient than a human brain for performing a
specific set of operations, but it can’t, say, write poetry. Neural networks—and
other methods of machine learning—operate on a much higher level of abstraction
than calculators, but nowhere near that of the human brain.

The ideal form of artificial intelligence would be capable of interpreting
arbitrary problems in order to optimize their solutions. However, such an
advancement would bring with it its own set of problems, both practical and
ethical. The practical problems largely relate to the tradeoff between
abstraction and efficiency; it is the ethical problems, though, that interest
me more: if a complex system can solve—or at least attempt to solve—any type of
problem, it must have some way of deciding what problems to attempt. It must
have, essentially, a personality. Who gets to decide the values and motivations
of an optimization tool? Is that fair to society? How about to the tool itself?
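
Even at this abstract stage, the “deciding what to attempt” step can be pictured concretely. Here is a minimal sketch in Python of a selector that ranks candidate problems against a set of built-in motivations; every name and weight in it is hypothetical, chosen only to illustrate where someone’s values would get baked in.

```python
# Hypothetical motivations: weights expressing what the system "cares" about.
# Whoever sets these numbers is, in effect, deciding its personality.
MOTIVATIONS = {"novelty": 0.6, "usefulness": 0.3, "difficulty": -0.1}

# Candidate problems scored on the same (assumed) dimensions.
PROBLEMS = {
    "translate_text": {"novelty": 0.2, "usefulness": 0.9, "difficulty": 0.4},
    "prove_theorem":  {"novelty": 0.8, "usefulness": 0.3, "difficulty": 0.9},
    "sort_numbers":   {"novelty": 0.0, "usefulness": 0.5, "difficulty": 0.1},
}

def appeal(traits: dict) -> float:
    """Score a problem by how well it matches the system's motivations."""
    return sum(MOTIVATIONS[k] * traits.get(k, 0.0) for k in MOTIVATIONS)

# The system "decides" what to attempt by ranking problems by appeal.
ranked = sorted(PROBLEMS, key=lambda name: appeal(PROBLEMS[name]), reverse=True)
print(ranked)
```

With these weights, the printed ranking is ['prove_theorem', 'translate_text', 'sort_numbers']; change the motivation weights and the ranking, in effect the personality, changes with it.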

When I first considered this conundrum, I wondered how I would go about designing an
artificial mind. In time, I came to a decision and began programming my
interpretation of, for lack of a better word, a personality. Now, it should be noted that this was never intended to be an artificial intelligence per se: the goal was to design a control system for directing an artificial intelligence’s motivations. I had no idea whether the design could ever work (indeed, I still don’t, as it remains mostly in my head), but I was too curious not to pursue it.

I modeled my idea after the interaction between the nervous system and the endocrine
system, whereby the body responds to stimuli with chemical feedback to
encourage or discourage repeating a given experience. When the brain makes a decision that harms the body, for example, a pain response trains it to avoid the same situation. This kind of feedback loop maps naturally onto a neural network designed to discover and act on patterns in an endocrine-like response to its own behavior. An artificial intelligence could therefore, in theory, direct its own actions if it were connected to a system designed specifically to learn from and respond to its behavior.
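
To make the analogy concrete, here is a minimal sketch of the loop I have in mind, in Python. Everything in it is an assumption for illustration: the outcome categories, the feedback weights, and the toy learner are placeholders, not a working design.

```python
import random

# Hypothetical "endocrine" feedback weights: outcome categories mapped to
# scalar signals, analogous to pain and pleasure responses. Categories and
# values here are illustrative placeholders.
FEEDBACK = {
    "harmful": -1.0,   # e.g. a decision that damages the body
    "neutral": 0.0,
    "helpful": +1.0,   # e.g. an outcome worth repeating
}

class EndocrineController:
    """Converts an action's outcome into a scalar feedback signal."""

    def signal(self, outcome: str) -> float:
        return FEEDBACK.get(outcome, 0.0)

class Agent:
    """A toy learner that adjusts its action preferences from feedback."""

    def __init__(self, actions, learning_rate=0.1):
        self.preferences = {action: 0.0 for action in actions}
        self.learning_rate = learning_rate

    def choose(self) -> str:
        # Mostly pick the highest-preference action, with some exploration.
        if random.random() < 0.1:
            return random.choice(list(self.preferences))
        return max(self.preferences, key=self.preferences.get)

    def learn(self, action: str, feedback: float) -> None:
        # Reinforce or discourage the action based on the feedback signal.
        self.preferences[action] += self.learning_rate * feedback

# Toy environment: each action reliably produces one outcome, an assumption
# made purely so the loop below has something to learn from.
OUTCOMES = {"touch_fire": "harmful", "eat_food": "helpful", "wander": "neutral"}

agent = Agent(actions=list(OUTCOMES))
controller = EndocrineController()

for _ in range(200):
    action = agent.choose()
    agent.learn(action, controller.signal(OUTCOMES[action]))

print(agent.preferences)  # "touch_fire" drifts negative, "eat_food" positive
```

The separation is the important part: the agent never reads the feedback table directly. It only experiences the signals, just as the brain experiences chemical feedback rather than the rules that produce it.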

This system does not entirely avoid the aforementioned ethical dilemma: the behavioral
control system must have certain parameters governing what kind of feedback it
provides for various actions. However, it does at least offer a framework for defining the basic values behind an artificial intelligence’s behavior. The
values may yet have to be defined by individuals, but at least there is a
relatively simple and transparent way of defining them and, if necessary,
changing them. It is the difference between “hard-coded” functionality and a
more easily maintainable interface that can be adapted to specific needs.
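
To show what I mean by that difference, here is a continuation of the earlier sketch, again hypothetical throughout: the value definitions live in plain, inspectable data rather than in the learner’s code, so they can be audited and revised without rewriting the agent.

```python
import json

# Hypothetical value definitions kept as plain data. Anyone can read them,
# and changing a value is an edit to configuration, not to the agent's code.
VALUES_V1 = {"harmful": -1.0, "helpful": +1.0, "deceptive": -0.5}

# Revising the system's values is a transparent, reviewable change:
VALUES_V2 = {**VALUES_V1, "deceptive": -2.0}  # penalize deception more heavily

def feedback(outcome: str, values: dict) -> float:
    """Look up the feedback for an outcome; unknown outcomes stay neutral."""
    return values.get(outcome, 0.0)

# The definitions could just as easily live in a config file, versioned and
# diffable alongside any other change to the system.
print(json.dumps(VALUES_V2, indent=2))
print(feedback("deceptive", VALUES_V1))  # -0.5 under the old values
print(feedback("deceptive", VALUES_V2))  # -2.0 under the revised values
```

The agent from the earlier sketch would consume these signals unchanged; only the data defining them would differ.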