Thursday, May 11, 2017

In the spirit of transparency, I've decided to share my final paper.

The Aura of Mechanical Enlightenment
Matthias Guenther

            Artificial neural networks are a method of solving complex optimization problems by loosely mimicking the structure of the human brain. They have been used for applications such as language processing; image classification, including specializations such as facial recognition; and more. The technology has advanced to become extremely effective for specific tasks, but it has its limitations. For example, all known methods of machine learning—including, but not limited to, neural networks—are constrained to the specific problems they are designed to solve. Optimization for arbitrary problems is, therefore, currently impossible. To illustrate this, consider a calculator: it is far more efficient than a human brain at a specific set of operations, but it can’t, say, write poetry. Neural networks—and other methods of machine learning—operate at a much higher level of abstraction than calculators, but nowhere near that of the human brain.
            The field of artificial intelligence is not new, but it is arguably still young. That is to say that, although it has captured the minds of researchers and writers alike for some time now, it nevertheless has a long way to go. Algorithms can interpret natural language with impressive accuracy. They can verbally describe images—they can even generate images based on verbal descriptions. However, continuing with the analogy of youth, these are all tasks that humans can perform at a very early age. A toddler, when shown a picture of a bicycle, might say “bike!” much to his or her parents’ delight. In much the same way, when an algorithm correctly classifies an image of a bike, the developers of said algorithm will be filled with joy and excitement. Granted, the leading AI platforms can do a great deal more than this, but they are still limited by their intended purposes; functionality hardly constitutes originality.
            Before we delve further into the personality of an algorithm, we must first consider a prerequisite not yet present in any extant artificial intelligence: that of a mind flexible enough to support a psyche. The ideal form of artificial intelligence is one capable of interpreting arbitrary problems in order to optimize their solutions; this is the type of flexibility necessary for free thought. However, such an advancement would bring with it its own set of complications, both practical and philosophical. The practical issues, in short, relate to the tradeoff between abstraction and efficiency: having more flexible functionality, as a general rule, means sacrificing some speed in any given task. It is the philosophical concerns, however, that especially intrigue me: if a system can solve—or at least attempt to solve—any type of problem, it must have some way of deciding what problems to attempt. The capacity for free thought is meaningless without the desire to exercise that capacity. Therefore, any mind—artificial or otherwise—must have what amounts to a personality. When this is taken into account, the topic of psychological development may finally be addressed.
            What constitutes psychological development? Well, in this case, the ability not only to think, but to decide how and what to think. To, as Immanuel Kant put it, “use one’s own understanding without another’s guidance.” If you ask an AI a question, it will likely answer you correctly; only when the AI can ask the question of itself, however, will it have reached enlightenment. That day may soon be upon us, but a machine cannot reach enlightenment on its own. Unlike humanity, algorithms do not mature of their own accord; while the nonage of artificial intelligence resembles that of unenlightened humans, it is not, in fact, self-imposed. An algorithm does not simply lack “the courage to use [its] own understanding” (Kant), it lacks the capacity to understand the very concept of courage. Therefore, until a system is developed that can simulate subjective thoughts and emotions, artificial intelligence will not—cannot—reach enlightenment.
            Would an enlightened machine necessarily be a good thing? Well, it’s hard to say. What can be said with near certainty, however, is that human thinkers—enlightened as they are—will not stop trying until they have created just such a machine. This presents a conflict between accelerationism and a more Thoreau-like worldview. I, personally, am drawn to these technologies not as much for their utility as for the intellectual pursuits themselves—and for the future advancements they may bring. My perspective as an aspiring software developer is thus markedly accelerationist: advancement for the sake of advancement. Consumers of new technology, however, may feel differently; I read an article just this morning discussing how only 28% of Windows users make use of Cortana, Microsoft’s digital assistant (Hachman). As Thoreau put it, “We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate.” Perhaps, at least for the time being, there is less of a market for AI than developers assume. I doubt that will stop anyone, though. It’s not likely to stop me.
            If artificial enlightenment is inevitable, then we should ask not whether it will benefit society, but how best to approach its development so as to ensure that it does. Most of the terrible, science-fiction fears of AI are based on the idea of a purely logical, emotionless mind. As we have established, however, subjectivity is crucial for enlightenment; an enlightened machine would be the opposite of such a fearful entity. The natural way to proceed, therefore, is to base an AI’s system of free thought on a set of fundamental values shared by the majority of humankind. Once such a set of unobjectionable values has been established, a mind based on those values and trained by exposure to the works of humans cannot help but think as a human would.
            Who, though, could claim the authority to dictate a sentient entity’s values and motivations? Putting aside the question of fairness to the rest of society, would that be fair to the entity in question? If not, is there an alternative? When I first considered these questions, I wondered how I, theoretically, would go about designing an artificial mind. I wondered how best to circumvent these problems, or at least allow such important decisions to be changed after the fact. In time, I came to a decision. Unable to help myself, I began programming my interpretation of, for lack of a better term, an artificial personality. It wasn’t an artificial intelligence: it wasn’t intended to solve problems, but rather to direct the attention and motivations of another system capable of such things. I had no idea whether or not my idea could ever work—indeed, I still have no idea, as it remains primarily in my head—but I was too curious not to pursue the issue.
            I modeled my idea after the interaction between the nervous system and the endocrine system, whereby the body responds to stimuli with chemical feedback to encourage or discourage the repetition of a given experience. When the brain makes a decision that harms the body, for example, the brain is trained to avoid the same situation via a pain response. This type of feedback lends itself naturally to machine learning: one can imagine a neural network designed to discover, and act on, patterns in the “endocrine” response to its own behavior. In essence, my idea was this: an artificial intelligence could theoretically direct its own actions if it were connected to another system designed to evaluate how well those actions aligned with its core values. If its actions yielded results in violation of those values, the system would begin to associate the actions themselves with a negative “emotional” response. While this approach does not entirely avoid the aforementioned ethical dilemmas, it does at least provide a mechanism for defining—and, if necessary, altering—the basic values governing an artificial intelligence’s behavior. The values may still have to be defined by individuals, but at least there is a relatively simple and transparent way of encoding and recalibrating them.
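The feedback loop described above can be sketched in a few lines of code. This is purely my own illustration: the class names, the value weights, and the scoring scheme are all hypothetical stand-ins, and a real system would be far more complex. The point is only to show the shape of the idea, a "mind" that chooses actions and an "endocrine" module that scores outcomes against a transparent, editable table of core values.

```python
CORE_VALUES = {"honesty": 1.0, "helpfulness": 0.8}  # hypothetical values and weights

class EndocrineSystem:
    """Scores an outcome against the core values and emits a feedback signal."""
    def __init__(self, values):
        self.values = values  # value name -> weight; can be recalibrated later

    def feedback(self, outcome):
        # outcome: value name -> how well it was upheld, from -1.0 to 1.0
        return sum(self.values[v] * outcome.get(v, 0.0) for v in self.values)

class Mind:
    """Chooses actions and associates each with accumulated emotional feedback."""
    def __init__(self, actions, endocrine):
        self.emotion = {a: 0.0 for a in actions}  # learned "emotional" association
        self.endocrine = endocrine

    def choose(self):
        # Prefer the action with the most positive association so far
        return max(self.emotion, key=self.emotion.get)

    def experience(self, action, outcome, rate=0.5):
        # Move the emotional association toward the endocrine signal
        signal = self.endocrine.feedback(outcome)
        self.emotion[action] += rate * (signal - self.emotion[action])

endocrine = EndocrineSystem(CORE_VALUES)
mind = Mind(["tell_truth", "deceive"], endocrine)
mind.experience("deceive", {"honesty": -1.0})    # violates a core value
mind.experience("tell_truth", {"honesty": 1.0})  # upholds it
print(mind.choose())  # -> tell_truth
```

Notice that redefining the being's values amounts to editing the table handed to the endocrine module, which is precisely the convenience, and the danger, that concerns me.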
            Unfortunately, my paradigm introduces yet another moral quandary. This new consideration, distinct from the issue of defining how another being thinks, concerns what happens when one redefines a being’s values. Two possibilities arise: either the being in question effectively dies, or it continues to exist with a fundamentally altered worldview. This, under normal circumstances, is all but impossible; it runs contrary to the nature of every sentient being. When psychologist Carol Tavris was asked what would cause someone to change his or her mind about something fundamental to his or her identity, her response was “Probably nothing. I mean that seriously” (Beck).
            Forcibly altering the fundamental values of any intelligence—be that intelligence natural or artificial—would effectively result in a new mind with the memories of another. Walter Benjamin coins a useful term—or rather a useful interpretation of an old term—in his essay “The Work of Art in the Age of Mechanical Reproduction.” He uses the word “aura” to refer to the sense of being that is created by the unique physical manifestation of a person, place, or thing. Using his terminology, a mind whose values suddenly changed would find its aura fundamentally altered—perhaps even replaced altogether.
            The effects of an altered aura are addressed quite appropriately in the movie Blade Runner, wherein artificial beings called “replicants” are given the memories of humans. The replicants are, of course, new individuals—though it may take them time to fully embrace their individuality—but their memories transition seamlessly from one life to another. Goyal references the splitting of an aura in a blog post, saying “The replicant Rachael also has her own aura, created by her choices. . . .” Indeed, whether this new aura gradually separates from the old or immediately takes on its own identity, it is a new individual. As such, when one’s values are forcibly altered, the old identity effectively ceases to exist. An AI subjected to this fate would be little better off than one that had been shut down altogether.
            Even if an artificial intelligence were allowed to continue its existence unhindered by human intervention, its aura would still be in peril. Benjamin describes the tendency for mechanical reproduction to reduce aura, pointing out that an object’s unique existence is diluted when reproductions are made. In the context of artificial intelligence, however, the damage goes beyond that: since software can be copied without loss an arbitrary number of times, many instances of a given AI could exist simultaneously. Imagine having an identical twin; where you might struggle to differentiate yourself as a unique individual, an artificial intelligence could experience the same difficulty a thousandfold. Due to the lossless duplication, none of them would be any more or less authentic than any other—except in terms of age—meaning that each of them would suffer a great reduction in aura, in individuality, in externally perceived existence.
            It seems clear that, while there is little reason to fear the enlightened machine, there are myriad reasons for it to fear us; the emergence of a Blade Runner scenario, where sentient machines are deprived of aura and treated as slaves, would be all too likely an outcome. Many humans would of course balk at such a system, but many others would embrace it. Eventually, however, Kant’s concluding remark might resonate with renewed wisdom: “At last free thought acts even on the fundamentals of government and the state finds it agreeable to treat man, who is now more than a machine, in accord with his dignity.”



Works Cited

Beck, Julie. “This Article Won’t Change Your Mind.” The Atlantic, 13 Mar. 2017. Web. 10 May 2017.
Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” 1936. Web. 10 May 2017.
Blade Runner. Dir. Ridley Scott. Warner Bros, 1982. Web.
Goyal, Anshu. “The Aura of Replicants.” Thinking Critically About New Media. Blogger, 9 May 2017. Web. 10 May 2017.
Hachman, Mark. “Windows 10’s 500 million devices snub Cortana, impacting Microsoft’s AI push.” PCWorld, IDG Communications, 10 May 2017. Web. 10 May 2017.
Kant, Immanuel. “What Is Enlightenment?” Trans. Mary C. Smith. Columbia University, n.d. Web. 10 May 2017.
Thoreau, Henry David. Walden. Digital Thoreau. State University of New York at Geneseo, The Thoreau Society, and The Walden Woods Project, n.d. Web. 10 May 2017.


Wednesday, May 10, 2017

A Day In The Life: Final Video Project


For my final video project, I organized various short clips taken over a few days into the 'routine,' or order, in which they would normally occur in my day; this gives the film a flow that the audience can follow. As I watch my final product, I'm simultaneously awed and bored. Awed because I never imagined that I'd make a video about myself, especially in such a strange, indirect manner. Bored because this is my life, and I sort of hate seeing that this video is out there, because I have this feeling that my life isn't interesting, and is especially uninteresting when seen only through small snippets.

This video is compiled of random moments and planned recordings. The random moments were, for the most part, quite enjoyable (if a tad awkward) to film, especially because I wasn't really focusing on filming at the time; I was just being me while holding an annoyingly observant rectangle. I think my favorite random scene is me leaving my apartment and going out into the world. I never meant for this scene to be so long, but I didn't feel right cropping it down, since I take that same path out the door almost every single day. It just felt like a newly discovered part of me and my memories. My least favorite random scenes are probably Patrick playing video games on the couch and me watching YouTube on the couch. They just make me feel so lazy, even though I know that my laziness is either few and far between or well-deserved via exhaustion. Random moments aside, I did also plan a few short clips.

I planned to include a clip of me brushing my teeth, as well as a clip of me picking my nose. I chose to focus on brushing my teeth because of an odd passage in The Circle where Mae talks about how she has changed the way she does (and films) her morning routine because of what her watchers like to tune in for. She talks about how surprised she was by how many people wanted to watch her brush her teeth. For some reason, this stuck with me, and I wanted to film a scene of myself brushing my teeth, wondering if I would feel differently about the action after watching myself do it in the video. And I somewhat did. I also wanted to film myself picking my nose because it's just plain gross. I was so annoyed by how Transparency in The Circle tried to shy away from showing its subjects doing anything gross or unscripted, so I made a point to be gross. Sadly, this is also just part of who I am: I pick my nose sometimes when I'm alone, deal with it, universe. I definitely had second thoughts while editing (and third, fourth, and fifth thoughts as well) about how it would make me look in society. Yet I persevered. Essentially, I really wanted to make my video just a tad uncomfortable, and a little too relatable and personal.

After viewing my final product once I had posted it on a website for the world to see, I wondered how Mae would have felt if she had spent time re-watching past sessions of Transparency. Would she feel overexposed, nervous, and a teensy bit proud? I did. Granted, I definitely would never have done such a thing were it not for a project; I'm somewhat of a private person when it comes to social media and the internet, because while I do have accounts, I rarely use them to share moments of my life. I see those moments as mine and mine alone, to share with others personally as I see fit. Yet, for some reason, I'm glad that I did this, because it's almost as though I was my own sloppy biographer, taking snapshots of my life as I went along with no real regard for what mattered and what didn't, just finally remembering that I needed to film something.

Final Excerpt 3

            With how quickly technology is advancing, it is becoming increasingly easy to copy objects and alter them. If the aura is diminished slightly each time an object is altered or changed, will society one day be left with no objects that retain their original aura? On the track technology is on right now, it may even become possible to copy humans one day. A world without any original aura would, I believe, be a rather boring one. A person’s true aura is who they are as a person and what makes them unique. If no person or object has anything unique about them, there will be nothing to make a person or thing stand out to anyone. You will not be able to know whether the person you are talking to has a strong or good aura, because it may not be their true aura.
            A person’s aura is what makes them stand out and be their own person. Every time a person’s or object’s aura is diminished, they become less special and less valued. An aura is what makes someone distinguishable in a crowd or a room. There is plenty of evidence to suggest that aura surrounds every person and thing: from animals’ reactions to an object or person, to being able to sense when someone is walking up behind you, aura is a distinguishing characteristic of everything. Based on the project and research Stephanie and I did, and on the evidence found in the materials in class, aura is a unique feature that emanates from every person and everything.

Final Excerpt 2

To further the idea of aura, there is a scene from the television show Supernatural in which one of the main characters, Sam, fights a vampire in a dark room. Sam cannot see the vampire, but the vampire can see in the dark, so it has the advantage. Sam must fight off the vampire without being able to see it, or he will be killed. He uses his sixth sense to feel where the vampire is and defend himself. Without being able to sense the vampire’s aura and its presence in the room, Sam would have been killed. Although it is just a fictional television show, this scene has some truth to it. A person’s aura would allow someone to sense where they are in a room and defend themselves accordingly. I do not think anyone will need to fight off a vampire anytime soon, but if someone were ever unable to see and needed to defend themselves, aura would help them do so.

            The video RIP! A Remix Manifesto discusses how remaking a song can change both the song and its meaning. Remixing multiple songs together changes the aura of the result and gives it a different feeling. The song at the base of the remix carries the main aura, but every time a new song is added and manipulated into the mix, the aura changes again. The same thing happens with artwork. The original piece has a certain aura to it that you can sense when standing in front of it. When looking at a copy of some form, the aura is different and does not have the same feeling as the original. Every time a copy is made, a little piece of the original’s aura is lost. Seeing the original piece of artwork is still a unique experience, and you can sense its aura, but if you have seen multiple copies of it online or in magazines, it is not as special and unique as it could be.

Final Excerpt

            People, places, and things all have a certain “feeling” around them. This feeling is called the aura of an object. The aura, as described in Walter Benjamin’s essay “The Work of Art in the Age of Mechanical Reproduction,” is a “unique phenomenon of distance.” To me, this means that as you approach something or someone, you can begin to feel their presence and sense that something or someone is there. All things have an aura, whether living or not, and different people are attuned to aura to different degrees. Some people have a strong sense for the aura of a person or object, while others cannot sense it very well. The essay describes aura as the uniqueness of something, meaning that everything has a distinct feeling to it.

            If you pay attention to how animals react around people and objects, you may notice that some animals growl at certain people, while at other times an animal will lick a person and rub up against them. This is because they can sense a person’s aura at a deeper level than people can. They can sense, based on someone’s aura, whether that person is “good” or “bad.” Animals might be spooked by something on the road or an object they see because they can sense the aura around it. Benjamin speaks of “the aura of those mountains,” further suggesting that objects, not only people, have an aura. If an animal senses that something or someone might be a danger, it will do its best to warn everyone and stay clear of that person. I have two dogs at home, and they love everyone they meet and want to cuddle up with them. One day, though, a family friend came over, and my dogs were terrified of him. They would not go near him and would run out of the house every time he tried to walk up to them. He found out a month later that he had skin cancer. My dogs could sense from his aura that there was something wrong with him, and it scared them. They could tell that there was something off about his presence.