Sunday, April 30, 2017

The Circle and Technology


Having just finished reading The Circle, it is time for me to incorporate some of the book's ideas into my end-of-the-year project. The book provides numerous helpful examples for both sides of the debate over whether the continual progression of technology is more helpful or detrimental. Nothing illustrates the double-edged sword that is technology better than the SoulSearch program. The first time the technology was implemented, the Circle managed to capture a wanted fugitive in just over ten minutes; the only way to describe the first run was as an outstanding success. The second run of the program led to a suicide, a horrible result caused by a program that was far too intrusive. SoulSearch demonstrated the incredible helpfulness of technology, but also the totalitarian power it has the potential to possess. The great horror of The Circle is that it becomes a totalitarian entity without anyone other than the founders noticing. Even Mae, whose knowledge and power within the company are rivaled by few, failed to see the destructive nature of the Circle, even when Ty explained it straight to her face. Mae represents the human being we often see today: someone so absorbed in social media and the technological world that they fail to see what is really going on. What was particularly disturbing to me (and should be disturbing to everyone) was the point Ty made about the unrivaled nature of this type of totalitarian power. At least in past examples of totalitarianism, there was the possibility of an uprising or a rebel force that could take down the ruler. The Circle would be like no other power the world has ever seen. Hitler’s Germany was only defeated because the world became aware of the great atrocities being committed by the Nazis; that information was required for the Allies to agree to attack the Axis Powers. The Circle, however, controls all information. How could a rebellion arise if news of that rebellion is either deleted or distorted? It couldn’t. 
Whoever controls the information controls the power. Citizens of the world would be unaware of the control the Circle has over them. It is a very frightening concept. The odds of someone outside the Circle discovering the truth are equivalent to the odds of someone discovering they are in The Matrix; in other words, it would be nearly impossible. The Circle would devour all possible insurgencies just as the shark devoured everything in its domain.

Nest Cam Show and Tell

What happens when we aren't looking? I think it's a question that we've all asked at one point or another. On Tuesday, I gave my show and tell on a piece of technology, the Nest Cam. The Nest Cam is an indoor security camera that records the happenings of one's house. It grew out of Dropcam, which was acquired by Nest (itself owned by Google) in 2014, and the Nest line has since expanded into products that basically control your house when you are not home. The camera connects to an app which allows you to view your home as well as play back up to 30 days of footage. I wanted to bring this up for show and tell because I felt it related closely to class, both in relation to The Circle and to Panopticism.

In The Circle, Mae gives up her personal freedom in a way by subjecting herself to wearing a webcam at all times. While this may not be the idea behind the Nest Cam, a security camera at heart, there is an important connection to make when it comes to technologically aware societies. Much like Mae's experiment, the Nest Cam records everything and even lets you experience it back again. It's this idea of knowledge, and having that information, that makes us feel safer and more secure. This is a reflection of Bailey's thought that "knowing everything is better," but as the book tries to point out to us, knowing everything may not be better. Knowledge is a powerful tool, oftentimes life-saving, but at what point does this knowledge make us stupider?

On the idea of continuous surveillance, the Nest Cam relates closely to Panopticism. Panopticism is the idea of continuous surveillance, or the illusion of continuous surveillance, and this constant watching leads the subjects to behave in their most proper way. The Nest Cam is essentially modern-day Panopticism. A camera that records everything is essentially the central tower Foucault describes, while the cells surrounding that tower are essentially one's home. Inhabitants of that home are more likely to behave accordingly for fear of being caught. It puts the "watcher" in control.

I'm not trying to paint the Nest Cam badly or to call for a systematic witch hunt of every Dropcam product. In fact, I believe products like the Nest Cam are a brilliant idea and do have a place in society. But it's interesting to look at how our society is developing and our craving to have everything under control. Who knows, maybe one day we will be living in The Circle.

Nest Cam Link: https://nest.com/camera/meet-nest-cam/

Saturday, April 29, 2017

Digital Panopticons

            In this discussion post, I am going to explore uses of social media in the context of Panopticism. Drawing a comparison between anything and Panopticism has immediate and somewhat troublesome connotations for the object of comparison. First of all, Panopticism, as it was originally formulated, involved the surveillance of unwilling participants. However, in the digital age, participation in these “surveillance” programs is voluntary, and users participate because they receive some sort of personal convenience or benefit. Of course, this is ignoring government programs like the NSA, which is perhaps an even more appropriate object of comparison but will not be discussed in this post.
            Instead, I am going to focus on the voluntary forfeiture of privacy that social media coerces out of the individual. What I find even more fascinating in today’s social climate is that many (perhaps most) use social media not to connect with old friends, but to persuade others of their self-worth. From what I can tell, the primary use of social media is ego gratification. I do not mean this in a Freudian sense, but instead, I am simply suggesting that the goal of many social media users is to convince followers or friends that they are living valuable lives. Although this is not inherently bad, it may be the case that in practice this detracts from the actual value of one’s life. I plan to expand upon this further in my final essay but will get back on topic for now.
            Another connotation that Panopticism brings is that there is some group of people in power purposefully imprisoning their subordinates in the Panopticon. However, interestingly, this is not the case with social media. One of my main critiques of The Circle, besides its complete lack of subtlety, is what I believe is a complete mischaracterization of modern tech companies and capitalism as a whole. For a book that is so obvious and pointed in its criticisms, The Circle as a company seems much too unlikely. Instead, companies like Google and Facebook are driven, like all participants in capitalism, by money. It was advertisement revenue, not fringe ideology, which ultimately created the digital Panopticon. It just so happens that personal information has monetary value, whether companies literally sell the information or use it to increase revenue through other means, like by using the information to present more relevant content to the user, thus increasing time on site.
            On the surface, collecting information in this way seems relatively harmless, especially compared to the power dynamic described in Discipline and Punish. There is nothing intrinsically wrong with using information to better cater content to the user, and at first glance, this seems like a mutually beneficial relationship. The collection of data is even made less sinister by the fact that the data is usually just being fed into algorithms and not actually seen on an individual level by human eyes.

            One could go the route of The Circle and present a dystopian future in which these companies turn evil and begin to use this data against users. In fact, given the current political climate, the seizure of this information by the government to control citizens seems less ridiculous every day. However, a much more reasonable fear is simply that these companies will succeed in personalization to the highest possible degree. If these algorithms become so sophisticated that they can find exactly the right content to present to you so that you will stay on their app, no matter the actual value that this content will bring you, it is easy to imagine a future where we spend all our free time looking at our own personal equivalent to cat videos. Additionally, if automation continues to expand at the rate it has been, our free time could soon be endless. Given the amount of data and money being funneled into the tech industry, this possible future may not be as far off as it seems.

Friday, April 28, 2017

Social Media Makes Us "Mini-Celebrities"

           When I met with Dr. Bertsch to discuss my final project, we talked about the concept of the celebrity. This concept relates to the ideas I had for my project. Walter Benjamin argues in The Work of Art in the Age of Mechanical Reproduction that every reproduction is “lacking in one element: its presence in time and space.” If this is true, the “aura” should not be able to exist in social media. However, there is something like an aura that is created in social media. This “fake aura” may be comparable to the “aura” that surrounds celebrities. Through media, celebrities are presented to the public in a very particular way. Often, publicists present only the best features of celebrities; therefore, the public gets a skewed image of what that person is really like. The aura we perceive from “reproductions” of the person is not the person’s true aura. For example, I recently read an article about Julia Roberts and how she often throws huge fits on the sets of her movies. However, I think the image that most people have of Julia Roberts is very different; we all tend to think she is kind, graceful, and considerate because that is the aura her publicists present. Furthermore, with the advancement of social media and technology, ordinary people are now able to make themselves “celebrities,” which leads to the dissipation of this fake aura. This is the topic I wish to explore in my final project.
               The idea for my project stemmed from some thoughts I had when we watched the Black Mirror episode about Ash. After watching this episode, I concluded that one of the main reasons that the fake Ash wasn’t the same as the real Ash was because he was based on information from online accounts and interactions. When we post online, we often do not give an accurate representation of who we really are or what our lives are really like. We choose to present only certain parts of ourselves that we want others to see. For example, in the “Second a Day” video that Alex and Alyssa presented in their show and tell, the creator chose to compile only the positive parts of every day. We really didn’t see the sad parts of her day or the times when she was just sitting doing nothing, even though I’m sure that she regularly had those kinds of experiences. She chose to share the parts of her life that would make herself look the best.

               For my project, I will be exploring this concept to try to learn more about why we post what we do on social media. In addition, I want to examine how people may misinterpret a person’s true aura based on what that person posts on social media. To do that, I will be conducting a series of interviews about people’s social media experiences. To begin, I will ask people who don’t know me to look at my Instagram account and answer a series of questions about me. I will also answer those same questions about myself. That way, I can compare my true aura (or at least what I think it is) to what people think is my true aura based on my social media. I will also interview people about their experiences with social media and how they choose the things they post. I’m looking forward to seeing what I find!

Technological timeline and Disney (of course)

Hi everyone,
We're getting towards the end of the semester, and as I reflect back on what we've learned, I kind of get overwhelmed, honestly. This class always felt like we were just talking about things, you know? Not like a lecture or anything serious, but I actually feel like we've learned a lot? So I was just thinking about my final project, about the movie I, Robot, and exactly how far technology has come, and in what capacity. I don't know if anyone's been to EPCOT in Walt Disney World, but there's a ride there called Spaceship Earth, and it's a ride-through of humanity's technological advances in history, mostly about communication. I know it sounds boring as hell, but it's actually pretty cool. Here's a video of a ride-through in case you haven't been there to experience it. Also, there's a ride in Magic Kingdom called the Carousel of Progress, which was like Walt's brainchild. It's a rotating-theater type ride that has four rooms in it that are supposed to represent the average American life in the 1900s, the 1920s, the 1940s, and the 21st century (now), respectively. It debuted at the 1964 New York World's Fair, moved into Disneyland in 1967, and then into Walt Disney World in 1975. It's another example of how the future always seems way more futuristic in the past than it actually is now? If that makes sense. Here's a link to that ride-through if you haven't seen it. I know this post wasn't supposed to be about Disney at all, but it's me, so what can we really expect?

So, I got curious, as I often do, and wondered exactly how fast we are advancing in technology. Is it too fast? What would have happened if the Dark Ages hadn't happened? So I found this that kind of helped me out. It's wild, honestly. I think it's funny that something I take for granted, the fire extinguisher, was such a technological advancement that it was big news when it was invented in the 1860s. I also feel like Thomas Edison is on here a lot, which is cool; I didn't know about half of the stuff he invented on this list. Also, the microwave was created by accident? I feel like maybe a lot of things were created by accident, and no one admitted it to the public. Check out the list, because I'm not gonna bore you by listing off my favorites.

The main point of this is that technology (and our obsession with it and how it changes) is fascinating. I always think, there can't possibly be anything else to advance or improve, can there? There's a line in the Carousel of Progress that has really resonated with me since I first went on the ride. The main character is named John, and in every scene he says "things can't possibly get better than this," which is funny because in the very next scene it gets better and easier for the family to go on with their everyday lives. I hate sounding like a baby boomer, but things are advancing so fast that even my younger sister (who is only three years younger than me) is better at technology than I am. I'm just very overwhelmed, I'm sorry :(

Thursday, April 27, 2017

Artificial Intelligence and Free Thought

            Artificial neural networks are a method of optimizing complex problems by mimicking the human brain. They have been used for applications such as language processing; image classification, including specializations such as facial recognition; and more. The technology has advanced to become extremely effective for specific tasks, but it has its limitations. For example, all known methods of machine learning—including, but not limited to, neural networks—are constrained by the specifications of the specific problems they are designed to solve. Optimization for arbitrary problems is, therefore, currently impossible. To illustrate this, consider a calculator: it is much more efficient than a human brain for performing a specific set of operations, but it can’t, say, write poetry. Neural networks—and other methods of machine learning—operate on a much higher level of abstraction than calculators, but nowhere near that of the human brain.
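To make the limitation above concrete, here is a toy sketch of my own (not from any particular library or from the post itself): a tiny two-layer network trained on a single fixed task, the XOR function. It becomes good at that one pre-specified problem and is useless for anything else, which is exactly the constraint being described. The architecture, learning rate, and iteration count are all arbitrary illustrative choices.

```python
import numpy as np

# Toy two-layer network trained on XOR -- a concrete instance of a system
# optimized for one specific, pre-defined problem and nothing else.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out).ravel())  # predictions for the four XOR inputs
```

Nothing in the learned weights transfers to any other problem; like the calculator, the network is efficient only within the narrow task it was built and trained for.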
            The ideal form of artificial intelligence would be capable of interpreting arbitrary problems in order to optimize their solutions. However, such an advancement would bring with it its own set of problems, both practical and ethical. The practical problems largely relate to the tradeoff between abstraction and efficiency; it is the ethical problems, though, that interest me more: if a complex system can solve—or at least attempt to solve—any type of problem, it must have some way of deciding what problems to attempt. It must have, essentially, a personality. Who gets to decide the values and motivations of an optimization tool? Is that fair to society? How about to the tool itself?
            When I first considered this conundrum, I wondered how I would go about designing an artificial mind. In time, I came to a decision and began programming my interpretation of, for lack of a better word, a personality. Now, it should be noted that my idea was never intended to be an artificial intelligence per se: the whole idea was to design a control system for directing an artificial intelligence’s motivations. I had no idea whether or not my idea could ever work—indeed, I still have no idea, as it’s still mostly in my head—but I was too curious not to pursue the issue.
            I modeled my idea after the interaction between the nervous system and the endocrine system, whereby the body responds to stimuli with chemical feedback to encourage or discourage repeating a given experience. When the brain makes a decision that harms the body, for example, the brain is trained to avoid the same situation via a pain response. This type of feedback could, in fact, be mimicked by a neural network designed to discover and act on patterns in an artificial “endocrine” response to its own behavior. Therefore, an artificial intelligence could theoretically direct its own actions if it were connected to a system designed specifically to provide feedback on its behavior.
            This system does not entirely avoid the aforementioned ethical dilemma: the behavioral control system must have certain parameters governing what kind of feedback it provides for various actions. However, this does at least provide a system for defining the basic values governing an artificial intelligence’s behavior. The values may yet have to be defined by individuals, but at least there is a relatively simple and transparent way of defining them and, if necessary, changing them. It is the difference between “hard-coded” functionality and a more easily maintainable interface that can be adapted to specific needs.
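The feedback loop described above can be sketched in a few lines of code. Everything here is invented for illustration: the actions, the feedback values, and the learning rule are all hypothetical stand-ins, not the actual design from the post. The point it shows is that the agent's behavior is not hard-coded; only the feedback system's parameters are, and changing those parameters changes what the agent learns to do.

```python
import random

# Toy sketch of a behavioral feedback loop: an "agent" picks actions, a
# separate feedback system (the stand-in for the endocrine response) scores
# them, and the agent shifts its preferences toward well-scored actions.
random.seed(0)
ACTIONS = ["rest", "explore", "touch_fire"]

# Tunable parameters of the feedback system -- the "values" that, as noted
# above, someone must define. These numbers are invented for illustration;
# editing them changes the learned behavior without touching the agent.
FEEDBACK = {"rest": 0.2, "explore": 1.0, "touch_fire": -1.0}

preferences = {a: 0.0 for a in ACTIONS}  # the agent's learned inclinations

def choose(eps=0.1):
    # Mostly pick the currently preferred action; occasionally try others.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(preferences, key=preferences.get)

for _ in range(500):
    action = choose()
    reward = FEEDBACK[action]  # the "chemical feedback" analogue
    # Nudge the preference toward the feedback it received.
    preferences[action] += 0.1 * (reward - preferences[action])

# With these feedback values the agent ends up preferring "explore"
# and avoiding "touch_fire".
print(max(preferences, key=preferences.get))
```

Here the ethical question from the post lives entirely in the `FEEDBACK` table: whoever sets those numbers sets the system's values, but at least they sit in one transparent, editable place rather than being hard-coded throughout the agent.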

The Circle and Potential Final Ideas

After reading The Circle, I definitely want to look more closely at it as part of my final project.  So, I thought I'd examine a passage near the end, in which Mae states something kind of contradictory. She tells Kalden/Ty, "Most people would trade everything they know, everyone they know--they'd trade it all to know they've been seen, and acknowledged, that they might even be remembered.  We all know we die.  We all know the world is too big for us to be significant.  So all we have is the hope of being seen, or heard, even for a moment" (490).  So much of what goes on at the Circle revolves around sharing ideas and opinions, and more importantly, the notion that your opinion matters.  Therefore, what the Circle as a corporation does well is tell everyone that they're important and they matter, which seems kind of juvenile--it's like when kindergartners are told how special they are.  I don't want to sound mean, but I'm not someone who believes what everyone has to say is important, myself included.  I don't expect people to pay attention to everything I say, and I think that's for the best most of the time, as I'll most likely end up embarrassing myself.  What I find so intriguing about what Mae says here is that she acknowledges that she is insignificant in the grand scheme of things, and her way of remedying that is to be seen and heard by as many people as possible.  If she accepts the idea that the world is too big for her to be significant, I find it kind of strange she would go to such lengths to combat this notion.  I think most people would take this idea and decide to try to make an impact in their own communities, rather than by reaching out to as many people as possible for no particular reason.

This goes against a lot of what Thoreau says and his reasoning for being alone, as Mae slowly decides to never be alone.  Mae wants to be seen by as many people as possible, while Thoreau stresses the importance of solitude and experiencing life for what it is.  In a way Mae is experiencing life in that she is experiencing more people, but superficially.  This then brings into question whether Mae's experiences, which happen in front of millions, differ from how she might otherwise experience them if done alone, or at least not in front of a constant live audience.

Show and Tell: Brain Print

On Tuesday I did a show-and-tell presentation about the brain print, a new technology that scans your brain activity to identify you.  I thought this was not only interesting but also tied into our class discussions, since we talk so much about new technology.  It also ties into our discussions of taking technology too far, leading to scenarios like The Circle.  The brain print seems, to me, rather unnecessary.  It looks like some type of technology that we could perfect and use for fun, but at the moment it doesn't have much of a purpose.  We seem to be pretty good at identifying people already, and while I don't know how much security is needed for someplace like the Pentagon, as mentioned in the video, the whole idea of a brain print seems a little extravagant.  It looks like technology for the sake of technology, and just because we can create something doesn't mean we should.

A few interesting points were brought up in class, the first of which was what happens if your opinions change, or what happens when you grow up, seeing as we change as we grow older.  In the video they said they were measuring reactions in our brains that are unconscious--we can't control them.  This confuses me, because I don't know why they would need to show us a bunch of random pictures if they're not looking at our opinions but at something deeper.  As the brain print is a fairly new idea, there isn't a lot of information out about it yet.  The second concern was that, if the technology is perfected, it could be used on us without our knowledge, identifying us before we even know what's going on.  This reminds me of The Circle, in that so much information is kept about people--information they're not even aware of.  Having your brain scanned without your permission, even just to identify you, seems like an invasion of privacy.  This relates to the ending of The Circle, where Mae realizes that she wants to know what is going on inside Annie's head and actually feels as though she has a right to know.  In the book, the one safe place Mae had was her own brain, and this is a perfect example of the slippery slope, where one idea just spirals out of control.

Link: https://www.youtube.com/watch?v=5oe9bZuZOJc

Wednesday, April 26, 2017

Social Media and Psychology Part 2

            In my last discussion post, I talked about the examination of the self through conventional methods, and argued that social media imposes complexities on this concept which redefine the act of observing one’s self and others. In hopes of ironing out some ideas for my final project, I will continue along this same path, and now turn towards the implications of the aforementioned impositions.
            As I mentioned in my previous post, most humans have a hard time understanding how other people perceive them. Perhaps this is because the self is so entangled in the way that we think that it is impossible to completely remove our own view of ourselves when thinking about how others view us. However, by delegating some of the burden of self-awareness to social media, we may actually be able to move closer towards some sort of objectivity. Once someone creates a Facebook, they no longer have to rely completely on their internal self-image in order to judge themselves. In a way, this person can now approach the self with a faux sense of objectivity. Social media can remediate our ideas of our selves into a medium which is detached from all the convolution of self-consciousness. However, as with any medium, social media distorts the information which it relays.
            In some ways, social media could be seen as nothing more than a game. Users strive to earn some sort of digital currency – likes, hearts, etc. – and enjoy the brief moments of contentment that they provide. The fact that the content is a direct representation of one’s ego could be just an afterthought. However, this fact has massive ramifications for all involved.
            If someone views social media as a popularity contest, it is highly unlikely that their Instagram page will be an accurate representation of themselves. At the very least, the user will only post content which depicts them in a positive light, no matter how relevant the content is to their objective reality. The end result is confusion and perhaps even deception, for those interacting with the user online and even for the user themselves.
            This brings me to the concept of Objective Self-Awareness. This psychological theory was first defined in 1972 by two psychologists, Duval and Wicklund. The basis of the theory is that when humans look inwards and begin evaluating themselves, they judge themselves based on standards that they have formed throughout their lives which “define what a ‘correct’ person is.”1 If, upon reflection, someone decided that they were not living up to that standard, they would experience a host of negative consequences. At this point, Duval and Wicklund proposed the person would either work to close the discrepancy, or enter a state of avoidance.
            When using social media, it is almost impossible to not look inward. To attract the most likes, one would have to determine their ideal self, and compare themselves with that ideal every time they posted. Interestingly, this individual would actively try to bring their social media presence closer to their standard, but most people would not accomplish this by behaving more like their ideal self. Instead, it is much easier to simply stretch the truth on social media pages and ignore the discrepancies in real life.
However, one must face their denial every time they interact with social media, because during this time they must ensure that their interactions live up to their manufactured image. In this way, some of the negative emotions associated with using social media could be described using the OSA theorem.

1Silvia, Paul J., and T. Shelley Duval. "Objective Self-Awareness Theory: Recent Progress and Enduring Problems." Personality and Social Psychology Review 5.3 (2001): 230-41. Sagepub. Web. 15 Apr. 2017.