User:Jonas Lund/Trimester1Text

From XPUB & Lens-Based wiki
Revision as of 18:23, 16 January 2012 by Jonas Lund

Abstract
An observation of opposing views on online identity and privacy within the relatively new social-media services.



To be or not to be yourself
The social-media industry has only just begun, and it is quickly spreading into every part of our lives. Services such as Facebook, Twitter, Foursquare, Latitude, Blippy, Klout, Spotify and Soundcloud enable us to share everything from our daily events and our location (in real time) to all of our credit card purchases and our favourite music, either in front of a computer or on the go with our always-online smartphones. In 2010 John Doerr invested $250 million in a venture capital fund dedicated to putting money into social. As outlined by Andrew Keen, Doerr argues that “‘social’ represents the ‘third great wave’ of technological innovation, after the invention of the personal computer and the internet.” (Keen) As the web transformed from a one-directional information highway, in which users were mere consumers of content, into a bi-directional, read-write, peer-to-peer social one, so too have we changed our behaviour. When surfing the web in the mid 90s you could be anyone; there was no incentive, other than goodwill, to be yourself. With the move to the social sharing model, the reasons for and benefits of acting as yourself increased.

Rushkoff argues in ‘Program or be Programmed’ that we should strive to be ourselves online, as the benefits largely outweigh the downsides. He writes, “By resisting the temptation to engage from the apparent safety of anonymity, we remain accountable and present—and much more likely to bring our humanity with us into the digital realm.” (p.80 Rushkoff)

Rushkoff believes that by allowing anonymity online we open up a dangerous zone of non-accountability, a model in which we can act without any consequences, and that this is damaging for the future of the web and its users, as it allows us to be non-human, disconnected from our real selves. If Google and Facebook, largely funded by advertising, have a clear picture of who you are, they can use that not only to target you with ads more accurately, but also to tailor a personalized version of their services to you.

Brad Troemel writes, “The capacity to manipulate and construct an identity online separate from our everyday existence is an expression of freedom from totalizing surveillance that would automatically provide information of our behalf [...] A true conception of freedom of expression must include the possibility of lying or abstaining from expression altogether” (p.104, Troemel)

What Troemel wishes for is getting more difficult in the online sphere. As websites and apps move to using Facebook login exclusively to access their services, it is becoming the de facto standard for user logins and thus for your online identity. Facebook in turn encourages you to be yourself and enforces a real-name policy. When Google+ launched it also included a real-name identity policy, requiring you to use your real name to use the service. In reply to criticism of that model, Google’s ex-CEO and current chairman Eric Schmidt said, “G+ is completely optional. No one is forcing you to use it. It's obvious for people at risk if they use their real names, they shouldn't use G+”. The dominant business of each of the myriad social sharing services is advertising sales; “it is inevitable that all this data will end up in the hands of our corporate advertising ‘friends’”. (Keen) Our private data will end up a lost memory, constantly handed out to the highest bidder. Keen believes that we are not naturally social beings, that sharing doesn’t make us happy; being left alone does.

Keen’s pessimistic belief symbolizes the older generation of the online population, a generation forced into the social sharing model to stay relevant with the upcoming youth, but at the same time struggling to adapt and find positive experiences.

According to Keen, by giving up our own identity and funnelling more and more data into social services, we are not only giving up our happiness but also slowly surrendering our privacy, as that is the end point of a business model driven by advertising sales. These services offer great possibilities and an endless pool of feedback while slowly accumulating more data, which in turn will be used to profile you in multiple ways.

In the article ‘Your Life Torn Open’ (2011) Jeff Jarvis compares the current privacy crisis with that of the time when Gutenberg invented the printing press. Authors of that era were worried about “having their thoughts and identities recorded permanently and distributed widely” (Jarvis). New technology often brings new fears and discussions about privacy. Jarvis is not concerned with privacy and argues that the discussion could cloud the potentially huge benefits that the non-anonymous web can offer. As an example he mentions Germany’s ‘Verpixelungsrecht’, which granted German citizens the right to blur their houses on Google Street View – but to what end? Could this right extend to other areas; what might be obfuscated next? “If Google can be told not to take pictures of public places, will citizens be censored next?” (Jarvis)

Jarvis extends his point that we share because it benefits us, and that by being yourself in open publicness you build real connections with similar people. His assumption is based on his own experience of blogging, sharing and talking about his erectile dysfunction resulting from prostate cancer, and how much help and comfort he gained from the online community.

It seems as if the immediate results of your online identity in connection with your privacy are quite transparent. There’s an awareness that by using a myriad of social services you are surrendering a small degree of your online privacy.

Eli Pariser noticed, looking at his Facebook news feed, that at a certain moment his Republican friends no longer appeared. EdgeRank, the algorithm Facebook used to select which content was displayed, had analyzed his recent activity, deemed the Republican friends less interesting, and replaced them with his Democratic friends, who shared his political views.

Pariser describes this phenomenon as being in a filter bubble: a state in which the content you see is personalized and tailored based on your previous actions, such as your location, web history, comments and Facebook likes. As a user you are not aware of it, and you don’t see what’s outside the bubble. It can be described as a type of online censorship. The risks of letting an algorithm dictate what content a user is exposed to are, according to Pariser, great, as it keeps users from seeing opposing views in a political and social context. As an example Pariser uses two screenshots from two of his friends, both performing a Google search for ‘Egypt’ during the riots. One friend sees holiday recommendations, the other information about the ongoing riots.

Pariser’s solution to the ‘problem’ is first to inform the online community of its existence, and then to encourage users to take steps to avoid being trapped in the filter bubble, the most extreme being to go completely anonymous by using projects such as Tor. (Pariser)

In contrast to Pariser, Paul Boutin believes that the notion of the filter bubble is largely exaggerated and that most online users prefer efficient filters whilst browsing the web. He writes, “People like them [Filters]. The Internet long ago became overwhelming. Filters help make it manageable without our having to do the work of sorting through its content entirely by ourselves.” (Boutin)

The filter bubble phenomenon relies on a different type of online identity, one you are rarely aware of as it happens ‘behind the scenes’. What you do online also makes up who you are: by tracking you across websites, relying on your IP address and a set of cookies, online advertising corporations and social networks can profile you.
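The cross-site profiling described above can be sketched in a few lines. This is a minimal toy model, not any real network's implementation: the function name, the cookie name and the site names are invented for illustration. The idea is simply that a third party embedded on many sites assigns each browser an identifying cookie once, then logs every page that loads its content, accumulating a profile keyed to that cookie.

```python
import uuid

# profiles maps a tracking-cookie id to the list of (site, page)
# visits observed for that browser across all participating sites.
profiles = {}

def track(cookies: dict, site: str, page: str) -> dict:
    """Simulate a third-party tracker embedded on a page."""
    if "tracker_id" not in cookies:
        # First contact: assign this browser a persistent identifier.
        cookies["tracker_id"] = str(uuid.uuid4())
    profiles.setdefault(cookies["tracker_id"], []).append((site, page))
    return cookies

browser = {}  # one browser, i.e. one cookie jar
track(browser, "news.example", "/politics")
track(browser, "shop.example", "/shoes")  # a different site, same cookie

print(profiles[browser["tracker_id"]])
# [('news.example', '/politics'), ('shop.example', '/shoes')]
```

The point of the sketch is that the user never sees the cookie being set or read; the profile accumulates entirely on the tracker's side.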
An IP address is assigned to each network-capable device and is used in network communications as an identifier and for location addressing. IPv4, currently in use, consists of 32-bit addresses written as four blocks of numbers ranging from 0–255, giving it 2^32 possible addresses. Since the introduction of IPv4 the Internet has grown tremendously, and in February 2011 all the IPv4 addresses had been allocated. The solution to this predicament is to move to the new version of the IP protocol, version 6. IPv6 uses 128-bit addresses, giving it 2^128 possible addresses, a seemingly infinite number.
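The difference in scale between the two address spaces is easy to check directly; a quick calculation of the 2^32 and 2^128 figures from the paragraph above:

```python
# A 32-bit address space versus a 128-bit one.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)            # 4294967296 (about 4.3 billion)
print(f"{ipv6_addresses:.2e}")   # 3.40e+38
```

Exhausting 4.3 billion addresses took roughly three decades; the 128-bit space is about 7.9 * 10^28 times larger.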

With the IPv4 protocol, a public IP address can change and is often shared among a set of computers, so it is not directly linkable to you as a user. With IPv6, the MAC address (a device-specific string given to every network-capable device) is encoded into the IP address by using it as the last 64 bits of the IPv6 address, also known as the “interface identifier” (Beijnum), thus making it possible to directly identify your device based on its IP address. (This can be turned off to aid privacy, but doing so requires additional steps and is not enabled by default.)
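The mapping from MAC address to interface identifier that Beijnum describes is the modified EUI-64 scheme used by stateless address autoconfiguration: the 48-bit MAC is split in half, the bytes 0xFF 0xFE are inserted in the middle, and one bit of the first octet is flipped. A minimal sketch (the function name and the example MAC address are mine):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier (the last
    64 bits of an autoconfigured IPv6 address) from a MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    assert len(octets) == 6, "expected a 48-bit MAC address"
    # Flip the universal/local bit (bit 1 of the first octet).
    octets[0] ^= 0x02
    # Insert 0xFF 0xFE between the two 24-bit halves of the MAC.
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group the 8 bytes into four 16-bit hextets, IPv6-style.
    return ":".join(f"{(eui64[i] << 8) | eui64[i + 1]:x}"
                    for i in range(0, 8, 2))

print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # -> 21a:2bff:fe3c:4d5e
```

Because the MAC half survives the transformation intact, the same device produces the same identifier on every network it joins, which is precisely what makes it usable as a tracking fingerprint.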

This will take online tracking to a whole new level and give a whole new meaning to what it means to be yourself online. The IPv6 address becomes a fingerprint of your device and could potentially be linked, at the time of purchase, to your persona.

If Keen is right in his assumption that private data will end up being lost in the ever-growing social sphere, and that we will all move to an online model based on sharing every detail of our lives, then the separation between online and offline identity will also become irrelevant. The person you control online is you.



The first time I went online was in the mid 1990s. I was eleven years old, and what amazed me most was the possibility of freedom – you could become anyone online. As a kid you’re taught that everything you do has consequences, and that you’re accountable for each of your actions. The internet and the web, however, offered a space in which there were hardly any consequences to your actions. One of my dear hobbies back then was to enter online chat rooms and create havoc, to push my new-found freedom to its limits. This behaviour also illustrates the problem Rushkoff describes: without accountability it’s easy to ignore responsibility towards your peers.

As we moved into the social web, I slowly adapted to the notion that I ought to be myself online – how else could a social network be valuable? The power of the social web is based on real connections, with real people. I have countless examples of positive interactions and exchanges purely based on online social connections, much like Boutin argues. So I became myself online within social networks, whilst maintaining a clear privacy for other acts. I can be myself on Facebook, Twitter and Google+ as long as I can retain my anonymity elsewhere. The concerns of privacy within social services are less of a problem to me, as I never assumed that the data I was producing was private to begin with. However, I prefer it if no one knows what I’m doing whilst browsing in incognito mode. In this regard I agree with Troemel, in the sense that we should always have the right to be anonymous online.

The collected information flow I expose myself to every day is increasing; the sum of the Facebook timeline, the Twitter feed, and the collection of 300+ relevant RSS feeds aggregated by Google Reader is slowly piling up to an unmanageable amount. The concept of the filter bubble Pariser outlines is something I have been searching for, for years: an efficient personalized filter, one that can sort the relevant and interesting posts from the vast, endless pool of content in my feeds. The danger of being left in the dark, blind to what’s going on outside your interests, I believe to be rather exaggerated. It is based on the assumption that all information is retrieved and received through a single source of input, in Pariser’s example Google.

I think Pariser’s and Rushkoff’s approaches of raising awareness about the consequences of your online behaviour, your privacy and your identity are encouraging. If you know what is happening with the data you are actively or passively producing, and have an idea of how that data is used, you are much better equipped to make informed decisions about your online life.



Bibliography