Lawrence Lessig, Code v2
P1
---
- At the internet's birth, there was a great sense of joy and liberation. Many thought we finally had a system that would allow complete freedom from regulation - government or otherwise. The author believes this faith in the new system was very misguided. His view is that the infrastructure of the net will actually become increasingly controlled and regulable through its own technology.
- Commerce and government fare much better in a world that is regulated. Commerce provides the tools/means for government to regulate. This is exactly how the net has evolved in recent years - the two working hand in hand (ex: Facebook as the private industry, providing data to governments). The author states that a large part of the net will be run by commerce (nothing wrong there), but when it works with governments to produce infrastructure that facilitates control, something else emerges: "control exercised by technologies of commerce, backed by the rule of law".
- From a manifesto that defined the ideals of cyberspace: "We reject kings, presidents and voting. We believe in: rough consensus and running code". The idea was not that government would not regulate cyberspace - it was that it couldn't. Because of its structural nature, cyberspace would not be regulable.
- The author believes that liberty does not come with an absence of state. It comes from a state of a certain kind. It needs a set of guidelines, a constitution. This constitution would simply be a definition of the values that need to be defended in this space. It's an architecture, a way of life - a "lighthouse to guide and anchor fundamental values". A constitution envisions an environment in which we protect the values we decide are important. Values can be substantive (privacy, free speech, etc.) or structural (making sure one part of the regulating power does not become too powerful).
- The inherent behaviour (invisible hand) of cyberspace is to become a perfect tool for control. This is why we cannot leave it unregulated and unwatched. It must be framed and overseen, and made to protect the values we want it to hold. "The struggle will be to assure that essential liberties are preserved in this environment of perfect control"
- The #1 regulator is code. It is what gives or refuses access to something. It is both the cooperating and the discriminating force. The hardware and software that make up cyberspace are also what regulate it. We always choose what code will do (it is never found, but made), and it produces a binary product - yes or no.
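To make the "binary product" point concrete, here is a minimal sketch (Python, with invented criteria) of code acting as a gatekeeper: it answers only yes or no, and the answer reflects choices someone made when writing it.

```python
def can_enter(user: dict) -> bool:
    # The criteria embedded here (age, membership) are deliberate design choices,
    # invented for this example - nothing about them is "natural" to the system.
    return user.get("age", 0) >= 18 and user.get("member", False)

print(can_enter({"age": 25, "member": True}))   # True  - the code says yes
print(can_enter({"age": 25, "member": False}))  # False - the code says no, and there is no appeal
```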
- The first version of the architecture was built by researchers and hackers. The second by private industry. The third? Who knows... To build this third version, we need to make choices (about what we want to defend, and what we don't). The author thinks we are not up to these choices, and that our state is not ready to be trusted with the regulation of such a system (for example, it has already criminalized the core ethic of this movement, the hacker).
P2
---
- the "internet" and "cyberspace" are two different things. Internet refers to the practical side of the technology, but cyberspace includes the community, the life, the feeling that draws so many of us inside it.
- 4 stories about 4 themes/characteristics of cyberspace: ambiguities, regulation by code, regulability, sovereignty
- Norms are different online because of the code, which enables and disables certain features by informing the architecture of the space. In real life we have choices, but we cannot choose to escape the consequences of those choices. It is different in cyberspace: these things can be coded into or out of an environment. What happens in an online space is simply a statement of logic. Simply put, we are faced with choices - what do we want to prevail in these environments?
- Another property of cyberspace: it allows for multiple personas. Anyone can be an author AND publisher rolled into one, and the network allows for publication without filtering, editing or much responsibility. When things cross from cyberspace into the real world we face a different situation, but we are often confronted with grey zones here.
- Story of the computer worm created by government: “Is freedom inversely related to the efficiency of the available means of surveillance?” For if it is, as Boyle puts it, then “we have much to fear.” Technology allows for highly efficient searches that leave barely any footprint at all. It is now possible to significantly increase the number of searches without increasing the burden on the people being searched. This clearly calls into question how a "search" should be interpreted under the Constitution - meaning that we have to decide which values the Constitution was meant to protect in this instance (freedom from the burden of being searched, or one's privacy?). To resolve these ambiguities, we need to turn to different institutions than the legal ones we have now - the author claims such a thing does not currently exist. We have real-life tools to guide us, but they will falter as we progress in time, and these ambiguities may eventually no longer map onto real-world problems at all. (Here is the exact text of the Fourth Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.")
- "When 50 people from 25 jurisdictions around the world spend 2,000 hours building a virtual community in Second Life that is housed on servers in San Francisco, what claim should real world jurisdictions have over that activity? Which of the 25 jurisdictions matters most? Which sovereign should govern?" The author believes in regulation, if properly done. But the right government is not the one we have at the moment (written during the bush years!)
- A great example to illustrate this:
"I take it we all believe in the potential of medicine. But imagine your attitude if you were confronted with a “doctor” carrying a vial of leeches. There’s much we could do in this context, or at least, that is my view. But there’s a very good reason not to want to do anything with this particular doctor."
P3
---
regulability (is the way it is the way it must be?)
- The author vehemently attacks the idea that cyberspace is a space that cannot be controlled. He takes exception to the belief that the way things are is the way things have to be. He concludes that this thinking stems from the fact that most people are not technologists, and therefore cannot imagine the plasticity of the internet. Things can be changed, reprogrammed and molded to reflect any set of values. "The burden should be on the technologists to show us why what we demand can't be met". And so the regulability of cyberspace depends on its architecture, which is malleable. We are moving (I agree) from an internet that is hard to control (1st "version") to one that is all about control. This "nature" that people deem "inherent" is changing pretty rapidly into something new.
- At its core, the set of protocols that defines the internet is open and does not require authorization. When you connect to the internet you get an IP address from the system to identify your place on the network (not your physical location), and that's it. This type of internet does not exist in its "pure" form: there are always layers of control added on top, the first one being the provider. This can be a school (originally the universities were the "ISPs", I assume), a private company (usually, nowadays), or something else. By nature, the internet was not designed to know 1) who someone is, 2) what they are doing, 3) where they are. But we can add an infinite number of control layers on top to change this. Sadly, it is quite clear that we've come to add many of these layers over the years. The choice is not between Internet and no Internet, but rather how mediated it will be, and by what means.
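A small sketch of this "bare" layer (Python standard library, hypothetical port number): a server that accepts a TCP connection learns nothing but the peer's IP address and port - not who the person is, what they intend, or where they physically are.

```python
import socket

# Listen on an arbitrary, hypothetical port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))
srv.listen(1)

conn, addr = srv.accept()
# All the base protocols reveal about the client: an (IP, port) pair.
print("Peer as seen by TCP/IP:", addr)   # e.g. ('203.0.113.7', 51324)
conn.close()
srv.close()
```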
P4
---
(architectures of control)
- The internet has no "inherent nature". It is built, code is made, and decisions are taken along the way. Policies are voted for, which are supposed to reflect our values. The code that supports the architecture of our networks should be imbued with these values.
- In order to regulate, the authority needs to know who, what and where. If we consider the internet in its bare state, it is rather difficult to achieve this. The internet was simply based on a set of protocols that define the transport of data from one point to another (endpoints, or end-to-end). It was not designed to gather this type of data in its original implementation. That is still the case today, but the extra layers of regulation being built atop this "plain vanilla" internet are multiplying. For example, if data needs to be encrypted, it will be encrypted at one of the endpoints, not during transport. Because there can be no justice without accountability, authentication is the largest source of problems with regard to regulating the internet. Better technologies of identification enable authentication at a greater distance, and they will be more present in the near future.
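A sketch of the endpoint-encryption point, assuming the third-party `cryptography` package (not in the standard library): if data is to be protected, the encryption and decryption happen at the two endpoints; the network in between only moves opaque bytes.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # shared between the two endpoints out-of-band
sender = Fernet(key)
ciphertext = sender.encrypt(b"meet at noon")   # encrypted *before* the data enters the network

# ...routers and ISPs in the middle see only ciphertext, never the message...

receiver = Fernet(key)
print(receiver.decrypt(ciphertext))            # b'meet at noon', recovered only at the far end
```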
- Back to WHO. The TCP/IP protocols don't have any sort of tracing technology to determine where a user is situated at a certain time. On the other hand, it's not very difficult to follow packets or reverse-engineer the allocation of IP addresses, since ISPs are the ones distributing them. These ISPs keep logs of which IP was assigned at what time, and so any activity that logs an IP on the Web can probably be traced back to a physical location. Cookies also allow web sites to have a memory (this was the original idea - servers are stateless). They also make a user very traceable. Workarounds: go to an internet cafe, or some other location where many people share the same internet access point.
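A toy sketch of the ISP-log lookup described above (all addresses, times and account names invented): an assignment log maps "which IP, at what time" back to a subscriber account, which is all it takes to turn a web server log entry into a physical identity.

```python
from datetime import datetime

# Hypothetical ISP assignment log: (ip, lease_start, lease_end, subscriber)
ASSIGNMENT_LOG = [
    ("203.0.113.7", datetime(2011, 10, 1, 8, 0), datetime(2011, 10, 1, 20, 0), "acct-4471"),
    ("203.0.113.9", datetime(2011, 10, 1, 9, 0), datetime(2011, 10, 1, 21, 0), "acct-9052"),
]

def who_had_ip(ip, when):
    """Return the subscriber that held `ip` at time `when`, if logged."""
    for logged_ip, start, end, subscriber in ASSIGNMENT_LOG:
        if logged_ip == ip and start <= when <= end:
            return subscriber
    return None

# A web server sees "203.0.113.7 at 14:32"; the ISP log turns that into an account.
print(who_had_ip("203.0.113.7", datetime(2011, 10, 1, 14, 32)))   # acct-4471
```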
- WHAT: The next generation of the TCP/IP protocol will mark each packet with an encryption key that identifies the origin of the packet. There is also a flurry of programs that now exist to filter packets and attempt DPI (deep packet inspection). Often the original motivation for this comes from commerce, where employers want to keep tabs on their employees (ex: porn, sports, etc.). Once again, this is not built into the TCP/IP layer, but can be added at an access node. Workarounds: encrypt your packets, which makes it impossible to know what the contents are.
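A deliberately naive sketch of keyword-based content filtering (the crude end of what packet inspection does), with an invented policy. It also shows why the encryption workaround defeats it: the same request, once encrypted, matches nothing.

```python
BLOCKED_KEYWORDS = [b"sports", b"poker"]   # invented policy for illustration

def should_drop(payload: bytes) -> bool:
    """Toy packet filter: drop any payload containing a flagged keyword."""
    return any(word in payload.lower() for word in BLOCKED_KEYWORDS)

print(should_drop(b"GET /live-sports-scores HTTP/1.1"))  # True  - plaintext is easy to read
print(should_drop(b"\x8a\x03\xf1\x5e..."))               # False - ciphertext reveals nothing
```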
- WHERE: Once again, because of business, a lot of different pieces of software have been written to geolocate an IP address (map it back to a physical location). This came mainly out of the need to identify shoppers and offer them a more customized experience (prices in your currency), serve relevant ads depending on your location, and offer better services altogether (serve your weather data more accurately). Another important point is restricting access to content for users outside a certain geographical area, to respect the jurisdictions and particular laws of countries. In brief, commerce has aptly filled the gap in terms of locating users physically in space - something government is likely to be happy about (its laws can be more easily enforced) since it won't have to invest in this type of technology itself. It is also possible to evade this type of tracking, by using a proxy for example.
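A toy sketch of IP geolocation and geo-fencing as described above (the address ranges and country mapping are invented; real services use far larger databases): a lookup table turns an IP into a country, which can then localize prices and ads or block content by jurisdiction.

```python
import ipaddress

# Hypothetical prefix-to-country table for illustration only.
GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "CA",
    ipaddress.ip_network("198.51.100.0/24"): "FR",
}

def country_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for network, country in GEO_TABLE.items():
        if addr in network:
            return country
    return "unknown"

def allowed(ip: str, licensed_countries=frozenset({"CA"})) -> bool:
    """Geo-fence: serve the content only in jurisdictions where the licence applies."""
    return country_of(ip) in licensed_countries

print(country_of("198.51.100.42"))   # FR    - currency, ads, weather can be localized
print(allowed("198.51.100.42"))      # False - content blocked for this jurisdiction
```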
P5
---
regulating code
- Technologies that make the web more efficient usually also make it more regulable. We're living in a political climate where resistance to empowering government with more control over digital matters is weakening in the name of our security - especially after 9/11. Governments have thus influenced, directly or indirectly, the development of architectures and designs that are favorable to regulation.
- Ex1: before internet telephony, it was relatively easy to know which wires to tap. With VoIP, the packets just take the fastest route, so it becomes very hard to "wiretap" those calls. Well, in 2005 the FCC (Federal Communications Commission) ruled that VoIP services "must be designed so as to make government wiretapping easier". Control over an industry like the phone industry is fairly simple and effective: there are only a few players, and it's easy to verify whether they actually implement this "architectural feature". This is an example of government intervening directly to make the design of the internet more regulable - and being a company that goes against federal law is never a good position.
- Ex2: phones pt2. Cell phone companies need to know which user made a call from where in order to bill him/her properly and to ensure seamless service. The FBI has lobbied to have the amount of data collected from users go up - initially to know where emergency calls came from (very fair). Unfortunately, it has since required that cell phone companies keep all of this data about all their users, so that the government can use it whenever it believes it could be useful. This data is meaningless to the companies, but they must keep it in case the government decides it needs to have a look.
- Ex3: Cryptography: the Clipper chip - a technology that the government wanted as a default - left a back door open for the government. Its strategy to make this happen was to heavily subsidize the development of the chip. If it could make the chip much cheaper than all other chips, it would effectively start regulating and controlling the market for that technology by simply flooding the market with a new standard. This strategy actually failed, so the government started pushing for all encryption technology manufacturers to include this back door in their designs.
- Ex4: in 1998 Cisco made a router that had a switch allowing data to be encrypted; however, the government had the ability to switch it off on demand.
- In all of these examples, government regulates people's behavior indirectly by affecting the development of technology directly. By regulating code, you can regulate people. In the end, governments don't even need to regulate the internet itself (even if they wanted to); all they need to keep an eye on are the intermediaries. All in all, if governments have enough control over ISPs, they can regulate the net pretty much however they want. Code writers therefore decide how regulable the net will be - but as this business becomes more commercial (as it is), government has increasing power over the activity. The net was originally programmed by people who were very idealistic and didn't belong to a particular company or group with commercial interests. This is not the case anymore - big companies currently rule cyberspace. If government can put pressure on these companies to mold the technology the way it wants, we are heading towards a very regulated net. (A short list in the US: Microsoft, Apple, HP, Facebook, Twitter, IBM, Google, Adobe, Cisco, eBay, Intel, Yahoo.) Companies don't want to be making software that breaks the law. Mainly, government acts within the realm of software development by creating rules and by purchasing or funding certain products.
- Ex5: Google censoring its search results in China (whatever is too sensitive politically). The fact that Google complies with the government's demands shows us that the value of China's market is well worth sacrificing their neutral search principle. To extend this a little - people in China get 10 years in prison for criticizing the government; to enforce this, ISPs and companies including Yahoo and Microsoft often have to cooperate with the government.
- East Coast code VS West Coast code (Washington vs Silicon Valley)
- Z-theory (Zittrain): the internet has allowed us to do great things in the last few years (there are many, many myths surrounding tech companies and their founders), and we tend to focus on this mythical aspect of tech. We like to think of Steve Jobs or the Google guys starting in their garages to eventually become billionaires. We like to always refer back to these positive achievements, but rarely try to figure out what the flip side is. The fact is, the power of code can do just as many bad things as good ones. According to Zittrain's research, there is no apparent reason why something like this (an "internet armageddon") has not happened yet - which is very worrisome. We have not yet experienced a digital 9/11 fallout, but Zittrain asserts that it is definitely within the realm of possible things. He argues that when it DOES happen, it will be the spark needed for government to step in and "complete the work of transforming the net into a regulable space". "Terror motivates radical change". What will be the equivalent of the digital Patriot Act?
- This very odd situation then currently exists where the interests of commerce indirectly serve the interests of government - which in turn gives them the OK to decide on whatever they think is "best". If commercial interests govern how the net is built, it is then reasonable to ask: are these interests compatible with those of the public? Is whatever is good for Facebook also good for all of us?
P6
---
Cyberspaces
- Multiple identities. Just a thought... if we combine the email verification techniques for accounts with the phone/address verification procedures of companies like PayPal, etc., what happens to multiple identities on the net? Most accounts for online services are tied to an email address - and most people can simply create as many email addresses, and thus identities, as they desire. But if authentication toughens up, how will this still be possible?
- Different spaces afford different possibilities. In its first "iteration", cyberspace was mainly text-based due to technical constraints. This enabled and disabled certain types of behavior. People who were literate and expressed themselves well usually thrived in this sort of environment. Blind and deaf people could easily integrate without anyone knowing of their limitations. As audio, video and graphics slowly took over the Internet, these things changed. What is possible and what is not in these spaces depends on the architecture. To make a comparison: AOL restricts the number of users in a chat room to 23. How can crowds gather in such a space? Perhaps this is partly why physical protests still happen so often - there are certain limitations to these cyberspaces when it comes to gathering.
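A sketch of that AOL example as "architecture doing the regulating" (the class and names are invented for illustration): the 23-person cap is not a posted rule, it is simply how the space is built, so a crowd cannot form no matter what the users want.

```python
class ChatRoom:
    MAX_USERS = 23   # the cap mentioned in the notes, hard-wired into the space itself

    def __init__(self):
        self.users = []

    def join(self, user: str) -> bool:
        if len(self.users) >= self.MAX_USERS:
            return False   # no rule to break, no norm to defy - the room is simply "full"
        self.users.append(user)
        return True

room = ChatRoom()
admitted = sum(room.join(f"user{i}") for i in range(30))
print(admitted)   # 23 - the architecture, not a law, stops the 24th person
```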
- Imagine that one of these service providers has a problem with one or many of its features (it wants to disable something, or discourage a certain use of X or Y). It basically has 4 avenues it can explore: 1) rules 2) values 3) price 4) architecture. In most cases, architectural changes are favored, as these kinds of changes usually tap into the idea that "this is how things are" - that they are inherent to the nature of the system. They can also be made less apparent (than pricing, for example).