Seda Guerses's lecture

From XPUB & Lens-Based wiki
Latest revision as of 10:59, 7 April 2011


Anonymity



In her evening talk after the workshop she gave at PZI, Seda Gürses presented a critical review of some of the proposals for technology framed as privacy research. To define those privacy technologies, she explored the different kinds of translations they make, through their potentials and limitations, to see how successful those technologies are.

Definitions


Privacy, data-protection and surveillance
  • Privacy, in a sense similar to Solove's concept, is non-universal. Its vague definition is what makes privacy powerful, because it becomes an issue that can be raised like freedom. It mainly deals with the opacity of the individual and the protection of their rights.
  • Data protection is a procedural set of rules, closer to the European definition. The goal is to increase the accountability of the organizations that process information, and it is reached by fulfilling certain conditions (transparency, asking for permission...). Personal data is a central concept here, defined as anything that can be used to identify someone.
  • Surveillance, contrary to the first two concepts, which are based on the individual, is focused on the community. It is inspired by the concept of the panopticon and its self-discipline aspect. It refers to any kind of data gathering that decides who fits the norms. It is a statistical and sorting tool.
Privacy research paradigms: assumptions in computer science
  • Privacy as confidentiality: anonymity, or "the right to be left alone". You should not reveal data, and you can hide in a digital sphere.
  • Privacy as control: privacy is the right of an individual to decide what information about themselves should be communicated to others, and under which circumstances. It involves the separation of identities. The question is how to implement data protection in information systems.
  • Privacy as practice: "freedom from unreasonable constraints on the construction of one's own identity". In other words, we are not born with an identity; it develops as we interact with others. This implies developing feedback tools for the community.

We need all three of these paradigms.

Privacy as confidentiality


1969: first introduction of the term of privacy to security engineers.

1980s: not only should you hide the content of your communication, but also who is communicating with whom. Mainly cryptographers were interested in this at the time. David Chaum introduced the blind signature, which lets a document be signed without the signer seeing its content, so the signature cannot later be linked back to you. Later came single/multiple-show selective disclosure: the idea is to reveal only one aspect of your identity to prove a claim, without disclosing entirely who you are. These concepts aimed at minimizing the data revealed.
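
Chaum's blind-signature idea can be sketched with textbook RSA. This is a hypothetical toy (tiny key, no padding or hashing), not a production scheme: the signer operates only on a blinded value and never sees the message, yet the unblinded result verifies as an ordinary signature.

```python
import math
import random

def rsa_blind_sign_demo(m=42):
    """Toy Chaum-style RSA blind signature (illustrative key sizes only)."""
    # Tiny RSA key: modulus n = p*q, public exponent e, private exponent d
    p, q = 61, 53
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))

    # User picks a random blinding factor r coprime to n
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break

    blinded = (m * pow(r, e, n)) % n       # what the signer actually sees
    blinded_sig = pow(blinded, d, n)       # signer signs without seeing m
    s = (blinded_sig * pow(r, -1, n)) % n  # user unblinds: s = m^d mod n

    assert pow(s, e, n) == m % n           # s verifies as a signature on m
    return s
```

Because the blinded value is m·r^e mod n, signing it yields m^d·r mod n, and multiplying by r⁻¹ leaves m^d: a valid signature on m that the signer cannot link to the blinded value it saw.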

2000s: the community had grown. Proposals of anonymizers appeared: based on erasing your traces, they should prevent a third party from finding out who is communicating with whom. The question is to find a mathematical model that defines the decision point where anonymity is strong enough. Later, Sweeney brought up the idea of database anonymization, where you can still use the database for calculations but you are not able to identify individuals. Then came the k-anonymity concept, where parts of your data are hidden, like the last digits of your birth date, so that every record is indistinguishable from at least k−1 others. These proposals, among others, have been systematically rejected.
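
The k-anonymity idea can be sketched in a few lines of Python: group records by their quasi-identifiers and take the smallest group size. The records and the generalization rule below are invented for illustration.

```python
from collections import Counter

def k_of(records, quasi_ids):
    """Smallest group size when records are grouped by their quasi-identifiers."""
    groups = Counter(tuple(r[c] for c in quasi_ids) for r in records)
    return min(groups.values())

def generalize(r):
    """Coarsen quasi-identifiers: birth year to a decade, ZIP to its first 3 digits."""
    return {**r, "birth": r["birth"] // 10 * 10, "zip": r["zip"][:3] + "**"}

people = [
    {"birth": 1983, "zip": "30123", "diagnosis": "flu"},
    {"birth": 1986, "zip": "30145", "diagnosis": "asthma"},
    {"birth": 1981, "zip": "30199", "diagnosis": "flu"},
]

print(k_of(people, ["birth", "zip"]))                           # 1: every record unique
print(k_of([generalize(r) for r in people], ["birth", "zip"]))  # 3: all indistinguishable
```

The raw table is 1-anonymous (everyone is unique on birth year and ZIP); after coarsening, the three records fall into one group and the table is 3-anonymous.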

Research shows that if you have two anonymized databases, you can still merge them and identify people. Anonymization is impossible: any data is personal data, since it can be linked to an individual. Should it then be subject to data protection? If someone holds data about us, can they still claim to know us? This view also overlooks the economic logic at work: social sorting can still be done even if we are not personally identified.
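
The merging attack described here can be sketched as a join on shared quasi-identifiers, in the spirit of Sweeney's voter-roll linkage work; both datasets below are hypothetical.

```python
# "Anonymized" medical records: names removed, quasi-identifiers kept
medical = [
    {"birth": "1962-07-31", "zip": "02138", "sex": "F", "diagnosis": "flu"},
    {"birth": "1975-01-02", "zip": "02139", "sex": "M", "diagnosis": "asthma"},
]

# A second, public dataset (e.g. a voter roll) that still carries names
voters = [
    {"birth": "1962-07-31", "zip": "02138", "sex": "F", "name": "A. Example"},
]

def link(a, b, keys=("birth", "zip", "sex")):
    """Join two datasets on their shared quasi-identifiers."""
    index = {tuple(r[k] for k in keys): r for r in b}
    return [(r, index[key])
            for r in a
            if (key := tuple(r[k] for k in keys)) in index]

for record, voter in link(medical, voters):
    print(voter["name"], "->", record["diagnosis"])  # re-identified: A. Example -> flu
```

Neither dataset identifies anyone on its own; the join does, which is why removing names alone does not anonymize.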

So does this mean we can't anonymize data?

Assumptions
  • there is no trust on the internet
  • you are individually responsible for the spreading of your data
  • if someone knows your data, they know you

In that case, technical solutions appear to be better than legal ones.

Problems
  • We have to control ourselves to preserve privacy, which is paradoxical because control is what we wanted to avoid in the first place. The idea of anonymity is also problematic in a networked world: it is not so much about proving things about ourselves as about our connectedness, and it is a heavy burden to act differently from society.
  • Anonymity gives a false sense of privacy because it makes you feel you can make a difference, while you are still accepting surveillance. By being anonymous, you have no power to define how the databases about you work.
  • Not only is privacy disappearing, but its definition is problematic. Its absolutism is anchored in the past. Who gets to decide what should be private and what should be public? We are witnessing an erosion of the private and the public.
  • From an information perspective, when I don't want to reveal data I belong to the category "non-revealing data", and my pattern can be understood. Anonymity doesn't deal well with persistence: if you use anonymity tools all the time, you form a pattern and then have no protection anymore.
  • Data protection assumes identifiable data, so by being anonymous you step outside the field of data exchange and may not be able to call on the law if anything goes wrong. You are free, but you have no protection.

However, we sometimes want to step out and still reveal information (in the case of WikiLeaks, for example), so we still need anonymity.

Privacy as practice


Assumption

People would say that giving information is an exchange in order to get something back; it becomes almost an economic argument.

Problems
  • Information about others reveals things about me; my personal choices have an effect on others.
  • We don't act on economics alone. A lot of our practices are social: how we use SMS, how we use the internet, etc.
  • Data mining is here to stay, so we should make it accessible. What tools would allow users to affect the flows of information, for example to see what is known about them in social networks? Governments and companies might not want to develop those practices, so we have to ask how to do it ourselves.

Conclusion


We should avoid monopolizing what privacy technically is. It is not just data that you own, nor a product that can be sold to you. Privacy technology is very difficult to put to work, but we should follow it as closely as possible, because privacy is not dead and we are in the process of creating it.