User:Michaela/thesis


==INTRO==
==== '''<ownership''' ====


How do we leave a trace of an online presence?
The term 'data erasure': the only way to erase data permanently is to physically destroy the hard disk drive. In the past, shredders were used to destroy confidential, secret papers; now they are replaced by data shredders.
A large amount of digital traces, in the form of caches, cookies and the browser's footprint, is stored, tracked and dumped in the system. Various tools, such as PGP (for encrypting email), browser plug-ins and virtual private networks (VPN), are used to prevent our location from being tracked and our data from being exposed. All of these limit the traces we leave, but they do not completely erase them. Driven by my personal motivation to explore the limitations of technology and different methods of resistance, I became sceptical about the possibility of not leaving traces, or in other words the (im)possibility of erasing the digital footprint.
This raises questions about e-waste and digital recycling, digital trash and data recycling. Beyond the physical disposal of data storage in digital trash camps, outrageously shipped to remote areas in Africa, there is the ethical issue of who has the right to intervene with someone's data.
The common methods of erasing files from a system do not delete them permanently. Those files remain hidden, abandoned on hard drives. The traces left behind remain present in the medium over time, allowing them to be retrieved and to become a subject of study.


==TECHNICAL ASPECT==


<DATA RECOVERY / DATA ERASURE / CLUSTERS / PARTITIONS><br>
The discovered cluster must be treated as an ideal representation of the data, one that can be used to recover the original data from this "ideal" format. This is the idea of the data recovery approach: not only to use the data for finding clusters, but also to use the clusters for recovering the data. In general, the data recovered from the summarized clusters cannot fit the original data exactly, due to various factors such as the presence of noise in the data.
[[File:Linear-svm-scatterplot.svg|thumbnail]]
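As an aside, the idea that a cluster summary only approximately gives the data back can be sketched in a few lines of Python. This is a generic illustration on synthetic points, using scikit-learn's KMeans, and not part of the recovery project itself: every point is "recovered" as the centroid of its cluster, and the noise around the centroids is lost.
<pre>
# Minimal sketch: clusters as a lossy summary from which data is "recovered".
# Synthetic, illustrative example only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Original data: three noisy groups of 2-D points.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
original = np.vstack([c + rng.normal(scale=0.7, size=(100, 2)) for c in centers])

# Step 1: use the data to find clusters (the summary).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(original)

# Step 2: use the clusters to "recover" the data: each point is replaced
# by the centroid of the cluster it was assigned to.
recovered = kmeans.cluster_centers_[kmeans.labels_]

# The recovery is only approximate: the noise around the centroids is gone.
mean_error = np.linalg.norm(original - recovered, axis=1).mean()
print("mean reconstruction error:", round(mean_error, 3))
</pre>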
Data Entry: research on how to completely erase your data from the operating system. The rm command does not delete a file permanently, i.e. the removal is not irreversible. I found it intriguing that there is a command-line tool, shred (available only on Linux), which is rarely used because of its damaging consequences. Shred erases the information by overwriting it 20 times or so. There is no shred command in OS X; instead there is a replacement, a secure remove command called srm. In comparison, on Mac OS X, which is based on a Unix system, a file could be overwritten only three times at a time. In relation to that, I find interesting the approach of defeating the privacy issue through information overflow, or noising the channel: overwriting something an infinite number of times instead of trying to denoise it.
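To make the overwriting idea concrete, below is a naive Python sketch of a multi-pass overwrite followed by deletion. It only illustrates the principle behind tools like shred and srm and is not a replacement for them: journaling filesystems, SSD wear levelling and backups can keep copies that this code never touches, and the file name and pass count are arbitrary examples.
<pre>
# Naive multi-pass overwrite, illustrating the principle behind shred/srm.
# Not a real secure-delete tool: the filesystem and the drive itself may
# keep copies of the data that this code never reaches.
import os

def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite the contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # push this pass out to the device
    os.remove(path)                     # finally unlink the file

if __name__ == "__main__":
    with open("example.txt", "w") as f:
        f.write("secret notes")
    overwrite_and_delete("example.txt", passes=3)
</pre>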


<<The story of the Ghana digital trash camps
<<< e-waste, data remanence, data theft
or how data can be recovered and serve as evidence


comment: to find more examples and stories


=PROJECT=
===='''<memory'''====
==DESCRIPTION==
This chapter will be about the technical aspect of the data recovery process.
I have started a process of retrieving old hard disk drives (data storage) and restoring the data from them, in order to examine abandoned data and leftover traces. By doing this I am exploring the problematic aspect of data erasure.
More specifically: clusters, partitions, the Gutmann method and modern methods of data recovery.
The hard drive, in this case a rather obsolete object, serves as the ultimate storage of data, a container of past and present, which can also be invaded and investigated further.
The traces of information are exponential; they contain various sources... they travel and scatter, being traded and transacted, or serve as found footage for artistic intervention. This project is triggered by the idea that the data trace is permanent and cannot be destroyed or erased completely. It has been encapsulated within time, the code and the medium itself.


Links (a legend of multiple passes of overwriting):<br>
http://grot.com/wordpress/?p=154<br>
http://www.pcworld.com/article/209418/how_do_i_permanently_delete_files_from_my_hard_disk.html<br>
https://ssd.eff.org/tech/deletion<br>
http://security.stackexchange.com/questions/10464/why-is-writing-zeros-or-random-data-over-a-hard-drive-multiple-times-better-th


Moreover, there is a well-known reference article by Peter Gutmann on the subject. However, that article is somewhat old (15 years), and newer hard disks might not operate as it describes. Some data may fail to be totally obliterated by a single overwrite, due to two phenomena:
*We want to write a bit (0 or 1), but the physical signal is analog. Data is stored by manipulating the orientation of groups of atoms within the ferromagnetic medium; when read back, the head yields an analog signal, which is then decoded with a threshold: e.g. if the signal goes above 3.2 (a fictitious unit), it is a 1; otherwise, it is a 0. But the medium may have some remanence: possibly, writing a 1 over what was previously a 0 yields 4.5, while writing a 1 over what was already a 1 pumps the signal up to 4.8. By opening the disk and using a more precise sensor, it is conceivable that the difference could be measured with enough reliability to recover the old data.
*Data is organized in tracks on the disk. When writing over existing data, the head is roughly positioned over the previous track, but almost never exactly over it. Each write operation may have a bit of "lateral jitter". Hence, part of the previous data could possibly still be readable "on the side".
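The first phenomenon can be imitated numerically with the fictitious signal levels mentioned above. Real drives do not expose this analog signal to software, so the Python sketch below is purely illustrative: a normal read decodes with a single threshold, while a hypothetical precise sensor compares the leftover signal against the two possible "old" levels.
<pre>
# Toy model of analog remanence, using the fictitious levels from the text:
# after overwriting with a 1, an old 0 leaves ~4.5 and an old 1 leaves ~4.8.
import random

OLD_BIT_LEVEL = {0: 4.5, 1: 4.8}

def read_normal(signal):
    # Ordinary decoding: anything above the 3.2 threshold is a 1.
    return 1 if signal > 3.2 else 0

def guess_old_bit(signal):
    # Hypothetical precise sensor: which old level is the residue closer to?
    return 0 if abs(signal - 4.5) < abs(signal - 4.8) else 1

random.seed(1)
old_bits = [random.randint(0, 1) for _ in range(10)]
# Every position is overwritten with a 1; a little measurement noise is added.
signals = [OLD_BIT_LEVEL[b] + random.gauss(0, 0.05) for b in old_bits]

print("bits read normally:", [read_normal(s) for s in signals])   # all 1s
print("old bits          :", old_bits)
print("guessed old bits  :", [guess_old_bit(s) for s in signals]) # mostly the old bits
</pre>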


==METHODOLOGY==
I am working on a series of projects related to the topic of these found traces, which will lead up to the final graduation project. I would like to incorporate the poetic aspect of the retrieved data and the idea of erasure as the main motive in the final work. <br>
I built a simple methodology of data recovery. The data storage approached is a collection of ten hard drives, each accompanied by information about the hard drive: the source of origin ()<br>
 
[[File:Diagram dataRec-01.png|diagram_dataRecovery | right | 400px]]
The documentation of the recovery process consists of a video capture of the software used and the sound of a spinning hard drive disk (unable to boot). In order to organize this accidentally found archive, I simply described the size and model of each hard drive, the remaining time of the process and the amount of restored data.
 
'''Data collection''' consists of rich personal content: image files, video files, audio files, code or text logs, and trash (unrecoverable files, parasite files, etc.)
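One way to keep an overview of such a collection is to catalogue the recovered files by category and size. The sketch below is a hypothetical example: the directory name and the mapping from file extensions to categories are placeholders, not the actual archive.
<pre>
# Sketch: catalogue a directory of recovered files by category and total size.
# "recovered/" and the extension groups are hypothetical placeholders.
import os
from collections import defaultdict

CATEGORIES = {
    ".jpg": "image", ".png": "image",
    ".mp4": "video", ".avi": "video",
    ".mp3": "audio", ".wav": "audio",
    ".txt": "text/code", ".log": "text/code", ".py": "text/code",
}

def catalogue(root):
    stats = defaultdict(lambda: {"files": 0, "bytes": 0})
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            group = CATEGORIES.get(os.path.splitext(name)[1].lower(), "trash/unknown")
            stats[group]["files"] += 1
            stats[group]["bytes"] += os.path.getsize(os.path.join(dirpath, name))
    return dict(stats)

if __name__ == "__main__":
    for group, info in catalogue("recovered").items():
        print(group, info["files"], "files,", info["bytes"], "bytes")
</pre>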
Sample of found material:<br>
 
*'''The factory set''' is a 23-minute-long video documentation of a pineapple factory in Ghana.
The camera follows the production line of workers assembling pineapples, scene by scene revealing every detail of the process.
 
The moving images are rich in their source of origin, drawing a highly graphical, dense scene of workers in the factory, motionless in their everyday routine.<br>
The images serve as data trails, conveying the trace of the failure of the recovery process, marked as being "broken" or interrupted. Sometimes the freezing of a frame creates a conscious break from the meditation, but it also suggests that the footage has been manipulated.
 
==PRACTICE – BASED==
Descriptions of previous projects
 
===='''<code''' ====
This chapter is about the performative aspect of the code.
Code as Language
 
 
==== '''<space'''  ====
This chapter will be about the final work. Description/ set up etc.


Useful links:
A Guide to Essay Writing

Thesis Guidelines

Draft_ thesis plan:

ABSTRACT

QUESTIONS I WANT TO ADDRESS IN MY ESSAY

#problem_aspect #1

THEME: OWNERSHIP/ DIGITAL PROPERTY/ PROPERTY RIGHTS

#problem_aspect #2
SUB_THEME: PRIVACY IN DIGITAL REALM
The ethical issue I want to address is: who has the right to withdraw someone's data, and how could this data be used, reused or misused?

#problem_aspect #3
SUB_THEME: AUTHORSHIP/CO-AUTHORSHIP/ MULTIPLE - AUTHORS
The problematic aspect of the recovered data: who is the actual author of the final work?
