User:Francg/expub/thesis/thesis-outline

From XPUB & Lens-Based wiki

Latest revision as of 17:27, 15 October 2017


Thesis Outline 5.10.17

Screen-scraping technology for data exposure



This project began with the need to find resourceful workflows for more efficient research, data collection and data exposure, in relation to an existing socio-political event that could be seen as an opportunity for data scraping.

With socio-political issues of great international significance, such as the territorial conflict between Catalonia and Spain, news media produce a huge amount of data that is constantly updated, widely shared and continually reshaped. This data reaches an online user, who instantly becomes a significant network actor by sharing the content with other users across countless websites and news headlines. The information is filtered through different viewpoints and is therefore subjective, not neutral and sometimes highly speculative.

To get as much data as possible out of sources that continuously update their material, I want to employ so-called “generative techniques”. To do this, I will work with “Beautiful Soup”, a Python library for screen-scraping data from the web, which will allow me to dissect a document and extract what is important from it. That is to say, my research will involve a significant technological challenge that will lead to new tools and working environments in which programming languages play a central role.
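As a minimal sketch of this scraping step, the snippet below uses Beautiful Soup to pull headlines out of a page. The HTML string and the `h2.headline` selector are hypothetical stand-ins for whatever structure an actual fetched news page would have:

```python
from bs4 import BeautifulSoup

# Hypothetical snippet standing in for a fetched news page.
html = """
<html><body>
  <h2 class="headline">Catalonia declares strike</h2>
  <h2 class="headline">Madrid responds to referendum</h2>
  <p class="byline">Agencies</p>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Select only the headline elements and keep their text content.
headlines = [h.get_text(strip=True) for h in soup.select("h2.headline")]
print(headlines)
```

In a real run, the `html` string would come from fetching the page (e.g. with `urllib.request` or `requests`), and the selector would be adapted to each target site's markup.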

Ideally, I will run a script that fetches all the needed web pages, uploads them to a server (pzwart) and screen-scrapes the updated HTML to get the results as Unicode-encoded text. This could be, for example, a whole article or just the news headlines. The next step will be to format all this information in a layout that can be printed in the form of a book. Furthermore, the scraped data could even be selected and split automatically into two opposing groups, which would require some form of syntax recognition. This means I could end up with two opposing bodies of data focused on one and the same issue.
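The splitting step could start much more crudely than full syntax recognition. The sketch below, with entirely hypothetical keyword lists and headlines, routes each scraped headline into one of two opposing corpora by keyword match:

```python
# Crude, hypothetical stand-in for the "syntax recognition" step:
# route each scraped headline into one of two opposing corpora.
PRO_KEYWORDS = {"independence", "referendum", "self-determination"}
CON_KEYWORDS = {"unity", "constitution", "illegal"}

def classify(headline: str) -> str:
    """Assign a headline to a corpus based on simple keyword overlap."""
    words = set(headline.lower().split())
    if words & PRO_KEYWORDS:
        return "corpus_a"
    if words & CON_KEYWORDS:
        return "corpus_b"
    return "unsorted"

headlines = [
    "Catalonia holds independence referendum",
    "Court rules vote illegal",
    "Rain expected in Barcelona",
]

# Group headlines by the corpus they were assigned to.
groups = {}
for h in headlines:
    groups.setdefault(classify(h), []).append(h)
print(groups)
```

A real version would need something far more robust (stemming, phrase matching, or a trained classifier), but the overall shape, scrape then route into opposing bodies of text, would stay the same.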

This could also work as a form of conscious live streaming: every new data modification would be pushed to a website (ideally hosted on the pzwart server), where users could track the data, read the information and follow it back to the original source if needed. Perhaps there could be data visualization, where all updated data is illustrated graphically and counted, creating a live infographic pattern. Even better, new data could be added as a new page of this ongoing book. This would allow interested users, whether designers, non-designers, activists, politicians, writers or people with completely different profiles and levels of specialization, to select, download or print just what they want.
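The counting behind such an infographic can be very simple. Assuming a hypothetical update log in which each entry records the source of one detected change, a tally per source gives the raw numbers a visualization could be drawn from:

```python
from collections import Counter

# Hypothetical update log: each entry is the source of one detected change.
update_log = [
    "elpais.com", "theguardian.com", "elpais.com",
    "lavanguardia.com", "elpais.com", "theguardian.com",
]

# Tally how many updates each source produced, most active first.
counts = Counter(update_log)
print(counts.most_common())
```

The same tally could be recomputed each time the scraper runs, so the infographic updates live alongside the archive.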

What is exciting about this is the transformation of web-based data into a live stream and into printed matter, which completely changes the way we experience information: from what you might (or might not) have seen or heard, to what is actually being published online. May this form of data exposure open a dialogue between human and machine, highlighting the potential of using code without losing the quality and craft of handmade work.


- - -
Reading sources:
Read Where I am - Exploring New Information Cultures
Networks without a cause - A critique of Social Media
Cyburbia - The Dangerous Idea that's changing how we live and Who we are
Pandora's Hope - Essays on the reality of Science Studies
- - -
Websites:
https://twitter.com/guardian_diff
http://www.b-list.org/weblog/2010/nov/02/news-done-broke/
http://la3.org/~kilburn/blog/catalan-government-bypass-ipfs/





*Thesis Outline after group review 5.10.17*
Screen-scraping technology for data change exposure


This project began with the need to find resourceful workflows for more efficient research, data collection and data exposure, in relation to an existing socio-political event that could be seen as an opportunity for data scraping. With socio-political issues of great international significance, such as the territorial conflict between Catalonia and Spain, news media produce a huge amount of data that is constantly updated, widely shared and continually reshaped. The information is filtered through different viewpoints and is therefore subjective, not neutral and sometimes highly speculative.

To get as much data as possible out of sources that continuously update their material, I want to employ so-called “generative techniques”. To do this, I will work with “Beautiful Soup”, a Python library for screen-scraping data from the web, which will allow me to dissect a document and extract what is important from it. That is to say, my research will involve a significant technological challenge that will lead to new tools and working environments in which programming languages play a central role.

Ideally, I will run a script that fetches all the needed web pages, screen-scrapes the updated HTML to extract the results in the form of articles, and finally publishes this content to a website (which will function as an online archive or database). Simultaneously, I will also work with “diffengine”, another tool that tracks RSS web feeds in a computer-readable way, which will allow me to see when content changes. When changed content is found, a snapshot can be saved to the website (a feeds archive) that I will use to store and track news live. This way of experiencing information can help draw attention to data transformation and to how news is constantly reshaped without readers being aware of it, which can be quite useful for research.
In a way, this can work as a kind of conscious live stream, recording every targeted news change. The data could also be formatted as PDF documents, which would allow interested users, whether designers, non-designers, activists, politicians, writers or people with completely different profiles and levels of specialization, to select, download or print just what they want. A book (or a series of diff books arranged chronologically or by web source) could be printed by converting all this ongoing updated data into a PDF, EPUB or other file format.


Other notes: thesis length is 7000–8000 words. What is it? Describe it. What is its aim? Can it be transmitted through different mediums or publishing formats? Which articles and references are used to write it? Refer back to the project: how does it relate to your actual research? Conclusion?

Qian: will you choose one way to show the project, or several? Do you want to transfer the online info to a more subjective perspective?

Catalina's comments: 1. How do you want to present the final result? Would that be a website, a book, an installation? 2. Do you want to demonstrate or analyze how the news media is used, or how it is manipulated, in this particular case? 3. What moved you to work on this political issue, and why is it interesting for you and the audience?