User:Francg/expub/thesis/thesis-outline: Difference between revisions

From XPUB & Lens-Based wiki


<br>'''What do you want the thesis to be about?'''
<br>


I want the thesis to be a bridge between the digital and the handmade: research into the growth of techno-dependency across the evolution of media, how our daily online environment triggers our senses, how far information can potentially spread, how deeply our digital personas are embedded in us, and which future scenarios can be speculated from these ongoing issues, e.g.: "Who would be able to design a book in a post-apocalyptic digital era where Adobe no longer exists (and no similar software replaces it)? Maybe only coders." This discussion could be explored further and interpreted by suggesting new possible directions, drawing attention to the close connections between technology, politics, economy and design.
Screen-scraping technology for data exposure.


For instance, the book "Conversations" shows how a book can be designed using markup languages while still keeping a beautiful layout with code-based imagery. It offers a good example of a workflow based on existing platforms and tools, namely Etherpad (a web-based text editor; I personally like its numeric aesthetic and usability), LaTeX (for typesetting) and bash (shell scripting), and in this case the workflow involves "sociality" with a group of participants. That is to say, the social aspect should be an important factor in the development of my thesis, going deeper into a more concrete case study.
<br>This project began with the need to find resourceful workflows for more efficient research, data collection and data exposure, in relation to an existing socio-political event that could be seen as an opportunity for data scraping.
With socio-political issues of great international significance, such as the territorial conflict between Catalonia and Spain, the news media create a huge amount of data that is constantly updated, potentially spreadable and constantly morphing. This data reaches an online user, who instantaneously becomes an important network actor by sharing the content with other users across countless websites and news headers. The information is shaped by different viewpoints and is therefore subjective, not neutral and sometimes highly speculative.


Building a book through markup languages would be an inspiring challenge for people with completely different profiles and levels of specialization, such as writers, artists, activists, etc., but also a way to push myself to become a more self-sufficient designer: not a developer, but a more multidisciplinary designer who integrates code. Most importantly, by using these tools we will be questioning the process of creation, our active roles with technology, and their social significance as well.
In order to get as much data as possible out of sources that continuously update their material, I want to employ so-called "generative techniques". To do this, I will work with Beautiful Soup, a Python library for scraping data from web pages, which will allow me to dissect a document and extract what is important from it. That is to say, there will be an important technological challenge in my research, leading to new tools and working environments in which programming languages play a central role.
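As a first sketch of what this extraction step could look like with Beautiful Soup (the HTML snippet, tag and class names below are invented stand-ins for a real news page):

```python
from bs4 import BeautifulSoup

# Stand-in for HTML fetched from a news site; tag/class names are invented.
html = """
<html><body>
  <h2 class="headline">Headline one</h2>
  <p>Article text that we do not need.</p>
  <h2 class="headline">Headline two</h2>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Dissect the document and keep only what matters: the headline text.
headlines = [h.get_text(strip=True) for h in soup.find_all("h2", class_="headline")]
print(headlines)
```

In the real pipeline the `html` string would come from a fetched page, and the selectors would have to be adapted to each source's actual markup.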


<br>Will there be data visualization?
Ideally, I will run a script that fetches all the needed web pages, updates them onto a server (pzwart) and scrapes the updated HTML to get the results as Unicode-encoded text. This could be, e.g., a whole article or just the news headers.
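The "update onto a server" step amounts to keeping a timestamped snapshot of every fetch, so earlier states of a page are never overwritten. A minimal sketch, assuming the page has already been downloaded (the actual HTTP fetch is left out so the sketch stays self-contained, and the function name is my own):

```python
import pathlib
import tempfile
from datetime import datetime, timezone

def archive_snapshot(page_html: str, name: str, root: str) -> pathlib.Path:
    """Save one fetched page under a timestamped filename so every
    update of the source is kept rather than overwritten."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_dir = pathlib.Path(root)
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{name}-{stamp}.html"
    path.write_text(page_html, encoding="utf-8")  # Unicode text, as in the outline
    return path

# In the real pipeline page_html would come from an HTTP fetch; here it is a stub.
snapshot_dir = tempfile.mkdtemp()
saved = archive_snapshot("<html><body>news</body></html>", "frontpage", snapshot_dir)
```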
<br>What digital mediums are employed to produce such work?
The next step will be formatting all this information in a layout that can be printed in the form of a book. Furthermore, this data could even be scraped, selected and split automatically into two opposing groups, which would require some sort of complex syntax recognition. This means I could end up with two opposing bodies of data focused on one identical issue.
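The "complex syntax recognition" is an open problem; a deliberately naive keyword-based split shows the shape of the idea (the keyword lists and example headlines below are invented for illustration):

```python
# Naive stand-in for the syntax-recognition step: split scraped headlines
# into two opposing bodies by keyword. The keyword lists are invented.
GROUP_A = {"independence", "referendum"}
GROUP_B = {"unity", "constitution"}

def split_headlines(headlines):
    """Assign each headline to one of two opposing groups, or drop it."""
    side_a, side_b = [], []
    for h in headlines:
        words = set(h.lower().split())
        if words & GROUP_A:
            side_a.append(h)
        elif words & GROUP_B:
            side_b.append(h)
    return side_a, side_b

a, b = split_headlines([
    "Referendum day arrives",
    "Court defends the constitution",
])
```

A serious version would need real language processing rather than keyword matching, but the output is the same: two opposing data bodies built from one identical issue.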
<br>What information sources (case study) are going to be researched?
<br>Is there any particular digital culture type behind it?
<br>What experimental publishing forms or collaborative spaces can this body incorporate?
<br>Would this material be aimed at designers, non-designers, youth, politicians (who is the audience)?
<br>What is the pedagogical value of using such tools? Is it technological freedom?
<br>Can documentation bring out a dialogue between man and machine, highlighting the potential of using code without losing the quality and craft of handmade work?


<br>Maybe this is also a way for me not to become obsolete: the need to find more interesting and resourceful workflows that make us feel more useful.
This could also work as a conscious live stream, updating every new data modification onto a website (ideally hosted on the pzwart server), where users could track the data, read the information and go to the original source if needed. Perhaps there could be data visualization, where all updated data is illustrated graphically and also counted, creating a live, evolving infographic pattern. Even better, each new piece of data could be added as a new single page of this ongoing book. This would easily allow interested users (designers, non-designers, activists, politicians, writers, or people with completely different profiles and levels of specialization) to select, download or print just what they want.
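The "each update becomes a new page" idea can be sketched as a function that appends every scraped update as the next numbered Markdown page of the ongoing book, ready for later conversion (e.g. via pandoc or LaTeX). The function name and file layout are my own assumptions:

```python
import pathlib
import tempfile

def add_page(book_dir: str, title: str, body: str) -> pathlib.Path:
    """Append one scraped update as the next numbered Markdown page
    of the ongoing book; pages can later be converted or printed one by one."""
    folder = pathlib.Path(book_dir)
    folder.mkdir(parents=True, exist_ok=True)
    number = len(list(folder.glob("page-*.md"))) + 1
    page = folder / f"page-{number:04d}.md"
    page.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    return page

book = tempfile.mkdtemp()
p1 = add_page(book, "First update", "Scraped text goes here.")
p2 = add_page(book, "Second update", "The next update becomes the next page.")
```

Because every page is a standalone file, a reader could select, download or print just the pages they want, as described above.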


What is exciting about this is that the transition and separation between the digital and the analogue, or the digital and the physical realms, can also highlight the results of their collision, whether digitally or as some sort of printed matter.
What is also exciting is the transformation of web-based data into a live stream and into printed matter, which completely changes the way we experience information: from what you might (or might not) have seen or heard, to what is really out there being published online. May this form of data exposure bring out a dialogue between man and machine, highlighting the potential of using code without losing the quality and craft of handmade work.
This could be an initial connection to some previous essays on cybernetics and technology as human extensions.


<br>'''Bibliography'''
Sarah Garcin: the PJ Machine (Publishing Jockey) -> https://www.youtube.com/watch?v=mvL6N168Dg4
<br>Ricardo Lafuente -> https://pzwiki.wdka.nl/mediadesign/Lettersoup
<br>http://conversations.tools
<br>https://www.forkable.eu/generators/dit/o/free/A3/dit-A3-001.pdf
<br>https://archive.org/details/designforbrain00ashb
<br>Some tools:
<br>https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
<br>http://www.latex-project.org/
<br>http://pandoc.org/
<br>https://en.wikipedia.org/wiki/epub
<br>https://www.scribus.net/
<br>Tech blogs:
<br>[https://www.lifewire.com/big-ways-to-track-viral-trends-online-3486303 Five ways to track viral things Online]
<br>https://www.newswhip.com/
<br>https://techcrunch.com/
<br>https://thenextweb.com/


- - -

Revision as of 23:25, 4 October 2017


Thesis Outline






- - -

Session 2 thesis outline + prototype