Draft Project Proposal
19.10.17
We live in an era in which information technology is unceasingly creating large amounts of data, causing an information overload that sometimes exceeds our capacity to process and understand it. This large amount of data is not only available, it is also communicated, reproduced and spread almost instantaneously all over the world. While this expansion and accumulation of data may be producing an abundance of knowledge on one hand, it nevertheless affects our daily performance by exposing us to a great deal of change in a very short time. That is to say, our entire society and each of its individuals (online users) take part in a never-ending process of generating knowledge, in which social media, digital journalism, RSS feeds and other instant messaging tools significantly intensify this phenomenon. We live in a mass-production, mass-distribution, mass-consumption, mass-education and mass-entertainment society that simultaneously functions as a weapon of mass misinformation, circulating content that ranges from useful to inaccurate or unverified.
It then becomes difficult to analyze an issue critically when this information anxiety is established and standardized in our lives. Therefore, I wonder: how much information is produced and changed in relation to one specific topic of international significance?
In order to find out, I want to extract data from the Spanish sociopolitical issue that is currently generating a great deal of controversy and information overload in digital media. To do this, I will make a wide selection of regional and international RSS feeds related to the topic. Combining diffengine with Python programming tools, these news feeds will be tracked, and when a change in the information occurs, a snapshot of that change will be created (highlighting in red or green whether text was removed or added). This produces two JPG files and one HTML file, which include other metadata such as time, location and the news webpage. Web scrapers such as Beautiful Soup or Twarc might be useful for scraping specific accounts or groups on social media like Twitter, Instagram or Facebook that do not use RSS technology. Web scraping can also be used to produce a word count from a text file, which can be interesting for comparing the syntactical strategies of different sources.
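As a rough sketch of how this tracking step could work (this is not diffengine itself, just a minimal stand-in built on the feedparser and difflib libraries; the feed URL, file names and paths are placeholders), the following script polls a feed, compares each entry's text with the previously stored version, and writes an HTML diff whenever something has changed:

```python
# Minimal sketch of the tracking idea: poll an RSS feed, compare each entry's
# text with the last stored version, and write an HTML diff of any changes.
# Simplified stand-in for diffengine; feed URL and paths are placeholders.
import difflib
import json
import pathlib

import feedparser  # pip install feedparser

FEED_URL = "https://example.com/news/rss"   # placeholder feed
STATE_FILE = pathlib.Path("state.json")     # previously seen entry texts
OUT_DIR = pathlib.Path("snapshots")         # where the diff files accumulate

def load_state():
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

def save_state(state):
    STATE_FILE.write_text(json.dumps(state, ensure_ascii=False, indent=2))

def check_feed():
    OUT_DIR.mkdir(exist_ok=True)
    state = load_state()
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        key = entry.get("id", entry.link)
        new_text = entry.get("summary", "")
        old_text = state.get(key)
        if old_text is not None and old_text != new_text:
            # Text changed: write an HTML table diff in which removed and
            # added lines are colour-coded by difflib's default styles.
            diff = difflib.HtmlDiff().make_file(
                old_text.splitlines(), new_text.splitlines(),
                fromdesc="before", todesc="after")
            name = key.replace("/", "_")[:80] + ".html"
            (OUT_DIR / name).write_text(diff)
        state[key] = new_text
    save_state(state)

if __name__ == "__main__":
    check_feed()
```

Run periodically, this accumulates one diff file per detected change, which is the same basic loop diffengine automates at a larger scale.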
This data will be updated automatically and hosted on a web server running on the local network of a Raspberry Pi, as a way to reinforce, and be critical of, the methods used by the group of hackers during the network surveillance against free will of 1 October in Barcelona, which made a registered universal census system possible despite its lack of validity. Nevertheless, this work would remain a neutral figure / eyewitness of the actual demographic issue. The web server will function as a live stream of a news archive, whose data may be useful to people with different profiles who are interested in this issue. This data can also be used to produce a series of monthly books compiling some of these “epic” data changes.
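Serving that archive from the Raspberry Pi could be as simple as pointing Python's built-in http.server at the snapshot folder; the sketch below assumes the diff files accumulate in a snapshots/ directory and uses a placeholder port:

```python
# Minimal sketch: serve the snapshot archive over the Pi's local network
# using only the standard library. Directory name and port are placeholders.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

SNAPSHOT_DIR = "snapshots"  # folder where the diff HTML/JPG files accumulate
PORT = 8000

# Handler that serves files from SNAPSHOT_DIR instead of the working directory
handler = partial(SimpleHTTPRequestHandler, directory=SNAPSHOT_DIR)

if __name__ == "__main__":
    # Bind to all interfaces so other machines on the local network can browse
    # the archive at http://<raspberry-pi-ip>:8000/
    with ThreadingHTTPServer(("0.0.0.0", PORT), handler) as server:
        server.serve_forever()
```

Scheduled alongside the tracking script (for example via cron), this would expose the growing archive to any machine on the same local network.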
Eventually, these results can be used to conduct wider research promoting awareness of the effects of information overload and of how we experience this information. This may also help draw attention to how online journalism sometimes looks for quick information rather than verified information.