User:Manetta/thesis/thesis-outline

=outline=
==intro==
===problematic situation===
Written language is primarily a communication technology. Text mining is an undesired side effect of the information economy (ref...!). It has become part of business plans in which the tracking of online behavior is crucial for making profitable deals with advertisers. But next to these mining business plans, text mining is also presented as a technology that can 'extract' how people feel. A commonly applied algorithm is the sentiment algorithm, used for opinion mining, for example on Twitter, so that tweeted material can be used in news reports or decision-making processes. The World Well-Being Project goes even a step further, and aims to use Twitter to reveal "how social media can also be used to gain psychological insights" (http://wwbp.org/papers/sam2013-dla.pdf).
Text mining seems to go beyond its own capabilities here, by convincing people that it is the data that 'speaks'. The actual process is hardly retraceable, the output claims to explain intangible phenomena, and because the process is automated it appears to be precise.
The issue I would like to put central is that text mining technologies are regarded as analytical 'reading' machines that extract information from large sets of written text (→ consequences of 'objectiveness': claims that 'no humans are involved' in such automated processes because 'the data speaks'). But in its process, text mining rather shows more similarities with a writing process. What happens if text mining software is used as a writing technique?
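To make this tangible: a minimal sentiment sketch using the Pattern library (whose gray spots are examined in chapter 1.2). What looks like a machine that 'reads' feelings is an average of hand-assigned scores from a lexicon; the example sentence is mine.

<pre>
# A minimal sketch of the sentiment algorithm, using the Pattern
# library (CLiPS). sentiment() looks up the words of the sentence
# in a lexicon of hand-assigned polarity scores and averages them.
from pattern.en import sentiment

polarity, subjectivity = sentiment("I could have written that!")
print(polarity)      # between -1.0 (negative) and +1.0 (positive)
print(subjectivity)  # between  0.0 (objective) and 1.0 (subjective)
</pre>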
===in-between-language / inter-language ===
The OCR-A and OCR-B example: two typefaces that made concessions in their appearance, and form lettershapes that are optimized for both machine and human reading processes.
===NLP===
NLP is a category of software concerned with the interaction between human language and machine language. It is mainly present in the fields of computer science, artificial intelligence and computational linguistics. On a daily basis people deal with services that contain NLP techniques: translation engines, search engines, speech recognition, auto-correction, chatbots, OCR (optical character recognition), license-plate detection, and data mining.

With 'i-could-have-written-that' I would like to place NLP software central, not only as technology but also as a cultural object, to reveal in which way NLP software is constructed to understand human language, and what side effects these techniques have. There is a range of different expectations of NLP systems, but a full coverage of natural language is unlikely. By regarding NLP software as a cultural object, I will focus on the inner workings of these technologies: what are the technical and social mechanisms that systemize our natural language in order for it to be understood by a computer?


===knowledge discovery in data (data-mining)===
For the occasion of this year's graduation project, I would like to focus on the practice of text mining, which is a subfield of the so-called field of 'data mining'.
=title: i could have written that=
Text mining is part of an analytical practice of searching for patterns in text following "A Data Driven Approach", and assigning these to (predefined) profiles. It is part of a bigger information-construction process (called Knowledge Discovery in Data, KDD) which involves source selection, data creation, simplification, translation into vectors, and testing techniques.
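A minimal sketch of that chain of steps, assuming nothing but the standard library; every function name here is a hypothetical placeholder, not an existing KDD implementation:

<pre>
# A hedged sketch of the KDD steps named above, as one pipeline.
# All function names are hypothetical placeholders.
def select_sources():
    return ["the data speaks", "the data does not speak"]  # source-selection

def create_data(texts):
    return [t.lower().split() for t in texts]              # data-creation

def simplify(docs):
    return [[w for w in d if w.isalpha()] for d in docs]   # simplification

def vectorize(docs):
    vocabulary = sorted({w for d in docs for w in d})
    return [[d.count(w) for w in vocabulary] for d in docs]  # into vectors

def test(vectors):
    return "a score, valid for one particular test set"    # testing

print(test(vectorize(simplify(create_data(select_sources())))))
</pre>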
== context ==
Text mining is a politically sensitive technology, closely related to the surveillance and privacy discussions around 'big data'. The technique stands in the middle of tense debates about capturing people's behavior for security reasons, debates that affect the privacy of many people, accompanied by an unpleasant controlling force that seems to be omnipresent. After the disclosure of the NSA's data-capturing program by Edward Snowden in 2013, a wider public became aware of the silent data-collecting activities of a governmental agency, for example on [http://www.popularmechanics.com/military/a9465/nsa-data-mining-how-it-works-15910146/ phone metadata]. Since October 2014, UK copyright law contains special exceptions that make [https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/375954/Research.pdf text mining practices possible] on intellectual property for non-commercial use. Problematic is the skewed balance between data producers and data analysts, also framed as [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2709498 'data colonialism'], and the accompanying governing role this gives to data analysts, for example by constructing your search-results list according to your data profile.
The magical effect of text mining results, caused by the difficulty of understanding how these results are constructed, makes it difficult to formulate an opinion about text mining techniques. It is even difficult to formulate what the problem exactly is, as many people tend to agree with the calculations and word counts that seemingly have been executed. "What exactly is the problem?" and "This is the data that speaks, right?" are questions that need to be challenged in order to have a conversation about text mining techniques at all.


==hypothesis==
The results of data-mining software are not mined; results are constructed.

==audience==
This thesis aims at an audience that is interested in an alternative perspective on buzzwords like 'big data' and 'data mining'. Also, this thesis will (hopefully!) offer a view from the computer's side: how software is written to understand the non-computer world of written text.


==1 - text mining; what if I regard it as a writing machine?==
If text mining is regarded as a writing system, what and where does it write?
===1.1 - text mining culture===
* What are the levels of construction in text mining software culture?
** By considering text mining technology as a reading machine?
** terminology
*** How does the metaphor of 'mining' affect the process?
*** 'data mining' → Knowledge Discovery in Data (KDD)
** How much can be based on a cultural side-product (like the texts that are commonly used, extracted from e.g. social media)?
=== 1.2 – Pattern's gray spots===
* What are the levels of construction in text mining software itself?
** What gray spots appear when text is processed?
*** What is meant by 'grayness'? How can it be used as an approach to software critique?
*** Text processing: how does written text transform into data?
*** Bag-of-words, or 'document'
*** 'count' → 'weight'
*** trial-and-error, modeling the line
** Testing process
*** how an algorithm is only 'right' according to one particular test set, with its own particularities and exceptions (see the sketch below this list)
*** loops to improve your outcomes
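A hedged sketch of that test-set dependence, using scikit-learn and a toy corpus invented here: the same classifier is 'right' to a different degree depending on which documents happen to end up in the test set.

<pre>
# A hedged sketch (scikit-learn, toy corpus invented here): the same
# classifier scores differently depending on which documents happen
# to become the test set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split

texts = ["great product", "terrible product", "great, love it",
         "waste of money", "love it", "terrible waste",
         "great price", "not great", "terrible, not great at all",
         "love the price"]
labels = ["pos", "neg", "pos", "neg", "pos",
          "neg", "pos", "neg", "neg", "pos"]

vectors = CountVectorizer().fit_transform(texts)  # bag-of-words counts
for seed in (0, 1, 2):
    X_train, X_test, y_train, y_test = train_test_split(
        vectors, labels, test_size=0.3, random_state=seed)
    score = MultinomialNB().fit(X_train, y_train).score(X_test, y_test)
    print("split %d: accuracy %.2f" % (seed, score))
</pre>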
===1.3 – text mining (context)===
* what are text mining applications? [[User:Manetta/i-could-have-written-that/kdd-applications|listed here]]
Showing that text mining has been applied across very different fields, thereby seeming to be a sort of 'holy grail' that solves a lot of problems. (Though I'm not sure if this is needed.)
== 1 - 'key value pair' focus ==
<div style="color:gray;"> Data is seen as a material that can easily be extracted from the web, and regarded as a little 'truth snapshot' taken at a certain time and moment. In text mining, the material that is used as input consists of written pieces of text transformed into data. The data is formatted as a word + number, a feature + weight, a key + value pair, or, as a linguist would call it, a subject + predicate combination. This is the material building block from which text mining outcomes are constructed. Immaterial building blocks are: points of departure, source selection, noise reduction, and testing techniques. All these elements affect the vector / the model / the algorithm.


* what is the position of the key-value pair in the text mining process?
** description of the 5 KDD steps
* what side effects does it have to describe a profile in key-value pairs?
** using written words is already a representation: the word represents the meaning/thought/intention
** in text mining, written text is meant to be interpreted from a reader's perspective (as opposed to the writer's intention)
** the key-value pair is a representational format: the value represents the key
</div>
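A minimal sketch of this material building block, using only Python's standard library: a sentence reduced to key + value pairs, first as word + count, then as feature + weight.

<pre>
# A minimal sketch (standard library only): written text reduced to
# key-value pairs, the material building block described above.
from collections import Counter

sentence = "the data speaks , the data does not speak"
counts = Counter(sentence.split())        # key: word, value: count
total = sum(counts.values())
weights = {word: count / float(total)     # 'count' becomes 'weight'
           for word, count in counts.items()}
print(weights)
</pre>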


==2 - context and history==
* where could the key-value format be traced back to?
** in a tradition of systematizing language?
*** Are these aims not relying too much on a rational tradition? Austin's speech act theory & Heidegger's 'Dasein', and the view that we "should not split up the object from the predicate"
** in a tradition of data-processing?
** in a tradition of AI/NLP projects like ELIZA & SHRDLU (?)?


Text mining seems to be a rather brutal way to deal with the aim of processing natural language into useful information. To reflect on this brutality, it could be useful to trace back a longer tradition of natural language processing. Hopefully this will be a way to create some distance from the hurricanes of data that are mainly known as 'big', 'raw' or 'mined' these days.

By looking at projects from the past in the field of NLP (being part of AI/computer science departments), this chapter will attempt to highlight their attitudes towards written text.
<div style="color:gray;">(The systemization of language through key-value pairs follows a Western tradition based on the convention that there is an individual who perceives, and an outside world that contains its own natural truth. The linguist Austin rather shows that language is merely a speech act, happening as a social act. In these speech acts there is no external, objective meaning of the words we use; meaning only exists in social relations. Heidegger goes even further, and says that we should not split up the object from the predicate. For example: while 'hammering', the person who hammers is not regarding the hammer in a reflective sense. The person is in the moment of using the hammer to achieve something. The only moment the person is confronted with a representational sense of the hammer is when the hammer breaks down. It is at that moment that the person needs to fix the hammer, and learns a bit more about what a hammer 'is'.)</div>


==3 - circularity (reasoning)==
A text mining process is not aiming to 'find' whether assumptions can be confirmed; it is rather engineered to be so (a sketch of this loop follows the list below).
* text mining & an iterating workflow
** Pattern's workflow, [[User:Manetta/i-could-have-written-that/knowledge-discovery-workflow | more here]]
** IBM's 'iterating' graph
* heliocentric/geocentric diagrams to understand the universe → complexity confirms the input (here: the perception of stars and their positions). Could this be the case in mining practices?
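A hedged sketch of this engineered confirmation, with a hypothetical accuracy() stub standing in for a real evaluation step: the workflow iterates until the desired outcome appears, then stops.

<pre>
# A hedged sketch of the circular workflow: keep tuning a parameter
# until the evaluation confirms the assumption, then stop and report.
# accuracy() is a hypothetical stand-in for a real evaluation step.
def accuracy(threshold):
    return 1.0 - abs(0.73 - threshold)   # peaks at an arbitrary 'best' value

threshold = 0.50
while accuracy(threshold) < 0.95:        # loop until the outcome pleases us
    threshold += 0.01                    # 'improve' the model a little
print("assumption confirmed at threshold %.2f" % threshold)
</pre>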





==material==

===bibliography (five key texts)===

* Matthew Fuller - short presentation of the poem Blue Notebook #10 / The Red-haired Man, during Ideographies of Knowledge, Mundaneum, Mons (Oct. 2015); → annotations
* Joseph Weizenbaum - Computer Power and Human Reason: From Judgement to Calculation (1976);
* Winograd + Flores - Understanding Computers & Cognition (1987);
* Vilem Flusser - Towards a Philosophy of Photography (1983); → annotations
* Antoinette Rouvroy - All Watched Over By Algorithms - Transmediale (Jan. 2015); → annotations
* The Journal of Typographic Research - OCR-B: A Standardized Character for Optical Recognition (V1N2) (1967); → abstract

===annotations===

* Alan Turing - Computing Machinery and Intelligence (1950);
* The Journal of Typographic Research - OCR-B: A Standardized Character for Optical Recognition (V1N2) (1967); → abstract
* Ted Nelson - Computer Lib & Dream Machines (1974);
* Joseph Weizenbaum - Computer Power and Human Reason (1976); → annotations
* Walter J. Ong - Orality and Literacy (1982);
* Vilem Flusser - Towards a Philosophy of Photography (1983); → annotations
* Christiane Fellbaum - WordNet, an Electronic Lexical Database (1998);
* Charles Petzold - Code, the Hidden Languages and Inner Structures of Computer Hardware and Software (2000); → annotations
* John Hopcroft, Rajeev Motwani, Jeffrey Ullman - Introduction to Automata Theory, Languages, and Computation (2001);
* James Gleick - The Information, a History, a Theory, a Flood (2008); → annotations
* Matthew Fuller - Software Studies, a Lexicon (2008);
* Marissa Mayer - The Physics of Data, lecture (2009); → annotations
* Matthew Fuller & Andrew Goffey - Evil Media (2012); → annotations
* Antoinette Rouvroy - All Watched Over By Algorithms - Transmediale (Jan. 2015); → annotations
* Benjamin Bratton - Outing A.I., Beyond the Turing Test (Feb. 2015); → annotations
* Ramon Amaro - Colossal Data and Black Futures, lecture (Oct. 2015); → annotations
* Benjamin Bratton - On A.I. and Cities: Platform Design, Algorithmic Perception, and Urban Geopolitics (Nov. 2015);

==currently working on==

* terminology: data 'mining'
* Knowledge Discovery in Data (KDD) in the wild, problem formulations
* KDD, applications
* KDD, workflow
* text-processing: simplification
* list of data mining parties

==other==

outline-thesis (2) → NLP