User:Manetta/thesis/thesis-outline-nlp

From XPUB & Lens-Based wiki
Latest revision as of 10:33, 4 February 2016

outline

intro

NLP

With 'i-could-have-written-that' I would like to look at technologies that process natural language (NLP). There is a range of different expectations of NLP systems, but full coverage of natural language is unlikely. By regarding NLP software as cultural objects, I'll focus on the inner workings of their technologies: what are the technical and social mechanisms that systematize our natural language in order for it to be understood by a computer?

NLP is a category of software concerned with the interaction between human language and machine language. NLP is mainly present in the fields of computer science, artificial intelligence and computational linguistics. On a daily basis people use services that contain NLP techniques: translation engines, search engines, speech recognition, auto-correction, chatbots, OCR (optical character recognition), license plate detection and data mining. For 'i-could-have-written-that', I would like to place NLP software at the centre, not only as technology but also as a cultural object, to reveal how NLP software is constructed to understand human language, and what side effects these techniques have.


title: i could have written that

Text mining is part of an analytical practice of searching for patterns in text following "A Data Driven Approach", and assigning these to (predefined) profiles. It is part of a bigger information-construction process (called Knowledge Discovery in Data, KDD) which involves source selection, data creation, simplification, translation into vectors, and testing techniques.
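The "translation into vectors" step of this process can be illustrated with a minimal bag-of-words sketch. This is hypothetical code, not taken from any particular mining package; real pipelines add further simplifications such as stop-word removal, stemming and weighting (e.g. tf-idf):

```python
from collections import Counter

# Minimal bag-of-words sketch of the KDD "translation into vectors" step:
# each document becomes a vector of word counts over a shared vocabulary.

def tokenize(text):
    # a rather brutal simplification: lowercase, keep alphabetic characters only
    return "".join(c if c.isalpha() else " " for c in text.lower()).split()

def vectorize(documents):
    counts = [Counter(tokenize(doc)) for doc in documents]
    # the vocabulary is the union of all words seen, in a fixed order
    vocabulary = sorted(set(word for c in counts for word in c))
    # each document is reduced to a row of counts over that vocabulary
    vectors = [[c[word] for word in vocabulary] for c in counts]
    return vocabulary, vectors

vocab, vecs = vectorize(["Men are all alike.", "They're always bugging us."])
```

Everything that the tokenizer throws away (punctuation, capitalization, word order) is already gone before any "pattern" is found, which is exactly the kind of constructedness the thesis points at.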

context

The magical effect of text mining results, caused by the difficulty of understanding how these results are constructed, makes it hard to formulate an opinion about text mining techniques. It even makes it difficult to formulate what the problem exactly is, as many people tend to agree with the calculations and word counts that are seemingly executed. "What exactly is the problem?" and "This is the data that speaks, right?" are questions that need to be challenged in order to have a conversation about text mining techniques at all.

public

This thesis will aim at a public that is interested in an alternative perspective on buzzwords like 'big data' and 'data mining'. Also, this thesis will (hopefully!) offer a view from the computer's side: how software is written to understand the non-computer world of written text.

scope

Text mining seems to be a rather brutal way to process natural language into useful information. To reflect on this brutality, it could be useful to trace back a longer tradition of natural language processing. Hopefully this will be a way to create some distance from the hurricanes of data that are mainly known as 'big', 'raw' or 'mined' these days.

This thesis will attempt to show the brutality towards written text that happens in text mining software. By looking at past projects in the field of NLP (as part of AI/computer science departments), this thesis will attempt to highlight their attitude towards written text.

hypothesis

The results of data-mining software are not mined; they are constructed.

questions

  • Which assumptions form the backbone of these projects' technical infrastructure? Are these assumptions rooted in linguistics, computer science or another field?
  • How is text processed? (......)
  • How is the impossibility of making a language model explained/justified? For what purpose/intention/context are these projects developed?

Historical projects in NLP

ELIZA (1966)

Joseph Weizenbaum (1923 - 2008)

Men are all alike.
IN WHAT WAY
They're always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I'm depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED 
It's true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO
BE UNHAPPY
I need some help, that much seems certain.
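The exchange above illustrates ELIZA's core technique: keyword patterns, pronoun reflection, and canned response templates. A hypothetical minimal sketch of this mechanism follows; the rules and patterns are invented for illustration and are not Weizenbaum's original DOCTOR script:

```python
import re

# Words swapped so the user's input can be echoed back from the "therapist's" side.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

# Each rule is a keyword pattern plus a response template; captured fragments
# are reflected and substituted into the template.
RULES = [
    (re.compile(r"i am (.*)", re.I), "I AM SORRY TO HEAR YOU ARE {0}"),
    (re.compile(r"(.*) made me (.*)", re.I), "{0} MADE YOU {1}"),
    (re.compile(r"they(?:'re| are) always (.*)", re.I),
     "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
]

def reflect(fragment):
    # Swap first- and second-person words word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            groups = (reflect(g).upper() for g in match.groups())
            return template.format(*groups).strip()
    return "PLEASE GO ON"  # content-free default when no keyword matches

print(respond("I am depressed much of the time."))
# I AM SORRY TO HEAR YOU ARE DEPRESSED MUCH OF THE TIME.
```

The sketch shows how little "understanding" the technique requires: the program never models what "depressed" means, it only transforms the surface of the sentence.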

SHRDLU (1968)

Terry Winograd (1946 - )

Person: PICK UP A BIG RED BLOCK. 
Computer: OK. (does it) 
Person: GRASP THE PYRAMID. 
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN. 
Person: FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX. 
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING. 
Computer: OK. (does it) 
Person: WHAT DOES THE BOX CONTAIN? 
Computer: THE BLUE PYRAMID AND THE BLUE BLOCK. 
Person: WHAT IS THE PYRAMID SUPPORTED BY? 
Computer: THE BOX. 
Person: HOW MANY BLOCKS ARE NOT IN THE BOX? 
Computer: FOUR OF THEM. 

EURISKO (1978)

Douglas Bruce Lenat (1950 - )

developed after Lenat made the 'Automated Mathematician' (AM)

based on heuristics

text processing in text mining

case study: (a project from the WWBP?)

  • from unstructured text to vector space models


material

bibliography (five key texts)

  • Joseph Weizenbaum - Computer Power and Human Reason: From Judgment to Calculation (1976);
  • Winograd + Flores - Understanding Computers & Cognition (1987);
  • Vilem Flusser - Towards a Philosophy of Photography (1983); → annotations
  • Antoinette Rouvroy - All Watched Over By Algorithms - Transmediale (Jan. 2015); → annotations
  • The Journal of Typographic Research - OCR-B: A Standardized Character for Optical Recognition (V1N2) (1967); → abstract

annotations

  • Alan Turing - Computing Machinery and Intelligence (1950)
  • The Journal of Typographic Research - OCR-B: A Standardized Character for Optical Recognition (V1N2) (1967); → abstract
  • Ted Nelson - Computer Lib & Dream Machines (1974);
  • Joseph Weizenbaum - Computer Power and Human Reason (1976); → annotations
  • Walter J. Ong - Orality and Literacy (1982);
  • Vilem Flusser - Towards a Philosophy of Photography (1983); → annotations
  • Christiane Fellbaum - WordNet, an Electronic Lexical Database (1998);
  • Charles Petzold - Code: The Hidden Language of Computer Hardware and Software (2000); → annotations
  • John Hopcroft, Rajeev Motwani, Jeffrey Ullman - Introduction to Automata Theory, Languages, and Computation (2001);
  • James Gleick - The Information: A History, a Theory, a Flood (2011); → annotations
  • Matthew Fuller - Software Studies: A Lexicon (2008);
  • Marissa Mayer - The Physics of Data, lecture (2009); → annotations
  • Matthew Fuller & Andrew Goffey - Evil Media (2012); → annotations
  • Antoinette Rouvroy - All Watched Over By Algorithms - Transmediale (Jan. 2015); → annotations
  • Benjamin Bratton - Outing A.I., Beyond the Turing test (Feb. 2015) → annotations
  • Ramon Amaro - Colossal Data and Black Futures, lecture (Oct. 2015); → annotations
  • Benjamin Bratton - On A.I. and Cities : Platform Design, Algorithmic Perception, and Urban Geopolitics (Nov. 2015);

currently working on

* terminology: data 'mining'
* Knowledge Discovery in Data (KDD) in the wild, problem formulations
* KDD, applications
* KDD, workflow
* text-processing: simplification
* list of data mining parties