User:Manetta/graduation-proposals/proposal-0.5: Difference between revisions

From XPUB & Lens-Based wiki
===abstract===
'i-could-have-written-that' will be a publishing project around technologies that process natural language. The project will place ''tools'' and ''techniques'' at its centre, to report on their constraints and possibilities, and on the effect these have on the information they transmit and the people that use them. That will hopefully lead to conversations that are not limited to technological aspects, but also reflect on cultural & political implications.


"i-could-have-written-that" will be a publishing platform that aims to reveal the inner workings of technologies that systemize natural language, through tools that function as natural language interfaces (for both humans and machines), while regarding such technologies as reading-writing systems. 'i-could-have-written-that' aims to reflect on the cultural/technological/political implications that such tools have on the information they transmit and the people that use them.
'i-could-have-written-that' will be a publishing experiment that formulates design questions on a 'workflow' level. The publications will not only question how information can be structured, visualized and/or related to a context through semiotic processes, but rather question how 'design' can construct information processes, to create (human-)readable documents, datasets, bug-reports, and tutorials.


[example]!
The publications will report on techniques that can be called 'reading-writing(-executing?) systems'. They touch on issues of systemization & automation, working with simplification & modeling processes. Examples are: WordNet (a lexical dataset), Pattern (text-mining software), programming languages (high-level languages like Python or markup languages like HTML), text-parsing (turning text into numbers), and ngrams (common word combinations).
 
 
====publishing platform? ====
===== alternative conversations (alternative to what?)=====
Language as our main communication system ... which makes it possible to create information, documents, friends. Not a human-only system, but also a computer system, or a human-computer system.
 
... A computer system or computer process is easily regarded as an objective act.


This publishing platform will report on reading-writing systems that touch on issues of systemization / automation / an algorithmic 'truth', containing elements of simplification / probability / modeling processes ...
... When using a computer, language is ''used'' (as interface to control computer systems), ''executed'' (as scripts) and ''processed'' (natural language as data) ...


... by looking closely at the material (technical) elements that are used to construct these systems, in order to look for alternative perspectives.
... both to reveal the fascinating systems that have been developed, the attempts, the dreams, but also to present a critical take on the way that these systems construct their 'truths'.




=====revealing / informing / publishing=====
Although algorithms become more and more present in daily life — in the form of e.g. automatic recommendations (music playlists / Amazon products), or predictions in probability rates (suspicious behavior / climate change patterns) — their constructions become more and more complex and hence harder to depict or understand (for you, me, and sometimes even for academics themselves). Therefore I think it is important to publish about these systems, both to reveal the fascinating systems that have been developed, the attempts, the dreams, and also to present a critical take on the way these systems construct their 'truths'.


By departing from a very technical point of view, I hope to develop ''a stage for alternative perspectives'' on these issues (making 'it-just-not-works' tutorials for example), while keeping a wide audience in mind. I don't want to exclude a broader group of people from understanding reading-writing techniques, as that is precisely the critique I have on the field of data-mining.


These aims are related to cultural principles present in the field of open-source: take for example ''the aim for distribution instead of centralized sources'' (for example: Wikipedia), the aim of ''making information available for everyone'' (in the sense that it should not only be available but also legible), and the aim for ''collaborative work'' (as opposed to ownership). These principles will influence my design choices, for example: to consider an infrastructure that enables collaborative work.
====from designing information, to designing information processes====
Coming from a background in graphic design, I was educated in a traditional way (focused on typography and aesthetics), in combination with courses in 'design strategy' and 'meaning-making' (which was not defined in such clear terms, by the way). I became interested in semiotics, and in systems that use symbols/icons/indexes to convey meaning.


After my first year at the Piet Zwart, I feel that my interest has shifted from designing information on an interface level to designing information processes. Being fascinated by the inner workings of techniques, and being affected by open-source principles, brings up a whole set of new design questions. For example: How can an interface ''reveal'' its inner system? How can infrastructural decisions ''be'' design actions? And how could a workflow affect the information it is processing?
 
I would like to include this shift in my graduation work, and to let my project also be a publishing experiment, by focusing on the infrastructure and workflow of the publication(s).
 
 


Existing tools that already give answers to these questions are Git and MediaWiki. I would like to include these questions in my graduation work, and to work with these software packages.


-------------------------------------------


====technologies that systemize natural language?====
By working closely with software that is used, for example, in the fields of ''machine learning & text-mining'', I hope to reveal the inner workings of such mediating techniques through a practical approach. Elements to work with include dictionaries, lexicons, lexical databases (WordNet), other datasets (ConceptNet), ngrams, and other elements that are implemented in such software.
 
====reading technology for both the computer & the human eye====
[[File:Ocr-b.png|right|thumbnail|200px|OCR-B, designed by Adrian Frutiger (1967), screenshot from the article ''OCR-B: A Standardized Character for Optical Recognition'' in [https://s3-us-west-2.amazonaws.com/visiblelanguage/pdf/V1N2_1967_E.pdf The Journal of Typographic Research, V1N2-1967 (PDF)] ]]
An enthusiastic attempt to create a reading technology for both the human and the computer's 'eye' was published in 'The Journal of Typographic Research' (V1N2-1967<ref name="jtr">[https://s3-us-west-2.amazonaws.com/visiblelanguage/pdf/V1N2_1967_E.pdf The Journal of Typographic Research, V1N2-1967 (PDF)], published between 1967 and 1971, then transformed into 'Visible Language'</ref>). The article ''OCR-B: A Standardized Character for Optical Recognition'' presents 'OCR-B'<ref name="ocrb">[http://www.linotype.com/1283/ocr-b-family.html OCR-B on Linotype], designed by Adrian Frutiger in 1967</ref>, a typeface optimized for automatic machinic reading, designed by Adrian Frutiger. The author ends the article by ([https://en.wikipedia.org/wiki/Technological_utopianism techno-optimistically]) stating the ''hope that one day "reading machines" will have reached perfection and will be able to distinguish without any error the symbols of our alphabets, in whatever style they may be written.''
 
====reading / writing systems?====
[[File:Pattern_schema.gif|right|thumbnail|200px|schema example of [http://www.clips.ua.ac.be/pages/pattern Pattern], a web mining module for the Python programming language.]]
In this same article, the author foretold a future wherein ''[a]utomatic optical reading is likely to widen the bounds of the field of data processing''<ref name="jtr"></ref>. The term 'data-processing' referred to typed or printed information on paper, but nowadays 'data-processing' is understood differently. Today, data-processing rather refers to techniques that 'read' natural language not through an optical process, but by perceiving language as 'data'. In the field of data-mining, algorithms are trained to recognize patterns in written language. In order to perceive the text mathematically, the text is simplified and turned into numbers. Computers therefore 'read' by counting words and most-common word combinations (called ''bag-of-words'').
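This counting step can be sketched in a few lines of Python. The sample sentence is invented for illustration, and the sketch only stands in for what text-mining packages like Pattern do at a much larger scale:

```python
from collections import Counter
import re

def bag_of_words(text):
    """Reduce a text to word counts: the 'bag-of-words' model.
    Word order is thrown away; only frequencies remain."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

counts = bag_of_words("The data speaks. The data does not speak for itself.")
print(counts.most_common(3))
# → [('the', 2), ('data', 2), ('speaks', 1)]
```

Everything a later algorithm 'knows' about the text is contained in these numbers; the sentence itself is already gone.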
 
Optical reading machines try to 'read' what has been written on paper directly. They try to understand what has been written, in order to translate it correctly into digital text. But algorithms aren't tools that perform from a 'read-only' position. Algorithmic reading does not try to 'understand' written text, but rather tries to label it as (for example) being positive or negative. An algorithm looks for patterns in the text, and is then able to compare the current pattern to a set of pre-labeled texts. Is the algorithm therefore a 'reading' technology, or could pattern-recognition be seen as an act of 'writing' as well, as the algorithm is decoding the written language: first by turning text into patterns, and then by labeling them as being positive or not?
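A minimal sketch of this labeling-by-comparison, assuming nothing more than word overlap with a few hand-labeled examples (the example texts and labels below are invented for illustration, and real text-mining tools use far more elaborate models):

```python
def words(text):
    return set(text.lower().split())

# Hand-labeled 'training' texts (invented for illustration).
labeled = [
    ("what a wonderful helpful tool", "positive"),
    ("i love this clear and simple interface", "positive"),
    ("a confusing broken useless mess", "negative"),
    ("i hate this slow and unclear system", "negative"),
]

def classify(text):
    """Label a text by word overlap with pre-labeled examples:
    the algorithm does not 'understand', it compares patterns."""
    scores = {}
    for example, label in labeled:
        overlap = len(words(text) & words(example))
        scores[label] = scores.get(label, 0) + overlap
    return max(scores, key=scores.get)

print(classify("a helpful and simple tool"))  # → positive
```

The label that comes out was never 'read' from the text; it was written onto it, by way of the examples someone chose beforehand.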
 
[[File:Presentation_by_Antoinette_Rouvroy_–_All_Watched_Over_by_Algorithms.png|right|thumbnail|200px|Antoinette Rouvroy speaking about big-data and its ideology of being a natural resource – [https://www.youtube.com/watch?v=4nPJbC1cPTE youtube-video, Transmediale 2015, All Watched Over by Algorithms] ]]
Data-mining techniques are decoding processes, but they hide this decoding, because they follow the ideology of regarding 'data' or 'raw data' as natural objects, untouched by human hands.<ref name="antoinetterouvroy">[https://youtu.be/4nPJbC1cPTE Presentation by Antoinette Rouvroy – All Watched Over by Algorithms]</ref> It is a common practice to present algorithmic results as objective truths (as ''it is the data that speaks''!<ref name="data-speaks">[https://youtu.be/FjibavNwOUI?t=2m11s TED Talk PENN, Lyle Ungar presenting text mining results; "I'm sorry" "but these are the words"], [http://pzwart1.wdka.hro.nl/~manetta/i-could-have-written-that/elements/i-am-sorry-but-these-are-the-words-laughter/i-am-sorry-but-these-are-the-words-laughter.html more info: on ''i-could-have-written-that page'']</ref> or because ''no humans were even involved''!<ref name="no-humans">[https://help.yahoo.com/kb/flickr/tag-keywords-flickr-sln7455.html Yahoo help section for the Friendly Flickr Bot], "The process is fully automated, so no humans are ever involved in tagging your images." more info: http://pzwart1.wdka.hro.nl/~manetta/i-could-have-written-that/elements/flickr_s-friendly-robots/flickr_s-friendly-robots.html</ref>).
 
This ideology seems to come very close to what was predicted in the article from 1967: ''"reading machines" [that] will have reached perfection and will be able to distinguish without any error the symbols of our alphabets, in whatever style they may be written.'' Though imagined in 1967 as an optical reading device, isn't the perfect automatic machinic reading situation of today found in the field of data-mining? An automatic reading machine that widens the bounds of the field of data-processing? A technique so natural that even the data can speak?
 
But data-mining mediates as much as television or a telescope does. Data-mining therefore shouldn't be regarded as a 'read-only' technique, but be treated as a tool that 'reads' and 'writes' at the same time. Instead of hiding the data-processes (the workflows, files and choices that have been made) in data-mining practices, I would like to share information about them.
 
To do this, I would like to work closely with data-mining tools: ''data/text-mining (text-parsing, text-simplification, vector-space models, looking at algorithmic culture), machine learning (training sets, taxonomies, categories, annotation), logic (simplification, universal representative systems, programming languages)''.





Revision as of 15:50, 3 November 2015

graduation proposal +0.5

title: "i could have written that"

alternatives:

Introduction

For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself "I could have written that." With that thought he moves the program in question from the shelf marked "intelligent" to that reserved for curios, fit to be discussed only with people less enlightened than he. (Joseph Weizenbaum, 1966)



'#0'-issue:

intro

  • WordNet as a dataset that 'maps' language
  • Not 'mapping' as a tool to understand (as a primary aim) (as Julie speaks about mapping the physicality of the Internet) but rather 'mapping' in the sense of 'modeling', in order to automate 'natural language processes'.

→ 'automation' is key here ? (natural language processing techniques or automatic reading systems)

→ western urge to simplify / structure / archive knowledge, as sharing knowledge is regarded as something that will bring development in society for the future

(...)

(...)

(...)

elements

the following elements could be part of this issue: (now collected on this 'i-could-have-written-that' webpage)

→ WordNet as structure
i-could-have-written-that/ WordNet skeleton
→ a historical list of information processing systems
i-could-have-written-that/ historical list of information systems
→ text on automatic reading machines, 
placing automation in an optical process (1967) in contrast with an algorithmic process (2015)
i-could-have-written-that/ Automatic Reading Machines

(...)

(...)

(...)


Relation to a larger context

natural language?

Natural language could be considered the language that evolves naturally in the human mind through repetition, a process that starts for many people at a young age. For this project I would like to look at 'natural language' from a perspective grounded in computer science, computational linguistics and artificial intelligence (AI), where natural language is mostly used in the context of 'natural language processing' (NLP), a field of study that researches the interactions between human language and the computer.

systemizing natural language?

It is debatable whether language itself can be regarded as a technology or not. For my project I will follow James Gleick's statement in his book 'The Information: a History, a Theory, a Flood'[1], where he states: Language is not a technology, (...) it is not best seen as something separate from the mind; it is what the mind does. (...) but when the word is instantiated in paper or stone, it takes on a separate existence as artifice. It is a product of tools and it is a tool. From that moment on, 'language' is turned into 'written language'.

A very primary writing technology is the Latin alphabet. Its main set of 26 characters is a toolbox that enables us to systemize language into characters, words, and sentences. Considering these tools as technologies makes it possible to follow a line from natural language to a language that computers can take as input, via various forms of mediation.
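That line of mediation, from characters to words to sentences, can be sketched with a toy tokenizer; this is a minimal sketch using Python's standard library, not an actual NLP tool, and the sample sentence is taken from the Gleick quote above:

```python
import re

text = "Language is not a technology. It is what the mind does."

# characters -> words: keep runs of letters, drop everything else
tokens = re.findall(r"[A-Za-z]+", text)

# characters -> sentences: split on whitespace that follows .!? punctuation
sentences = re.split(r"(?<=[.!?])\s+", text.strip())

print(tokens[:4])      # → ['Language', 'is', 'not', 'a']
print(len(sentences))  # → 2
```

Even this toy already makes choices (what counts as a word? where does a sentence end?), which is exactly the kind of decision such systems hide.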



Relation to previous practice

Over the last year, I've been looking at different tools that process natural language. From speech-to-text software to text-mining tools, they all systemize language in various ways.

training common sense, work track at Relearn 2015

As a continuation of that, I took part in the Relearn summer school in Brussels last August (2015), where I proposed a work track in collaboration with Femke Snelting on the subject of 'training common sense'. With a group of people we have been trying to deconstruct the 'truth-construction' in algorithmic cultures, by looking at data-mining processes, deconstructing the mathematical models that are used, finding moments where semantics are mixed with mathematics, and trying to grasp what kind of cultural context is created around this field. We worked with a text-mining software package called 'Pattern'. The workshop during Relearn transformed into a project that we called '#!PATTERN+', which will be strongly collaborative and ongoing over a longer time span. #!PATTERN+ will be a critical fork of the latest version of Pattern, including reflections and notes on the software and the culture that surrounds it. The README file that has been written for #!PATTERN+ is online here, and more information is collected on this wiki page.

i will tell you everything (my truth is a constructed truth), catalog for the exhibition "Encyclopedia of Media Object" in V2, June 2015

Another entrance to understanding what happens in algorithmic practices such as machine learning is to look at the training sets that are used to train algorithms to recognize certain patterns in a set of data. These training sets can contain large sets of images, texts, 3D models, or videos. By looking at such datasets, and more specifically at the choices that have been made in terms of structure and hierarchy, steps in the construction of a certain 'truth' are revealed. For the exhibition "Encyclopedia of Media Object" in V2 last June, I created a catalog, voice-over and booklet, which placed the objects from the exhibition within the framework of the SUN database, a resource of images for image-recognition purposes. (link to the "i-will-tell-you-everything (my truth is a constructed truth)" interface)

There are a few datasets in the academic world that seem to be the basic resources upon which these training sets are built. In the field they are called 'knowledge bases'. They live on a more abstract level than the training sets do, as they try to create a 'knowledge system' that could function as a universal structure. Examples are WordNet (a lexical dataset), ConceptNet, and OpenCyc (an ontology dataset). In the last months I've been looking into WordNet, worked on a WordNet Tour (still ongoing), and made an alternative browser interface (with CGI) for WordNet. It is all a process that has not yet been transformed into an object/product, but until now it is documented here and here on the Piet Zwart wiki.
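The kind of structure WordNet builds — synsets linked by hypernym ('is-a') relations — can be imitated with a toy fragment. The entries below are hand-made for illustration, not WordNet's actual data:

```python
# A hand-made fragment of a WordNet-like hypernym ('is-a') hierarchy.
hypernym = {
    "dog": "canine",
    "canine": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
    "animal": "entity",
}

def hypernym_chain(word):
    """Walk up the hierarchy, as WordNet's hypernym relation does."""
    chain = [word]
    while chain[-1] in hypernym:
        chain.append(hypernym[chain[-1]])
    return chain

print(" -> ".join(hypernym_chain("dog")))
# → dog -> canine -> carnivore -> mammal -> animal -> entity
```

Every arrow in such a chain is a classification decision someone made; the 'universal structure' is a long list of such decisions.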

Thesis intention

I would like to integrate my thesis into my graduation project, to let it be the content of the publication(s). This could take multiple forms, for example:

  • interviews with creators of datasets or lexicons like WordNet
  • close readings of a piece of software, like we did during the workshop at Relearn. Options could be: the text-mining software Pattern (Relearn), or Weka 3; or WordNet, ConceptNet, OpenCyc


Practical steps

how?

  • starting a series of reading/writing exercises, in continuation of the way of working in the prototype classes and during Relearn.
    • mapping WordNet's structure
    • using WordNet as a writing filter?
    • WordNet as structure for a collection (similar to the way I've used the SUN database)

while using open-source software, in order to be able to have a conversation with the tools that will be discussed, and to open them up.

questions of research

  • How can an interface reveal its inner system? How can structural decisions be design actions? And how could a workflow affect the information it is processing?
  • how to communicate an alternative view on algorithmic reading-writing machines?
  • how to build and maintain a (collaborative) publishing project?
    • technically: what kind of system to use to collect? wiki? mailing-list interface?
    • what kind of system to use to publish?
    • publishing: online + print --> inter-relation
    • in what context?

references

  1. James Gleick's personal webpage, The Information: a History, a Theory, a Flood - James Gleick (2011)

current or former (related) magazines :

other publishing platforms :

publications :

datasets

* WordNet (Princeton)
* ConceptNet 5 (MIT Media)
* OpenCyc

people

algorithmic culture

Luciana Parisi
Matteo Pasquinelli
Antoinette Rouvroy
Seda Gurses 

other

Software Studies. A lexicon. by Matthew Fuller (2008)

reading list

notes and related projects

BAK lecture: Matthew Fuller, on the discourse of the powerpoint (Jun. 2015) - annotations

project: Wordnet

project: i will tell you everything (my truth is a constructed truth)

project: serving simulations