User:Manetta/graduation-proposals/proposal-0.5

graduation proposal +0.5

title: "i could have written that"

alternatives:

==introduction==

''For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself "I could have written that". With that thought he moves the program in question from the shelf marked "intelligent" to that reserved for curios, fit to be discussed only with people less enlightened than he.'' (Joseph Weizenbaum, 1966)


===abstract===
'i-could-have-written-that' will be a publishing project around technologies that process natural language. The project will place linguistic ''tools'' and ''techniques'' at its centre, to report on their constraints and possibilities. How do these techniques mediate the written word? What effect do they have on the information they transmit and the people who use them? How is natural language processed into data? These questions will hopefully lead to conversations that are not limited to technological aspects, but also reflect on the cultural & political implications of language processing software.


==context==
'''In current reactions to''' the effects of an optimistic belief in computation, '''I recognize a pattern''' of requests to approach computation and algorithmic culture as 'synthetic' processes, '''because''' there is a convention of attributing anthropomorphic qualities to computers (like 'thinking' or 'intelligence') that obscures their syntactical nature. '''Also, because of''' the ideological aim of big data to have direct access to information, there is a belief that 'truth' can be gained from data without any mediation. '''This could be countered by''' a collection of alternative perspectives on these techniques, made accessible and legible for people with an interest in regarding software as a cultural product. The choice to focus specifically on software that processes natural language is both poetic and a way to speak directly with the techniques that create information by ''using'', ''executing'' & ''processing'' language*; the main communication system for both humans & computers. (Though stating this feels dangerous here: they differ!) '''It would then be possible''' to formulate an opinion or understanding about computational (syntactical) techniques without relying only on the main (and overpowering) perspective.
--------------------------------------------------------
* ''Critical thinking about computers is not possible without an informed understanding.'' (...) ''Software as a whole is not only 'code' but a symbolic form involving cultural practices of its employment and appropriation.'' (...) ''It's upon critics to reflect on the constraints that computer control languages write into culture.'' — from: Language, by Florian Cramer; published in Software Studies, edited by Matthew Fuller (2008)
* ''in many cases these debates [about artificial intelligence] may be missing the real point of what it means to live and think with forms of '''synthetic intelligence''' very different from our own.'' (...) ''To regard A.I. as inhuman and machinic should enable a more reality-based understanding of ourselves, our situation, and a fuller and more complex understanding of what 'intelligence' is and is not.'' — from Benjamin Bratton, Outing A.I. Beyond the Turing Test (2015)
* ''Raw-data is like nature, it is the idea that nature will speak by itself. It is the idea that thanks to big data, the world speaks by itself without any transcription, symbolization, institutional mediation, political mediation or legal mediation'' — from: Antoinette Rouvroy; during her lecture as part of the panel 'All Watched Over By Algorithms', held during the Transmediale festival (2015)
* ''The issue we are dealing now with in machine learning and big data is that a heuristic approach is being replaced by a certainty of mathematics. And underneath those logics of mathematics, we know there is no certainty, there is no causality. Correlation doesn't mean causality.'' — from: Colossal Data and Black Futures, lecture by Ramon Amaro; as part of the Impakt festival (2015)


<small>* A computer is a linguistic device. When using a computer, language is ''used'' (as an interface to control computer systems), ''executed'' (as scripts written in (a set of) programming languages) and ''processed'' (turning natural language into data).</small>


==how?==
* a central collection of annotations on language processing tools (wiki / git + gitweb / ...)
* a periodical (edited) selection from this collection that will be published; the format will not be fixed: it could be printed, digital, a combination of both, offline or online; this is the publishing-experiment part; the public aimed at is interested in regarding software as a cultural product (hence more than a technological tone is needed)


'i-could-have-written-that' will be a publishing experiment that formulates design questions on a 'workflow' level. The publications will not only question how information can be structured, visualized and/or related to a context through semiotic processes, but also how 'design' can construct information processes, to create (human-)readable documents, datasets, bug reports, and tutorials.


These aims are related to cultural principles present in the field of 'free and open software': take for example ''the aim for distribution instead of centralized sources'' (for example Wikipedia or Git), the aim of ''making information available'' (in the sense that it should not only be available but also legible), and the aim for ''collaboration'' (in the sense of learning together). These principles will influence my design choices, for example: to consider an infrastructure that enables collaborative work.


===from designing information, to designing information processes===
Coming from a background in graphic design, I was educated in a traditional way (with a focus on typography and aesthetics), in combination with courses in 'design strategy' and 'meaning-making' (which were not defined in such clear terms, by the way). I became interested in semiotics, and in systems that use symbols/icons/indexes to produce meaning.


After my first year at the Piet Zwart, I feel that my interest is shifting from designing information on an interface level to designing information processes. Being fascinated by the inner workings of techniques, and affected by 'free and open software' principles, brings up a whole set of new design questions. For example: How can an interface ''reveal'' its inner system? How can infrastructural decisions ''be'' design actions? And how could a workflow affect the information it is processing?


Would you like to be dependent on an online service like WordPress to publish your material? Would you like to be able to work on your blog only when online? How do you preserve the material you publish? When can a document be called 'a document': when it is readable for the user, or for the computer? How could the inner workings of an online document be implemented in its interface? How can we work together on the same file at the same time?

Existing techniques that already give answers to some of these questions are Git and MediaWiki. I would like to include these questions in my graduation work, and to work with these software packages.
--------------------------------------------------------
('''In the field of''' graphic design, '''I recognize a pattern''' of 'designing' on a high level. I call this 'high level design' '''because of''' the tradition of formulating design questions on the level of the interface and the interpretation of the reader. This is different in the field of computed type design, where fonts are built out of outlines, skeletons or dots, depending on their construction. The field of experimental book design could also be regarded as an exception. In the field of web design, working on the level of the interface is very common too, but this '''could be avoided by''' regarding 'design' as designing a 'workflow'. That will lead to a design practice where current general questions can be included as design questions. Design questions are then related to your tools (software), social work structures, principles, etc.)
==#0 issue==
=== intro ===
* WordNet as a dataset that 'maps' language
** Not 'mapping' as a tool to understand (as Julie speaks about mapping the physicality of the Internet), but rather 'mapping' in the sense of 'modeling', in order to '''automate''' 'natural language processes'.


&rarr; 'automation' is key here (an aim to automate relies on linguistic models)


&rarr; a western urge to simplify / structure / archive knowledge, as sharing knowledge is regarded as something that will bring development to society in the future




* starting a series of reading/writing exercises, in continuation of the way of working in the prototype classes and during Relearn (a first sketch follows below):
** mapping WordNet's structure
** using WordNet as a writing filter?
** WordNet as structure for a collection (similar to the way I've used the SUN database)


All this while using open-source software, in order to be able to have a conversation with the tools that will be discussed: to open them up.
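A first minimal sketch of the 'mapping WordNet's structure' exercise, assuming NLTK's interface to the WordNet data (the helper <code>hypernym_chain</code> is mine, purely illustrative):

<source lang="python">
# a minimal sketch: tracing WordNet's hypernym 'skeleton' with NLTK
# assumes: pip install nltk, and a one-time nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def hypernym_chain(word):
    """Follow the first sense of a word up WordNet's hypernym tree."""
    synset = wn.synsets(word)[0]        # take the first (most common) sense
    chain = [synset]
    while synset.hypernyms():
        synset = synset.hypernyms()[0]  # follow one 'route' up the model
        chain.append(synset)
    return chain

for synset in hypernym_chain('book'):
    print(synset.name(), '-', synset.definition())
# the chain ends at entity.n.01: every word is modeled into one single tree
</source>

Every word resolves to the same root, which is exactly the kind of modeling decision such an exercise wants to make visible.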


=== elements ===
the following elements could be part of this issue (now collected [http://pzwart1.wdka.hro.nl/~manetta/i-could-have-written-that/ on the 'i-could-have-written-that' webpage]):

&rarr; WordNet as structure
[http://pzwart1.wdka.hro.nl/~manetta/i-could-have-written-that/elements/wordnet-skeleton/wordnet-skeleton.html i-could-have-written-that/ WordNet skeleton]

&rarr; a historical list of ''information processing systems''
[http://pzwart1.wdka.hro.nl/~manetta/i-could-have-written-that/elements/historical-list-of-information-systems/historical-list-of-information-systems.html i-could-have-written-that/ historical list of information systems]

&rarr; text on ''automatic reading machines'', placing automation in an optical process (1967) in contrast with an algorithmic process (2015)
[http://pzwart1.wdka.hro.nl/~manetta/i-could-have-written-that/elements/automatic-reading-machines/automatic-reading-machines.html i-could-have-written-that/ Automatic Reading Machines]


==little glossary==
===language (as a resource)===


It is debatable whether language itself can be regarded as a technology or not. For my project I will follow James Gleick's statement in his book 'The Information: a Theory, a History, a Flood'<ref name="gleick">[http://around.com/the-information James Gleick's personal webpage], The Information: a Theory, a History, a Flood - James Gleick (2011)</ref>, where he states: ''Language is not a technology, (...) it is not best seen as something separate from the mind; it is what the mind does. (...) but when the word is instantiated in paper or stone, it takes on a separate existence as artifice. It is a product of tools and it is a tool.''


===natural language?===
For this project I would like to look at 'natural language' from a perspective grounded in ''computer science'', ''computational linguistics'' and ''artificial intelligence'' (AI), where the term 'natural language' is mostly used in the context of 'natural language processing' (NLP), a field of study that researches the transformation of natural (human) language into information(?), a format(?) that can be processed by a computer. (language-(...)-information)
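To make that transformation concrete, a minimal sketch in plain Python; the word-count 'format' below is only one possible answer to the question marks above:

<source lang="python">
# a minimal sketch: natural language reduced to countable data
from collections import Counter

def text_to_data(text):
    """Reduce a sentence to a bag of word counts: language as numbers."""
    tokens = text.lower().split()                 # crude tokenization
    tokens = [t.strip('.,;:!?') for t in tokens]  # strip punctuation
    return Counter(tokens)

print(text_to_data("I could have written that. I could have."))
# Counter({'i': 2, 'could': 2, 'have': 2, 'written': 1, 'that': 1})
</source>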


===systemization===
I'm interested in that moment of systemization: the moment that language is transformed into a model. These models are materialized in lexical datasets, text parsers, or data-mining algorithms. They reflect an aim of understanding the world through language (similar to how the invention of geometry made it possible to understand the shape of the earth).


===automation===
Such linguistic models are needed to write software that automates reading processes, a more specific example of natural language processing. This software aims to automate the steps of information processing, for example to generate new information (in data-mining) or to perform processes on a bigger scale (as in translation engines).


===reading (automatic reading machines)===
In 1967, The Journal of Typographic Research already expressed high expectations of such 'automatic reading machines', as they would ''widen the bounds of the field of data processing''. The 'automatic reading machine' they referred to used an optical reading process, which would be optimized thanks to the design of a specific font (called OCR-B). It was created to optimize reading both for the human eye and for the computer (using OCR software).


===reading-writing (automatic reading-writing-machines)===
An optical reading process starts by recognizing a written character by its form, and transforming it into its digital 'version'. This optical reading process can be compared to an encoding and decoding process (like Morse code), in the sense that the process can also be executed in reverse without producing different information. The translation process is a direct process.
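A minimal sketch of that reversibility in plain Python, with an abbreviated Morse table (just enough letters for the example):

<source lang="python">
# a minimal sketch: encoding and decoding as a direct, reversible process
MORSE = {'a': '.-', 'c': '-.-.', 'd': '-..', 'e': '.'}   # abbreviated table
REVERSE = {code: letter for letter, code in MORSE.items()}

def encode(word):
    return ' '.join(MORSE[letter] for letter in word)

def decode(signal):
    return ''.join(REVERSE[code] for code in signal.split(' '))

word = 'decade'
assert decode(encode(word)) == word  # the round trip loses no information
print(encode(word))                  # -.. . -.-. .- -.. .
</source>

Running the process backwards returns exactly the input; this is what distinguishes it from the algorithmic 'reading' described below.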


But technologies like data-mining process data less directly. Every result that has been 'read' by the algorithmic process is one version, one interpretation, one information-processing 'route' through the written text. Data-mining can therefore not be called a 'read-only' process, but is better labeled as a 'reading-writing' process. Where does the data-mining process write?


===tools & technologies===
Tools & examples to look at are: WordNet (a lexical dataset), Pattern (text-mining software), programming languages (high-level languages like Python, or markup languages like HTML), text parsing (turning text into numbers), and ngrams (common word combinations); a small sketch of the ngram case follows below.
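A minimal sketch of that last item in plain Python (no particular library assumed, though Pattern and NLTK ship their own ngram functions):

<source lang="python">
# a minimal sketch: ngrams, the 'common word combinations' listed above
def ngrams(text, n=2):
    """Slide a window of n words over a text, collecting each combination."""
    words = text.lower().split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

print(ngrams("i could have written that", 2))
# [('i', 'could'), ('could', 'have'), ('have', 'written'), ('written', 'that')]
</source>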










abstract

'i-could-have-written-that' will be a publishing project around technologies that process natural language. The project will put linguistic tools and techniques central, to report on their contraints and possibilities. How do these techniques mediate the written word? What effect do they have on the information they transmit and the people that use them? How is natural language processed into data? These questions will hopefully lead to conversations that are not limited to technological aspects, but also reflect on cultural & political implications of language processing software.

context

In current reactions on the effects of an optimistic believe in computation, i recognize a pattern of requests for a focus on computation and algorithmic culture as 'synthetic' processes, because there is a convention to recognize anthropomorphic qualities in computers (like 'thinking' or 'intelligence') that obscures their syntactical nature. Also, because of ideological aims of big data to have direct access to information, there is a believe that 'truth' can be gained from data without any mediation. This could be avoided by a collection of alternative perspectives on these techniques, made accessible and legible for people with an interest to regard software as a cultural product. A choice for a focus on specifically software that processes natural language, is both a poetic, as also a way to speak directly with the techniques that create information using, excecuting & processing language*; the main communication system for both humans & computers. (Though stating this feels dangerous here: they differ!) Then it would be possible to formulate an opinion or understanding about computational (syntactical) techniques without relying only on the main (and overpowering) perspective.


  • Critical thinking about computers is not possible without an informed understanding. (...) Software as a whole is not only 'code' but a symbolic form involving cultural practices of its employment and appropriation. (...) It's upon critics to reflect on the contraints that computer control languages write into culture. — from: Language, by Florian Cramer; published in Software Studies, edited by Matthew Fuller (2008)
  • in many cases these debates [about artificial intelligence] may be missing the real point of what it means to live and think with forms of synthetic intelligence very different from our own. (...) To regard A.I. as inhuman and machinic should enable a more reality-based understanding of ourselves, our situation, and a fuller and more complex understanding of what 'intelligence' is and is not. — from Benjamin Bratton, Outing A.I. Beyond the Turing Test (2015)
  • Raw-data is like nature, it is the idea that nature will speak by itself. It is the idea that thanks to big data, the world speaks by itself without any transcription, symbolization, institutional mediation, political mediation or legal mediation — from: Antoinette Roivoy; during her lecture as part of the panel 'All Watched Over By Algorithms', held during the Transmediale festival (2015)
  • The issue we are dealing now with in machine learning and big data is that a heuristic approach is being replaced by a certainty of mathematics. And underneath those logics of mathematics, we know there is no certainty, there is no causality. Correlation doesn't mean causality. — from: Colossal Data and Black Futures, lecture by Ramon Amaro; as part of the Impakt festival (2015)

* A computer is a linguistic device. When using a computer, language is used (as interface to control computer systems), executed (as scripts written in (a set of) programming languages) and processed (turning natural language into data).

how?

  • central collection of annotations to language processing tools. (wiki / git + gitweb / ...)
  • periodical (edited) selection from this collection that will be published; the format will not be fixed, could be printed, digital, combination, offline, online; this is the publishing experiment part; the public aimed at is interested to regard software as a cultural product (not only a technological tone is hence needed)

'i-could-have-written-that' will be a publishing experiment that will formulate design questions on an 'workflow' level. The publications will not only question how information can be structured, visualized and/or related to a context through semiotic processes, but rather will question how 'design' can construct information processes, to create (human) readable documents, datasets, bug-reports, and tutorials.

These aims are related to cultural principles present in the field of 'free and open software': take for example the aim for distribution in stead of centralized sources (for example: Wikipedia or Git), the aim of making information available (in the sense that it should not only be available but also legible), and the aim for collaboration (in the sense of learning together). These principles will influence my design choices, for example: to consider an infrastructure that enables collaborative work.

from designing information, to designing information processes

Comming from a background in graphic design, i got educated in a traditional way (focus on typography and aesthetics) in combination with courses in 'design strategy' and 'meaning-making' (which was not defined in such clear terms btw.). I became interested in semiotics, and in systems that use symbols/icons/indexes to gain meaning.

After my first year at the Piet Zwart, i feel that my interest shifts from designing information on an interface level, to designing information processes. Being fascinated by looking at inner workings of techniques and being affected by the 'free and open software' principles, bring up a whole set of new design questions. For example: How can an interface reveal its inner system? How can infrastructural descisions be design actions? And how could a workflow effect the information it is processing?

would you like to be dependent on an online service like WordPress, to publish your material? Would you like to be able to work on your blog online only? How do you preserve the material you will publish? When could a document be called 'a document'; when it is readable for the user or for the computer? How could you implement the inner working of an online document in the interface? How can we work together at the same file at the same time?

Existing techniques that already give answers to these questions are GIT and MediaWiki. I would like to include these questions in my graduation work, and work with these software packages.


(in the field of graphic design, i recognize a pattern of 'designing' on a high level. I call this 'high level design' because of the tradition to formulate design questions on the level of the interface and the interpretation of the reader. This is different in the field of computed type design, where fonts are built out of outlines, skeletons or dots; depending on their constructions. Also the field of experimental book design could be regarded as a exception. Though, in the field of web design, working on the level of the interface is very common, but could be avoided by regarding 'design' as designing a 'workflow'. That will lead to a design practise where current general questions could be included as design questions. Design questions are then related to your tools (software), social workstructure, principles, etc.)

#0 issue

intro

  • WordNet as a dataset that 'maps' language
    • Not 'mapping' as a tool to understand (as a primary aim) (as Julie speaks about mapping the physicality of the Internet) but rather 'mapping' in the sense of 'modeling', in order to automate 'natural language processes'.

→ 'automation' is key here ? (an aim to automate relies on linguistic models)

→ western urge to simplify / structure / archive knowledge, as sharing knowledge is regarded as something that will bring development in society for the future

  • starting a series of reading/writing excercises, in continuation of the way of working in the prototype classes and during Relearn.
    • mapping WordNet's structure
    • using WordNet as a writing filter?
    • WordNet as structure for a collection (similar to the way i've used the SUN database)

while using open-source software, in order to be able to have a conversation with the tools that will be discussed, open them up.

elements

the following elements could be part of this issue: (now collected on this 'i-could-have-written-that' webpage)

→ WordNet as structure
i-could-have-written-that/ WordNet skeleton




==Relation to previous practice==

In the last year, I've been looking at different tools that process natural language. From speech-to-text software to text-mining tools, they all systemize language in various ways.

===training common sense, a work track at Relearn 2015===

As a continuation of that, I took part in the Relearn summer school in Brussels last August (2015), where I proposed a work track in collaboration with Femke Snelting on the subject of 'training common sense'. With a group of people we have been trying to deconstruct the 'truth-construction' in algorithmic cultures: looking at data-mining processes, deconstructing the mathematical models that are used, finding moments where semantics are mixed with mathematics, and trying to grasp what kind of cultural context is created around this field. We worked with a text-mining software package called 'Pattern'. The workshop during Relearn transformed into a project that we called '#!Pattern+', which will be strongly collaborative and ongoing over a longer time span. #!Pattern+ will be a critical fork of the latest version of Pattern, including reflections and notes on the software and the culture that surrounds it. The README file that has been written for #!Pattern+ is online here, and more information is collected on this wiki page.
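To give a sense of the calls we were annotating there, a minimal sketch against Pattern's <code>pattern.en</code> module (assuming CLiPS Pattern 2.6 under Python 2; the example sentences are mine):

<source lang="python">
# a minimal sketch of the kind of Pattern calls examined during the work track
# assumes CLiPS Pattern 2.6 (Python 2): pip install pattern
from pattern.en import parse, sentiment

# part-of-speech tagging: the sentence comes back annotated, word by word
print(parse("I could have written that."))

# sentiment() reduces a sentence to (polarity, subjectivity) scores;
# the lexicon behind these two numbers is where semantics meets mathematics
print(sentiment("A mere collection of procedures, each quite comprehensible."))
</source>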

==='i will tell you everything (my truth is a constructed truth)', catalog of the "Encyclopedia of Media Object" exhibition at V2, June 2015===

Another entrance to understanding what happens in algorithmic practices such as machine learning is to look at the training sets that are used to train algorithms to recognize certain patterns in a set of data. These training sets can contain a large set of images, texts, 3D models, or videos. By looking at such datasets, and more specifically at the choices that have been made in terms of structure and hierarchy, steps in the construction of a certain 'truth' are revealed. For the exhibition "Encyclopedia of Media Object" in V2 last June, I created a catalog, voice-over and booklet, which placed the objects from the exhibition within the framework of the SUN database, a resource of images for image-recognition purposes. (link to the 'i-will-tell-you-everything (my truth is a constructed truth)' interface)

There are a few datasets in the academic world that seem to be the basic resources these training sets are built upon. In the field they are called 'knowledge bases'. They live on a more abstract level than the training sets do, as they try to create a 'knowledge system' that could function as a universal structure. Examples are WordNet (a lexical dataset), ConceptNet, and OpenCyc (an ontology dataset). In the last months I've been looking into WordNet, worked on a WordNet Tour (still ongoing), and made an alternative browser interface (with cgi) for WordNet. It is all a process that has not yet been transformed into an object/product, but until now it is documented here and here on the Piet Zwart wiki.

==Thesis intention==

I would like to integrate my thesis into my graduation project, to let it be the content of the publication(s). This could take multiple forms, for example:

* an interview with creators of datasets or lexicons like WordNet
* a close reading of a piece of software, like we did during the workshop at Relearn; options could be the text-mining software ''Pattern'' (Relearn) or ''Weka 3.0'', or WordNet, ConceptNet, OpenCyc

== references ==
<references />

===publishing experiments===
* [http://networkcultures.org/digitalpublishing/ digital publishing toolkit]
* [http://activearchives.org/aaa/ Active Archives VideoWiki - Constant's VideoWiki]
* [http://post-inter.net/ art post-internet (2014), a PDF + webpage catalogue]
* [https://mcluhan.consortium.io/ Hybrid Lecture Player (to be viewed in Chrome/Chromium)]

===current or former (related) magazines===
* [https://s3-us-west-2.amazonaws.com/visiblelanguage/pdf/V1N1_1967_E.pdf The Journal of Typographic Research (1967-1971)] (now: [http://visiblelanguagejournal.com/about Visible Language])
* [http://www.radicalsoftware.org/e/index.html Radical Software (1970-1974, NY)]
* [http://ds.ccc.de/download.html die Datenschleuder, Chaos Computer Club publication (1984-ongoing, DE)]
* [http://www.dot-dot-dot.us/ Dot Dot Dot (2000-2011, USA)]
* [http://www.servinglibrary.org/ the Serving Library (2011-ongoing, USA)]
* OASE, on architecture (NL)
* [http://libregraphicsmag.com/ Libre Graphics Magazine (2010-ongoing)]
* [https://worksthatwork.com/ Works that Work (2013-ongoing, NL)]
* [http://neural.it/ Neural (IT)]
* [http://www.aprja.net/ Aprja (DK)]

===other publishing platforms===
* [http://monoskop.org/Monoskop Monoskop]
* [http://unfold.thevolumeproject.com/ unfold.thevolumeproject.org]
* mailinglist interface: lurk.org
* mailinglist interface: nettime --> discussions in public
* [http://p-dpa.net/ archive of publications closely related to technology: P-DPA (Silvio Lorusso)]

===datasets===

* WordNet (Princeton)
* ConceptNet 5 (MIT Media)
* OpenCyc

===people===

====algorithmic culture====
* Antoinette Rouvroy
* Seda Gurses
* Ramon Amaro

====computational culture====
* Florian Cramer
* Benjamin Bratton

==== other ====
[https://mitpress.mit.edu/books/software-studies Software Studies. A lexicon. by Matthew Fuller (2008)]

===notes and related projects===

* BAK lecture: Matthew Fuller, on the discourse of PowerPoint (Jun. 2015) - annotations
* project: WordNet
* project: i will tell you everything (my truth is a constructed truth)
* project: serving simulations