User:Manetta/graduation-proposals/proposal-0.1

From XPUB & Lens-Based wiki
Revision as of 12:57, 23 September 2015 by Manetta (talk | contribs)

graduation proposal +0.1.1

title: "i could have written that"

alternatives:

  • turning words into numbers
  • machine-human-machine

Introduction

For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself "I could have written that". With that thought he moves the program in question from the shelf marked "intelligent" to that reserved for curios, fit to be discussed only with people less enlightened than he. (Joseph Weizenbaum, 1966)


what are you doing?

In the last year, I've been looking at different tools that contain linguistic systems. From speech-to-text software to text-mining tools, they all systemize language in various ways in order to understand natural language, as human language is called in computer science. These tools fall under the term 'Natural Language Processing' (NLP), a field of computer science that is closely related to Artificial Intelligence (AI).

As a continuation of that, I took part in the Relearn summer school in Brussels last August, proposing a working track in collaboration with Femke Snelting on the subject of 'training common sense'. With a group of people we have been trying to deconstruct the truth-construction in algorithmic cultures: looking at data-mining processes, deconstructing the mathematical models that are used, and understanding which cultural context is created around this field.

Another way into understanding what happens in algorithmic practices such as machine learning is to look at the training sets that are used to teach software to recognize patterns. These training sets can contain large collections of images, texts, 3D models, or videos. By looking at such databases, and more specifically at the choices that have been made in terms of structure and hierarchy, certain steps in the construction of a certain 'truth' are revealed.
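To make the role of those choices concrete, here is a minimal sketch of how a training set feeds a pattern recognizer. The sentences, labels, and the word-overlap classifier are all hypothetical stand-ins, far simpler than any real system; the point is that the labels are editorial decisions made before any algorithm runs.

```python
from collections import Counter

# A miniature 'training set': the labels are choices,
# fixed by a human before the algorithm ever sees the data.
training_set = [
    ("the cat sat on the mat", "animal"),
    ("a dog barked at the door", "animal"),
    ("the engine stalled on the highway", "machine"),
    ("the printer jammed again", "machine"),
]

# Count how often each word occurs under each label.
counts = {}
for text, label in training_set:
    counts.setdefault(label, Counter()).update(text.split())

def classify(text):
    """Pick the label whose training vocabulary overlaps most with the text."""
    words = text.split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

print(classify("the dog sat on the mat"))  # leans toward 'animal'
```

Whatever 'truth' the classifier produces is bounded by the four labeled sentences above; changing the labels changes the truth.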

There are a few datasets in the academic world that seem to serve as basic resources to build these training sets upon. In the field they are called 'knowledge bases'. They live on a more abstract level than the training sets do, as they try to create a 'knowledge system' that could function as a universal structure. Examples are WordNet (a lexical dataset), ConceptNet, and OpenCyc (an ontology dataset).
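A sketch of what such a 'universal structure' looks like in practice: WordNet organizes words into hypernym ('is-a') chains. The fragment below is hand-made for illustration, not taken from the actual WordNet database, but the shape is the same, and it shows how the hierarchy itself is a constructed choice.

```python
# A hand-made fragment in the spirit of WordNet's hypernym ('is-a') chains;
# illustrative entries, not actual WordNet data.
hypernym = {
    "dog": "canine",
    "canine": "carnivore",
    "carnivore": "mammal",
    "mammal": "animal",
    "cat": "feline",
    "feline": "carnivore",
}

def hypernym_chain(word):
    """Walk upward through the hierarchy until no more general term exists."""
    chain = [word]
    while chain[-1] in hypernym:
        chain.append(hypernym[chain[-1]])
    return chain

print(hypernym_chain("dog"))
# ['dog', 'canine', 'carnivore', 'mammal', 'animal']
```

Every edge in this dictionary is a decision: that a dog *is* a canine, and that 'animal' is where the chain should stop. Software trained on top of such a base inherits those decisions silently.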


what do you want to do?

  • setting up a publishing platform / magazine to reveal the inner workings of technologies that systemize language.


Relation to previous practice

Relation to a larger context

Thesis intention

Practical steps

how?

  • following the approach of 'i will tell you everything' (my truth is a constructed truth), which ......................
  • writing from a technology point of departure
references: 
- Matthew Fuller, powerpoint
- Constant, pipelines
- Steve Rushton, feedback
- Angie Keefer, Octopus

References

people

algorithmic culture

Luciana Parisi, Matteo Pasquinelli, Antoinette Rouvroy, Seda Gurses

other

Matthew Fuller

software

reading list

notes and related projects

BAK lecture: Matthew Fuller, on the discourse of the powerpoint (Jun. 2015) - annotations




graduation proposal +0.1.2

title: #!PATTERN+

Introduction

what are you doing?

During the last edition of Relearn this summer, a group of people collaborated to deconstruct the truth-construction in algorithmic cultures. The session was called 'training common sense', and started from the intuition that algorithmic truth constructions rely heavily on a certain amount of 'common sense'.

  • understanding algorithmic processes,
  • questioning where choices are made in the construction
  • finding moments where semantics are mixed with math

+ looking at the data-mining culture

what do you want to do?

  • publishing about the truth construction of algorithmic culture

A way of speaking back to these algorithmic cultural fields would be to publish a critical fork of the text-mining software package Pattern. The fork, called #!PATTERN+, will be a new release of the original package, which is developed by the CLiPS research group at the University of Antwerp.

Relation to previous practice

Relation to a larger context

Thesis intention

Practical steps

how?

The critical fork '#!PATTERN+' will add annotations to the original package: comments inside the code, files that reflect questions that arose, and alternative tutorials. By asking the question 'how is algorithmic truth constructed?', these #!PATTERN+ notes will be developed in one of the following three sub-fields:

  • Knowledge Discovery in Data (KDD) steps - a technical approach, revealing the process from 'raw' data to the presentation of the results
  • text mining case studies - a reflection on the research projects that have been done or are ongoing, working with examples and demos
  • culture of data-mining - a more context-based approach, looking at the communication conventions that are present in the field of data-mining
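The first sub-field, the KDD steps, can be sketched end to end in a few lines. The stages and data below are toy stand-ins (the function names are illustrative, not from Pattern or any other package), but they trace the same path from 'raw' data to presented results, and show that each stage quietly makes a choice.

```python
from collections import Counter

# Toy 'raw' data: messy, inconsistent, partly empty.
raw_data = ["  Cats PURR. ", "dogs BARK!", "", "cats purr loudly"]

def select(records):
    # selection: a choice about what counts as data at all (empty lines dropped)
    return [r for r in records if r.strip()]

def preprocess(records):
    # preprocessing: normalise case and whitespace, flattening variation
    return [" ".join(r.lower().split()) for r in records]

def mine(records):
    # 'mining': the simplest possible pattern, a word-frequency count
    return Counter(w.strip(".!") for r in records for w in r.split())

def present(counts, n=3):
    # presentation: only the top-n results reach the reader,
    # another moment where a choice shapes the 'truth' that is shown
    return counts.most_common(n)

print(present(mine(preprocess(select(raw_data)))))
```

Each function discards something: empty records, capitalization, word order, low-frequency results. What remains at the end is presented as a finding, while the discarding itself stays invisible.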


References

people

Luciana Parisi, Matteo Pasquinelli, Antoinette Rouvroy, Seda Gurses

software

  • CLiPS Pattern, official website

reading list

via Femke: Seda [Gurses preparing a session on Machine Learning]

notes and related projects

Transmediale lecture: All Watched Over By Algorithms (Jan. 2015) - annotations

BAK lecture: Algorithmic Culture (Jun. 2015) - annotations

earlier notes (Jul. 2015)

notes taken during Relearn Aug. 2015, training common sense

#!PATTERN+.readme

the Annotator, Cqrrelations Jan. 2015, on etherpad

notes Cqrrelations day 1, day 2, day 3, day 4, day 5