User:Silviolorusso/prototyping/chitchatwithpetraandmarie

From XPUB & Lens-Based wiki
Text-based reworking, databases and stuff




== Inspirations ==

Collection of sources:

* news portals
* "Thank you for the add."
* importing text into a layout
* Onomatopee book
* buying a word in order to have ads
* Nike captcha
* Google Alphabet: http://vimeo.com/26191221
* Suggestions: http://www.silviolorusso.com/suggestions/suggestions_verbs.php
* ideal city (by Petra)
* http://www.a-blast.org/
* The Form and the Frame: http://www.onomatopee.net/pages/projecten/omp41.html


== Questions and ideas ==


* In a text, each word connected to its definition.
* What is behind a word?
* A recursive process into the same text.
* What can a word link to?
* Using URLs: www.one.com www.day.com www.i.com www.was.com
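The last idea can be sketched in a few lines. Here is a minimal, hypothetical helper (the name <code>wordify_urls</code> is made up, not part of the original notes) that turns every word of a text into a .com address:

<source lang="python">
import re

def wordify_urls(text):
    """Turn each word of a text into a www.<word>.com address."""
    words = re.findall(r"[a-zA-Z]+", text)  # keep letters only, drop punctuation
    return " ".join("www.%s.com" % w.lower() for w in words)

print(wordify_urls("One day I was"))
# www.one.com www.day.com www.i.com www.was.com
</source>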





Latest revision as of 15:07, 9 February 2012

Chitchat with Marie, Petra and Silvio | 19/01/12



== Disappearance, repetition and consumption ==

I wrote a bit of code that does what we were discussing in class: each word in a text becomes more transparent each time it is repeated. I was thinking about the world of information and news, and that the meaning of words is consumed by their repetition. Think of #OccupyWallStreet; think of what will happen with SOPA.

Here's the script if you wanna play with it.

 
<source lang="python">
#!/usr/bin/python

import re

text = "The text that has to be processed."
words = text.split()
wordsOp = ""
checklist = {}

for word in words:
    # strip punctuation to get the bare word
    bare = re.findall(r'\w+', word)
    if not bare:
        continue  # token was punctuation only
    # make the word lowercase
    wordLow = bare[0].lower()
    # first occurrence starts at 0 (black); every repetition raises
    # the count, pushing the colour towards 255 (white)
    if wordLow in checklist:
        checklist[wordLow] += 1
    else:
        checklist[wordLow] = 0
    # clamp to 255, the maximum rgb component
    grey = min(checklist[wordLow], 255)
    # wrap the word in a span carrying its grey level (HTML)
    wordsOp += "<span style=\"color:rgb(%d,%d,%d);\">%s</span> " % (grey, grey, grey, word)

# print the new string
print wordsOp
</source>

And here's a little demo of how it works:

This text is going to disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. 
disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear. disappear.
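The script above targets Python 2 (urllib2-era, print statement). For anyone on Python 3, the same fading logic can be sketched as a function; this is a rough port, not the original script, and the helper name <code>fade_html</code> is made up:

<source lang="python">
import re

def fade_html(text):
    """Wrap each word in a <span> whose grey level rises with repetition."""
    counts = {}
    spans = []
    for token in text.split():
        bare = re.findall(r"\w+", token)
        if not bare:
            continue                                # punctuation-only token
        key = bare[0].lower()
        counts[key] = counts.get(key, -1) + 1       # 0 on first occurrence
        grey = min(counts[key], 255)                # clamp to a valid rgb value
        spans.append('<span style="color:rgb(%d,%d,%d);">%s</span>'
                     % (grey, grey, grey, token))
    return " ".join(spans)

# grey goes 0, 1, 2 as the word repeats
print(fade_html("word word word"))
</source>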


== Possible texts to apply the script ==

* Lyotard, Les Immatériaux
* Derrida, Archive Fever
 
<source lang="python">
import sys, json, urllib2, os
from pprint import pprint

# Fetch up to 500 members of a Wikipedia category via the MediaWiki API
#urllib2.urlopen("http://en.wikipedia.org/w/api.php?format=json&action=query&titles=Nick%20Carter&prop=revisions&rvprop=content")
f = urllib2.urlopen("http://en.wikipedia.org/w/api.php?format=json&action=query&list=categorymembers&cmtitle=Category:American_child_singers&cmdir=desc&cmlimit=500")
data = json.load(f)

#Category:American_child_singers
#api.php?action=query&list=allcategories&acprefix=List%20of

#print data
print json.dumps(data, sort_keys=True, indent=4)

# For each member page, fetch the latest revision's wikitext and
# print it when it mentions 'song'
for r in data['query']['categorymembers']:
    print r["pageid"]
    f2 = urllib2.urlopen("http://en.wikipedia.org/w/api.php?format=json&action=query&pageids=" + str(r["pageid"]) + "&prop=revisions&rvprop=content")
    data2 = json.load(f2)
    maintext = data2['query']['pages'][str(r["pageid"])]["revisions"][0]["*"]
    if maintext.rfind('song') != -1:
        print maintext


#print data['query']['pages']['83106']
#print json.dumps(data['query']['pages'], sort_keys=True, indent=4)
#for r in data['query']['pages']:
#    print json.dumps(r["revisions"][0]["*"], sort_keys=True, indent=4)
</source>
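The same traversal in Python 3 terms (urllib2 became urllib.request) can be shown with the API response mocked as a local dict, so the shape of the JSON is visible without hitting the network. The sample page ids, titles, and revision texts below are made up:

<source lang="python">
import json

# Hypothetical sample of what the categorymembers query returns
sample = json.loads("""{
  "query": {
    "categorymembers": [
      {"pageid": 101, "title": "Example Singer A"},
      {"pageid": 102, "title": "Example Singer B"}
    ]
  }
}""")

# Mocked revision texts keyed by pageid (stand-ins for the per-page API calls)
revisions = {
    101: "A biography that mentions a song title.",
    102: "A biography with no matching keyword.",
}

# Same filter as the original script: keep pages whose text contains 'song'
matches = [m["pageid"] for m in sample["query"]["categorymembers"]
           if "song" in revisions[m["pageid"]]]
print(matches)
</source>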