User:SN/Untitled

From XPUB & Lens-Based wiki
Revision as of 08:06, 27 June 2016

==About the project==

Originally the project was a performance made by Pleun and me during the thematic project "Chain reaction". Pleun and I played on the field of combinatory literature, machine art, and chain reactions. We looked at the way YouTube's autoplay and Google's method of crawling the internet work and took them as a starting point. The process is based on loops of chain reactions, where an error plays the role of 'a decision maker' that stops the poem. The result is a list of words with or without a connection among them. The project highlights the role of the reader and their subjective interpretation: the reader is the one who creates connections between words. The computer pretends to be a human; the human pretends to be a computer; and in between there is a spectator who creates connections and interpretations.

During the performance, I operated the computer and played the role of a coupler between the audience, the computer, and Pleun. Pleun operated the printer.

  • The performance starts with providing a link to the Python script.
  • The script follows the link and extracts all the text and links on that page.
  • I ask a random person to feed me a number.
  • The number determines which word to extract.
  • I ask Pleun to print this word.
  • After she finishes, I ask a random person for one more number.
  • The number determines which link to follow.
  • The process repeats.
  • A failure to follow a link or extract a word causes the system to finish "the poem."

http://vimeo.com/172370428

The second version of the code includes a random function to generate the numbers. I ran the code several times, using different sites as a starting point. Here are some results.
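The second script itself is not included in this revision. As a minimal sketch of the substitution it describes, the two audience prompts can be replaced by calls to Python's `random` module; the helper name `pick_indices` is my own, not from the original code:

```python
import random

def pick_indices(words, links, rng=random):
    # In the second version, random numbers stand in for the audience's
    # input: one index selects a word from the page text, another selects
    # the next link to follow.
    number_word = rng.randint(0, len(words) - 1)
    number_link = rng.randint(0, len(links) - 1)
    return words[number_word], links[number_link]
```

Because the indices are bounded by the list lengths, an out-of-range number can no longer end the run here; "the poem" then finishes only when a link fails to load.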

==Process==

The main part of the project is a Python script:

import urllib, urlparse
import sys
from bs4 import BeautifulSoup
import re
import time

def visit(x):
	page = x
	list_links = []
	url = urllib.urlopen(page).read()
	html = BeautifulSoup(url, 'html.parser')
	# collect every link on the page, resolved to an absolute URL
	for link in html.find_all('a'):
		link = link.get('href')
		if link:
			link = urlparse.urljoin(x, link)
			list_links.append(link)

	# strip <script> and <style> elements so only visible text remains
	for script in html(["script", "style"]):
		script.extract()

	text = html.get_text()
	words = re.findall(r"\w+", text)
	print words, len(words)
	# the audience chooses which word to print and which link to follow
	number_word = raw_input('Feed me a number: ')
	number_link = raw_input('Feed me a number: ')
	random_word = words[int(number_word)]
	random_link = list_links[int(number_link)]
	return random_word, random_link

ww = []

try:
	link = raw_input("Feed me a link: ")
	while True:
		print "Visiting", link
		word, link = visit(link)
		print "WORD:", word
		ww.append(word)
		time.sleep(5)
# a broken link (IOError), an out-of-range number (IndexError) or
# a non-numeric answer (ValueError) ends "the poem":
# the collected words are printed as the final result
except (IOError, IndexError, ValueError):
	for wrd in ww:
		print wrd
	print "The End"
	sys.exit()
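The script above is Python 2 (`print` statements, `raw_input`, the old `urllib`/`urlparse` modules). Under Python 3 the page-scraping step could be sketched with only the standard library; the names `LinkTextParser` and `extract` are my own, and `html.parser` stands in for the BeautifulSoup dependency:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
import re

class LinkTextParser(HTMLParser):
    """Collects absolute hrefs and visible text, skipping <script>/<style>."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []
        self.text = []
        self._skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # resolve relative links against the page URL
                    self.links.append(urljoin(self.base_url, value))

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.text.append(data)

def extract(html_source, base_url):
    # return (words, links) for one page, like the body of visit() above
    parser = LinkTextParser(base_url)
    parser.feed(html_source)
    words = re.findall(r"\w+", " ".join(parser.text))
    return words, parser.links
```

The fetching step would then use `urllib.request.urlopen(...).read().decode()`, and `raw_input` becomes `input`.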

A mistake in the first version of the code caused the script to run in an infinite loop. The code was running for half an hour until my computer ran out of memory. I made a short screen recording of this process. There is something hypnotizing in it.

http://vimeo.com/172365632