User:Max Dovey/ PT/TRIMESTER 1 ntw6

===Week 3===
This script grabs JSON from the Twitter search API, writes the tweets to a text file, and loops every 55 seconds, sending the file to a network printer with lpr.
<source lang="python">
import os
import time
import twitter
import simplejson

# OAUTH_TOKEN, OAUTH_TOKEN_SECRET, CONSUMER_KEY and CONSUMER_SECRET must be
# defined above with your own Twitter API credentials
auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
                           CONSUMER_KEY, CONSUMER_SECRET)
twitter_api = twitter.Twitter(domain='api.twitter.com',
                              api_version='1.1',
                              auth=auth)

filepath = "Desktop/autoprinting/todo/"  # directory the print job is picked up from
filename = "cloud101"                    # base file name

# if the directory does not exist yet, create it
if not os.path.exists(filepath):
    os.makedirs(filepath)

# join directory and file name into one path
completepath = os.path.join(filepath, filename + ".txt")

while True:
    q = "the_cloud"
    count = 30
    f = open(completepath, "w")
    search_results = twitter_api.search.tweets(q=q, count=count)
    # dump each tweet's text and timestamp to the file, one per line
    for status in search_results['statuses']:
        text = status['text']
        date = status['created_at']
        simplejson.dump(text + date, f)
        f.write("\n")
    f.close()

    time.sleep(55)
    # if the file exists, send it to the printer with lpr and remove it afterwards (-r)
    if os.path.exists(completepath):
        os.system("lpr -p -r " + completepath)
</source>
===Week 4===
Basic arithmetic is about keeping count with positional notation: a digit moves into the next column each time you reach a power of the base. A decimal system uses ten figures (0,1,2,3,4,5,6,7,8,9); when a column passes 9 it hands a carry to the next cog, which keeps its own count, giving an effectively unlimited counting system.
This is the mechanism behind Babbage's Difference Engine.
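As a quick illustration of that carrying mechanism (my sketch, not from the class notes; to_digits is a made-up helper name), the same column-by-column counting can be written in Python by repeatedly dividing by the base:
<source lang="python">
# Convert a number into its digits in any base by repeatedly dividing and
# keeping the remainder: each column hands its overflow to the next one.
def to_digits(number, base):
    digits = []
    while number > 0:
        digits.insert(0, number % base)   # the current column
        number = number // base           # the carry passed to the next column
    return digits or [0]

print to_digits(10, 10)   # [1, 0]       -> decimal "10"
print to_digits(10, 2)    # [1, 0, 1, 0] -> binary "1010"
</source>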
George Boole brought binary values into written logic: he is the inventor of what is now known as Boolean logic, built from true and false statements and conditionals.
Lewis Carroll wrote logic conditionals into his Alice books:
"It is a very inconvenient habit of kittens (Alice had once made the remark) that, whatever you say to them, they always purr. 'If they would only purr for "yes" and mew for "no," or any rule of that sort,' she had said, 'so that one could keep up a conversation! But how can you talk with a person if they always say the same thing?'"
Lewis Carroll (1832 - 1898)
Alice's predicament is ours when talking with computers: her whole adventure is based on encountering scenarios and applying logic conditionals.
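A toy sketch (my addition) of Alice's wish written as a Boolean conditional in Python:
<source lang="python">
# Boolean logic: a value is either True or False, and a conditional branches on it.
kitten_purrs = True

if kitten_purrs:
    print "yes"   # purring stands for "yes"
else:
    print "no"    # mewing would stand for "no"
</source>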
8 bits = 8 on/off switches, giving 256 possible values (0-255) to allocate to characters.
All characters and commands are allocated via the ASCII table.
http://en.wikipedia.org/wiki/ASCII
Unicode (UTF-8, UTF-16): think of 'The Task of the Translator' by Walter Benjamin, everything is interpreted. Every web page is encoded: each byte is a counter, and its value maps to a glyph through the declared encoding.
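A quick sketch (mine, not from the class notes) of how a character is just a number underneath, and how a Unicode string becomes bytes through an encoding:
<source lang="python">
# Each character has a numeric code; an encoding turns those numbers into bytes.
print ord('A')        # 65, the ASCII value of 'A'
print chr(65)         # 'A', back from the number
print bin(ord('A'))   # 0b1000001, the bit pattern that ends up on disk

# A Unicode string is encoded to bytes before it reaches a file or the network.
euro = u'\u20ac'                      # the euro sign
print repr(euro.encode('utf-8'))      # '\xe2\x82\xac' - three bytes in UTF-8
print repr(euro.encode('utf-16-le'))  # '\xac ' - two bytes in UTF-16 (little endian)
</source>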
See last year's page for more on human computation http://pzwart3.wdka.hro.nl/wiki/Human_Computation_(Slides)
PYTHON AND AUDIO
WAV files have a header, much like the way the character encoding is declared at the top of an HTML page.
(16-bit little endian, rate 44100 Hz, stereo)
number of samples / sample rate = duration of the piece.
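A small sketch (mine, not part of the notes) of that formula, reading the header fields back from the rotterdam.wav file generated by the script further down:
<source lang="python">
import wave

# duration = number of frames / frames per second, straight from the WAV header
w = wave.open("rotterdam.wav", "r")
print w.getnframes(), "frames at", w.getframerate(), "Hz"
print w.getnframes() / float(w.getframerate()), "seconds"
w.close()
</source>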
using this example http://zacharydenton.com/generate-audio-with-python/
<h4>This code generates a short tone whose frequency is taken from the temperature of Rotterdam that day (the OpenWeatherMap API returns the value in Kelvin by default, so a temperature around 280 gives a tone around 280 Hz).</h4>
<source lang = "python">
#!/usr/bin/env python
#-*- coding:utf-8 -*-
import json
import requests
import wave, struct

# fetch the current weather for Rotterdam (OpenWeatherMap returns the temperature in Kelvin by default)
url = 'http://api.openweathermap.org/data/2.5/weather?q=rotterdam,nl'
r = requests.get(url)
data = json.loads(r.text)
t = data["main"]["temp"]

filename = "rotterdam.wav"
nframes = None
nchannels = 2
sampwidth = 2   # in bytes, so 2 = 16 bit (matching the 'h' values packed below), 1 = 8 bit
framerate = 44100

w = wave.open(filename, 'w')
w.setparams((nchannels, sampwidth, framerate, nframes, 'NONE', 'not compressed'))

max_amplitude = float(int((2 ** (sampwidth * 8)) / 2) - 1)

# the temperature becomes the frequency of a square wave
freq = int(t)
# FREQ times a second we need to complete a cycle;
# there are FRAMERATE samples per second, so FRAMERATE / FREQ = samples per cycle
cycle = framerate / freq

# write one second of audio: freq cycles, each half high and half low;
# the left channel carries the wave, the right channel stays silent
for x in range(freq):
    data = ''
    for i in range(cycle / 2):
        data += struct.pack('h', int(0.5 * max_amplitude))
        data += struct.pack('h', 0)
    for i in range(cycle / 2):
        data += struct.pack('h', int(-0.5 * max_amplitude))
        data += struct.pack('h', 0)
    w.writeframesraw(data)

w.close()
</source>
<h4> Image Generation </h4>
[[File:Outputrandom2.png]]
<h4> The sound of weather - Rotterdam, Netherlands and Lagos, Nigeria </h4>
<source lang = "python">
#!/usr/bin/env python
#-*- coding:utf-8 -*-
import json
import requests
import wave, struct

# current temperature for Lagos, Nigeria (country code ng); OpenWeatherMap returns Kelvin
url = 'http://api.openweathermap.org/data/2.5/weather?q=lagos,ng'
r = requests.get(url)
data = json.loads(r.text)
t1 = data["main"]["temp"]

# current temperature for Rotterdam, Netherlands
url = 'http://api.openweathermap.org/data/2.5/weather?q=rotterdam,nl'
r = requests.get(url)
data = json.loads(r.text)
t2 = data["main"]["temp"]

filename = "weatherreport.wav"
nframes = None
nchannels = 2
sampwidth = 2   # in bytes, so 2 = 16 bit, 1 = 8 bit
framerate = 22150

if nframes is None:
    nframes = -1

w = wave.open(filename, 'w')
w.setparams((nchannels, sampwidth, framerate, nframes, 'NONE', 'not compressed'))

max_amplitude = float(int((2 ** (sampwidth * 8)) / 2) - 1)

# each city's temperature (scaled by 10) sets the cycle length of its own wave
freq = int(t2 * 10)    # Rotterdam
freq2 = int(t1 * 10)   # Lagos
# FRAMERATE samples per second / FREQ cycles per second = samples per cycle
cycle = framerate / freq
cycle2 = framerate / freq2

# write five blocks: a long high/low slab derived from the Rotterdam value,
# followed by one up/down cycle derived from the Lagos value
for x in range(5):
    data = ''
    data2 = ''
    for i in range(cycle * 1000):
        data += struct.pack('h', int(0.5 * max_amplitude))
        data += struct.pack('h', 1)
    for i in range(cycle * 1000):
        data += struct.pack('h', int(-0.5 * max_amplitude))
        data += struct.pack('h', 1)
    for i in range(cycle2):
        data2 += struct.pack('h', int(0.5 * max_amplitude))
        data2 += struct.pack('h', 0)
    for i in range(cycle2):
        data2 += struct.pack('h', int(-0.5 * max_amplitude))
        data2 += struct.pack('h', 0)
    w.writeframesraw(data)
    w.writeframesraw(data2)

w.close()
</source>
<h4> Audio output </h4>
[[File:Weatherreport.ogg]]
<h4> WEATHER visualization graphic </h4>
<source lang = "python">
import struct
import json
import requests

# current temperature (in Kelvin) for the Antarctica query
url = 'http://api.openweathermap.org/data/2.5/weather?q=antartica'
r = requests.get(url)
data = json.loads(r.text)
temp = data["main"]["temp"]

width = 100
height = 400

filename = "antartica.tga"
datafile = open(filename, "wb")

# TGA format: http://gpwiki.org/index.php/TGA
# Offset, ColorType, ImageType, PaletteStart, PaletteLen, PalBits, XOrigin, YOrigin, Width, Height, BPP, Orientation
header = struct.pack("<BBBHHBHHHHBB", 0, 0, 2, 0, 0, 8, 0, 0, width, height, 24, 1 << 5)
datafile.write(header)

# the coordinates begin at 0, so deduct the temperature from the image height:
# everything up to 'base' stays white, everything past it is filled with colour
base = height - int(temp)

data = ''
for y in xrange(height):
    for x in xrange(width):
        # default to white inside the image
        r, g, b = 255, 255, 255
        # the temperature bar, filled in blue beyond 'base'
        if y > base:
            r, g, b = 0, 0, 240
        # two-pixel black frame around the image
        if y <= 1 or y >= height - 2 or x <= 1 or x >= width - 2:
            r, g, b = 0, 0, 0
        # TGA stores pixels in BGR order
        data += struct.pack('B', b)
        data += struct.pack('B', g)
        data += struct.pack('B', r)

datafile.write(data)
datafile.close()
</source>
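The thumbnails below are PNG files, so the TGA output presumably gets converted at some point; a minimal sketch of that conversion with the Pillow/PIL imaging library (assuming it is installed; not part of the original notes):
<source lang="python">
# Convert the generated TGA into a PNG for the wiki upload (assumes Pillow/PIL is available).
from PIL import Image

Image.open("antartica.tga").save("antartica.png")
</source>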
<h3> Rotterdam temp, Lagos temp and Antarctica </h3>
[[File:Tempvizrotterdam.png|thumbnail|left]] [[File:Tempviznigeria.png|thumbnail|centre]] [[File:Tempvizantartica.png|thumbnail|right]]
http://palewi.re/posts/2008/04/20/python-recipe-grab-a-page-scrape-a-table-download-a-file/
===Week 6===
Looking at EPUB, Calibre and free publishing tools.
http://dpt.automatist.org/digitalworkflows/
===Week 7===
Working with text: the NLTK toolkit for natural language processing in Python http://nltk.org/
Bitnik's Delivery for Mr Assange http://wwwwwwwwwwwwwwwwwwwwww.bitnik.org/assange/
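The notes mention NLTK but the script below is plain Python; for comparison, here is a small sketch (mine) of the same "word after emma" idea with NLTK, assuming the toolkit and its punkt tokenizer data are installed:
<source lang="python">
import nltk
from collections import Counter

# tokenise the Jane Austen text (requires nltk plus its 'punkt' tokenizer data)
raw = open('emma.txt').read()
tokens = nltk.word_tokenize(raw)
text = nltk.Text(tokens)

# concordance: every occurrence of "Emma" with its surrounding context
text.concordance('Emma')

# the words that most often follow "emma", similar to what the script below dumps
following = [b for (a, b) in nltk.bigrams(tokens) if a.lower() == 'emma']
print Counter(following).most_common(10)
</source>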
<source lang="python">
import time

f = open('emma.txt')
out = open('janedump.txt', "w")

sawWord = False
for line in f:
    line = line.strip().lower()
    group = line.split()
    for word in group:
        if word == 'emma':    # found "emma": write it and flag the next word
            sawWord = True
            out.write(word)
            out.write("\n")
            #time.sleep(0.2)
        elif sawWord:         # the word directly following "emma"
            out.write(word)
            out.write("\n")
            sawWord = False
            #time.sleep(0.2)

out.close()
</source>
Jane Austen, Emma ("emma" plus the word that follows it):
emma
author:
emma
***
emma
by
emma
woodhouse,
emma
doing
emma
first
emma
was
emma
could
emma
smiled
emma
spared
emma
playfully.
emma
woodhouse,
emma
herself,
emma
bears
emma
turned
emma
should
emma
was
emma
could
emma
found
emma
knew
emma
particularly
emma
was
emma
well
emma
help
emma
allowed
emma
lost
emma
had
emma
was
emma
encouraged
emma
watched
emma
was
emma
thought
emma
for
emma
imagined
emma
and
emma
must
emma
good.
emma
wants
emma
is
emma
has
emma
such
emma
could
emma
as
emma
imagine
emma
altogether--face
emma
always
emma
errs
emma
shall
emma
with
emma
in
emma
could
emma
exclaimed,
emma
wished
emma
began
emma
drew
emma
knew
emma
thought
emma
was
emma
was
emma
rather
emma
continued:
emma
persevered
emma
waited
emma
felt
emma
assured
emma
continued
emma
believed
emma
could
emma
judged
emma
will
emma
had
emma
knew
emma
was
emma
as
emma
not
emma
made
emma
laughed
emma
had
emma
remained
emma
could
emma
was
emma
assisted
emma
was
emma
than
emma
could
emma
spoke
emma
could
emma
only
emma
saw
emma
could
emma
thanked
emma
and
emma
smilingly
emma
could
emma
had
emma
could
emma
laughed,
emma
was
emma
time
emma
experienced
emma
passed
emma
felt
emma
quietly
emma
could
emma
felt
emma
only
emma
could
emma
could
emma
long
emma
could
emma
called
emma
sat
emma
was
emma
soon
emma
did
emma
thought
emma
smiled
emma
was
emma
might
emma
spoke
emma
liked
emma
wished
emma
found
emma
listened,
emma
were
emma
was
emma
saw
emma
tried
emma
should
emma
could
emma
on
emma
settled
emma
hoped
emma
in
emma
found,
emma
felt,
emma
sat
emma
was
emma
got
emma
to
emma
was
emma
was
emma
was
emma
had
emma
felt
emma
was
emma
immediately
emma
and
emma
knew
emma
said
emma
had
emma
was
emma
could
emma
was
emma
left
emma
was
emma
saw
emma
procure
emma
could
emma
could
emma
saw
emma
thinks
emma
had
emma
and
emma
said,
emma
was
emma
felt
emma
wish
emma
learned
emma
had
emma
thought
emma
was
emma
guessed
emma
saw
emma
would
emma
observed
emma
collected
emma
must
emma
consider
emma
could
emma
could
emma
was
emma
was
emma
wondered
emma
remained
emma
had
emma
to
emma
could
emma
watched
emma
was
emma
recollected
emma
would
emma
felt
emma
found
emma
heard
emma
want
emma
could
emma
did
emma
dine
emma
should
emma
comes
emma
thus
emma
had
emma
could
emma
to
emma
said,
emma
watched
emma
suspected
emma
should
emma
could
emma
divined
emma
restrained
emma
began
emma
guessed
emma
found
emma
soon
emma
rather
emma
best
emma
would
emma
could
emma
found
emma
had
emma
to
emma
did
emma
was
emma
went
emma
caught
emma
watched
emma
would
emma
wondered
emma
again.
emma
did
emma
could
emma
joined
emma
wished
emma
a
emma
still
emma
found
emma
said
emma
demurred.
emma
perceived
emma
up,
emma
was
emma
nor
emma
could
emma
felt
emma
disappointed;
emma
was
emma
looked
emma
was
emma
had
emma
felt
emma
continued
emma
had
emma
could
emma
grew
emma
could
emma
felt
emma
attacked
emma
continued,
emma
feel
emma
for
emma
had
emma
would
emma
thought
emma
made
emma
made
emma
doubted
emma
was
emma
could
emma
was
emma
was
emma
could
emma
had
emma
was
emma
hoped
emma
had
emma
woodhouse-ing
emma
had
emma
felt
emma
returned
emma
could
emma
triumphantly
emma
had
emma
was
emma
would
emma
apprehended
emma
both
emma
began
emma
found
emma
heard
emma
read
emma
could
emma
doubted
emma
has
emma
as
emma
heard
emma
saw
emma
were
emma
could
emma
perceived
emma
found
emma
longed
emma
most
emma
could
emma
in
emma
could
emma
that
emma
must
emma
was
emma
felt
emma
thought
emma
could
emma
had
emma
was
emma
considerable
emma
could
emma
acquainted
emma
engaging
emma
thought,
emma
had
emma
would
emma
and
emma
with
emma
was
emma
read
emma
saw
emma
was
emma
was
emma
then
emma
could
emma
was
emma
was
emma
herself
emma
and
emma
was
emma
had
emma
could
emma
was
emma
had
emma
had
emma
opposing
emma
some
emma
was
emma
had
emma
could
emma
denied
emma
had
emma
felt
emma
was
emma
and
emma
would
emma
received
emma
found
emma
walked
emma
very
emma
had
emma
had
emma
listened,
emma
returned
emma
were,
emma
and
emma
and
emma
was
emma
could
emma
found
emma
was
emma
grew
emma
recollected,
emma
felt
emma
seriously
emma
was
emma
made
emma
time
emma
was
emma
was
emma
has
emma
could
emma
communicated
emma
could
emma
was
emma
it
emma
listened
emma
felt
emma
did
emma
afterwards
emma
was
emma
found
emma
distinctly
emma
thought
emma
even
emma
might
emma
scarcely
emma
pondered
emma
began
emma
dryly,
emma
could
emma
feelingly.
emma
could
emma
was
emma
did
emma
looked
emma
could
emma
turned
emma
the
emma
knew
emma
felt
emma
herself.--the
emma
came,
emma
came
emma
felt)
emma
was
emma
to
emma
had
emma
as
emma
had;
emma
smiled,
emma
again.
emma
with
emma
resolved
emma
was
emma
understood
emma
could
emma
could
emma
was
emma
had
emma
a
emma
take
emma
woodhouse,
emma
woodhouse,
emma
agreed
emma
knew
emma
would
emma
had
emma
was
emma
could
emma
to
emma
fancied
emma
proposed
emma
could,
emma
was
emma
could
emma
saw
emma
could
emma
could
emma
guessed
emma
would
emma
amused
emma
felt
emma
thought
emma
warmly,
emma
laughed,
emma
grieved
emma
was
emma
accepted
emma
having
emma
hung
emma
first
emma
could
emma
would
emma
was
emma
gave
emma
could
emma
was
emma
began
emma
spoke
emma
was
emma
could
emma
soon
emma
had
emma
had
emma
could
emma
had
emma
became
emma
admitted
emma
must
emma
attended
emma
and
emma
***
===TWITTERBOT===
CREATING AN AUTOMATED TWITTERBOT
STILL NEED TO ADD A 140-CHARACTER LIMIT SO THAT LONG TWEETS ARE NOT REJECTED (see the sketch after the script below).
<source lang="python">
import tweepy
import re
import os, sys
import time

# Twitter API credentials (keys like these should normally be kept out of published code)
CONSUMER_KEY = "cqPrXvSrS5WcSVpldOSNQ"
CONSUMER_SECRET = "uswR9EK1TuUorv3RfzFpcn46tljG98RTWnMKCD6yk"
ACCESS_KEY = "2247492469-1fr0vrCXA0SVS3oXFZ8FRu9X6gBesv9KnelWVZd"
ACCESS_SECRET = "cvM618YmJErThPZiLALJtmr7oUjBlmikfq1I3wUp8L05R"

# set up the twitter connection
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)

# read the source text and split it into sentences on . ? ! (plus closing quotes/brackets)
text = "".join(open("cruise.txt").readlines())
sentences = re.split(r' *[\.\?!][\'"\)\]]* *', text)

# tweet one sentence every 25 seconds, skipping empty strings left by the split
for line in sentences:
    line = line.strip()
    if not line:
        continue
    api.update_status(line)
    print line
    time.sleep(25)
</source>
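A minimal sketch of that missing length check (my addition; fit_tweet is a made-up helper): trim each sentence to Twitter's 140-character limit before posting, reusing the sentences, api and time set up in the script above:
<source lang="python">
MAX_LEN = 140

def fit_tweet(sentence, max_len=MAX_LEN):
    # return the sentence unchanged if it fits, otherwise cut it and add an ellipsis
    if len(sentence) <= max_len:
        return sentence
    return sentence[:max_len - 3] + "..."

# same loop as above, but every status is guaranteed to fit the limit
for line in sentences:
    line = line.strip()
    if not line:
        continue
    api.update_status(fit_tweet(line))
    time.sleep(25)
</source>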
===Twitterbots & CRON JOBS===
<ul>
<li>  http://emerging.commons.gc.cuny.edu/2013/10/making-twitter-bot-python-tutorial/ </li>
<li> http://inventwithpython.com/yhobos_script.py </li>
<li> http://www.inmotionhosting.com/support/edu/cpanel/301-run-cron-job</li>
<li>http://www.unixgeeks.org/security/newbie/unix/cron-1.html</li>
<li> http://inventwithpython.com/blog/2012/03/25/how-to-code-a-twitter-bot-in-python-on-dreamhost/</li>
<li> <strong>ELIZA</strong>: http://www.jezuk.co.uk/cgi-bin/view/software/eliza </li>
</ul>


===Week 1===

Alan Turing's Universal Turing Machine (UTM) http://en.wikipedia.org/wiki/Universal_machine

was a concept of an infinite loop of tape separated into frames; each frame presents a different state, which gives a machine that reads the tape unlimited programming potential.

Deconstructing the seamlessness of the factory line: the pipeline.


In the afternoon we played with Turtle.
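A tiny sketch (mine, not from the session) of the kind of first Turtle drawing this involves:
<source lang="python">
import turtle

# draw a square: move forward and turn right, four times
for side in range(4):
    turtle.forward(100)
    turtle.right(90)

turtle.done()   # keep the window open until it is closed
</source>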

http://opentechschool.github.io/python-data-intro/core/recap.html

https://github.com/OpenTechSchool/python/wiki/Facebook-Client

http://bitsofpy.blogspot.nl/2010/04/in-my-cosc-lab-today-few-students-were.html

Facebook page query: the Graph API returns a JSON object for a given id.

<source lang="python">
import json
import urllib2

def load_facebook_page(facebook_id):
    # build the Graph API url from the id that was passed in
    addy = 'https://graph.facebook.com/%s' % facebook_id
    return json.load(urllib2.urlopen(addy))

print load_facebook_page(548951431)
</source>

which returns:

{u'username': u'max.dovey', u'first_name': u'Max', u'last_name': u'Dovey', u'name': u'Max Dovey', u'locale': u'en_US', u'gender': u'male', u'link': u'http://www.facebook.com/max.dovey', u'id': u'548951431'}

Stuff to do & resources - start fetching data from the Twitter API:
https://code.google.com/p/python-twitter/
http://pzwart3.wdka.hro.nl/wiki/PythonTwitter
http://www.lynda.com/Python-tutorials/Up-Running-Python/122467-2.html

Mining the Social Web by O'Reilly (the updated GitHub repo covers Twitter OAuth):
https://github.com/ptwobrussell/Mining-the-Social-Web
http://www.pythonforbeginners.com/python-on-the-web/how-to-access-various-web-services-in-python/
http://www.greenteapress.com/thinkpython/thinkpython.pdf
http://hetland.org/writing/instant-python.html

Replacing "Music" with "CRAP" in the category of each video in YouTube's top-rated feed:

<source lang="python">
import json
import requests

# grab the top-rated standard feed from the YouTube (v2) API as JSON
r = requests.get("http://gdata.youtube.com/feeds/api/standardfeeds/top_rated?v=2&alt=jsonc")
data = json.loads(r.text)

# print each video's category, swapping "Music" for "CRAP"
for item in data['data']["items"]:
    print " %s" % (item['category'].replace("Music", "CRAP"))
</source>

which prints:
CRAP
Entertainment
CRAP
CRAP
CRAP
CRAP
CRAP
CRAP
CRAP
Comedy
CRAP
CRAP
Comedy
CRAP
CRAP
CRAP
CRAP
Entertainment
CRAP
CRAP
CRAP

===Week 2===

In the morning we looked at SVG files and how you can edit their XML in a live editor within Inkscape. Because SVG is just XML, you can also generate drawings with Python and open the resulting vectors in Inkscape.
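A small sketch of that idea (my example, not from the class): writing a minimal SVG file from Python that Inkscape can open and edit:
<source lang="python">
# SVG is plain XML, so string formatting is enough to generate a drawing.
svg = '<?xml version="1.0"?>\n'
svg += '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">\n'

# a row of circles with growing radius
for i in range(5):
    svg += '  <circle cx="%d" cy="100" r="%d" fill="none" stroke="black" />\n' % (30 + i * 35, 5 + i * 5)

svg += '</svg>\n'

open("circles.svg", "w").write(svg)   # open the result in Inkscape
</source>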

In the afternoon we looked at making API grabs and loading them with the json library.

ajax.googleapis.com/ajax

Add a JSON viewer extension for Firefox.

JSON serialises data as JavaScript-style objects, so an API response can be loaded straight into native values instead of being parsed as XML.

JSON has lists, which behave like Python lists ([], append("milk")), and objects, which behave like Python dictionaries ({}, e.g. foods["chocolate"] = "love to eat it"), as in the sketch below.
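A quick sketch (my addition) of those two shapes going through Python's json library:
<source lang="python">
import json

# a JSON object maps to a Python dictionary, a JSON array to a Python list
foods = {"chocolate": "love to eat it"}
shopping = []
shopping.append("milk")

# dump Python values to a JSON string and load them back again
blob = json.dumps({"foods": foods, "shopping": shopping})
print blob                         # e.g. {"foods": {"chocolate": "love to eat it"}, "shopping": ["milk"]}
data = json.loads(blob)
print data["foods"]["chocolate"]   # love to eat it
</source>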

cron can execute Python scripts on a server on a schedule.

I got my Twitter search function to write to a text file. I'm going to look at automating sending that text file to a network printer for this cloud project.

Useful links:
http://lifehacker.com/5652311/print-files-on-your-printer-from-any-phone-or-remote-computer-via-dropbox
http://docs.python.org/2/tutorial/inputoutput.html
facebook - https://github.com/OpenTechSchool/python/wiki/Facebook-Client
youtube api - http://gdata.youtube.com/
http://nealcaren.web.unc.edu/an-introduction-to-text-analysis-with-python-part-1/