User:Max Dovey/PT/TRIMESTER 1 ntw6
Revision as of 22:46, 1 October 2013
Week 1
Alan Turing's Universal Turing Machine (UTM) http://en.wikipedia.org/wiki/Universal_machine
was a concept for an infinite tape separated into frames; the machine reads one frame at a time, and a table of rules tells it what to write, where to move, and which state to enter next. This gives a simple reading head infinite programming potential.
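The tape-and-states idea can be sketched in a few lines of Python (the state names and rule table below are made up for illustration, not Turing's original formulation):

```python
# A tiny Turing machine: the tape is a list of cells, and a table of
# rules maps (state, symbol) -> (symbol to write, head move, next state).
# This machine flips every 0 to 1 and halts at the first blank cell.

def run_turing_machine(tape, rules, state="start", blank=" "):
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)        # the tape grows as needed
        head += 1 if move == "R" else -1
    return "".join(tape)

rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine(list("0101"), rules))  # -> "1111 "
```

Swapping in a different rule table gives a completely different program on the same machine, which is the "universal" part.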
deconstructing the seamlessness of the factory line. The pipeline.
In the afternoon we played with Turtle.
http://opentechschool.github.io/python-data-intro/core/recap.html
https://github.com/OpenTechSchool/python/wiki/Facebook-Client
http://bitsofpy.blogspot.nl/2010/04/in-my-cosc-lab-today-few-students-were.html
facebook page query

>>> import json
>>> import urllib2
>>> def load_facebook_page(facebook_id):
...     addy = 'https://graph.facebook.com/%s' % facebook_id
...     return json.load(urllib2.urlopen(addy))
...
>>> load_facebook_page(548951431)
{u'username': u'max.dovey', u'first_name': u'Max', u'last_name': u'Dovey', u'name': u'Max Dovey', u'locale': u'en_US', u'gender': u'male', u'link': u'http://www.facebook.com/max.dovey', u'id': u'548951431'}
Stuff to do & resources - start fetching data from the Twitter API:
https://code.google.com/p/python-twitter/
http://pzwart3.wdka.hro.nl/wiki/PythonTwitter
http://www.lynda.com/Python-tutorials/Up-Running-Python/122467-2.html
Mining the Social Web by O'Reilly https://github.com/ptwobrussell/Mining-the-Social-Web (updated GitHub repo for Twitter OAuth)
http://www.pythonforbeginners.com/python-on-the-web/how-to-access-various-web-services-in-python/
http://www.greenteapress.com/thinkpython/thinkpython.pdf
http://hetland.org/writing/instant-python.html
Replacing "Music" with "CRAP" in the categories of YouTube's top-rated videos feed:

import json
import requests

r = requests.get("http://gdata.youtube.com/feeds/api/standardfeeds/top_rated?v=2&alt=jsonc")
data = json.loads(r.text)
#print data
for item in data['data']["items"]:
    print " %s" % (item['category'].replace("Music", "CRAP"))
CRAP
Entertainment
CRAP
CRAP
CRAP
CRAP
CRAP
CRAP
CRAP
Comedy
CRAP
CRAP
Comedy
CRAP
CRAP
CRAP
CRAP
Entertainment
CRAP
CRAP
CRAP
Week 2
In the morning we looked at SVG files and how you can edit their XML in Inkscape's live XML editor. You can also draw from Python: code that emits SVG produces vectors that open directly in Inkscape.
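The Python-to-SVG idea can be sketched with a hand-rolled writer (the function name and shapes below are made up for illustration, not Inkscape's extension API):

```python
# SVG is just XML, so a few string templates are enough to draw
# vector shapes from Python and open the result in Inkscape.

def make_svg(shapes, width=200, height=200):
    body = "\n".join(shapes)
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            'width="%d" height="%d">\n%s\n</svg>' % (width, height, body))

shapes = [
    '<rect x="10" y="10" width="80" height="50" fill="blue" />',
    '<circle cx="140" cy="60" r="30" fill="red" />',
]

with open("drawing.svg", "w") as f:
    f.write(make_svg(shapes))
```

Opening drawing.svg in Inkscape shows the rectangle and circle as ordinary editable vector objects, and the XML editor shows exactly the elements the script wrote.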
In the afternoon we looked at fetching data from APIs and loading the responses with JSON libraries.
ajax.googleapis.com/ajax
Add a JSON viewer extension for Firefox.
JSON represents data as JavaScript-style objects, and maps directly onto Python types.
JSON has lists (Python lists [], which grow with append("milk"))
and dictionaries (Python dicts {}, e.g. foods = {} then foods["chocolate"] = "love to eat it").
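The list/dictionary round trip looks like this in Python (a small sketch using the names from the notes above):

```python
import json

# JSON arrays map to Python lists, JSON objects to Python dicts.
shopping = ["bread"]
shopping.append("milk")              # lists grow with append()

foods = {}                            # dicts map keys to values
foods["chocolate"] = "love to eat it"

# dumps() serialises Python structures to a JSON string,
# loads() parses a JSON string back into lists and dicts.
data = json.dumps({"shopping": shopping, "foods": foods})
parsed = json.loads(data)

print(parsed["shopping"])             # -> ['bread', 'milk']
print(parsed["foods"]["chocolate"])   # -> love to eat it
```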
cron - can execute Python scripts on a server on a timed schedule.
I got my Twitter search function to write to a text file. I'm going to look at automatically sending that text file to a network printer for this cloud project.
Useful links:
http://lifehacker.com/5652311/print-files-on-your-printer-from-any-phone-or-remote-computer-via-dropbox
http://docs.python.org/2/tutorial/inputoutput.html
facebook - https://github.com/OpenTechSchool/python/wiki/Facebook-Client
youtube api - http://gdata.youtube.com/
http://nealcaren.web.unc.edu/an-introduction-to-text-analysis-with-python-part-1/
Week 3
import os
import time
import simplejson
# twitter_api is assumed to be an already-authenticated client from the
# twitter package (see the Mining the Social Web resources above)

filepath = "Desktop/autoprinting/todo/"  # define file path
filename = "cloud101"  # define file name

# if the directory does not exist, create it
if not os.path.exists(filepath):
    os.makedirs(filepath)

# add filename to path
completepath = os.path.join(filepath, filename + ".txt")

while True:
    q = "the_cloud"
    count = 30
    f = open(completepath, "w")
    search_results = twitter_api.search.tweets(q=q, count=count)
    for status in search_results['statuses']:
        text = status['text']
        date = status['created_at']
        simplejson.dump(text + date, f)
        f.write("\n")
    f.close()
    # if the file was written, send it to the printer
    # (lpr -r removes the file after printing)
    if os.path.exists(completepath):
        os.system("lpr -p -r " + completepath)
    time.sleep(55)