Twitter Bot

From XPUB & Lens-Based wiki
Revision as of 11:54, 30 April 2012

Lots of data is available from Twitter via a public API (no API key is required to use it).

The easiest way is to load data using JSON:

== Print the last tweets from a particular user ==

<source lang="python">
import urllib2, json

screen_name = "TRACKGent"
url = "http://api.twitter.com/1/statuses/user_timeline.json?screen_name=" + screen_name
data = json.load(urllib2.urlopen(url))

print len(data), "tweets"
tweet = data[0]
print tweet.keys()

for tweet in data:
    print tweet['text']
</source>
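The <code>json.load</code> pattern above is not Twitter-specific: any JSON text parses into plain Python lists and dicts. A minimal sketch with invented sample data (written so it runs on both Python 2 and 3):

<source lang="python">
import json

# Hand-made sample mimicking the shape of a user_timeline response:
# a JSON array of tweet objects.
sample = '[{"text": "hello"}, {"text": "world"}]'

data = json.loads(sample)                  # parse JSON text -> Python list of dicts
texts = [tweet['text'] for tweet in data]  # same loop as above, collected in a list
</source>

Note that <code>json.load</code> reads from a file-like object (such as the result of <code>urlopen</code>), while <code>json.loads</code> takes a string.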


== Other examples ==

You can also use [[feedparser]]:

<source lang="python">
import feedparser

url = "http://search.twitter.com/search.atom?q=feel"
feed = feedparser.parse(url)
for e in feed.entries:
    print e.title.encode("utf-8")
</source>

Or print every word of each result on its own line:

<source lang="python">
import feedparser

url = "http://search.twitter.com/search.atom?q=feel"
feed = feedparser.parse(url)
for e in feed.entries:
    for word in e.title.split():
        print word.encode("utf-8")
</source>
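The inner loop relies on <code>str.split()</code>, which with no argument splits on any run of whitespace (spaces, tabs, newlines) and drops empty strings:

<source lang="python">
title = "feel  the\tcity"
words = title.split()  # no argument: split on any whitespace run
# words == ['feel', 'the', 'city']
</source>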

An older example using JSON:

<source lang="python">
from urllib import urlencode
import urllib2
import json


def openURL (url, user_agent="Mozilla/5.0 (X11; U; Linux x86_64; fr; rv:1.9.1.5) Gecko/20091109 Ubuntu/9.10 (karmic) Firefox/3.5.5"):
    """
    Returns: tuple with (file, actualurl)
    Sets the User-Agent header and follows redirection if necessary;
    realurl may be different from url in the case of a redirect.
    """
    request = urllib2.Request(url)
    if user_agent:
        request.add_header("User-Agent", user_agent)
    pagefile = urllib2.urlopen(request)
    realurl = pagefile.geturl()
    return (pagefile, realurl)

def getJSON (url):
    (f, url) = openURL(url)
    return json.loads(f.read())

TWITTER_SEARCH = "http://search.twitter.com/search.json"

data = getJSON(TWITTER_SEARCH + "?" + urlencode({'q': 'Rotterdam'}))
for r in data['results']:
    print r['text']
</source>
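<code>urlencode</code> is what builds the <code>?q=...</code> query string safely: it escapes characters that are not allowed in URLs and joins multiple parameters with <code>&</code>. A small sketch (the import path moved to <code>urllib.parse</code> in Python 3):

<source lang="python">
try:
    from urllib import urlencode        # Python 2, as in the example above
except ImportError:
    from urllib.parse import urlencode  # Python 3

qs = urlencode({'q': 'Rotterdam centraal'})
# spaces are escaped as '+': qs == 'q=Rotterdam+centraal'
</source>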