
Twitter trends

API

Cancel culture happens on Twitter through design features such as hashtags and trending topics.
To investigate this movement better, I understood I had to inform myself about which topics/people/things were being cancelled, how much engagement these topics received, and which language and strategies were used.


1. Using the Twitter API, I could get the current trends in the US.

Steps:

  • Create a Twitter developer account
  • Get keys and tokens from Twitter
  • Install Ruby
  • Install Twurl
  • Install jq to read JSON
  • Use the command line
  twurl "/1.1/trends/place.json?id=23424977" | jq
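
Two details worth noting: twurl has to be authorized once with the developer keys before it can sign requests, and jq can filter the response down to just the trend names. Both lines below assume the v1.1 API (23424977 is the WOEID for the United States); the key and secret are placeholders.

  twurl authorize --consumer-key YOUR_CONSUMER_KEY --consumer-secret YOUR_CONSUMER_SECRET
  twurl "/1.1/trends/place.json?id=23424977" | jq '.[0].trends[].name'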


2. I was only interested in the trends related to cancel culture, so I used Python to develop the script a bit more.

Steps:

  • Use Python library Tweepy
  • Get trends
  • Look for trends with words related to cancel culture (sketched below)
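
Condensed, these steps come down to two Tweepy calls. A minimal sketch, assuming the same key module with the four credentials and the Tweepy 3.x API used in the full script further down (the word list here is shortened):

import tweepy
import key # Python file with the API passwords, as in the full script below

# authenticate with the developer keys
auth = tweepy.OAuthHandler(key.consumer_key, key.consumer_secret)
auth.set_access_token(key.access_token, key.access_token_secret)
api = tweepy.API(auth)

# fetch the current US trends and keep only the lowercased names
names = [t['name'].lower() for t in api.trends_place(23424977)[0]['trends']]

# keep the trends that contain a cancel-culture word
print([n for n in names if any(w in n for w in ["cancel", "isoverparty"])])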


3. It was useful to save the trends. Instead of saving them in a .txt file, it made more sense to post them back to a Twitter account.

Steps:

  • Create a status with the search results (a status is a tweet in the library)


4. To make it look for trends regularly, I created a cron job on my computer. The five fields before the command are minute, hour, day of month, month and day of week, so this entry runs the script at minute 46 of every hour:

  46 * * * * /usr/local/bin/python3 /Users/0972516/desktop/ritaiscancelled/trends.py


Outcome:
The account @CancelledWho looks for trends related to my topic and posts them. This way I can always monitor an important topic of my research.

#!/usr/bin/python

import tweepy
import key # this is a Python file with my API passwords
import time

# use the passwords for the OAuth authentication process
auth = tweepy.OAuthHandler(key.consumer_key, key.consumer_secret)
auth.set_access_token(key.access_token, key.access_token_secret)
api = tweepy.API(auth)


trends1 = api.trends_place(23424977)  # WOEID for the United States

trends = set([trend['name'] for trend in trends1[0]['trends']]) # just getting the name, not timestamp, author, etc.

trendsLower = [item.lower() for item in trends] # makes everything lowercase, important to match against the cancelwords list below
trendsLine = '\n'.join(trendsLower) # makes it more readable, puts the names on separate lines

#print(trendsLine)

cancelwords = ["cancelled", "canceled", "cancel", "isoverparty", "booed", "boycott"]
#print(cancelwords)


for line in trendsLine.splitlines():
    #print(line)
    for word in cancelwords:
        if word in line:
            try:
                status = "Who are we fighting today? " + line
                print(status)
                api.update_status(status) # creates a tweet, a status is a tweet
                time.sleep(5)
                break # one tweet per trend: "cancel" would otherwise match again inside "cancelled"
            except tweepy.TweepError: # the error occurs when the new status duplicates an earlier one
                print("oops, you already tweeted this")
                break
    #time.sleep(3600) # so the script waits 1h before running again if it catches an error




Get trends from historical archive

Use existing database

Scrape from existing website

(less accurate, abandoned)

There's a website that has been saving daily trends from Twitter. Using Selenium I could go through the website and scrape the trends related to cancel culture. I stopped the prototype because I can't rely on the website's accuracy.
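
A minimal sketch of what that scrape looked like, with stand-ins throughout: the archive URL and the "trend" class name are hypothetical, since the real website isn't named here (the Selenium calls are the 3.x API current at the time).

from selenium import webdriver

cancelwords = ["cancelled", "canceled", "cancel", "isoverparty", "booed", "boycott"]

driver = webdriver.Firefox()
driver.get("https://example-trends-archive.com/united-states/2019-12-03") # hypothetical archive page

# suppose each archived trend sits in an element with class "trend"
for element in driver.find_elements_by_class_name("trend"):
    name = element.text.lower()
    if any(word in name for word in cancelwords):
        print(name)

driver.quit()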