Sniff, Scrape, Crawl (Prototyping)

In 2011, Sniff, Scrape, Crawl was a thematic project led by Aymeric Mansoux, Renee Turner, and Michael Murtaugh.

This prototyping module covers some of the core themes and tools around the practice of "scraping", with the goal of familiarizing you with the possibilities of the technique and developing strategic uses of the tools for your specific research/projects. The module follows on from the ideas developed in Roll your own google.

Meeting 1: May 20

Morning: Scraping Tools (11:00)

Scraping tools and recipes exist at a variety of scales:

  • small: Simple scraping with wget, Simple Web Spider in Python; you can get pretty far with just some standard Python code and a few loops (see the sketch after this list)
  • medium: Scrapy, a Python "framework" for scraping, inspired by web frameworks like Django (a Scrapy sketch follows below)
  • large: Heritrix, a full-fledged tool for institutional scraping, used by traditional libraries among others, and provided by (and used extensively by) the Internet Archive (aka archive.org).
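
At the small end, wget -r <url> already gives you a recursive mirror of a site. To illustrate the "standard Python code and a few loops" approach, here is a minimal sketch of a breadth-first spider using only the Python 3 standard library; the start URL and page limit are placeholders, so swap in whatever you actually want to crawl:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class LinkCollector(HTMLParser):
        """Collect the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def crawl(start_url, max_pages=10):
        """Breadth-first crawl that stays on the start page's host."""
        host = urlparse(start_url).netloc
        queue, seen = [start_url], set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                page = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            except Exception as error:
                print("skipping", url, error)
                continue
            collector = LinkCollector()
            collector.feed(page)
            print(url, "->", len(collector.links), "links")
            for link in collector.links:
                absolute = urljoin(url, link)
                if urlparse(absolute).netloc == host:
                    queue.append(absolute)
        return seen

    crawl("https://www.example.com/")  # placeholder: point this at a site you care about

Keeping a set of already-visited URLs and a page limit is what stops a loop like this from wandering off and hammering a server indefinitely.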

We'll use the hot seat to get our collective feet wet with some simple scraping tools, focusing on the small to medium scale.
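
For the medium scale, a Scrapy spider wraps the same loop in a framework that takes care of scheduling, throttling, and output formats. A minimal sketch, using current Scrapy conventions (the domain and start URL are placeholders):

    import scrapy

    class LinkSpider(scrapy.Spider):
        """Follow same-site links and record each page visited."""
        name = "links"
        allowed_domains = ["example.com"]       # placeholder domain
        start_urls = ["https://example.com/"]   # placeholder start page
        custom_settings = {"DEPTH_LIMIT": 2}    # keep the crawl shallow

        def parse(self, response):
            # Emit one record per page, then queue every link found on it.
            yield {"url": response.url, "title": response.css("title::text").get()}
            for href in response.css("a::attr(href)").getall():
                yield response.follow(href, callback=self.parse)

Run it with "scrapy runspider linkspider.py -o pages.json" to collect the results as a JSON file.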

Afternoon: Meeting to discuss / develop / brainstorm project ideas

Meet as a group to discuss/brainstorm ideas for individual research / projects.

Meeting 2: May 27

  • Workshop / tutorials (topic and tool to be determined based on the brainstorm)
  • Web scraping with Python
  • Wikipedia Image Scraping (see the sketch below)
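
As a rough sketch of what the Wikipedia Image Scraping recipe could look like with just the standard library (the article is only an example, and the MediaWiki API is a more structured alternative):

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class ImageCollector(HTMLParser):
        """Collect the src attribute of every <img> tag on a page."""
        def __init__(self):
            super().__init__()
            self.sources = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                src = dict(attrs).get("src")
                if src:
                    self.sources.append(src)

    url = "https://en.wikipedia.org/wiki/Rotterdam"  # example article
    request = urllib.request.Request(url, headers={"User-Agent": "xpub-scraping-exercise"})
    page = urllib.request.urlopen(request).read().decode("utf-8", "replace")

    collector = ImageCollector()
    collector.feed(page)

    for src in collector.sources:
        # Wikipedia serves protocol-relative links (//upload.wikimedia.org/...),
        # so resolve them against the page URL before printing or downloading.
        print(urljoin(url, src))

From there, urllib.request.urlretrieve (or a wget call) can fetch the actual image files.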

Meeting 3: June 30 (Final session)

Presentation of your prototype at the (joint) final presentation on Monday 30 June.

Some Examples

Links
