User:Alessia/Making


⟑ π–π–”π–šπ–˜π–Š 𝖔𝖋 π–‰π–šπ–˜π–™ ⟑


Inspired by the House of Dust project by Alison Knowles.

⟑ π•·π–”π–›π–Š/π–π–†π–™π–Š π–‘π–Šπ–™π–™π–Šπ–—π–˜ ⟑


Inspired by Nick Montfort's reimplementation of Strachey's Love Letters.

⟑ π•·π–”π–›π–Š π–•π–”π–Šπ–™π–—π’š π–˜π–π–šπ–‹π–‹π–‘π–Š ⟑


I wanted to retrieve a list of common words used in love poems.
To do so I collected the poems from this website: https://reedsy.com/discovery/blog/love-poems
I put them all into a txt file.

To convert the text to lowercase I used Python:

with open("poems.txt", "r", encoding="utf-8") as file:
    text = file.read()

    #r as read mode. w as writing mode, to create a new file later

lowercase_text = text.lower()

with open("poems_lowercase.txt", "w", encoding="utf-8") as file:
    file.write(lowercase_text)

UTF-8 is the standard text encoding, so it is the safest choice here.
To get a list of the most used words in this collection of love poems I used spaCy, a Python library for processing text that helps build models and analyse documents. Counter is a class from the collections module that counts occurrences of elements; here it is used to count word frequencies.

Putting the whole script together:

import spacy
from collections import Counter

#loading spaCy
nlp = spacy.load("en_core_web_sm")

with open("poems_copy.txt", "r", encoding="utf-8") as file:
   text = file.read()

doc = nlp(text.lower())  #converting to lowercase, and process the doc to use spaCy

#counting word frequencies
words = [token.text for token in doc if token.is_alpha]  #token.is_alpha to ignore symbols and 
numbers
word_counts = Counter(words)  #creating the counter

#getting the 120 most common words
most_common_words = word_counts.most_common(120)

#POS: part of speech
#categorising words by part of speech

pos_categories = {
    "NOUN": [],      
    "VERB": [],     
    "ADJ": [],       
    "ADV": [],         
    "OTHER": [],
}

for word, count in most_common_words:
    token = nlp(word)[0]  #processing each word separately and categorising it by its POS
    if token.pos_ == "NOUN":
        pos_categories["NOUN"].append((word, count))
    elif token.pos_ == "VERB":
        pos_categories["VERB"].append((word, count))
    elif token.pos_ == "ADJ":
        pos_categories["ADJ"].append((word, count))
    elif token.pos_ == "ADV":
        pos_categories["ADV"].append((word, count))
    else:
        pos_categories["OTHER"].append((word, count))

#printing the list
for pos, words in pos_categories.items():
    print(f"\n{pos}:")
    for word, count in words:
        print(f"{word}: {count}")


I decided to keep the first five entries for each category (a snippet for limiting the printout follows the list):

NOUN:
love: 66
heart: 14
sky: 9
eyes: 9
sonnet: 7 (I decided to skip sonnet because it was part of many titles)
fire: 6

VERB:
have: 12
come: 10
do: 8
forget: 7
has: 7

ADJ:
red: 7
warm: 5
dear: 4

ADV:
so: 14
again: 12
more: 9
first: 9
back: 7

OTHER:
the: 157
i: 97
by: 77
and: 73
you: 66
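
A possible way to limit the printout to the first five entries per category, reusing the pos_categories dictionary from the script above:

for pos, entries in pos_categories.items():
    print(f"\n{pos}:")
    for word, count in entries[:5]:  #only the five most frequent words of each category
        print(f"{word}: {count}")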


Then I wrote these words on paper, shuffled them, and asked people to create love poems out of them. Here are some results.

Shuffle poetry (1).jpg Shuffle poetry (2).jpg Shuffle poetry (3).jpg Shuffle poetry (5).jpg


Drag-and-drop website version


prototype:
https://aleevadh.github.io/poetry-shuffle/

Poem shuffle.png

I would like to implement this effect: https://codepen.io/shshaw/pen/yPjJKO


I also tried working a bit with Pygame to create a floating-words game, where the user drags the words around while they fade, drift and slip away (I still need to finish it). A minimal sketch of the idea follows the screenshot.

Experiment pygame.png
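
A minimal sketch of the floating-words idea, not the actual prototype: a few words from the frequency list can be dragged with the mouse while they slowly fade out. The word list, fade speed and window size here are placeholders, and fading rendered text with per-surface alpha assumes pygame 2.

import random
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
font = pygame.font.SysFont(None, 48)

#placeholder words taken from the frequency results above
words = [{"text": w, "pos": [random.randint(50, 500), random.randint(50, 400)], "alpha": 255.0}
         for w in ["love", "heart", "sky", "red", "again"]]
dragged = None

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            for w in words:
                rect = font.render(w["text"], True, (255, 255, 255)).get_rect(topleft=w["pos"])
                if rect.collidepoint(event.pos):
                    dragged = w  #start dragging the word under the cursor
        elif event.type == pygame.MOUSEBUTTONUP:
            dragged = None
        elif event.type == pygame.MOUSEMOTION and dragged:
            dragged["pos"][0] += event.rel[0]  #move the dragged word with the mouse
            dragged["pos"][1] += event.rel[1]

    screen.fill((0, 0, 0))
    for w in words:
        w["alpha"] = max(0.0, w["alpha"] - 0.2)  #each word slowly fades away
        surface = font.render(w["text"], True, (255, 255, 255))
        surface.set_alpha(int(w["alpha"]))
        screen.blit(surface, w["pos"])
    pygame.display.flip()
    clock.tick(60)

pygame.quit()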

⟑ 𝖒𝖆𝖗𝖐𝖔𝖛 π–ˆπ–π–†π–Žπ–“ π–Šπ–π–•π–Šπ–—π–Žπ–’π–Šπ–“π–™π–˜ ⟑


NLTK’s POS tagger, web scraping, understanding the basics, searching for a way to create an interactive game of some sort.
A poetry generator using Markov chains to suggest words based on the user's input, experimenting with different corpora (first with random books I have, then with a txt file containing all the poems from the Poetry International archive). A minimal sketch of the Markov-chain idea follows the screenshots.

Poetrygame.png Poetrygame2.png
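
A minimal sketch of the Markov-chain word suggestion, not the actual generator: it builds a word-to-next-words table from a plain-text corpus and proposes possible continuations for a given word. The file name corpus.txt and the suggest() helper are placeholders.

import random
from collections import defaultdict

with open("corpus.txt", encoding="utf-8") as f:
    tokens = f.read().lower().split()

#map each word to the words that follow it somewhere in the corpus
chain = defaultdict(list)
for current, following in zip(tokens, tokens[1:]):
    chain[current].append(following)

def suggest(word, n=3):
    #suggest up to n words that followed the given word in the corpus
    options = chain.get(word.lower())
    if not options:
        return []
    return random.sample(options, k=min(n, len(options)))

print(suggest("love"))  #propose continuations for a word typed by the user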

⟑ π–›π–†π–“π–Žπ–˜π–π–Žπ–“π–Œ π–•π–”π–Šπ–’ ⟑


A Python script that generates a gradually evolving poem using randomised word substitutions and modifications to its structure. The generated poem slowly vanishes into silence: every 3 seconds a new iteration of the poem is generated.
Here are examples of the beginning and the end of the poem, followed by a small sketch of the idea.

Poetrygame3.png Poetrygame4.png
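
A minimal sketch of the vanishing-poem idea, not the actual script: every 3 seconds it prints a new iteration in which some words are randomly substituted and some are dropped, until nothing is left. The seed line, the substitution pool and the probabilities are placeholders.

import random
import time

poem = "the red sky comes back again and the warm heart forgets".split()
substitutes = ["love", "fire", "eyes", "silence", "dear"]

while poem:
    for i in range(len(poem)):
        if random.random() < 0.2:  #occasionally swap a word for another
            poem[i] = random.choice(substitutes)
    if random.random() < 0.7:  #gradually remove words so the poem vanishes
        poem.pop(random.randrange(len(poem)))
    print(" ".join(poem))
    time.sleep(3)  #one iteration every 3 seconds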

⟑ π–Šπ–‘π–Šπ–ˆπ–™π–—π–”π–“π–Žπ–ˆ π–‘π–Žπ–™π–Šπ–—π–†π–™π–šπ–—π–Š π–Ÿπ–Žπ–“π–Š ⟑

Just an overview of algorithmic poetry and artistic projects involving artificial poetry production, made for the second public moment. I used this program https://hub.xpub.nl/chopchop/~aleevadh/random_shapes_generator/ to generate the random shapes for the covers (a rough sketch of the idea is included at the end of this section).
Eliteraturezine.jpg
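
A rough sketch of how random shapes for a cover could be generated, not the actual random_shapes_generator: it writes an SVG file with a handful of random circles and triangles.

import random

shapes = []
for _ in range(12):
    if random.random() < 0.5:
        shapes.append(f'<circle cx="{random.randint(0, 600)}" cy="{random.randint(0, 800)}" '
                      f'r="{random.randint(10, 80)}" fill="black" opacity="0.6"/>')
    else:
        points = " ".join(f"{random.randint(0, 600)},{random.randint(0, 800)}" for _ in range(3))
        shapes.append(f'<polygon points="{points}" fill="none" stroke="black"/>')

svg = '<svg xmlns="http://www.w3.org/2000/svg" width="600" height="800">' + "".join(shapes) + "</svg>"

with open("cover_shapes.svg", "w", encoding="utf-8") as f:
    f.write(svg)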