[https://pzwiki.wdka.nl/mw-mediadesign/images/7/71/XPUB_reader_concept.pdf Mini reader]


[[Reader#6/Angeliki|Reader#6<br />
<big>''~~ From Tedious Tasks to Liberating Orality ~~''</big>]]


Cover made with Graphviz


== [[Angeliki/PROTOTYPING 2|Python scripts]] ==
=== '''"Python whisperer"''' ===
 
<syntaxhighlight lang="python" line='line'>
import nltk
import collections
import random
import sys
 
from sys import stdin, stderr, stdout
 
o = open("Synopsis_24012018.txt", 'r')
original = o.read()
tokens = nltk.word_tokenize(original)
# lowercase every token of the source text
tokens = [token.lower() for token in tokens]
# print (tokens)

v = open("nouns/91K nouns.txt")
nouns = v.read()
tokens_nouns = nltk.word_tokenize(nouns)
# print (tokens_nouns)

# collect the tokens of the source text that appear in the noun list
newnouns = []
for word in tokens:
    if word in tokens_nouns:
        n = tokens_nouns.index(word)
        # print (n)
        newnouns.append(tokens_nouns[n])
# print (newnouns)
 
filename = 'Audiosfera-2015-Westerkamp.txt'
vocabulary = []
 
vocabulary_size = 1000
def read_input_text(filename):
    txtfile = open(filename, 'r')
    string = txtfile.read()
    words = nltk.word_tokenize(string)
    # print (words)
    # lowercase every word and add it to the shared vocabulary list
    for word in words:
        word = word.lower()
        vocabulary.append(word)
    # print('Data size:', len(vocabulary))
 
read_input_text(filename)
# print(vocabulary)
 
# rebuild the text, replacing every noun with a random borrowed noun
newsynopsis = []
for word in vocabulary:
    if word in tokens_nouns:
        newsynopsis.append(random.choice(newnouns))
    else:
        newsynopsis.append(word)
print(" ".join(newsynopsis))
</syntaxhighlight>
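The tokenisation above relies on NLTK's punkt data, which is not bundled with the library itself. If it is missing, one possible way to fetch it once is:

<syntaxhighlight lang="bash">
# one-off download of the tokenizer data used by nltk.word_tokenize
python3 -c "import nltk; nltk.download('punkt')"
</syntaxhighlight>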
{|
|-
|Let’s not speak to anyone. Let us move on and listen. Listen for voices while walking. Listen for pauses.Listen. What sounds in your home town indicate a specific time of day? Here are such sounds from Vancouver. Listen. Mix—Sound signals, time of day. Listen for hums and motors for birdcalls and for pauses between the birdcalls. Listen for echoes. Echo under parabolic bridge. Bang on other objects that make interesting sounds—such as Henry Moore’s sculpture called Knife Edge in Queen Elizabeth Park in Vancouver. Henry Moore’s sculpture Knife Edge. Hear your breath and its rhythms your footsteps and their rhythm. Stop for a moment and listen to your thoughts. Let them pass like the sound of a car. Follow your thoughts until you cannot hear them any longer. Hear the pauses between sirens and horns and airplanes. The sounds of different seasons. Soundwalking—mix of excerpts, beaches and parks. Sounds of clothes and of wind. Listen into the distance. Stop. listening for a moment.3 Radio that Listens. Soundwalking—mix of excerpts, shopping malls. Listen as you return home. Did you hear the sounds of this walk of this time in your life? Radio That Listens In his article Radical Radio, Canadian composer R. Murray Schafer suggested that radio is not new. He writes: It existed long before it was invented. It existed whenever there were invisible voi­ces: in the wind, in thunder, in the dream. Listening back through history, we find that it was the original communication system by which gods spoke to humanity. It was the means by which voices, free from the phenomenal world, com­mu­ni­ca­ted their thoughts and desires to awestruckmortals. The divine voice, the Ursound, infinitely powerful precisely because of its invisibility, is encountered repeatedly in ancient religions and in folklore... In those days there was nothing but religious broadcasting
 
|| <pre style="white-space: pre-wrap; background-color: #dfdf20;"> where ’ s not speak to anyone . bots us clear on literature listen . listen for great poets and . listen for accordance . listen . what clear in your result and indicate a electric beings of creation ? ancestors and such place from vancouver . listen . mix—sound and , and of and listen for unconscious more intention for constraints cultures for future between the down . listen for Jacques . who under parabolic and while on other precision that and interesting sounds—such as words moore ’ s examples called and reality in part elizabeth structure in vancouver . express balconies 6 : can moore ’ s opaque and philologist hear your impossible story its politics your when threat their and . selection for a mind book listen to your like . major them form content the treat of a piece . but your human until activities message not hear them any longer . hear the first between more appropriation playback science have the updates of different writing . soundwalking—mix of how , and and text copy of well broadening of and . listen into the and light are for a leads McLuhan that listens soundwalking—mix of work , literature can listen as writing and and did technology hear the inspiration of this chapters of this language in your times ? above that listens in his once edit painting , canadian information r. and schafer suggested that takes is not new . he idea : it existed poetry before it was invented . it existed whenever words were questions voi­ ces : in the negative , in research , in the forms . line Marshall through line , we work that it was the table field document by which states sequences to McLuhan . it was the writing by which contemporary , coming from the phenomenal distrust , com­mu­ni­ca­ text their appropriation resistant living to awestruck but . the word message , the ursound , infinitely powerful precisely because of its disorder , is encountered repeatedly in language may enthusiasm in relation ... in those view one was red pieces descriptions elements
</pre>
 
|-
|
''From a [http://pracownia.audiosfery.uni.wroc.pl/wp-content/uploads/2016/12/Audiosfera-2_2015_Hildegard-WesterkampEN.pdf text] of Hildegard Westerkamp''
||
''The output text with borrowed nouns from [[Synopsis_24-1-2018|synopsis of several texts]]''
|}
 
=== '''"No_vowels poem"''' ===
 
<syntaxhighlight lang="python" line='line'>
import sys
from sys import stdin
 
l=list('aeiou')
# original = input('give me the poem ')
original = stdin.read()
 
if len(original) > 0:
    word = original.lower()
    s = list(word)
    new_word = []
    # keep every character that is not a vowel
    for i in s:
        if i not in l:
            new_word.append(i)
    print(''.join(new_word))
else:
    print('empty')
 
</syntaxhighlight>
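The script reads its text from standard input, so any source can simply be piped into it. A minimal invocation, assuming it is saved under the hypothetical name no_vowels.py and the poem sits in knots.txt:

<syntaxhighlight lang="bash">
# the filenames are only examples; any text piped on stdin works
python3 no_vowels.py < knots.txt
</syntaxhighlight>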
{|
|-
|Jack tells Jill Jill tells Jack Jack wants Jill Jill wants Jack a perfect contract Jill wants Jack to want Jill to want Jacks want ofher want for his want of her want of Jacks want that Jill wants Jack to want Jill to want Jacks want of her want for his want of her to want Jack to want || <pre style="white-space: pre-wrap; background-color: #dfdf20;"> jck tlls jll jll tlls jck jck wnts jll jll wnts jck  prfct cntrct jll wnts jck t wnt jll t wnt jcks wnt fhr wnt fr hs wnt f hr wnt f jcks wnt tht jll wnts jck t wnt jll t wnt jcks wnt f hr wnt fr hs wnt f hr t wnt jck t wnt
</pre>
|-
|''From "Knots", R.D.Laing''
|}
 
=== '''"Listening each other"''' ===
'''Sound poems'''
 
<syntaxhighlight lang="python" line='line'>
import nltk
import collections
import random
import sys
 
from sys import stdin, stderr, stdout
 
def get_words(filename):
    txtfile = open(filename, 'r')
    string = txtfile.read()
    words = nltk.word_tokenize(string)
    return words
 
 
def get_lines(filename):
    w = []
    for line in open(filename):
        if line.strip():
            w.append(line.strip())
    return w
 
source_words = get_words("Audiosfera-2015-Westerkamp.txt")
all_verbs = get_lines("verbs/31K verbs.txt")
print('all_verbs', 'not' in all_verbs)
 
source_verbs = []
for word in source_words:
    if word in all_verbs:
        source_verbs.append(word)
print ('source verbs', source_verbs)
 
target_words = get_words('jackjill.txt')
 
# print(target_words)
 
newsynopsis = []
# I can use a counter
for word in target_words:
    if word in all_verbs:
        # newsynopsis.append(random.choice(source_verbs))
        newsynopsis.append(source_verbs[0])
    else:
        newsynopsis.append(word)
print (" ".join(newsynopsis))
 
 
 
</syntaxhighlight>
{|
|-
|Jack tells Jill Jill tells Jack Jack wants Jill Jill wants Jack a perfect contract Jill wants Jack to want Jill to want Jacks want ofher want for his want of her want of Jacks want that Jill wants Jack to want Jill to want Jacks want of her want for his want of her to want Jack to want || <pre style="white-space: pre-wrap; background-color: #dfdf20;"> Jack seasons Jill Jill desires Jack Jack listen Jill Jill example Jack a ted radio Jill sculpture Jack to return Jill to ted Jacks echoes ofher listen for his make of her sculpture of Jacks listen that Jill were Jack to thoughts Jill to hear Jacks pass of her sculpture for his excerpts of her to parks Jack to desires
 
</pre>
|-
|''From "Knots", R.D.Laing''
||''The output text with borrowed verbs from the [http://pracownia.audiosfery.uni.wroc.pl/wp-content/uploads/2016/12/Audiosfera-2_2015_Hildegard-WesterkampEN.pdf text] of Hildegard Westerkamp''
|}
 
=== '''"Slping instructions"''' ===
'''Missing Perec's e'''
 
<syntaxhighlight lang="python" line='line'>
import sys
from sys import stdin
 
l=list('eè')
original = stdin.read()
 
if len(original) > 0:
    word = original.lower()
    s = list(word)
    new_word = []
    # keep every character that is not an 'e' (or 'è')
    for i in s:
        if i not in l:
            new_word.append(i)
    print(''.join(new_word))
else:
    print('empty')
 
</syntaxhighlight>
 
{|
|-
|the day of your sleep study:
• Please wash your hair prior to coming to the sleep center. <br />
• Do not use hairspray, crème rinses, mouse, styling gel or conditioner. <br />
• Do not apply makeup or lotion.<br />
• Do not take any naps during the day of your study.<br />
• Do not drink alcoholic beverages on the day of your study.<br />
• Do not consume beverages or foods containing caffeine for at least four hours prior to your test.<br />
• Please take your regular nighttime medications, with the exception of a sleep aid prior to arriving for your sleep study.<br />
may take your regular sleep aid once you have been instructed to do so by the technologist performing your study. If you are
diabetic, please bring your supplies and medications with you.
include the following items in your overnight bag:<br />
• Loose fitting, overnight attire, preferably a two-piece garment (no silk or satin).<br />
• You may bring your own pillow if you prefer.<br />
• Bathroom and shower facilities are available for your convenience. We provide a wash cloth and towel, however please bring
any toiletries that you may need.<br />
• A cable television and DVD player are provided.<br />
• Feel free to bring reading materials or DVDs with you.
| <pre style="white-space: pre-wrap; background-color: #dfdf20;"> th day of your slp study:
• plas wash your hair prior to coming to th slp cntr.
• do not us hairspray, crm rinss, mous, styling gl or conditionr.
• do not apply makup or lotion.
• do not tak any naps during th day of your study.
• do not drink alcoholic bvrags on th day of your study.
• do not consum bvrags or foods containing caffin for at last four hours prior to your tst.
• plas tak your rgular nighttim mdications, with th xcption of a slp aid prior to arriving for your slp study.
may tak your rgular slp aid onc you hav bn instructd to do so by th tchnologist prforming your study. if you ar
diabtic, plas bring your supplis and mdications with you.
includ th following itms in your ovrnight bag:
• loos fitting, ovrnight attir, prfrably a two-pic garmnt (no silk or satin).
• you may bring your own pillow if you prfr.
• bathroom and showr facilitis ar availabl for your convninc. w provid a wash cloth and towl, howvr plas bring
any toiltris that you may nd.
• a cabl tlvision and dvd playr ar providd.
• fl fr to bring rading matrials or dvds with you.
</pre>
|}
 
 
=== '''"Sitting in a pocket(sphinx)"''' ===
or "Python Whispers"
 
==== '''ttssr-loop''' ====
The first line of a given scanned text is read aloud by a computerised female voice. The outcome (a sound file) is then transcribed by a program called pocketsphinx and stored as a text file. The process is looped 10 times: each time, the previous transcription becomes the input for the computerised voice, which is transcribed again. Depending on the quality of the machine, the voice and the reading, the first line is transformed into different texts with similar phonemes. While the transcription runs you hear all the sounds being played. The process resembles the game of broken telephone (Chinese whispers).
 
 
''Dependencies:''<br />
audio_transcribe.py
<syntaxhighlight lang="python" line='line'>
#!/usr/bin/env python3
# https://github.com/Uberi/speech_recognition/blob/master/examples/audio_transcribe.py
 
import speech_recognition as sr
import sys
from termcolor import cprint, colored
from os import path
import random
 
a1 = sys.argv[1] #same as $1 so when you run python3 audio_transcribe.py FOO ... argv[1] is FOO
# print ("transcribing", a1, file=sys.stderr)
AUDIO_FILE = path.join(path.dirname(path.realpath(__file__)), a1) # before it was english.wav
 
 
# use the audio file as the audio source
r = sr.Recognizer()
with sr.AudioFile(AUDIO_FILE) as source:
    audio = r.record(source)  # read the entire audio file
 
color = ["white", "yellow"]
on_color = ["on_red", "on_magenta", "on_blue", "on_grey"]
 
# recognize speech using Sphinx
try:
    cprint( r.recognize_sphinx(audio), random.choice(color), random.choice(on_color))
    # print( r.recognize_sphinx(audio))
except sr.UnknownValueError:
    print("uknown")
except sr.RequestError as e:
    print("Sphinx error; {0}".format(e))
 
# sleep (1)
 
</syntaxhighlight>
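audio_transcribe.py depends on the SpeechRecognition, pocketsphinx and termcolor packages. One possible way to install them, assuming pip3 is available:

<syntaxhighlight lang="bash">
# the speech_recognition module is published on PyPI as SpeechRecognition
pip3 install SpeechRecognition pocketsphinx termcolor
</syntaxhighlight>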
 
ttssr-loop.py
<syntaxhighlight lang="bash" line='line'>
#!/bin/bash
i=0;
#cp $1 output/input0.txt
head -n 1 $1 > output/input0.txt
while [[ $i -le 10 ]]
do echo $i
cat output/input$i.txt | espeak -s 140 -v f2 --stdout > src/sound$i.wav
cat output/input$i.txt
play src/sound$i.wav 2> /dev/null #&
python3 src/audio_transcribe.py sound$i.wav > output/input$((i+1)).txt 2> /dev/null
sleep 1
(( i++ ))
done
 
</syntaxhighlight>
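A possible way to start the loop, assuming espeak and sox (which provides play) are installed, audio_transcribe.py sits in src/, and an output/ directory exists; the text filename is only an example:

<syntaxhighlight lang="bash">
mkdir -p src output          # directories the script writes into
bash ttssr-loop.py stein.txt # only the first line of the text is used
</syntaxhighlight>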
 
''Input:'' Any one is one having been that one Any one is such a one.
''From "Many Many Many Women", Gerdrude Stein'' <br />
 
''Output:''<br />
<pre style="white-space: pre-wrap; background-color: #dfdf20;">0
Any one is one having been that one Any one is such a one.
1
and he wanted one for the b. not want anyone is turned on
2
he wanted wanted that we ought to want and what is it dawned on me
3
he wanted wanted to we ought to knock on what he calling on me
4
he wanted wanted to we ought to knock on what the calling on me
5
he wanted wanted to we ought to knock on what's that going on me
6
he wanted wanted to we ought to knock on what's not going on me
7
he wanted want the country ought to knock on what's not going on me
8
he wanted want the country ought to knock on what's not only on me
9
he wanted want the country ought to knock on what's ultimately on me
10
he wanted want the country ought to knock on what sell directly on me
</pre>
 
==== '''ttssr-loop-human''' ====
The first line of a given scanned text is read aloud by me or any other human. The outcome (a sound file) is then transcribed by a program called pocketsphinx and stored as a text file. The new line is read by a computerised voice and transcribed again. The process is looped 10 times: each time, the previous transcription becomes the input for the computerised voice, which is transcribed once more. Depending on the quality of the machine, the voice and the reading, the first line is transformed into different texts with similar phonemes. While the transcription runs you hear all the sounds being played. The process resembles the game of broken telephone (Chinese whispers).
 
''Dependencies:''<br />
[[User:Angeliki/2nd_Trimester#ttssr-loop|audio_transcribe.py]]<br />
write_audio.py
<syntaxhighlight lang="python" line='line'>
#!/usr/bin/env python3
# https://github.com/Uberi/speech_recognition/blob/master/examples/write_audio.py
# NOTE: this example requires PyAudio because it uses the Microphone class
 
import speech_recognition as sr
import sys
from time import sleep
 
a1 = sys.argv[1]
 
# obtain audio from the microphone
r = sr.Recognizer()
with sr.Microphone() as source:
    # print("Read every new sentence out loud!")
    audio = r.listen(source)
 
# sleep (1)
 
# write audio to a WAV file
with open(a1, "wb") as f:
    f.write(audio.get_wav_data())
</syntaxhighlight>
 
ttssr-loop-human.py
<syntaxhighlight lang="bash" line='line'>
#!/bin/bash
i=0;
#cp $1 output/input0.txt
head -n 1 $1 > output/input.txt
cat output/input.txt
python3 src/write_audio.py src/sound0.wav 2> /dev/null
play src/sound0.wav repeat 5 2> /dev/null &
python3 src/audio_transcribe.py sound0.wav > output/input0.txt 2> /dev/null
while [[ $i -le 10 ]]
do echo $i
cat output/input$i.txt | espeak -s 140 -a 7 -v f2 --stdout > src/sound$i.wav
cat output/input$i.txt
play src/sound$i.wav repeat 9 2> /dev/null & #in the background the sound, without it all the sounds play one by one//2 is stderr
python3 src/audio_transcribe.py sound$i.wav > output/input$((i+1)).txt 2> /dev/null
sleep 1
(( i++ ))
done
 
</syntaxhighlight>
 
Any one is one having been that one Any one is such a one.
''From "Many Many Many Women", Gerdrude Stein'' <br />
 
<pre style="white-space: pre-wrap; background-color: #dfdf20;">
Any one is one having been that one Any one is such a one.
0
anyone is one haven't been that the one anyone is set to want
1
and what is one hot with me i didn't want anyone is economics
2
what did want thoughtfully leon didn't want to what is economics
3
what did want folks to leon pickets want to walk in economics
4
what did want them to agree on picket want to walk in economics
5
what did want them to agree on picket want to walk in economics
6
what did want them to agree on picket want to walk in economics
7
what did want them to agree on picket want to walk in economics
8
what did want them to agree on picket want to walk in economics
9
what did want them to agree on picket want to walk in economics
10
what did want them to agree on picket want to walk in economics
</pre>
 
==== '''ttssr-loop-human-only''' ====
The first line of a given scanned text is read aloud by someone. The outcome (a sound file) is then transcribed by a program called pocketsphinx and stored as a text file. The new line is read by the same person or someone else, whose voice is transcribed in turn. The process is looped 10 times: each time, the previous transcription becomes the input for somebody to read, and the transcription follows. Depending on the quality of the machine, the voice and the reading, the first line is transformed into different texts with similar phonemes. While the transcription runs, each voice is played and repeated five times, so for some moments the voices overlap each other. The process resembles the game of broken telephone and karaoke. <br />
 
Angeliki's collection of texts ''From Tedious Tasks to Liberating Orality - Practices of the Excluded on Sharing Knowledge'' refers to orality in relation to programming, as a way of sharing knowledge that includes our individually embodied position and voice. The emphasis on the role of personal positioning is often supported by feminist theorists. Similarly, and in contrast to scanning, reading out loud is a way of distributing knowledge in a shared space with other people, and this is the core principle behind ''ttssr-> Reading and speech recognition in loop software''. Using speech recognition software and Python scripts, Angeliki invites the audience to participate in a system that highlights how each voice bears the personal story of an individual. In this case the involvement of a machine provides another layer of reflection on the reading process.
 
In the context of the presently available technology (speech recognition, computerised recitation) I am using the errors of these functions to create a new oral experience, just as earlier projects did with the media of their time. I am following the example of two other works: [https://www.youtube.com/watch?v=8z32JTnRrHc Boomerang (1974)] and [https://www.youtube.com/watch?v=fAxHlLK3Oyk I Am Sitting In A Room (1981)]. The first forms a tape by continuously recording and broadcasting a speaking voice. The latter, a tape-delay piece for voice and electromagnetic tape, exploits the imperfections of the tape-recording machine and takes up the echoes of the room while recording as musical qualities in the process.
 
 
Scanning is a way of creating copies of original written and printed texts in order to preserve them and reproduce culture. There are other forms of knowledge transmission and preservation, like the ones developed in oral cultures. Oral cultures, in contrast to literate cultures, use different methods for maintaining their stories, and maintaining does not even seem to be their main purpose when it does not serve the present audience or the oral poet. Although oral poets try to narrate and copy the story of another poet, the way they produce knowledge is fundamentally different. Oral narratives are built on previous ones, keeping a movable line to the past by adjusting to the history of the performer, but only insofar as they matter for the present.
 
This reader attempts to go through the automated tasks that female employees were doing at the beginning of the 20th century and their development towards producing knowledge. From weaving to typewriting and programming, women, mainly hidden from the public, were exploring the realm of writing beyond its conventional form. According to Kittler (1999, p. 221), “A desexualized writing profession, distant from any authorship, only empowers the domain of text processing. That is why so many novels written by recent women writers are endless feedback loops making secretaries into writers”. But aren’t these endless feedback loops similar to the rhythmic narratives of the anonymous oral cultures? How is this knowledge produced through repetitive formulas that are easily memorised? Orality is not built on written practices and texts, but on memory, sounds and human interaction.
Oral cultures exist without the need of writing, texts and dictionaries. Ong describes in detail the methods with which oral people can organise complex thoughts. His aim is to provide a connection between the two cultures for the sake of human awareness. Orality doesn’t need a library to be stored in, for people to look up and create their texts. The learning process is shared from individual positions but with the need of the community, and it is flexible and active in the present.
This repetition, Boomerang, sitting, typewriters<br />
Participatory<br />
Orality reading/oral poets perform<br />
Scanning/ transcription- history of it and typewriters<br />
formula-instructions<br />
 
*''Keywords:'' overlapping<br />
*''References:'' http://www.ubu.com/sound/lucier.html, http://www.ubu.com/film/serra_boomerang.html
 
*''Instructions:''<br />
 
<small>The first line of a scanned text is being projected on the screen. I am reading this line. Pocketsphinx is transcribing my voice, which is played in a loop five times. The new line is being projected on the screen. I am passing the microphone to you. While you are reading my transcribed line, you are listening to my voice. Pocketsphinx is transcribing your voice, which is played in a loop five times. The new line is being projected on the screen. You are passing the microphone to the next you. While the next you is reading your transcribed line, they are listening to your voice. Pocketsphinx is transcribing the voice of the next you, which is played in a loop five times. The new line is being projected on the screen. The next you is passing the microphone to the next next you. While the next next you is reading the transcribed line of the next you, they are listening to the voice of the next you. Pocketsphinx is transcribing the voice of the next next you, which is played in a loop five times. The new line is being projected on the screen. The next next you is passing the microphone to the next next next you. While the next next next you is reading the transcribed line of the next next you, they are listening to the voice of the next next you. Pocketsphinx is transcribing the voice of the next next next you, which is played in a loop five times. The process continues for five more times. (press enter and run the makefile)</small>
 
*''Necessary Equipment:'' 1 set of headphones/loudspeaker, 1 microphone, 1 laptop, >1 oral scanner poets  <br />
 
*''Software Dependencies:''<br />
*:[[Cookbook#Audio|pocketsphinx]]<br />
*:python3<br />
*:espeak<br />
*:[[User:Angeliki/2nd_Trimester#ttssr-loop|audio_transcribe.py]]<br />
*:[[User:Angeliki/2nd_Trimester#ttssr-loop-human|write_audio.py]]<br />
*:[https://pypi.python.org/pypi/termcolor termcolor.py]<br />
*:ttssr-loop-human-only.sh:
<syntaxhighlight lang="bash" line='line'>
#!/bin/bash
i=0;
echo "Read every new sentence out loud!"
head -n 1 $1 > output/input0.txt
while [[ $i -le 10 ]]
do echo $i
cat output/input$i.txt
python3 src/write_audio.py src/sound$i.wav 2> /dev/null
play src/sound$i.wav repeat 5 2> /dev/null &
python3 src/audio_transcribe.py sound$i.wav > output/input$((i+1)).txt 2> /dev/null
sleep 1
(( i++ ))
done
today=$(date +%Y%m%d.%H-%M);
mkdir -p "output/ttssr.$today"
mv -v output/input* output/ttssr.$today;
mv -v src/sound* output/ttssr.$today;
 
</syntaxhighlight>
 
*''[https://git.xpub.nl/OuNoPo-make/ Common makefile]'':
:<small>ttssr-human-only: ocr/output.txt</small>
:::<small>bash src/ttssr-loop-human-only.sh ocr/output.txt</small>
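With that target in the common makefile, the whole piece can be started from the root of the repository:

<syntaxhighlight lang="bash">
# runs bash src/ttssr-loop-human-only.sh on ocr/output.txt, as defined above
make ttssr-human-only
</syntaxhighlight>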
*''Trying out''
:Input: (You can choose any of the scanned texts you like) <br />
Any one is one having been that one Any one is such a one.
''From "Many Many Many Women", Gerdrude Stein'' <br />
 
:First output: (In the beginning I ask for this: "Read every new sentence out loud!")
<pre style="white-space: pre-wrap; background-color: #dfdf20;">0
Any one is one having been that one Any one is such a one.
1
anyone is one haven't been that the one anyone except to wind
2
anyone is one happening that they want anyone except the week
3
anyone is one happening that they want anyone except the week
4
anyone is one happening that they want anyone except that we
5
anyone is one happy that they want anyone except that we
6
anyone is one happy that they want anyone except at the week
7
anyone is one happy that they want anyone except that they were
8
anyone is one happy that they want anyone except
9
and when is one happy that they want anyone except
10
and when is one happy that they want anyone makes
</pre>
 
:Second output: (In the beginning I ask for this: "Read every new sentence out loud!")
<pre style="white-space: pre-wrap; background-color: #dfdf20;">0
Any one is one having been that one Any one is such a one.
1
anyone is one haven't been that the one anyone is set to wind
2
anyone nice one haven't been that they want anyone is said to weep
3
anyone nice one half and being that they want anyone he said to pretend
4
anyone awhile nice white house and being that they want anyone he said to prevent this
5
anyone awhile nice white house and the bed they want anyone he said to prevent these
6
anyone awhile nice white house and the bed they want anyone he said to prevent aids
7
anyone awhile nice white house and a bed they want anyone he said to prevent aids
8
anyone awhile nice white house and the bed they want anyone he said to prevent aids
9
anyone awhile nice white house and the bed they want and when he said to prevent a
10
anyone know what nice white house and the bed they want and when he said to prevent an
</pre>
 
:Third output:
 
[[File:Ttssr-human-only.png]]
 
:[https://vvvvvvaria.org/algologs.html Algologs presentation (at Varia)]
[[File:Ttssr-algologs.png|700px]]


== The secrets of pocketsphinx ==
=== Acoustic model/training ===
what
