User:Markvandenheuvel/prototyping hackpacts
From XPUB & Lens-Based wiki

Hackpacts: "home school prototyping"

Prototypes and experiments so far:

  • 1-Bit sound synthesis (making music on a 'hacked' TI calculator)
  • KCS workflow: txt input > generated text > KCS audio > HTML
  • AALIB experiments + ASCII (quilting): images converted to ASCII via a generator in Python
  • SSTV workflow tests in Python: modulating images (sending/receiving – encoding/decoding, storing on tape)
  • IRC bot experiments: (Urzaloid Franklin)
  • API scraping (planetary NASA data mixed with other sources)
  • A)PART to de(PART PY.RATE.CHNC workshop: collaborative tape loop creation and recording (materiality of magnetic tape, deconstruction of sound, sonic fiction)
  • Interactive fiction + text-based interface: writing and generating text based on input



  __    __     ______   ___      ___   _______   ________  ______    __    __     ______      ______    ___       
 /" |  | "\   /    " \ |"  \    /"  | /"     "| /"       )/" _  "\  /" |  | "\   /    " \    /    " \  |"  |      
(:  (__)  :) // ____  \ \   \  //   |(: ______)(:   \___/(: ( \___)(:  (__)  :) // ____  \  // ____  \ ||  |      
 \/      \/ /  /    ) :)/\\  \/.    | \/    |   \___  \   \/ \      \/      \/ /  /    ) :)/  /    ) :)|:  |      
 //  __  \\(: (____/ //|: \.        | // ___)_   __/  \\  //  \ _   //  __  \\(: (____/ //(: (____/ //  \  |___   
(:  (  )  :)\        / |.  \    /:  |(:      "| /" \   :)(:   _) \ (:  (  )  :)\        /  \        /  ( \_|:  \  
 \__|  |__/  \"_____/  |___|\__/|___| \_______)(_______/  \_______) \__|  |__/  \"_____/    \"_____/    \_______) 
                                                                                                                  
   _______    _______     ______  ___________  ______  ___________  ___  ___  __    _____  ___    _______         
  |   __ "\  /"      \   /    " \("     _   ")/    " \("     _   ")|"  \/"  ||" \  (\"   \|"  \  /" _   "|        
  (. |__) :)|:        | // ____  \)__/  \\__/// ____  \)__/  \\__/  \   \  / ||  | |.\\   \    |(: ( \___)        
  |:  ____/ |_____/   )/  /    ) :)  \\_ /  /  /    ) :)  \\_ /      \\  \/  |:  | |: \.   \\  | \/ \             
  (|  /      //      /(: (____/ //   |.  | (: (____/ //   |.  |      /   /   |.  | |.  \    \. | //  \ ___        
 /|__/ \    |:  __   \ \        /    \:  |  \        /    \:  |     /   /    /\  |\|    \    \ |(:   _(  _|       
(_______)   |__|  \___) \"_____/      \__|   \"_____/      \__|    |___/    (__\_|_)\___|\____\) \_______)        
                                                                                                                  

                                                                                                                                               

1-bit music: TI-83+ calculator experiments

I came across an open-source project that makes it possible to 'hack' old Texas Instruments calculators and turn them into a 1-bit music composition tool (instrument). The program, Houston Tracker 2 (https://irrlichtproject.de/houston/), is still not widely used and remains quite obscure. HT2 converts the binary output of the TI calculator to generate 1-bit sound. What I find really interesting is that 1-bit sound represents the on/off binary basics of computing; the 1-bit sound is therefore close to the machine's inner process.

20201103 140137.jpg

See it work: https://youtu.be/Ktyi1d2ohAY

Key points

  • 1-bit music programming and playback (using Houston Tracker II by UTZ)
  • materializing binary data + sound of the CPU (on/off)
  • recording on cassette tape: analog processing of digital information
  • the benefits of working with limitations
  • 'Zombie media': reappropriating obsolete tech and exploring its potential instead of discarding it: what does it mean?
  • implement it in today's workflow (audio/visual, programming, etc)
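
As a rough illustration of the on/off principle named above: every sample in a 1-bit signal is either fully on or fully off, so pitch comes purely from how fast the output is toggled. The sketch below is a minimal stand-in written with the Python standard library, not the Houston Tracker / TI-83+ workflow itself; the frequencies, durations and output filename are arbitrary placeholders.

import struct
import wave

SAMPLE_RATE = 44100

def one_bit_tone(freq_hz, duration_s, duty=0.5):
    """Yield 1-bit samples (0 or 1) for a pulse wave at freq_hz."""
    period = SAMPLE_RATE / freq_hz
    for n in range(int(duration_s * SAMPLE_RATE)):
        yield 1 if (n % period) < period * duty else 0

melody = [(440, 0.25), (554, 0.25), (659, 0.25), (880, 0.5)]   # A4, C#5, E5, A5

with wave.open("one_bit_demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                  # the 1-bit signal is stored in a plain 16-bit WAV
    w.setframerate(SAMPLE_RATE)
    for freq, dur in melody:
        for bit in one_bit_tone(freq, dur):
            w.writeframes(struct.pack("<h", 20000 if bit else -20000))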

playful ideas & potentials to further explore:

  • 1-bit sound publication
  • implementing graphics: bitmaps
  • How to embed this in a modern context? What's the use?
  • BASIC programming language

To spread both the project and the music, I thought about making a publication/release/demo in one:

  • write a 4-track album for it and release it on a TI-83
  • people who buy it would receive a TI with the tracks on it (collected from thrift stores / Marktplaats)
  • mail it to people
  • a 'hands-on' way to get started
  • seeing how the tracks were produced might get people started
  • enlarge interest, spread the word & expand the community?

This way, the public can not only listen but also directly engage and get their hands dirty if preferred. What I also find interesting is that, in contrast to making music with the sound chips of obsolete gaming consoles, this is much further detached from retro aesthetics: the focus is much more on the tech part and on thinking about how to use this device otherwise.

links & Resources

TILP: open source program for memory flashing on TI
Houston Tracker 2: https://www.irrlichtproject.de/houston/houston1/index.html
DOORS GUI: https://dcs.cemetech.net/index.php/Doors_CS_7_Scratchwork
graphics: https://www.ticalc.org/pub/win/graphics/
1-bit synthesis paper: https://www.gwern.net/docs/cs/2020-troise.pdf
1-bit synthesis techniques: https://phd.protodome.com/#anchor-pulse-width-sweep
Graphlink cable for converting binary data to sound: https://www.amazon.com/Texas-Instruments-94327-Graphlink-USB/dp/B00006BXBS

KCS: digital data standard explorations

KCS or Kansas City Standard (https://en.wikipedia.org/wiki/Kansas_City_standard) was developed to convert code (in the form of ASCII text) into sound so that it could be stored on media such as magnetic tape; it also became suitable for broadcasting over radio. This way, its data became more easily interchangeable. KCS is still used today to restore large quantities of archival material stored on magnetic tape. There are other standards closely related to it; KCS was an attempt to standardize (the Commodore 64, for instance, had its own method, which was very prone to errors).

Demo video: https://www.youtube.com/watch?v=YOzy6H16Oqs

Key Points

  • KCS workflow setup in Python (Jupyter Notebook) (text input > generated text > KCS audio > HTML)
  • slow data transmission / the image arises line by line
  • materiality of data via sound (physical connection, embodiment of a process)
  • deconstruction: encoding/decoding
  • storage and playback (on audio cassette)
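
To make the 'materiality of data via sound' concrete, here is a hedged sketch of a Kansas City Standard style encoder in Python: 300 baud FSK where a '0' bit is 4 cycles of 1200 Hz and a '1' bit is 8 cycles of 2400 Hz, each byte framed with 1 start bit, 8 data bits (LSB first) and 2 stop bits. This follows the common description of KCS rather than the exact notebook used in the prototype; verify the output against a real decoder before trusting it.

import math
import struct
import wave

RATE = 22050

def cycles(freq, n):
    """n full sine cycles at freq, as a list of 16-bit samples."""
    total = int(RATE * n / freq)
    return [int(30000 * math.sin(2 * math.pi * freq * t / RATE)) for t in range(total)]

def bit(b):
    # '1' = 8 cycles of 2400 Hz, '0' = 4 cycles of 1200 Hz (both ~1/300 s long)
    return cycles(2400, 8) if b else cycles(1200, 4)

def byte(value):
    out = bit(0)                                  # start bit
    for i in range(8):
        out += bit((value >> i) & 1)              # data bits, LSB first
    return out + bit(1) + bit(1)                  # two stop bits

samples = []
for ch in "HELLO, TAPE!":
    samples += byte(ord(ch))

with wave.open("kcs_hello.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", s) for s in samples))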

playful ideas & potentials to further explore

  • bot that outputs encoded texts via audio
  • bot that outputs encoded ASCII art (ASCII images) via audio
  • printing out text on a receipt printer
  • creating a modern Flexi Disc Floppy Rom: https://en.wikipedia.org/wiki/Kansas_City_standard#/media/File:FloppyRom_Magazine.jpg

resources

  • KCS standard: https://en.wikipedia.org/wiki/Kansas_City_standard
  • Storing files on an audio cassette: https://www.instructables.com/Storing-files-on-an-audio-cassette/
  • future of data storage: https://spectrum.ieee.org/computing/hardware/why-the-future-of-data-storage-is-still-magnetic-tape
  • Data Files on Tape (A Modern Attempt): https://youtu.be/muJDUonIOz8

AALIB + ASCII generating experiments

ASCII art is a graphic design technique that uses computers for presentation and consists of pictures pieced together from the 95 printable (from a total of 128) characters defined by the ASCII Standard from 1963 and ASCII compliant character sets with proprietary extended characters (beyond the 128 characters of standard 7-bit ASCII). The term is also loosely used to refer to text-based visual art in general. ASCII art can be created with any text editor, and is often used with free-form languages. Most examples of ASCII art require a fixed-width font (non-proportional fonts, as on a traditional typewriter) such as Courier for presentation.

AAlib is a software library that allows applications to automatically convert still and moving images into ASCII art. It was released by Jan Hubicka as part of the BBdemo project in 1997.

Gallery: Instagram logo, Youtube logo (converted to ASCII)

Ascii-planet renders.jpg
Ascii planet renders in Python
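
The planet renders above come from a Python generator; the sketch below is a minimal stand-in for that kind of converter, mapping pixel brightness onto a character ramp with Pillow instead of going through the AAlib bindings. 'planet.jpg' and the ramp are placeholders.

from PIL import Image

RAMP = " .:-=+*#%@"          # sparse -> dense; tweak for light or dark terminals

def to_ascii(path, width=80):
    img = Image.open(path).convert("L")                            # greyscale
    aspect = img.height / img.width
    img = img.resize((width, max(1, int(width * aspect * 0.5))))   # characters are ~2x taller than wide
    rows = []
    for y in range(img.height):
        rows.append("".join(RAMP[img.getpixel((x, y)) * (len(RAMP) - 1) // 255]
                            for x in range(img.width)))
    return "\n".join(rows)

print(to_ascii("planet.jpg", width=100))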


SSTV (Slow Scan Television) experiments

SSTV or Slow Scan Television – originally invented as an analog method in the late '60s and accessible to the public in the early '90s, when custom radio equipment was replaced by PC software – is a protocol for sending images over audio frequencies. The sound encodes which color to place where, line by line, so the image is slowly built up as the audio is decoded in real time. This method is still used today by the HAM amateur radio community for sending and collecting images. Next to that, the International Space Station (ISS) still sends images to planet Earth this way. The way SSTV generates images is closely related to the material process of digital printing on paper.

SSTV Workflow experiments

Key points

  • modulation: protocol/standard to send images over radio
  • workflows and experiments using Python3 and bash in Jupyter Notebook
  • exploring the materiality of data via sound
  • 'slow' data transmission (in contrast with invisible processes and speed)
  • encoding/decoding: deconstruction of data
  • both interests combined: lo-tech graphics & sound!
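
A minimal sketch of the image-to-SSTV step in Python, assuming the PySSTV package listed in the resources below (pip install pysstv pillow). The Martin M1 mode, the 320x256 resize and the file names are assumptions, not necessarily what the notebook workflow uses.

from PIL import Image
from pysstv.color import MartinM1

# Encode an image as an SSTV audio signal that can be played back, recorded
# to tape, or decoded again with an SSTV app on a phone.
img = Image.open("input.png").convert("RGB").resize((320, 256))
sstv = MartinM1(img, 44100, 16)        # image, sample rate, bit depth
sstv.write_wav("sstv_signal.wav")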


SSTV Workflow in use: Image to SSTV signal received by phone


SSTV workflow and decoding via app

LISTEN

Instagram to SSTV: Instagrain Broadcasting


Listen to this image: https://hub.xpub.nl/sandbox/~markvandenheuvel/results/t.wav

playful ideas & potentials to further explore:

links & resources:

broadcasting software: https://www.qsl.net/kd6cji/downloads.html
Python scripts to convert images to audio: https://pypi.org/project/PySSTV/
general resources: http://users.belgacom.net/hamradio/sat-info.htm
RXSSTV: http://users.belgacom.net/hamradio/rxsstv.htm
SSTV tools (encoding/decoding) http://www.dxatlas.com/sstvtools/
recent SSTV project: https://hsbp.org/rpi-sstv
Pictures On Cassette: https://www.youtube.com/watch?v=c38dLDQoRtM

Interactive Fiction/'Text-based' adventure

Hypertext fiction is a genre of electronic literature, characterized by the use of hypertext links that provide a new context for non-linearity in literature and reader interaction. The reader typically chooses links to move from one node of text to the next, and in this fashion arranges a story from a deeper pool of potential stories. Its spirit can also be seen in interactive fiction.

Interactive fiction, often abbreviated IF, is software simulating environments in which players use text commands to control characters and influence the environment. Works in this form can be understood as literary narratives, either in the form of interactive narratives or interactive narrations. These works can also be understood as a form of video game,[1] either in the form of an adventure game or role-playing game. In common usage, the term refers to text adventures, a type of adventure game where the entire interface can be "text-only",[2] however, graphical text adventures still fall under the text adventure category if the main way to interact with the game is by typing text.

Due to their text-only nature, they sidestepped the problem of writing for widely divergent graphics architectures.


6100c610fbf73c210905eef0d538f9af.png

Tutorial with David Moroto

What is your content?

speaks to my heart (fiction)
source: emotional investment
solve the coldness of the medium!
depth! (true for me, fiction for others)
'conversational tool' between the analog and digital
tool of mediation

Inspiration

SanctuaryRPG - (Classic Text Adventure Game)


Screenshot 2020-11-28 at 22.11.32.png

Inspiration: Solarpunk: The map is the territory


Shadow wolf zine.png

Inspiration regarding tone of voice: Shadow Wolf Zine (ASCII zine by musician Legowelt about various topics)

Resources & links

PI: Selfhosting IRC + bot + API scraping

https://pythonspot.com/building-an-irc-bot/
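
For reference, a hedged sketch of the kind of minimal socket-level IRC bot the tutorial above describes. The network, channel and nick are placeholders, not the actual Urzaloid Franklin setup.

import socket

SERVER, PORT = "irc.libera.chat", 6667     # placeholder network
CHANNEL, NICK = "#xpub-test", "urzaloid_bot"

sock = socket.socket()
sock.connect((SERVER, PORT))
sock.send(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())

while True:
    for line in sock.recv(2048).decode(errors="ignore").splitlines():
        if line.startswith("PING"):                          # keepalive
            sock.send(("PONG" + line[4:] + "\r\n").encode())
        elif " 001 " in line:                                 # registered: join the channel
            sock.send(f"JOIN {CHANNEL}\r\n".encode())
        elif "PRIVMSG" in line and line.rstrip().endswith("!hello"):
            sock.send(f"PRIVMSG {CHANNEL} :hello from the sandbox\r\n".encode())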


Sandbox as a publication

  • showing where it is! (this is where you are!)
  • what happens when you visit (exposing a process with the emphasis on materiality)

resources:

  • shell: showing processes! https://github.com/jupyter/terminado
  • https://almanac.computer/
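
And a small sketch of the 'planetary NASA data' scraping idea from the prototype list: NASA's public APOD endpoint returns JSON that a bot could remix into its output. DEMO_KEY is NASA's rate-limited public key; the fields printed here are based on the public API documentation.

import json
import urllib.request

# Astronomy Picture of the Day: one JSON object with title, explanation, url, ...
url = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"
with urllib.request.urlopen(url) as response:
    apod = json.load(response)

print(apod["title"])
print(apod.get("explanation", "")[:280])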

Tape + analog sound/recording experiments

Deconstruction workshop: materializing data over sound

  • recording sound of a deconstruction process: transfer to the physical carrier (analog tape)
  • recording data of images: broadcast SSTV signal
  • material combined in audio-visual performance

>WORKSHOP: A)PART_to_de(PART (In collaboration with Tisa Neža and Ioana Tomici)

Performance: https://youtu.be/qejNT4Cmm8g




>>>>>>>C0/\/\/\/\0N F/\CT0R<<<<<<<

Related:

  • 'wacky tech' (low and obsolete tech processes)
  • what meaning emerges when low-tech meets high-tech
  • regaining autonomy and the value of misusing technology
  • revealing inner workings (through sonification of a process)
  • obsolete systems as a method (not retro!)
  • the affordances of not emulating
  • "sonic fiction"
  • meaning that occurs when going between formats

general links:

https://www.westminsterpapers.org/articles/10.16997/wpcc.209/print/
https://www.electronicdesign.com/industrial-automation/article/21808186/sending-data-over-sound-how-and-why
https://spectrum.ieee.org/computing/hardware/why-the-future-of-data-storage-is-still-magnetic-tape
http://screenl.es/slow.html

Gallery: Desert Journalism / Floppy Rom: data on Flexi Disk / Solar Punk: 'The map is the territory' / Tristan Perich - 1-bit symphony

Project content: interim conclusion

                                                     _   (_)                          
  ____     ____ ___  ____ _   _ ____  ____ ___  ____| |_  _  ___  ____                
 / _  |   / ___) _ \|  _ \ | | / _  )/ ___)___)/ _  |  _)| |/ _ \|  _ \               
( ( | |  ( (__| |_| | | | \ V ( (/ /| |  |___ ( ( | | |__| | |_| | | | |              
 \_||_|   \____)___/|_| |_|\_/ \____)_|  (___/ \_||_|\___)_|\___/|_| |_|              
 _                                          ___                                       
| |          _                             / __)                       _              
| | _   ____| |_ _ _ _  ____ ____ ____    | |__ ___   ____ ____   ____| |_   ___   
| || \ / _  )  _) | | |/ _  ) _  )  _ \   |  __) _ \ / ___)    \ / _  |  _) /___)
| |_) | (/ /| |_| | | ( (/ ( (/ /| | | |  | | | |_| | |   | | | ( ( | | |__|___ | 
|____/ \____)\___)____|\____)____)_| |_|  |_|  \___/|_|   |_|_|_|\_||_|\___|___/
                                                                                      

Project contents

  • inner workings and processes of data modulation explained
  • hyperfocus on detailed information regarding processes mixed with fictional elements (metaphors and inspiration from a 'guide')
  • user can interact: sending > receiving information (real-time encoding / decoding)

Practical

  • text-based / command-line style interface: to create a focus on other senses
  • ASCII as image, code, sound: in between formats
  • website, Python, on a Raspberry Pi.

Tone of voice of the interface:

  • interactive fiction and text-based games (imagination!)
  • tone of voice: speaking on a personal level
  • Sonic Fiction / Matters Fiction: notion of 'planetary importance'
- "Welcome to my home. You are here:" <showing pi in my house>
- "You traveled a long way to get here in a very short time""
- "You just looked outside through a window, staring at the screen again, you think about the past."
- "Lately, I often got asked if I am a robot. Are a robot. Are you a robot?"
- "Listen closely to this image!"

Output

  • website (publication) hosted on a Raspberry Pi
  • part of the workflow will be used for audio/visual performance based on modulation of data (magnetic tape)

Inspiration taken from tape recordings of:

  • 'Alfa Training' motivational audio course about 'self-programming', regaining autonomy and guidance in making life and work decisions. Listen here: https://youtu.be/sZSq1MqnVIw
  • software on tape! (fun detail: there are music snippets of ABBA in between)
  • Recording of computer course: 'How to get online?' (mechanical typing sounds, registration, silence to do an exercise)
  • guide/course: leader/character, bumpers and theme songs and scores, silence for executing
  • low key blending in personal stories and experiences
  • use of metaphors and figures of speech from 'Alfa Training'


Important input:

Donna Haraway: viewpoints "cyborg" (some parts are hard to digest)
Sun Ra: creating myths, "sonic fiction" (planetary importance)
Radical software: knowledge, sharing, community building
Erkki Kurenniemi / Computer eats Art (misusing technology to 'keep control' / in between formats)


    |\----/|
    | . .  |
     \_><_/-..----.
  ___/ `   ' ,""+ \  
 (__...'   __\    |`.___.';
   (_,...'(_,.`__)/'.....+

Feedback(w/Manetta 12.02.2021)

On digital text and the development of the Word processor:

  • Secret Life Of Machines - The Word Processor: https://www.youtube.com/watch?v=nN9wNvEnn-Q
  • Anti-Media - Florian Cramer ('literate programming': code is an aesthetic!): https://monoskop.org/images/f/f9/Cramer_Florian_Anti-Media_Ephemera_on_Speculative_Arts_2013.pdf
  • Exploring computer text interfaces: https://vvvvvvaria.org/curriculum/In-the-Beginning-...-Was-the-Commandline/READER.html

Research as publication:

  • Situationist Times: http://vandal.ist/thesituationisttimes/
  • Publishing: https://www.woodstonekugelblitz.org/
  • http://lists.artdesign.unsw.edu.au/pipermail/empyre/2020-December/011211.html
  • streaming the 'sound' of streaming: http://anarchaserver.org:8000/

Practical tools/tips:

Turn a Git repo into a collection of interactive notebooks: https://mybinder.org/
jupyter nbconvert
/var/log/ (for accessing logs of processes)

potential ideas:

  • 'instagrain' (insstvagram) broadcasting platform (SSTV)

- a website you can visit which constantly broadcasts images you can capture with your phone

  • text feed (Twitter) to audio and storing on cassette tape (KCS)

- a 'service' to capture text and store it on cassette tape (and decode it as well)


Mini task: Think about what substantive questions and observations regarding content arise?
'fluxus' / system / continuity / analog vs digital / etc


standalone: - /dev/zero - /zero/null - RM option > server (hub)

Feedback(w/Michael 12.02.2021)

TAKEN FROM THE SOUND STUDIES READER:



Semantic Listening I call semantic listening that which refers to a code or a language to interpret a message: spoken language, of course, as well as Morse and other such codes. This mode of listening, which functions in an extremely complex way, has been the object of linguistic research and has been the most widely studied. One crucial finding is that it is purely differential. A phoneme is listened to not strictly for its acoustical properties but as part of an entire system of oppositions and differences. Thus semantic listening often ignores considerable differences in pronunciation (hence in sound) if they are not pertinent differences in the language in question. Linguistic listening in both French and English, for example, is not sensitive to some widely varying pronunciations of the phoneme a. Obviously one can listen to a single sound sequence employing both the causal and semantic modes at once. We hear at once what someone says and how they say it. In a sense, causal listening to a voice is to listening to it semantically as perception of the handwriting of a written text is to reading it.2


On Background Listening In early 2009, Jay Rosen, a professor of journalism at New York University, posed a question to his 12,000 followers on the micro-blogging service Twitter. He wanted to know why they twittered. Of the almost 200 responses that he initially received via his blog and Twitter account, he noticed an important similarity. ‘Surprise finding from my project’, he wrote on Twitter on 8 January, ‘is how often I wound up with radio as a comparison.’ The comparison may have been unexpected, but it is provocative. Twitter is a social networking service where users send and receive text-based updates of up to 140 characters. They can be delivered and read via the web, instant messaging clients or mobile phone as text messages. Unlike radio, which is a one-to-many medium, Twitter is many-to-many. People choose whom they will follow, which may be a small group of intimates, or thousands of strangers – they then receive all updates written by those people. Perhaps the most obvious difference from radio is that there is no sound broadcast on Twitter. It is simply a network of people scanning, reading and occasionally posting written messages. Yet the radio analogy persists. As MSN editor Jane Douglas writes, ‘I see Twitter like a ham radio for tuning into the world’ (cited in Rosen 2009).


The keynote sounds of a landscape are those created by its geography and climate: water, wind, forests, plains, birds, insects and animals. Many of these sounds may possess archetypal significance; that is, they may have imprinted themselves so deeply on the people hearing them that life without them would be sensed as a distinct impoverishment. They may even affect the behavior or life style of a society, though for a discussion of this we will wait until the reader is more acquainted with the matter.

Signals are foreground sounds and they are listened to consciously. In terms of the psychologist, they are figure rather than ground. Any sound can be listened to consciously, and so any sound can become a figure or signal, but for the purposes of our community-oriented study we will confine ourselves to mentioning some of those signals which must be listened to because they constitute acoustic warning devices: bells, whistles, horns and sirens. Sound signals may often be organized into quite elaborate codes permitting messages of considerable complexity to be transmitted to those who can interpret them. Such, for instance, is the case with the cor de chasse, or train and ship whistles, as we shall discover. The term soundmark is derived from landmark and refers