User:Fako Berkers/assignment1

==Prototype 1: bonding synchronization automated==

===summary===

It is thought that when people bond, their brains synchronize their activity. Brain activity has a certain rhythm, and the theory is that these rhythms become (more or less) the same when people interact. An experiment to generate evidence for this theory is described [[Media:Brains_swinging_in_concert.txt|here]]. What I wanted to do with Prototype 1 is simulate the bonding of two (virtual) computers.

In general I think it worked out pretty well. Sometimes you still hear notes out of tune, and it is not always as rhythmical as I want it to be. But some of the output really got stuck in my head and in a way inspired me. The music is surprising most of the time, especially when you play a little with the input variables. I made a bash file for people to generate their own music, and of course you can have a look at the source code in prototype1.4.py.

===step 1: making music===

The exercise Prototype 1 is about a computer generating music from plain text input (i.e. the computer should turn any sentence given to it into music). The experiment mentioned above uses music to demonstrate the point that brain activity synchronizes in social interaction. It was only natural to take the definition of music used in the experiment as the concept for the music that the computer would generate. This means that:

  • the music consists of 6 measures with a time signature of 4 quarter notes per measure (4/4)
  • the music is played in E minor

Before I go into how I made two (virtual) computers synchronize their music, I will describe how my program turns text into notes. Each character, like “A”, has a numeric equivalent; in the case of “A” it is 65. The possible notes are limited to the notes in the scale of E minor, which are seven notes. Calculating 65 % 7 gives the remainder of 65 / 7, which is a number between 0 and 6. This number can stand for a note on the scale (0=E, 1=F#, 2=G, etc.). In the case of 65 % 7 the outcome is 2, which means that any “A” encountered in the input text is translated into a G.
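
A minimal sketch of this mapping in Python (the scale list and the function name are illustrative and not taken from prototype1.4.py):

<syntaxhighlight lang="python">
E_MINOR = ["E", "F#", "G", "A", "B", "C", "D"]  # the seven notes of the E minor scale

def char_to_note(char, scale=E_MINOR):
    """Map a character to a scale note via its code point modulo the scale length."""
    degree = ord(char) % len(scale)   # ord("A") = 65, 65 % 7 = 2
    return scale[degree]              # scale[2] = "G", so "A" becomes a G

print(char_to_note("A"))  # -> G
</syntaxhighlight>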

The above results in a computer playing one note at a time. To make the outcome a little more interesting, I had the computer play two notes at a time: one bass note (in a low octave, held for a longer period of time) and one melody note (in a higher octave, with a shorter duration). To make the melody fit the bass, I changed the possible notes from the entire scale to the 1st, 3rd, 5th and 7th of the scale calculated from the bass note currently being played.
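
Roughly, the melody picker could look like this, assuming the bass note is given as a scale degree (again an illustrative sketch, not the actual code from prototype1.4.py):

<syntaxhighlight lang="python">
E_MINOR = ["E", "F#", "G", "A", "B", "C", "D"]

def melody_note(char, bass_degree, scale=E_MINOR):
    """Pick a melody note from the 1st, 3rd, 5th and 7th counted from the bass degree."""
    chord_tones = [(bass_degree + step) % len(scale) for step in (0, 2, 4, 6)]
    return scale[chord_tones[ord(char) % len(chord_tones)]]  # 4 choices instead of 7

# With the bass on E (degree 0) the melody can only use E, G, B or D:
print(melody_note("A", bass_degree=0))  # 65 % 4 = 1 -> "G"
</syntaxhighlight>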

This procedure leads to a computer playing longer bass notes accompanied by several melody notes, like in this example: [[File:PrototypeFakoSample1.ogg]]

===step 2: synchronizing computers===

The next step was to let two computers generate music at the same time and let them synchronize their music. I did this by making the computer play into a matrix and then play into the same matrix again. Each time the computer plays into the matrix, it takes a different starting position in the text, to make the music really different. This way the matrix contains the music of two computers in one. After that, the computer analyzes all the music in the matrix. Each note that matches the note of the other “computer” that filled the matrix (in other words, each note that synchronizes) is saved and keeps sounding until the next synchronization in the matrix.
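
A rough sketch of that matching step, assuming each pass over the text yields one note per quarter-note slot (the data layout is simplified here and may differ from the real matrix in prototype1.4.py):

<syntaxhighlight lang="python">
def synchronize(pass_a, pass_b):
    """Keep only the slots where both passes play the same note and let each
    kept note sound until the next matching slot."""
    matches = [i for i, (a, b) in enumerate(zip(pass_a, pass_b)) if a == b]
    result = []
    for idx, start in enumerate(matches):
        end = matches[idx + 1] if idx + 1 < len(matches) else len(pass_a)
        result.append((pass_a[start], end - start))  # (note, duration in slots)
    return result

pass_a = ["E", "G", "B", "G", "D", "E"]
pass_b = ["E", "A", "B", "F#", "D", "C"]
print(synchronize(pass_a, pass_b))  # [('E', 2), ('B', 2), ('D', 2)]
</syntaxhighlight>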

As the input text for the computer to create music from, I took the description of the experiment meant to prove that during social bonding we synchronize our brain patterns. Two computers bond over a text about bonding. The results can be downloaded here: [[File:]]

===step 3: some improvements/limitations===

For aesthetic reasons I made some changes to the general outline:

  • because of a design limitation it is now only possible to use the C major or A minor scales (unfortunately there seems to be no quick fix for this)
  • the first and last bass notes are converted to keynotes (the C in this case)
  • apart from notes, the scale also contains two “rests”, indicating that no note should be played
  • instead of guitars (as in the experiment) the computer uses pianos (because notes can be sustained longer on this instrument)
  • any matrix that ends up with fewer than 6 changes in bass and 12 changes in melody is discarded (see the sketch after this list)
  • the matrix is filled three times instead of twice
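
The discard rule could be checked with something like the sketch below; it assumes “changes” are counted as note-to-note transitions, which is my interpretation rather than the actual code:

<syntaxhighlight lang="python">
def enough_variation(bass_line, melody_line, min_bass=6, min_melody=12):
    """Discard rule: require a minimum number of note changes in bass and melody."""
    def changes(line):
        return sum(1 for prev, cur in zip(line, line[1:]) if prev != cur)
    return changes(bass_line) >= min_bass and changes(melody_line) >= min_melody
</syntaxhighlight>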