User:Francg/expub/specialissue2/dev2

From XPUB & Lens-Based wiki
Latest revision as of 15:56, 7 March 2017


Synapse + Kinect


2_Synapse-Kinect.png




Synapse + Kinect + Ableton + patches to merge and synchronize Ableton's audio samples with the body's limbs

Ableton.png





Meeting notes / Feedback


- How can a body be represented as a score?

- "Biovision Hierarchy" = file format - motion detection.

- Femke Snelting reads the Biovision Hierarchy Standard: http://thursdaynight.hetnieuweinstituut.nl/en/activities/femke-snelting-reads-biovision-hierarchy-standard

- Systems of notation and choreography - see Johanna's thesis in the wiki.
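
Since BVH keeps coming up in these notes, here is a minimal sketch of what a BVH file looks like - a hypothetical two-joint skeleton, not taken from any project file. A HIERARCHY section describes the skeleton and its channels; a MOTION section then gives one line of channel values per frame:

HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0.0 10.0 0.0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.0 15.0 0.0
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 90.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 90.2 0.0 0.5 0.0 0.0 1.0 0.0 0.0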


Raspberry Pi

- Floppy disk: contains a Pd patch.

- Box: floppy drive, camera, mic...

- Server: documentation such as images, video, prototypes, resources...


- There are two different research paths that could be explored further, and more productively, as separate threads:

1 - on one hand, motion capture, employing tools/software like Kinect, the Synapse app, Max/MSP, Ableton, etc.

2 - on the other hand, data/information reading. This path can be further developed and simplified; however, motion capture using Pd and an ordinary webcam to produce audio effects could link the two efficiently.





Detecting video input from my laptop's webcam in Pd

Vide-help.png
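
For comparison outside Pd, the same first step - grabbing webcam input - can be sketched in Python with OpenCV (assumes the opencv-python package; device index 0 is a guess that may need adjusting):

import cv2

cap = cv2.VideoCapture(0)  # default webcam; change the index if needed

while cap.isOpened():
    ok, frame = cap.read()  # ok is False when no frame could be grabbed
    if not ok:
        break
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()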




Motion Detection - “blob” object and oscillators

Motion-detection.png
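
The patch maps detected motion to oscillator parameters. A rough Python analogy, using plain frame differencing instead of Pd's blob tracking (the motion-to-frequency mapping is invented purely for illustration):

import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_grey = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Amount of motion = mean absolute difference between consecutive frames
    motion = cv2.absdiff(grey, prev_grey).mean()
    prev_grey = grey
    # Hypothetical mapping: more motion -> higher oscillator pitch (200-2000 Hz)
    freq = 200 + min(motion, 30.0) / 30.0 * 1800
    print(f"motion={motion:.1f}  freq={freq:.0f} Hz")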




Color tracking inside a square with the “spigot” + “blob” objects. This can be done in rgba or grey color space.

Track-squares 2.png
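
The same idea sketched in Python/OpenCV: track a color only inside a fixed square region (the square coordinates and HSV bounds are placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
x, y, size = 100, 100, 200               # placeholder square region
lower = np.array([35, 80, 80])           # placeholder HSV bounds (greenish)
upper = np.array([85, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[y:y + size, x:x + size]  # only look inside the square
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    m = cv2.moments(mask)
    if m["m00"] > 0:                     # centroid of the matching pixels
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"blob at ({cx:.0f}, {cy:.0f}) inside the square")
    cv2.rectangle(frame, (x, y), (x + size, y + size), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break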





Also, by combining "grey" and "rgba" simultaneously, the screen's color seems to collapse - one of several screen-noise effects that can be produced this way.

Grey-rgba.png





The same process can be performed with self-generated, imported audio files. It's important to ensure that their sample rate matches the one in Pd's media/audio settings, in order to avoid errors (a quick check is sketched below).
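
A quick way to check a file's sample rate before importing it, using only Python's standard-library wave module (covers .wav files; the filename and the 44100 Hz target are assumptions - Pd's actual rate is whatever its audio settings say):

import wave

PD_SAMPLE_RATE = 44100  # assumed to match Pd's audio settings

with wave.open("feedback-loop.wav", "rb") as f:  # hypothetical file
    rate = f.getframerate()

if rate != PD_SAMPLE_RATE:
    print(f"resample needed: file is {rate} Hz, Pd expects {PD_SAMPLE_RATE} Hz")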

Audiofeedback-voice.png

Audacity.png





To better understand how audio feedback works, I self-generated a series of feedback loops by screen-recording my prototypes with QuickTime and the microphone input(s), along with specific system audio settings. The recordings were later edited in Audacity.

Audio1.ogg (Feedback-loop1)
Audio2.ogg (Feedback-loop2)



Audio recording + audio playback using the [tabwrite~] and [tabplay~] objects. This makes it possible to create a loop by recording multiple audio snippets (which can also be overlapped, depending on their length).

Audio-recording.png
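
Outside Pd, the same record-then-loop idea can be sketched with Python's sounddevice package (the snippet length and repeat count are arbitrary; [tabwrite~] and [tabplay~] remain the actual objects used in the patch):

import sounddevice as sd

RATE = 44100     # sample rate, matching the audio settings
SECONDS = 2      # arbitrary length of each recorded snippet

# Record a short snippet - roughly what [tabwrite~] does into a table...
snippet = sd.rec(int(SECONDS * RATE), samplerate=RATE, channels=1)
sd.wait()  # block until the recording is finished

# ...then play it back repeatedly, like re-triggering [tabplay~].
for _ in range(4):
    sd.play(snippet, samplerate=RATE)
    sd.wait()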




Audio recording [tabwrite~] + [tabplay~] looped + [r Channel-Push]

Audio-feedback1.png

Audio-feedback2.png


Python Opt_flow_py + video input

Opt-flow-py-3.png

Opt-flow-py.png

Opt-flow-py-2.png


Python Opt_flow_py + OSC + Pd + tabwrite + tabplay

Osc_recorder-2.png

Osc_recorder.png
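
To close the loop, a condensed sketch of the sender side of this last pipeline - dense optical flow computed in Python, with its average magnitude sent to Pd over OSC (assumes the opencv-python and python-osc packages; the /flow address and port 9000 are placeholders that must match whatever the Pd patch listens on):

import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # placeholder host/port

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_grey = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow (Farneback), as in OpenCV's opt_flow.py sample
    flow = cv2.calcOpticalFlowFarneback(prev_grey, grey, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_grey = grey
    # Average flow magnitude = overall amount of movement in the frame
    magnitude = float(np.linalg.norm(flow, axis=2).mean())
    client.send_message("/flow", magnitude)  # Pd reads this on the OSC side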