History Will Repeat Itself

Both in the essay and in the practical assignment I want to study how re-enactment relates to simulation. I think these concepts are very similar, but they are also different in a few ways. By creating an enacted simulation and writing about what others have said about these two phenomena I hope to learn a great deal about how computers and performance art can relate to each other.

Practice

In my practical assignment I want to focus on the embodiment of simulations. An important aspect of re-enactments is the opportunity for the audience/participants to live through an important moment in history. A simulation, on the other hand, is used to analyse a system and to make tactical decisions based on that analysis; it is never used as a tool to give an experience. Embodiment is a first step towards using simulations for that purpose.

At the moment I'm thinking about how the simulation will handle input and output. For output I think there are the following options:

  • Projection on a screen or the ground. I have a 4x3 meter projection screen and projection on different fabrics might also be interesting.
  • SMS gateway. This allows a computer to send messages to mobile phones. If you hand out devices you can control the sound they make. Tim Etchells has worked with instructions over SMS.
  • Mobile internet. You can give instructions over a smartphone-compatible site, and I will be able to control sound (and video) on those devices. A minimal sketch of this option follows this list.
  • Analogue. A person can call out the instructions he/she reads on a screen, or transfer them in some other analogue way.
  • DMX. A computer can control the lighting in a theatre and give signals this way.
  • Headphone. Computer speech can go through a (Bluetooth) headphone.
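
To make the mobile internet output a bit more concrete, below is a minimal sketch of a page that a phone could reload (or poll) to see the current instruction. Flask, the route name and the placeholder instruction are assumptions of mine, not decisions that are part of the project.

 # Sketch of the "mobile internet" output option: a page that an audience
 # member's phone reloads to see the current instruction.
 from flask import Flask
 
 app = Flask(__name__)
 
 # In the real system the simulation would update this value.
 current_instruction = "serenity"
 
 @app.route("/instruction")
 def instruction():
     # Whoever loads this URL sees the latest instruction as plain text.
     return current_instruction
 
 if __name__ == "__main__":
     # Listen on all interfaces so phones on the same network can reach the page.
     app.run(host="0.0.0.0", port=5000)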

For input there are these options to consider:

  • OpenCV. A computer vision library that can read camera images, but it is far from perfect. A rough sketch of this option follows this list.
  • Trained monkey. A colleague who watches the performance and gives input through mouse and keyboard on certain cues (works well according to Stock).
  • RFID chips. Creating a ubiquitous computing environment, as Blast Theory has done in the past, is a high-tech solution.
  • Mobile internet. Through form submissions or clicked links a user can give feedback to the system. Security is an important concern here.
  • QR codes. Another way to dress up an environment to allow for feedback.
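
As a rough illustration of the OpenCV option, the sketch below compares consecutive camera frames and reduces the change between them to a single movement value that the simulation could use as a cue. The threshold and the way the value is used are placeholders, not choices I have made yet.

 # Sketch of the OpenCV input option: compare consecutive camera frames and
 # turn the amount of change into a single "movement" number.
 import cv2
 
 cap = cv2.VideoCapture(0)                          # default webcam
 ok, previous = cap.read()
 previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)
 
 while True:
     ok, frame = cap.read()
     if not ok:
         break
     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
     movement = cv2.absdiff(gray, previous).mean()  # average pixel difference
     previous = gray
     if movement > 10:                              # arbitrary cue threshold
         print("movement detected:", movement)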

There are quite a few options, and going into ubiquitous computing or gadgets (like mobiles) will bring a lot of meaning to the work that needs to be thought through thoroughly. I also need to be aware that the system as a whole is the work, not so much the performance, because the performance will depend on the specifics of the simulation that the system embodies.

My first go at a system will be the embodiment of a Markov chain, which I turned into a simulation of love. Love is said to be a drive rather than an emotion: while in love, you can feel emotions like sadness and joy. I want the system to simulate the emotions that I could go through while being in love with different audience members. The input for the chain will be a cheesy love story, from which the computer will filter all the emotions. These emotions are then output in some way to the actor, who will perform them towards the audience as if in love with them. An example sequence of emotions that the chain will produce is this:

sorry shame sadness sadness sadness grief sorry serenity angry scared sarcastic glum happy happy happy happy happy happy happy happy happy happy

I have yet to decide how to instruct the actor (myself). The code for the Markov chain, as well as the list of emotions used and the cheesy romantic novel chapter, can be found on my prototype page.
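
To give an idea of how the chain works, here is a simplified sketch; the actual code, emotion list and novel chapter are on the prototype page, and the emotion words and text snippet below are only placeholders of mine.

 # Simplified sketch of the emotion Markov chain described above.
 import random
 from collections import defaultdict
 
 EMOTIONS = {"sorry", "shame", "sadness", "grief", "serenity",
             "angry", "scared", "sarcastic", "glum", "happy"}
 
 def extract_emotions(text):
     # Keep only the words that appear in the emotion list, in their original order.
     return [word for word in text.lower().split() if word in EMOTIONS]
 
 def build_chain(sequence):
     # For every emotion, remember which emotions followed it in the source text.
     chain = defaultdict(list)
     for current, following in zip(sequence, sequence[1:]):
         chain[current].append(following)
     return chain
 
 def generate(chain, start, length=20):
     # Walk the chain, each time picking a random follower of the current emotion.
     emotion, output = start, [start]
     for _ in range(length - 1):
         followers = chain.get(emotion)
         if not followers:
             break
         emotion = random.choice(followers)
         output.append(emotion)
     return output
 
 text = "she felt sadness then sadness again until a sorry smile brought happy tears happy and happy"
 sequence = extract_emotions(text)
 print(" ".join(generate(build_chain(sequence), sequence[0])))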