Martin (XPUB)-project proposal

From XPUB & Lens-Based wiki
<div style='
width: 50%;
font-size:16px;
background-color: white;
color:black;
===<p style="font-family:helvetica">What do you want to make?</p>===


I want to build a dystopian cybernetic exhibition space reflecting on the increasing presence of the digital/virtual in our culture. This work will speculate on how the modes of representation inside exhibition spaces, as well as the agencies, behaviors and circulations of their visitors, could be affected by the growing translation of our physical/digital behaviors into informational units. The idea is to make use of digital technologies (ultrasonic sensors, microcontrollers, screens) and to take inspiration from the inherent mechanisms of Web interfaces (responsiveness, Web events, @media queries) in order to create an exhibition space explicitly designed to offer a customizable perspective to its visitors. In this regard, the number of visitors, their position within the space, their actions or inactions, as well as their movements and trajectories, will be mapped (made interdependent) to various settings of the space itself, such as the room size, the lighting, the audio/sound, the information layout and format, etc.
 
<br><br>
In order to highlight the invisible, silent and often poorly considered dynamics that can co-exist between digital and physical spaces, the data captured inside this space will be displayed on screens and will be the main content of the exhibition. Ultimately, the graphic properties of this data (typography, layout, font-size, letter-spacing, line-height, screen luminosity) will also be affected by the indications given by these same information units.
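The feedback loop described above, where captured values also drive their own graphic properties, can be sketched as a small function on the P5.js side. This is a minimal sketch only: every formula and threshold below is a hypothetical assumption, chosen by analogy with CSS @media breakpoints, not a final design decision.

```javascript
// Hypothetical mapping from live room measurements to display settings.
// All constants here are assumptions (placeholders for real calibration).
function displaySettings(roomWidthCm, visitorCount) {
  // A wider room produces larger type, like a wider browser window.
  const fontSize = Math.round(roomWidthCm / 20);
  // More visitors dim the individual screens, down to a floor of 20%.
  const screenBrightness = Math.max(20, 100 - visitorCount * 15);
  // Line height follows the font size, as in a responsive stylesheet.
  const lineHeight = Math.round(fontSize * 1.4);
  return { fontSize, lineHeight, screenBrightness };
}
```

In a P5.js sketch these values would then be fed into `textSize()` and similar drawing calls, so that the room literally restyles its own data.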
<br><br>
Far from wanting to glorify the use of technology or to represent it as an all-powerful evil, this space will be subject to accidents, unforeseen events, bugs, dead zones and glitches, some of which will have been intentionally left there.


===<p style="font-family:helvetica">How do you plan to make it?</p>===


While working with Arduino Mega and P5.js, my aim is to start from the smallest and simplest prototype and gradually increase its scale/technicality until reaching human/architectural scale (see: [[XPUB2_Research_Board_/_Martin_Foucaut#Prototyping|prototyping]]).
<br><br>
Once an exhibition space is determined for the assessment, I will introduce my project to the wood and metal stations of the school in order to get help building at least one mobile wall fixed on a rail system. This wall will include handle(s) on the interior side, allowing visitors to reduce or expand the size of the space (by pushing or pulling the wall) within a minimum and maximum range (estimated between 5m² and 15m²). On the exterior side of this wall, at least one ultrasonic sensor will be implemented in order to determine the surface of the room in real time (see [[XPUB2_Research_Board_/_Martin_Foucaut#Creating_an_elastic_exhibition_space|schema]]). With the help of an array of ultrasonic sensors placed on the interior of the 4 surrounding walls, the space will be mapped into an invisible grid that will detect the exact position of the visitor(s) in real time, as well as their number. With an extensive use of other sensors such as temperature sensors, light sensors and motion sensors, more information will be gathered and assigned to specific parameters of the exhibition display.
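The invisible grid could be computed from two perpendicular ultrasonic readings, one from a side wall and one from the back wall. The sketch below is an assumption about the geometry (perpendicular sensors, a fixed cell size of 100 cm), not the final sensor layout:

```javascript
// Convert two perpendicular ultrasonic distances (in cm) into a grid cell.
// The 100 cm cell size is a hypothetical value for illustration.
function gridCell(distToSideWallCm, distToBackWallCm, cellSizeCm = 100) {
  const col = Math.floor(distToSideWallCm / cellSizeCm);
  const row = Math.floor(distToBackWallCm / cellSizeCm);
  return { col, row };
}

// A visitor 250 cm from the side wall and 120 cm from the back wall:
gridCell(250, 120); // -> { col: 2, row: 1 }
```

Because the movable wall changes the room size, the number of valid columns would itself be recomputed from the wall sensor's reading, which is what makes the grid "elastic".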
<br><br>
One or several screens will be placed in the space itself, displaying the data captured by the various sensors. Serial communication will allow the information gathered by the Arduinos to be transferred to P5.js, allowing variable displays of the units. Resizing the space will specifically affect the lighting of the space, the luminosity of the screens and the size of the information displayed. The number of visitors will affect the number of active screens as well as the room temperature display. The position in the room will trigger different voice and/or textual instructions if the visitor is not placed in a meaningful way toward the displayed contents. (Ref: Speaking wall (https://www.muhka.be/programme/detail/1405-shilpa-gupta-today-will-end/item/30302-speaking-wall))
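The receiving side of the serial link could look like the sketch below. It assumes the Arduino sends one comma-separated line per reading; that message format, and the three-screen cap, are assumptions for illustration rather than a fixed protocol:

```javascript
// Parse one serial line, e.g. "250,120,3,21.5" for x-distance (cm),
// y-distance (cm), visitor count, temperature (°C).
// This message format is a hypothetical convention, not a standard.
function parseSerialLine(line) {
  const [x, y, visitors, temperature] = line.trim().split(',').map(Number);
  return { x, y, visitors, temperature };
}

// One active screen per visitor, capped at the screens installed
// (the default of 3 screens is a hypothetical value).
function activeScreens(visitors, totalScreens = 3) {
  return Math.min(visitors, totalScreens);
}
```

In the installation, `parseSerialLine` would run on each line received from the serial port, and its output would drive the screen count, lighting and typography described above.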
<br><br> 
In order to allow myself to take a step back on the making of this project, I will take advantage of the different venues organized by XPUB2 and create mini-workshops inviting people to test and reflect on various prototypes of this work in progress (see [[XPUB2_Research_Board_/_Martin_Foucaut#Venues|venue1]]) (see: [[XPUB2_Research_Board_/_Martin_Foucaut#Sketch_3:_Arduino_Uno_.2B_Sensor_.2B_LCD_.2B_2_LED.29_.3D_Physical_vs_Digital_Range_detector|simulation]]).


[[File:SensorSpace.gif|200px|thumb|left|Sensor Test VS Elastic Space<br><b>Click to watch</b>]]
===<p style="font-family:helvetica">What is your timetable?</p>===


1st semester: Prototyping mainly with Arduino, connecting Arduino to P5.js, and finding a space to set up the installation for the final assessment.
* 1st prototype: mini Arduino + light sensor (understanding Arduino basics / connecting a sensor to a servo motor) [[1]]
* 2nd prototype: Arduino Uno + ultrasonic sensor (working with ultrasonic sensors / displaying the sensor's values on the serial monitor) [[2]]
* 3rd prototype: Arduino Uno + ultrasonic sensor + LCD screen (working with value display on a small digital screen) [[3]]
* 4th prototype: Arduino Uno + ultrasonic sensor + 2 LEDs (creating range distance value detection, and triggering different lights depending on the distance detected) [[4]]
* 5th prototype: Arduino Uno + 3 ultrasonic sensors + 3 LEDs (mapping range distance values in a simple 3x3 grid) [[5]]
* 6th prototype: Arduino Uno + 3 ultrasonic sensors + 12 LEDs (assigning a signal to each position of a person inside the grid by adding more LEDs) [[6]]
* 7th prototype: Arduino Uno + 3 ultrasonic sensors + 1 buzzer + 1 LCD + 1 potentiometer (adding audio signals to the range value detection / changing the luminosity of the screen with a potentiometer) [[7]]
* 8th prototype: Arduino Uno + 3 ultrasonic sensors + 1 buzzer + 1 LCD + 1 potentiometer + mini breadboards (separating the sensors from each other) [[8]]
* 9th prototype: Arduino Mega + 21 LEDs + 7 sensors + buzzer + potentiometer + LCD (expanding the prototype to human scale with a 7x3 grid / assigning each position within the grid to a specific LED and buzzer signal)
* 10th prototype: Arduino Mega + 7 sensors + LCD + 3 buzzers + P5.js (allowing multiple simultaneous sound signals if 2 or more people are in the grid)
* 11th prototype: Arduino Mega + 7 sensors + LCD + P5.js (connecting the prototype to a Web page via serial communication, changing the size of a circle with the distance sensors)
——————————— NOW —————————————————————————————————————————
* Upcoming: Arduino Mega + 7 sensors + P5.js (display live values on a screen, and change the display parameters depending on the values themselves)
* Upcoming: Arduino Mega + 7 sensors + P5.js (create voice commands/instructions depending on the visitors' position)
* Upcoming: Arduino Mega + 7 sensors + LCD + P5.js (play sounds and affect pitch/tone depending on the position on a Web page)
* Optional: Arduino Uno + ESP8266 (WIFI) (transmit and/or control values between Arduino and computer via a WIFI transmitter / no longer necessary, since I found another way to do that via USB serial communication)

2nd semester: Find what will be the graduation space, build the mobile wall, and translate the installation setup to human/spectator scale.
* Show prototypes and schemas of the wall to the wood and metal workshops in order to get advice until final validation to build (starting to build the physical elements)
* Search, find and validate what will be the space used for the installation during the graduation.
* Start building the movable wall by considering the characteristics of the space used for the graduation.
* Implement the sensors inside the movable wall, and the other devices in the fixed space.
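The grid logic reached in the 9th prototype (21 LEDs for a 7x3 grid, one LED per position) amounts to a simple lookup. A sketch, assuming row-major LED numbering, which is my assumption rather than the wiring actually used:

```javascript
// Map a cell of the 7x3 grid to one of the 21 LEDs (index 0..20).
// Row-major numbering is an assumption for illustration.
function ledForCell(col, row, cols = 7) {
  return row * cols + col;
}

ledForCell(0, 0); // -> 0  (first LED)
ledForCell(6, 2); // -> 20 (last LED)
```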


===<p style="font-family:helvetica">Why do you want to make it?</p>===


At the origin of this project, and of previous works over the past years, lies the desire to make the invisible visible. In my opinion, the better a medium mediates, the more it becomes invisible and unconsidered. This paradox stimulates a need to reflect on and highlight the crucial role of the media in the way we create, receive, perceive and interpret a content, a subject or a work, but also in the way we behave and circulate in relation to it. It is probably not so common to appreciate an artwork for its frame or for the quality of the space in which it is displayed. It is, however, more common to let ourselves (as spectators/observers) be absorbed by the content itself and to naturally abstract away all mediating technologies. This is why I often try to « mediate the media » (see: [[Martin_(XPUB)-thesis_outline#2._MEDIATIZING_THE_MEDIA_.28Optional.29|Mediatizing the media]]), which means putting the media at the center of our attention and transforming it into the subject itself. In that sense, my graduation project, as well as some of my previous works, could be considered as meta-works. I want to give users/visitors/spectators occasions to reflect on what is containing, surrounding, holding or hosting a representation.
<br><br>
On the other hand, I have been more recently attached to the idea of reversing the desktop metaphor. The desktop metaphor refers to the terms and objects that the Web borrowed from the physical world in order to make its own concepts more familiar and intelligible to its users. Now that these technologies are largely democratized and widely spread in modern society, people may have a clearer understanding and more concrete experiences of digital/Web interfaces. Museums, hotels, houses, car interiors and restaurants are themselves becoming more and more comparable to digital interfaces, where everything is optimized and where our behaviors, actions and even inactions are being detected and converted into commands in order to offer a more customized (and lucrative) experience to each of us. In that sense, we are getting closer to becoming users of our own interfaced/augmented physical realities. By creating an exhibition space explicitly merging the concepts of the digital Web interface with the concept of the exhibition space, I wish to create a specific space dedicated to the experience of cybernetics, and to questioning what the future of the exhibition space could be. It is also about asking and displaying what the vulnerabilities of such technologies are, technologies that we sometimes tend to glorify or demonize. In that sense, the restitution of this exhibition space will intentionally leave in bugs, glitches and other accidents that may have been encountered in the making of this work.
<br><br>
Finally, it is about putting together two layers of reality that are too often strictly opposed/separated (IRL vs. Online). This is about making the experience of their ambiguities, similarities and differences. It is about reconsidering their modalities by making them reflect on each other, and making the user/spectator/visitor reflect on their own agencies inside of them. (see: [[Martin_(XPUB)-thesis_outline#II._Reversing_the_desktop_metaphor|Reversing the desktop metaphor]])


===<p style="font-family:helvetica">Who can help you?</p>===


* About the overall project
# Stephane Pichard, ex-teacher and ex-production tutor, for advice and help with scenography
# Emmanuel Cyriaque, my ex-teacher and writing tutor, for advice and help contextualizing my work

* About Arduino
# XPUB Arduino Group
# Dennis de Bel
# Aymeric Mansoux
# Michael Murtaugh

* About creating the physical elements
# Wood station (for the movable walls)
# Metal station (for the rails)
# Interaction station (for Arduino/Raspberry Pi assistance)

* About theory/writing practice
# Rosa Zangenberg: ex-student in art history and media at Leiden University
# Yael: ex-student in philosophy, getting started with curatorial practice and writing about the challenges/modalities of the exhibition space. Philosophy of the media (?)

* About finding an exhibiting space
# Leslie Robbins


===<p style="font-family:helvetica">Relation to previous practice</p>===


During the first part of my previous studies, I really started being engaged in questioning the media by making a small online reissue of Raymond Queneau's book <i>Exercices de Style</i>. In this issue, called <i>Incidences Médiatiques</i>, the user/reader was encouraged to explore Queneau's 99 different ways of telling the same story by putting themselves in at least 99 different reading contexts. In order to suggest a more non-linear reading experience, reflecting on the notions of context, perspective and point of view, the user could unlock and read these stories by zooming in or out of the Web window, resizing it, changing the browser, going to a different device, etc. As part of my previous graduation project, I wanted to reflect on the status of networked writing and reading by programming my thesis in the form of a Web-to-Print website. Subsequently, this website was translated into physical space as a printed book, a set of meta flags, and a series of installations displayed in a set of exhibition rooms that followed the online structure of the thesis (home page, index, parts 1-2-3-4) [https://martinfoucaut.com/ESPACES-MEDIATIQUES-EN Project link]. It was my first attempt to create a physical interface inside an exhibition space, but it focused on structure and non-linear navigation. As a first-year student of Experimental Publishing, I continued to work in that direction by eventually creating a [https://issue.xpub.nl/13/TENSE/ meta-website] making visible the html <meta> tags in an essay. I also worked on a [[SPECIAL_ISSUE_14_MARTIN_BOARD#GAME_PROTOTYPING|geocaching pinball]] highlighting invisible Web events, as well as a [[User:Martin#XPUB_Festival:_window.open_.2B_Civil_Entertainment_Sirens_.28audiovisual_performance.29|Web oscillator]] inspired by the body sizes of analog instruments, whose amplitude and frequency range were directly related to the user's device screen size.


[[File:Incidences médiatiques .gif|250px|thumb|left|Incidences Médiatiques <br><b>click to watch GIF</b>]]
===<p style="font-family:helvetica">Relation to a larger context</p>===


With the growing presence of digital tools in all aspects of our lives, people may now have more concrete experiences of digital/Web interfaces than of physical space. The distinctions between the physical and virtual worlds are being blurred, as they gradually tend to affect and imitate each other, create interdependencies, and translate our behaviors into informational units (data). Public spaces, institutions and governments are gradually embracing these technologies and explicitly promoting them as ways to offer us more efficient, easy-to-use, safer and more customizable services. However, we could also see these technologies as implicit political tools, playing with dynamics of visibility and invisibility in order to assert power and influence over publics and populations.
In a context where our physical reality is turning into a cybernetic reality, my aim is to observe and speculate on how mediating technologies could affect our modes of representation inside exhibition spaces, as much as to ask how they could redefine the agencies, behaviors and circulations of their visitors. In order to do so, it will also be important to place this project within the framework of the history of the exhibition space.
<br><br>
Curatorial Practice / New Media Art / Information Visualization / Software Art / Institutional Critique / Human Sciences / Cybernetics  


===<p style="font-family:helvetica">Key References</p>===

Revision as of 11:08, 21 November 2021


Graduate proposal guidelines

What do you want to make?

I want to build a dystopian cybernetic exhibition space reflecting on the increasing presence of the digital/virtual in our culture. This work will be speculating on how the modes of representation, inside the exhibition spaces, as well as the agencies, behaviors and circulations of its visitors could be affected by the growing translation of our physical/digital behaviors into informational units . The idea is to make use of digital technologies (ultrasonic sensors, microcontrollers, screens) and get inspired by the inherent mechanisms of the Web digital interfaces (responsive, Web events, @media queries) in order to create an exhibition space explicitly willing to offer a customizable perspective to its visitors. In this regard, the number of visitors, their position within the space, their actions or inactions as well as their movements and trajectories will be mapped (made interdependent) to various settings of the space itself, such as the room size, the lighting, the audio/sound, the information layout and format, etc.

In order to enlighten the invisible, silent and often poorly considered dynamics that can co-exist in-between both digital and physical spaces, the data captured inside of this space will be displayed on screens and be the main content of the exhibition. Ultimately, the graphic properties of this data (typography, layout, font-size, letter-space, line-height screen luminosity) will also be affected by the indications given by these same information units.

Far from wanting to glorify the use of technology or to represent it as an all-powerful evil, this space will be subject to accidents, unforeseen, bugs, dead zones and glitches, among some of them will have been intentionally left there.

How do you plan to make it?

Working with an Arduino Mega and P5.js, my aim is to start from the smallest and simplest prototype and gradually increase its scale/technicality until reaching human/architectural scale (see: prototyping).

Once an exhibition space has been determined for the assessment, I will introduce my project to the wood and metal stations of the school in order to get help building at least one mobile wall fixed on a rail system. This wall will include handle(s) on the interior side, allowing visitors to reduce or expand the size of the space (by pushing or pulling the wall) within a minimum and maximum range (estimated between 5 m² and 15 m²). On the exterior side of this wall, at least one ultrasonic sensor will be implemented in order to determine the surface of the room in real time (see schema). With the help of an array of ultrasonic sensors placed on the interior of the four surrounding walls, the space will be mapped into an invisible grid that will detect the exact position of the visitor(s) in real time, as well as their number. With an extensive use of other sensors, such as temperature sensors, light sensors and motion sensors, more information will be gathered and assigned to specific parameters of the exhibition display.
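The real-time surface computation could work roughly as follows, assuming one HC-SR04-style ultrasonic sensor on the moving wall measuring its distance to the opposite fixed wall. The fixed width of 2.5 m is a placeholder, not a measured value:

```javascript
const SPEED_OF_SOUND = 343; // m/s in air at roughly 20 degrees C

// Echo pulse duration (seconds) -> distance in metres.
// The pulse travels to the obstacle and back, hence the division by 2.
function pulseToDistance(durationS) {
  return (durationS * SPEED_OF_SOUND) / 2;
}

// Room surface, assuming the wall slides along one axis while the
// perpendicular dimension (here 2.5 m) stays fixed.
function roomArea(wallDistanceM, fixedWidthM = 2.5) {
  return wallDistanceM * fixedWidthM;
}
```

On the Arduino side the duration would come from timing the echo pin; the computation itself is the same whether it runs on the microcontroller or in P5.js after serial transfer.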

One or several screens placed in the space itself will display the data captured by the various sensors. Serial communication will make it possible to transfer the information gathered by the Arduinos to P5.js, allowing variable displays of these units. Resizing the space will specifically affect the lighting of the space, the luminosity of the screens and the size of the information displayed. The number of visitors will affect the number of active screens as well as the room temperature display. The position in the room will trigger different voice and/or textual instructions if the visitor is not positioned in a meaningful way toward the displayed contents. (Ref: Speaking wall (https://www.muhka.be/programme/detail/1405-shilpa-gupta-today-will-end/item/30302-speaking-wall))
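On the P5.js side, the serial link could deliver one CSV line per reading, which is then parsed and turned into display parameters. The protocol (field names, order, ranges) sketched here is an assumption, not a defined format:

```javascript
// Hypothetical serial protocol: the Arduino prints one line per reading,
// e.g. "visitors,areaM2,temperatureC".
function parseReading(line) {
  const [visitors, area, temp] = line.trim().split(",").map(Number);
  return { visitors, area, temp };
}

// Derive display parameters from a reading: a smaller room gives dimmer
// screens and smaller type, as described above. Ranges are illustrative.
function displayParams(reading) {
  const t = (reading.area - 5) / 10;        // 0 at 5 m2, 1 at 15 m2
  return {
    luminosity: Math.round(50 + 205 * t),   // 50-255
    fontSize: Math.round(12 + 36 * t),      // 12-48 px
    activeScreens: Math.min(reading.visitors, 3),
  };
}
```

In the installation the line would arrive through a serial bridge (e.g. p5.serialport or the browser's Web Serial API) instead of being passed in directly; the parsing and mapping logic stays the same.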

In order to allow myself to take a step back from the making of this project, I will take advantage of the different venues organized by XPUB2 and create mini-workshops related to it (see venue1) (see: simulation).

Sensor Test VS Elastic Space
Sensor Wall 01
SensorMediaQueries




What is your timetable?

1st semester: Prototyping mainly with Arduino, connecting Arduino to P5.js, finding a space to set up the installation for the final assessment

  • 1st prototype: mini Arduino + light sensor (understanding Arduino basics / connecting a sensor to a servo motor)
  • 2nd prototype: Arduino Uno + ultrasonic sensor (working with ultrasonic sensors / displaying the sensor values on the serial monitor)
  • 3rd prototype: Arduino Uno + ultrasonic sensor + LCD screen (working with values displayed on a small digital screen)
  • 4th prototype: Arduino Uno + ultrasonic sensor + 2 LEDs (creating distance-range value detection, and triggering different lights depending on the distance detected)
  • 5th prototype: Arduino Uno + 3 ultrasonic sensors + 3 LEDs (mapping distance-range values in a simple 3x3 grid)
  • 6th prototype: Arduino Uno + 3 ultrasonic sensors + 12 LEDs (assigning a signal to each position of a person inside the grid by adding more LEDs)
  • 7th prototype: Arduino Uno + 3 ultrasonic sensors + 1 buzzer + 1 LCD + 1 potentiometer (adding audio signals to the range-value detection / changing the luminosity of the screen with a potentiometer)
  • 8th prototype: Arduino Uno + 3 ultrasonic sensors + 1 buzzer + 1 LCD + 1 potentiometer + mini breadboard (separating the sensors from each other)
  • 9th prototype: Arduino Mega + 21 LEDs + 7 sensors + buzzer + potentiometer + LCD (expanding the prototype to human scale with a 7x3 grid / assigning each position within the grid to a specific LED and buzzer signal)
  • 10th prototype: Arduino Mega + 7 sensors + LCD + 3 buzzers + P5.js (allowing multiple sound signals at the same time if 2 or more people are in the grid)
  • 11th prototype: Arduino Mega + 7 sensors + LCD + P5.js (connecting the prototype to a Web page via serial communication, changing the size of a circle with the distance sensors)
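The 7x3 grid mapping used from the 9th prototype onward can be sketched as follows: each of the seven sensors along one wall corresponds to a column, and its measured distance, quantized into three bands, gives the row. The band width of 1 m is an assumed value:

```javascript
// Column = index of the sensor along the wall (0..6);
// row = distance band (0 = near, 2 = far), clamped to the 3 rows.
function gridCell(sensorIndex, distanceM, bandM = 1.0) {
  const row = Math.min(Math.floor(distanceM / bandM), 2);
  return { col: sensorIndex, row };
}

// A single person is assumed to occupy the cell of the sensor
// reporting the shortest distance.
function occupiedCell(distances) {
  const col = distances.indexOf(Math.min(...distances));
  return gridCell(col, distances[col]);
}
```

Detecting two or more simultaneous visitors (as in the 10th prototype) would instead look for every sensor whose reading drops below its empty-room baseline, rather than only the minimum.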

——————————— NOW —————————————————————————————————————————

  • Upcoming: Arduino Mega + 7 sensors + P5.js (displaying live values on a screen, and changing the display parameters depending on the values themselves)
  • Upcoming: Arduino Mega + 7 sensors + P5.js (creating voice commands/instructions depending on the visitors' position)
  • Upcoming: Arduino Mega + 7 sensors + LCD + P5.js (playing sounds and affecting pitch/tone depending on the position on a Web page)
  • Optional: Arduino Uno + ESP8266 (WiFi) (transmitting and/or controlling values from the Arduino to the computer and vice versa via a WiFi transmitter / no longer necessary, since I found another way to do this via USB serial communication)


2nd semester: Finding what will be the graduation space, building the mobile wall, and translating the setup/installation to human/spectator scale.

  • Show the prototype and schemas of the wall to the wood and metal workshops in order to get advice until final validation to build (starting to build the physical elements)
  • Search, find and validate what will be the space used for the installation during the graduation.
  • Start building the movable wall, taking into account the characteristics of the space used for graduation.
  • Implement the sensors inside the movable wall, and the other devices in the fixed space.

Why do you want to make it?

At the origin of this project, and of previous works over the past years, lies the desire to make the invisible visible. In my opinion, the better a medium mediates, the more it becomes invisible and unconsidered. This paradox stimulates a need to reflect on and highlight the crucial role of the media in the way we create, receive, perceive and interpret a content, a subject or a work, but also in the way we behave and circulate in relation to it/them. It is probably not so common to appreciate an artwork for its frame or for the quality of the space in which it is displayed. It is however more common to let ourselves (as spectators/observers) be absorbed by the content itself and to naturally disregard all mediating technologies. This is why I often try to « mediate the media » (see: Mediatizing the media), which means putting the media at the center of our attention, transforming it into the subject itself. In that sense my graduation project, as well as some of my previous works, could be considered as meta-works. I want to give users/visitors/spectators occasions to reflect on what is containing, surrounding, holding or hosting a representation.

On the other hand, I have more recently been attached to the idea of reversing the desktop metaphor. The desktop metaphor refers to the terms and objects that the Web borrowed from the physical world in order to make its own concepts more familiar and intelligible to its users. Now that these technologies are largely democratized and widely spread in modern society, people may have a clearer understanding and more concrete experience of digital/Web interfaces. Museums, hotels, houses, car interiors and restaurants are themselves becoming more and more comparable to digital interfaces, where everything is optimized, and where our behaviors, actions and even inactions are being detected and converted into commands in order to offer a more customized (and lucrative) experience to each of us. In that sense, we are getting closer to becoming users of our own interfaced/augmented physical realities. By creating an exhibition space explicitly merging the concept of the digital Web interface with the concept of the exhibition space, I wish to create a specific space dedicated to the experience of cybernetics, and to questioning what the future of the exhibition space could be. It is also about asking and displaying what the vulnerabilities of these technologies are, technologies that we sometimes tend to glorify or demonize. In that sense, the restitution of this exhibition space will intentionally leave in the bugs, glitches and other accidents that may have been encountered in the making of this work.

Finally, it is about putting together two layers of reality that are too often clearly opposed/separated (IRL vs. online). It is about making the experience of their ambiguities, similarities and differences; about reconsidering their modalities by making them reflect on each other, and making the user/spectator/visitor reflect on their own agency inside of them.

Who can help you?

  • About the overall project

1. Stephane Pichard, ex-teacher and ex-production-tutor, for advice and help with scenography
2. Emmanuel Cyriaque, my ex-teacher and writing tutor, for advice and help contextualizing my work

  • About Arduino

1. XPUB Arduino Group
2. Dennis de Bel
3. Aymeric Mansoux
4. Michael Murtaugh

  • About creating the physical elements:

1. Wood station (for the movable walls)
2. Metal station (for the rails)
3. Interaction station (for Arduino/Raspberry Pi assistance)

  • About theory/writing practice:

1. Rosa Zangenberg: ex-student in art history and media at Leiden University.
2. Yael: ex-student in philosophy, getting started with curatorial practice and writing about the challenges/modalities of the exhibition space. Philosophy of the media (?)

  • About finding an exhibiting space:

1. Leslie Robbins

Relation to previous practice

During the first part of my previous studies, I first became engaged in questioning the media by making a small online reissue of Raymond Queneau's book Exercices de Style. In this issue, called Incidences Médiatiques, the user/reader was encouraged to explore Queneau's 99 different ways of telling the same story by placing themselves in at least 99 different reading contexts. In order to suggest a more non-linear reading experience, reflecting on the notions of context, perspective and point of view, the user could unlock and read these stories by zooming in or out of the Web window, resizing it, changing the browser, going on a different device, etc. As part of my previous graduation project, I wanted to reflect on the status of networked writing and reading by programming my thesis in the form of a Web-to-print website. Subsequently, this website was translated into the physical space as a printed book, a set of meta flags, and a series of installations displayed in a set of exhibition rooms that followed the online structure of the thesis (home page, index, parts 1-2-3-4) (Project link). It was my first attempt to create a physical interface inside an exhibition space, but one focused on structure and non-linear navigation. As a first-year student of Experimental Publishing, I continued to work in that direction by creating a meta-website making the html <meta> tags of an essay visible. I also worked on a geocaching pinball highlighting invisible Web events, as well as a Web oscillator inspired by the body sizes of analog instruments, whose amplitude and frequency range were directly related to the screen size of the user's device.

Incidences Médiatiques
Special issue 13 - Wor(l)ds for the Future
Screen recording montage of Tense
Media Spaces - graduation project

Relation to a larger context

With the growing presence of digital tools in all aspects of our lives, people may now have more concrete experiences of digital/Web interfaces than of physical space. The distinctions between the physical and virtual worlds are being blurred, as they gradually tend to affect and imitate each other, create interdependencies, and translate our behaviors into informational units (data). Public spaces, institutions and governments are gradually embracing these technologies and explicitly promoting them as ways to offer us more efficient, easy-to-use, safer and customizable services. However, we could also see these technologies as implicit political tools, playing with dynamics of visibility and invisibility in order to assert power and influence over publics and populations. In a context where our physical reality is turning into a cybernetic reality, my aim is to observe and speculate on how mediating technologies could affect our modes of representation inside exhibition spaces, as much as to ask how they could redefine the agencies, behaviors and circulations of their visitors. In order to do so, it will also be important to place this project within the framework of the history of the exhibition space.

Curatorial Practice / New Media Art / Information Visualization / Software Art / Institutional Critique / Human Sciences / Cybernetics

Key References