Martin (XPUB) - project proposal

[[File:SensorSpace.gif|300px|thumb|left|Sensor Test VS Elastic Space<br><b>Click to watch</b>]]
[[File:Sensor Wall 01.png|300px|thumb|right|Sensor Wall 01]]
[[File:SensorMediaQueries 01.gif|300px|thumb|center|SensorMediaQueries<br><b>Click to watch</b>]]
[[File:ScreenPortalFront.jpg|300px|thumb|left|ScreenPortalFront]]
[[File:ScreenPortalback.jpg|300px|thumb|right|ScreenPortalback]]
 
<br><br><br>



Revision as of 22:52, 6 December 2021


Graduate proposal guidelines

What do you want to make?

I want to build a dystopian cybernetic exhibition space reflecting on the increasing presence of the digital/virtual in our culture. This work will speculate on how the modes of representation inside exhibition spaces, as well as the agencies, behaviors and circulations of their visitors, could be affected by the growing translation of our physical/digital behaviors into informational units. The idea is to make use of digital technologies (ultrasonic sensors, microcontrollers, screens) and to draw inspiration from the inherent mechanisms of Web interfaces (responsiveness, Web events, @media queries) in order to create an exhibition space that explicitly offers a customizable perspective to its visitors. In this regard, the number of visitors, their position within the space, their actions or inactions, as well as their movements and trajectories, will be mapped (made interdependent) to various settings of the space itself, such as the room size, the lighting, the audio/sound, the information layout and format, etc.

In order to shed light on the invisible, silent and often poorly considered dynamics that co-exist between digital and physical spaces, the data captured inside this space will be displayed on screens and will be the main content of the exhibition. Ultimately, the graphic properties of this data (typography, layout, font size, letter spacing, line height, screen luminosity) will also be affected by the indications given by these same information units.
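This coupling between captured values and their own graphic rendering can be sketched as a simple mapping, written here in the JavaScript idiom of P5.js. All parameter ranges below are assumptions for illustration, not values from the proposal:

```javascript
// lerpMap mirrors the behavior of p5.js's map(value, inMin, inMax, outMin, outMax):
// it linearly rescales a value from one range to another.
function lerpMap(value, inMin, inMax, outMin, outMax) {
  return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}

// Hypothetical couplings: a light-sensor reading (0-1023) drives the
// line height of the displayed data, and the visitor count drives the
// letter spacing (capped so the text stays legible).
function typographyFor(lightReading, visitorCount) {
  return {
    lineHeight: lerpMap(lightReading, 0, 1023, 1.2, 2.0),
    letterSpacing: Math.min(visitorCount, 10) * 0.5, // in px
  };
}

console.log(typographyFor(0, 3)); // -> { lineHeight: 1.2, letterSpacing: 1.5 }
```

In a P5.js sketch, these values would then feed `textSize()`, `textLeading()` and similar drawing functions each frame.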

Far from wanting to glorify the use of technology or to represent it as an all-powerful evil, this space will also be subject to accidents, bugs, dead zones and glitches, some of which might have been intentionally left there.

How do you plan to make it?

While working with an Arduino Mega and P5.js, my aim is to start from the smallest and simplest prototype and gradually increase its scale/technicality until reaching human/architectural scale (see: prototyping).

Once an exhibition space has been determined for the assessment, I will introduce my project to the wood and metal stations of the school in order to get help building at least one mobile wall fixed on a rail system. This wall will include handle(s) on the interior side, allowing visitors to reduce or expand the size of the space (by pushing or pulling the wall) within a minimum and maximum range (estimated between 5m2 and 15m2). On the exterior side of this wall, at least one ultrasonic sensor will be implemented in order to determine the surface of the room in real time (see schema). With the help of an array of ultrasonic sensors placed on the interior of the 4 surrounding walls, the space will be mapped into an invisible grid that will detect the exact position of the visitor(s) in real time, as well as their number. With an extensive use of other sensors, such as temperature sensors, light sensors and motion sensors, more information will be gathered and assigned to specific parameters of the exhibition display.
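The invisible grid could work roughly as follows, sketched in the JavaScript used on the P5.js side. Here each sensor faces down one column of the grid and its distance reading selects the row; the row depth and out-of-range value are assumptions, only the 7x3 grid size comes from the later prototypes:

```javascript
const ROW_DEPTH_CM = 100; // assumed depth of one grid row
const ROWS = 3;           // 7x3 grid: 7 sensors, 3 rows per column

// One ultrasonic reading (cm) -> the row index of the person in that
// column, or -1 when the reading is beyond the last row (column empty).
function readingToRow(distanceCm) {
  if (distanceCm >= ROWS * ROW_DEPTH_CM) return -1;
  return Math.floor(distanceCm / ROW_DEPTH_CM); // 0 = row closest to the wall
}

// One row index per column: the grid snapshot for this frame,
// from which both positions and the visitor count can be derived.
function mapGrid(readings) {
  return readings.map(readingToRow);
}

// A visitor ~150 cm away in column 2, another ~20 cm away in column 5:
console.log(mapGrid([400, 400, 150, 400, 400, 20, 400]));
// -> [ -1, -1, 1, -1, -1, 0, -1 ]
```

Counting the entries that are not -1 then gives the number of detected visitors.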

One or several screens will be placed in the space itself, displaying the data captured by the various sensors. Serial communication will allow the information gathered by the Arduinos to be transferred to P5.js, allowing variable displays of the units. Resizing the space will specifically affect the lighting of the space, the luminosity of the screens and the size of the information displayed. The number of visitors will affect the number of active screens as well as the room temperature display. The position in the room will trigger different voice instructions and/or textual instructions if the visitor is not placed in a meaningful way toward the displayed contents. (Ref: Speaking Wall - Shilpa Gupta, 2009-2010)
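A minimal sketch of the receiving side, assuming a simple CSV-like wire format (the proposal does not fix one): the Arduino would print one line per reading over USB serial, and the P5.js side would parse each line into named values before deriving display parameters from them, much like a CSS @media query switching layouts at breakpoints:

```javascript
// Assumed wire format (illustrative only): the Arduino prints lines
// such as "dist:240,visitors:2,temp:21" over the serial port.
function parseFrame(line) {
  const frame = {};
  for (const pair of line.trim().split(',')) {
    const [key, value] = pair.split(':');
    frame[key] = Number(value);
  }
  return frame;
}

// The measured room depth (cm) selects a "breakpoint" for the display
// type size; the thresholds are placeholders, not values from the proposal.
function fontSizeFor(distCm) {
  if (distCm < 200) return 16; // compressed room: compact text
  if (distCm < 400) return 24;
  return 36;                   // expanded room: large display type
}

const frame = parseFrame('dist:240,visitors:2,temp:21');
console.log(frame.visitors);          // -> 2
console.log(fontSizeFor(frame.dist)); // -> 24
```

However the serial bridge itself is implemented, the parsing and breakpoint logic stays the same on the P5.js side.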

In order to allow myself to take a step back from the making of this project, I will take advantage of the different venues organized by XPUB2 and create mini-workshops that relate more closely to it (see venue1) (see: simulation)





What is your timetable?

1st semester: Prototyping mainly with Arduino, connecting Arduino to P5.js, finding a space to set up the installation for the final assessment

  • 1st prototype: mini Arduino + light sensor (understanding Arduino basics / connecting a sensor to a servo motor)
  • 2nd prototype: Arduino Uno + ultrasonic sensor (working with ultrasonic sensors / displaying the sensor's values on the serial monitor)
  • 3rd prototype: Arduino Uno + ultrasonic sensor + LCD screen (working with values displayed on a small digital screen)
  • 4th prototype: Arduino Uno + ultrasonic sensor + 2 LEDs (creating distance-range detection, and triggering different lights depending on the distance detected)
  • 5th prototype: Arduino Uno + 3 ultrasonic sensors + 3 LEDs (mapping distance-range values in a simple 3x3 grid)
  • 6th prototype: Arduino Uno + 3 ultrasonic sensors + 12 LEDs (assigning a signal to each position of a person inside the grid by adding more LEDs)
  • 7th prototype: Arduino Uno + 3 ultrasonic sensors + 1 buzzer + 1 LCD + 1 potentiometer (adding audio signals to the range-value detection / changing the luminosity of the screen with a potentiometer)
  • 8th prototype: Arduino Uno + 3 ultrasonic sensors + 1 buzzer + 1 LCD + 1 potentiometer + mini breadboard (separating the sensors from each other)
  • 9th prototype: Arduino Mega + 21 LEDs + 7 sensors + buzzer + potentiometer + LCD (expanding the prototype to human scale with a 7x3 grid / assigning each position within the grid to a specific LED and buzzer signal)
  • 10th prototype: Arduino Mega + 7 sensors + LCD + 3 buzzers + P5.js (allowing multiple sound signals at the same time if 2 or more people are in the grid)
  • 11th prototype: Arduino Mega + 7 sensors + LCD + P5.js (connecting the prototype to a Web page via serial communication, changing the size of a circle with the distance sensors)

——————————— NOW —————————————

  • Upcoming - Arduino Mega + 7 sensors + P5.js (displaying live values on a screen, and changing the display parameters depending on the values themselves)
  • Upcoming - Arduino Mega + 7 sensors + P5.js (creating voice commands/instructions depending on the visitors' position)
  • Upcoming - Arduino Mega + 7 sensors + LCD + P5.js (playing sounds and affecting their pitch/tone depending on the position, on a Web page)
  • Optional: Arduino Uno + ESP8266 (WIFI) (transmitting and/or controlling values from the Arduino to the computer and vice versa via a WIFI transmitter / no longer necessary, since I found another way to do this via USB serial communication)
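The range-detection step of the 4th prototype (one ultrasonic reading selecting which LED to light) can be sketched like this; the distance thresholds are placeholders, since the proposal does not specify them:

```javascript
// One ultrasonic reading (cm) -> which of two LEDs to light,
// or none when nobody is within range (thresholds are assumed).
function zoneFor(distanceCm) {
  if (distanceCm < 30) return 'near';  // e.g. light the first LED
  if (distanceCm < 100) return 'far';  // e.g. light the second LED
  return 'none';                       // out of range: both LEDs off
}

console.log(zoneFor(20));  // -> near
console.log(zoneFor(80));  // -> far
console.log(zoneFor(250)); // -> none
```

The later prototypes reuse this same banding logic, only with more sensors and more output signals attached to each band.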


 2nd semester: Find what will be the graduation space, build the mobile wall, implement the screen displays, and continue working with the Arduinos

  • Show the prototype and schemas of the wall to the wood and metal workshops in order to get advice until final validation to build (starting to build the physical elements)
  • Search for, find and validate the space that will be used for the installation during the graduation.
  • Start building the movable wall, taking into account the characteristics of the space used for graduation.
  • Implement the sensors inside the movable wall, and the other devices in the fixed space.

Why do you want to make it?

At the origin of this project, and of previous works over the past years, lies the desire to make the invisible visible. In my opinion, the better a medium mediates, the more invisible and unconsidered it becomes. This paradox stimulates a need to reflect on and highlight the crucial role of media in the way we create, perceive, receive and interpret a content, a subject or a work, but also in the way we behave and circulate in relation to them. It is probably not so common to appreciate an artwork for its frame or for the quality of the space in which it is displayed. It is, however, more common to let ourselves (as spectators/observers) be absorbed by the content itself and to naturally abstract away all mediating technologies. This is why I often try to « mediate the media » (see: Mediatizing the media), which means putting the media at the center of our attention, transforming it into the subject itself. In that sense, my graduation project, as well as some of my previous works, could be considered as meta-works. I want to give users/visitors/spectators occasions to reflect on what is containing, surrounding, holding or hosting a representation.

On the other hand, I have more recently become attached to the idea of reversing the desktop metaphor. The desktop metaphor refers to the terms and objects that the Web borrowed from the physical world in order to make its own concepts more familiar and intelligible to its users. Now that these interfaces are largely democratized and widespread in modern society, people may have a clearer understanding and more concrete experience of the digital/Web interface. Museums, hotels, houses, car interiors and restaurants are themselves becoming more and more comparable to digital interfaces, where everything is optimized, and where our behaviors, actions and even inactions are detected and converted into commands in order to offer a more customized (and lucrative) experience to each of us. In that sense, we are getting closer to becoming users of our own interfaced/augmented physical realities. By creating an exhibition space that explicitly merges the concepts of the digital Web interface with the concept of the exhibition space, I wish to create a specific space dedicated to the experience of cybernetics, and to questioning what the future of the exhibition space could be. It is also about asking and displaying what the vulnerabilities of such technologies are, technologies that we sometimes tend to glorify or demonize. In that sense, the restitution of this exhibition space will intentionally leave in the bugs, glitches and other accidents that may have been encountered in the making of this work.

Finally, it is about putting together two layers of reality that are too often strictly opposed/separated (IRL vs. online). It is about experiencing their ambiguities, similarities and differences. It is about reconsidering their modalities by making them reflect on each other, and making the user/spectator/visitor reflect on their own agency inside them.

Who can help you?

About the overall project

  1. Stephane Pichard, former teacher and production tutor, for advice and help about scenography/exhibition space
  2. Emmanuel Cyriaque, my former teacher and writing tutor, for advice and help to contextualize my work
  3. Manetta
  4. Michael

About Arduino

  1. XPUB Arduino Group (knowledge sharing)
  2. Dennis de Bel (ex-XPUB)
  3. Aymeric Mansoux

About the physical elements of the exhibition:

  1. Wood station (for movable walls)
  2. Metal station (for rail system)
  3. Interaction station (for arduino/P5.js)

About theory/writing practice:

  1. Rosa: former student in art history and media at Leiden University.
  2. Yael: former student in philosophy, getting started with curatorial practice and writing about the challenges/modalities of the exhibition space. Philosophy of the media (?)

About finding an exhibiting space:

  1. Leslie Robbins

Relation to previous practice

During the first part of my previous studies, I really started to engage with questioning the media by making a small online reissue of Raymond Queneau's book Exercices de Style. In this issue, called Incidences Médiatiques, the user/reader was encouraged to explore the author's 99 different ways of telling the same story by putting themselves in different online reading contexts. In order to suggest a more non-linear reading experience, reflecting on the notions of context, perspective and point of view, the user could unlock and read these stories by zooming in or out of the Web window, resizing it, changing the browser, switching to a different device, etc. As part of my previous graduation project, called Media Spaces, I wanted to reflect on the status of networked writing and reading by programming my thesis in the form of a Web-to-print website. Subsequently, this website was translated into physical space as a printed book, a set of meta flags, and a series of installations displayed in a set of exhibition rooms that followed the online structure of the thesis (home page, index, parts 1-2-3-4). It was my first attempt to create a physical interface inside an exhibition space, though focused on structure and non-linear navigation. As a first-year student of Experimental Publishing, I continued to work in that direction by creating a meta-website making the HTML <meta> tags of an essay visible. I also worked on a geocaching pinball game highlighting invisible Web events and inviting users to drift through the city of The Hague to find hotspots. More recently, I conceived and performed on a Web oscillator inspired by the body sizes of analog instruments, whose amplitude and frequency range were directly related to the user's device screen size.

Incidences Médiatiques
Special issue 13 - Wor(l)ds for the Future
Screen recording montage of Tense
Media Spaces - graduation project
Web oscillator

Relation to a larger context

With the growing presence of digital tools in all aspects of our lives, people may now have more concrete experience of digital/Web interfaces than of physical space. The distinctions between the physical and virtual worlds are being blurred, as they gradually tend to affect and imitate each other, create interdependencies, and translate our behaviors into informational units (data). Public spaces, institutions and governments are gradually embracing these technologies and explicitly promoting them as ways to offer us more efficient, easier-to-use, safer and more customizable services. However, we could also see these technologies as implicit political tools, playing with dynamics of visibility and invisibility in order to assert power and influence over publics and populations. In a context where our physical reality is turning into a cybernetic reality, my aim is to observe and speculate on how mediating technologies could affect our modes of representation inside exhibition spaces, as much as to ask how they could redefine the agencies, behaviors and circulations of their visitors. In order to do so, it will also be important to place this project in the historical framework of the exhibition space and of user interfaces, and to observe at what point they might be merging.

Curatorial Practice / New Media Art / Information Visualization / Software Art / Institutional Critique / Human Sciences / Cybernetics

Selected References