Riccardo's second draft

From XPUB & Lens-Based wiki
Revision as of 18:46, 20 March 2024 by Rick (talk | contribs)

80 words too much

--

This year I have been researching the ways of perceiving and the ways of deploying colours and light. I did so through different approaches.

In the work For Seven Times We Woke Up, developed for the Eye Research Labs, I have been studying the technological staging of colours - as editable materialities in the context of software editing and software development - and their effects on perception. I started this process after finding, during a bike tour in Rotterdam organised by Leslie Robbins, Becoming Invisible, a book by Ian Whittlesea in which a meditation exercise based on breathing techniques and the imaginative synthesis of clouds of colour allows the practitioner to reach the esoteric purpose of becoming invisible. At the age of 11 I had a similar experience. At that time my mother was practising a variation of this type of meditation aimed at healing, successfully she later said, a disease. Under her guidance I experienced the exercise myself, which, I recall, resulted in a feeling of deep relaxation and strong physical awareness.

With the idea of translating Ian Whittlesea's book into a moving-image piece, I first tried to have on-screen text that would give directions on the steps to follow, but after trying this route I decided to strip the piece of any direct instruction and to focus solely on the evolution of a colour choreography and the composition of a layered sonic drone. Rather than giving the audience the feeling of following a meditation, I decided to work toward the presentation of a time-based experience - an immersive journey that I would describe in terms of ‘atmospheric storytelling’.

Another instance of this year's research on light and colours is the film Yellow Message, composed of three shots based on the observation of atmospheric changes. How does light change over the course of an hour? How is it refracted when leaves are moved by the wind? How does it meet the fabric of a flag upon which the shadows of a nearby tree's branches are cast? This film combines footage that I took in Sansepolcro (IT), during a period of study to learn the practice of flag waving, with two other recordings made in a little swamp in the south of Rotterdam. The three scenes that form the short movie are formally associated by the presence of the colour yellow, while the main subject is light and its ways of interacting with different materialities.

Through these two movies I feel I am exploring concepts such as opacity and transparency, and how these physical dynamics of overlaying and overlapping construct images. I feel these to be metaphors for an approach oriented toward the appreciation of phenomena that take place all around us, from site-specific locations to synthetic landscapes, from the light emitted by the sun to the light rendered by computational engines. This approach enables me to celebrate rituals: the coming together in a place to watch and listen.

Both light and sound act within a spectrum of waves, the first electromagnetic, the second acoustic. As a composer, in the past years I have been working with sound by developing my own software and implementing digital signal processing techniques such as filters, reverbs, resonators and distortions. This year I developed a new sound performance (you can listen to excerpts of it in the mp3 file Live Electronics @ Sonoscopia) where I treat sound recordings of a French horn played by the musician Luca Medioli as if they were light waves filtered by prisms. Inspired by the music of composer Phill Niblock, based on sampling musicians' recordings, and more generally by drone music, I am expanding my research toward audio and visual explorations that create slowly evolving atmospheres.
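As a minimal sketch of the filter family mentioned above - not the code of the actual performance software, and with function and parameter names invented for illustration - a one-pole low-pass filter can be written in a few lines of Python:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Smooth a signal by feeding back a fraction of the previous output."""
    # Feedback coefficient derived from the cutoff frequency:
    # the lower the cutoff, the closer b gets to 1 and the heavier the smoothing.
    b = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, prev = [], 0.0
    for x in samples:
        prev = (1.0 - b) * x + b * prev
        out.append(prev)
    return out

# Feeding in a unit impulse: the filter's response decays exponentially.
impulse = [1.0] + [0.0] * 9
response = one_pole_lowpass(impulse, cutoff_hz=1000.0)
```

Reverbs, resonators and distortions can be built in the same spirit: per-sample transformations chained into a processing graph.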

In similar ways, this year, with For Seven Times We Woke Up, I have been working with light as I would with sound: through digital filters that alter the perception of an original image. This workflow starts with the mathematical effort of putting together the pieces of code that enable me to manipulate and alter sounds and images in expressive ways. After this development stage, I explored combinations of these techniques and composed the piece by tuning the parameters that define the system's behaviour. In this way I was able to create a score composed of different settings, and to move between them through smooth evolutions of automations and movements. I first started composing and building instruments in this way back in 2018, when collaborating with choreographer Ariella Vidach on dance pieces that involved, besides the dancers, a combination of different media - sound, light, video, robots - that had to be synchronised in order to reach predefined scenes, as well as to be able to last indefinitely, since the dancers were not machines and could not be as precise as the other media. Developing software to process media, I am still embedding in my works dynamics that are not totally controllable. The system that I developed for For Seven Times We Woke Up combines these two temporalities (predictable and unpredictable), involving both generative parameters (the movement of the spheres is generated by mathematical functions that distribute partially-random values) and precise automations that follow a predetermined path.
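The combination of the two temporalities can be illustrated with a hypothetical Python sketch - the names, the sine path and the jitter range are invented for illustration, not taken from the piece:

```python
import math
import random

def sphere_position(t, seed=None):
    """Blend a precise automation path with a partially-random offset.

    The deterministic part follows a predetermined sine path (the
    "score"); the generative part adds a bounded random deviation, so
    the motion keeps its overall shape without ever being identical.
    """
    rng = random.Random(seed)
    # Predictable temporality: a scripted oscillation over time t.
    automated = math.sin(2.0 * math.pi * 0.1 * t)
    # Unpredictable temporality: partially-random jitter within bounds.
    jitter = rng.uniform(-0.2, 0.2)
    return automated + jitter

positions = [sphere_position(t, seed=t) for t in range(10)]
```

Seeding the random generator makes a given run reproducible while leaving unseeded runs free to drift, which is one simple way to hold predictable and unpredictable behaviour in the same system.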

Exporting high-quality images proved particularly challenging, as the computation required by high-resolution settings is too heavy for a computer to produce, let's say, 24 frames per second in real time. For this reason I wrote my own code to render the video output in “non-real time”. This means: instead of generating 24fps or more and recording the result, the computer computes each frame taking all the time it needs for the frame to be properly exported. I could then combine the resulting series of frames inside a video editing software. This is a workflow that I have been developing for two years, and during the creation of this movie it reached a stage of usability that gives me a good base for further use and development.
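A minimal sketch of this non-real-time workflow, assuming a placeholder per-frame render function (all names, the file naming scheme and the format are illustrative, not the actual code):

```python
import os

def render_frame(frame_index):
    """Placeholder for an expensive per-frame computation."""
    # In a real system this would be a heavy rendering pass that may
    # take seconds or minutes; here it just returns dummy bytes.
    return bytes([frame_index % 256]) * 64

def export_sequence(num_frames, out_dir="frames"):
    """Render each frame for as long as it takes, then save it to disk.

    The numbered files can later be assembled at 24fps inside a video
    editor, decoupling render time from playback time.
    """
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i in range(num_frames):
        data = render_frame(i)  # no real-time constraint here
        path = os.path.join(out_dir, f"frame_{i:05d}.raw")
        with open(path, "wb") as f:
            f.write(data)
        paths.append(path)
    return paths

paths = export_sequence(3, out_dir="frames_demo")
```

The zero-padded numbering is what lets an editor (or a tool such as FFmpeg) read the folder back as an ordered image sequence.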

(connection with dance and theater, and the movement toward narration? Add reference to seminars!) Inspired by practices such as dance and music, I am interested in the movements between postures, and in the pace with which these changes are carried out: sometimes this leads to animated movies, other times to virtual photographs.

As a continuation of my research in the field of digital animation, and with the desire to combine it with a narrative structure, I am planning a project that brings all these aspects together; in particular I want to continue a research started with the movie Enter the ♥. It is an animated movie, presented last summer in Bruges as a looping installation, about the “Pentolaccia” tradition, better known as the Piñata. With this movie I wanted to talk about the story we hear so many times: there is something to gain by excavating value, or, there is something to loot by breaking containers. The Pentolaccia is a container filled with treats that is broken as part of a celebration that occurs in different cultures on different occasions. The essence of this practice is the celebration of abundance.
The movie that I produced ended up being a visual and sonic meditation, an experience without words or conveyed meaning, and I now feel I have the awareness and the tools required to tell the story behind it. (See the video for a reference of how it looks and sounds.) Three years ago I was watching my brother play a video game in which one of the main aims is to build up the power of your avatar by finding rare items you can wear, from weapons to armour and gems. The boss fights (fights against particularly strong characters) would resemble the ‘Pentolaccia’ celebration, as the body of the slain opponent, often monstrous, would spill coins and precious artefacts as much as graphic blood: treasures and treats all around, labelled with colour codes that highlight the rarity of these items. I am in the process of thinking how to approach this topic without framing video games as necessarily bad, but rather relating to our dreams of being heroes, doing great deeds and slaying monsters, and suggesting another approach: no need to break the jar, because it is beautiful; better to look at it. During the seminar with Cihad Caner I was inspired to challenge myself to approach narrative practices, to be critical about colonialism, and to get acquainted with artistic practices that take into account the origin and reproducibility of violence. During Kate Briggs's seminar I found our talks about Ursula K. Le Guin's “The Carrier Bag Theory of Fiction” relevant - about the need to gather bundles rather than cast arrows - and came to think of the Pentolaccia as an object that conveys this urge.
In the first half of this work, the relation between a mother and her kid playing video games is explored (see the script in the text folder for further explanations of the camera movement, the kind of relation between these two figures, and more), while the second half would be composed of an audiovisual experience based on colour and sound (an expansion of the video previously made). I imagine a work that is sad, serene, meaningful, boring and cathartic, speaking about our use of simulation engines and challenging the use we make of them.