Notes on Invisible Images

From XPUB & Lens-Based wiki

https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/


“The advent of screen-based media in the latter half of the 20th century wasn’t so different: cathode ray tubes and liquid crystal displays emitted light at frequencies our eyes perceive as color, and densities we perceive as shape.”

—> Isn’t this just the same as coding though? I understand that visual images were tactile from the beginning, but I think the change described here was not originally from tactile to non-tactile (coding, machines making images, etc.) but rather the ‘not understanding of the process’, as the process was no longer visible, and maybe no longer even ours.

“ Invisible images are actively watching us, poking and prodding, guiding our movements, inflicting pain and inducing pleasure. But all of this is hard to see.”

“One problem is that these concerns still assume that humans are looking at images, and that the relationship between human viewers and images is the most important moment to analyze–but it’s exactly this assumption of a human subject that I want to question.”

In this piece of writing the author addresses how ‘digital files’ cannot be seen by human eyes without equipment and software, but can always be seen by ‘computers’. A digital format remains a digital format, readable. According to the writer this is different from, for example, undeveloped film, since that is not readable by any machine or human.

In a later part the author talks about how cameras are reading us, license plates, etc., referred to as the invisible visual culture, and how we feed these algorithms and neural networks by uploading pictures to the internet (Facebook etc.).

The text further explains the (biased) way neural networks work, and how we can no longer read images as humans but must learn to ‘read’ them as a machine does.

“We might think of these synthetic activations and other “hallucinated” structures inside convolutional neural networks as being analogous to the archetypes of some sort of Jungian collective unconscious of artificial intelligence–a tempting, although misleading, metaphor. Neural networks cannot invent their own classes; they’re only able to relate images they ingest to images that they’ve been trained on.“
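The claim that neural networks “cannot invent their own classes” can be made concrete with a toy sketch (my own illustration, not from the essay — the feature vectors and labels are hypothetical): a trained classifier can only ever answer with a label from its training set, so anything it ingests is forced into an existing category.

```python
# Toy stand-in for a trained classifier: "learned" embeddings for two classes.
# These numbers are invented for illustration.
TRAINED_CLASSES = {
    "cat": [0.9, 0.1],
    "dog": [0.1, 0.9],
}

def classify(features):
    """Return the nearest trained class. No other answer is possible:
    the model relates every input to what it was trained on."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINED_CLASSES, key=lambda c: dist(features, TRAINED_CLASSES[c]))

# Even an image of something entirely new gets an existing label:
print(classify([0.5, 0.45]))  # → cat
```

The point of the sketch is structural, not numerical: whatever the input, the output vocabulary is fixed in advance, which is exactly why the training set’s biases carry through to every classification.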

For me it’s unclear how the writer switches throughout the text between image-capturers, images, and image operations.

“Because image operations function on an invisible plane and are not dependent on a human seeing-subject (and are therefore not as obviously ideological as giant paintings of Napoleon) they are harder to recognize for what they are: immensely powerful levers of social regulation that serve specific race and class interests while presenting themselves as objective.”

I feel like the arguments in the text are valid but not coherent. On one hand it’s a critical view of how biased the neural networks are, how we share our own images with these machine learning processes, and the problems relating to privacy; on the other hand, the commodification of images or the ‘evil capitalist side’ of it, and how it exists because of the ease of image operations (going back to the image-capturing & analyzing). Also, this argument can be made for any digital input: search terms, website analytics, etc. This could all be linked to your identity. It has not so much to do with digital images as with ‘the digital’ in general.

“Machine-machine systems are extraordinary intimate instruments of power that operate through an aesthetics and ideology of objectivity, but the categories they employ are designed to reify the forms of power that those systems are set up to serve. As such, the machine-machine landscape forms a kind of hyper-ideology that is especially pernicious precisely because it makes claims to objectivity and equality.”

The text moves on to make the claim “There’s no obvious way to intervene in machine-machine systems using visual strategies developed from human-human culture.” Artists and other cultural producers have always found a way to counterbalance human-human visual culture through images of their own making, but I do think that nowadays there is a lot of art that works with, for example, biased algorithms or machine learning methods. The text also gives some examples of artworks relating to this subject, but then the writer says: “These are noteworthy projects that help humans learn about the existence of ubiquitous sensing. But these tactics cannot be generalised.” ((How? What is different from the other images??))

“Entire branches of computer vision research are dedicated to creating “adversarial” images designed to thwart automated recognition systems. These adversarial images simply get incorporated into training sets used to teach algorithms how to overcome them.”
—> But isn’t this the same as in human-to-human visual culture, where images get appropriated, commodified and commercialised?

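What the quote calls an “adversarial” image can be sketched in a few lines (my own toy illustration, not from the essay — the linear “classifier” and its weights are invented stand-ins for a real recognition system): a tiny, targeted perturbation flips the machine’s decision while the image barely changes to a human eye.

```python
# Hypothetical "trained" weights of a linear detector; score > 0 means "face".
WEIGHTS = [0.6, -0.4, 0.2]

def score(pixels):
    """Linear classifier score: weighted sum of pixel values."""
    return sum(w * p for w, p in zip(WEIGHTS, pixels))

def adversarial(pixels, eps=0.2):
    """FGSM-style step: nudge each pixel slightly against the gradient
    (for a linear score, the gradient is just the weight vector)."""
    def sign(v):
        return 1.0 if v > 0 else -1.0
    return [p - eps * sign(w) for p, w in zip(pixels, WEIGHTS)]

image = [0.5, 0.5, 0.5]
print(score(image) > 0)               # original image: detected as "face"
print(score(adversarial(image)) > 0)  # perturbed image: detection flips
```

This is also why, as the quote notes, such images get absorbed: the perturbed examples can simply be added to the training set, after which the same attack no longer works.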
“To mediate against the optimizations and predations of a machinic landscape, one must create deliberate inefficiencies and spheres of life removed from market and political predations–“safe houses” in the invisible digital sphere. It is in inefficiency, experimentation, self-expression, and often law-breaking that freedom and political self-representation can be found.”

I totally agree with the text, but I think thinking about just ‘images’ in this way is a bit shortsighted (lol).