User:Alexander Roidl/new-new-new-projectproposal

From XPUB & Lens-Based wiki

Revision as of 19:31, 11 November 2018

First prototype, reverse engineering image classification

What do you want to make?

I want to work on how we can understand the machine learning algorithms that reshape our visual culture. How can we learn from the errors and malfunctions of image-producing software in ways that make its inner functions graspable? I want to test these new algorithms in a playful way and produce new visual material that helps to understand, or at least to question, these systems critically. I also want to investigate the use of reverse engineering on such machine learning algorithms, especially those that don't provide a dataset. Is it possible to gain deeper insight into such systems by reverse engineering them? (hacking = understanding?)
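One concrete way to reverse engineer a classifier without access to its dataset or weights is occlusion probing: blank out one region of the input at a time and watch how the output changes. The sketch below is a toy illustration of the idea; the "black box" model and all names are invented stand-ins, not any real classifier.

```python
import numpy as np

def black_box(img):
    """Stand-in for an opaque classifier: secretly it only scores the
    brightness of the top-left quadrant (we pretend not to know this)."""
    return float(img[:8, :8].mean() / 255.0)

def occlusion_map(img, model, patch=4):
    """Probe the model from the outside: zero one patch at a time and
    record how much the score drops. Big drops mark regions the model uses."""
    base = model(img)
    h, w = img.shape
    drops = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            probe = img.copy()
            probe[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
            drops[i, j] = base - model(probe)
    return drops

img = np.full((16, 16), 200, dtype=np.uint8)  # a flat grey test image
drops = occlusion_map(img, black_box)
# Only the four patches covering the top-left quadrant change the score.
print(np.argwhere(drops > 1e-9))
```

The probing recovers, purely from input/output behaviour, which part of the image the hidden model actually looks at — a miniature of "hacking = understanding".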

Error in satellite imagery

I plan to make experiments alongside the research that will eventually accumulate into one final work. In a playful approach I want to bend models of machine learning generators, try to make them fail, produce new material, and mislead algorithms. But I don't want to leave it at simple generation produced by pre-made algorithms playing with syntax and form. By learning the fundamentals of neural networks I want to intervene in their inner workings and thereby create a work expressing the deeper semiotics of those algorithms, while keeping its relation to the visual: examining the development of the image from a two-dimensional communication tool to a zero-dimensional array of points (Flusser, 2011) that is being transformed by complex algorithms.

On the visible and invisible

During the research I encountered a strange set of images that deal with the shades between visibility and invisibility. On the one hand we find machine learning algorithms that auto-remove objects from images (Deep Angel, 2018); on the other hand there are new algorithms that detect whether an image has been manipulated (Haridy, 2018). In the second case, computers detect something in images that is barely visible to us (small repetitions of pixels that arise from using the stamp tool in Photoshop).
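The principle behind such manipulation detectors can be sketched very simply: clone-stamping copies pixels, and exact copies betray themselves. The toy detector below hashes non-overlapping blocks of a synthetic image and reports blocks that occur twice — a drastically simplified sketch of copy-move detection, not Adobe's actual method.

```python
import numpy as np

def find_cloned_blocks(img, block=4):
    """Toy copy-move detector: hash every block x block patch on a grid
    and report coordinate pairs of patches with identical pixel content."""
    seen = {}
    clones = []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                clones.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return clones

# Forge an image by stamping one region over another.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
img[8:12, 8:12] = img[0:4, 0:4]   # the "clone stamp"
print(find_cloned_blocks(img))    # the duplicated patch gives itself away
```

Real detectors work on overlapping, transformed and compressed patches, but the core invisibility is the same: a repetition no human eye notices is trivially visible to a machine.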

Another example I found on Google Earth are »satellite calibration targets« that were put up in the desert to calibrate satellite vision (see the screenshot from Google Earth below). These huge targets are invisible to most of us, yet they are physical places that you can visit. And through the very same technology, these images become visible to anyone again through Google Earth. So we can get a glimpse into satellite technology through the way it produces images, manifested in physical space.

Satellite target found on Google Earth

(Note: programming languages use NONE / NULL to denote that something is not there -> this could relate to invisibility)
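In Python, for example, absence is itself a value: `None` marks something that exists as a slot but carries no content. A minimal sketch (the scene layers are invented for illustration):

```python
# A removed or censored object is not "empty"; it is marked absent with None.
scene = {
    "buildings": ["b1", "b2"],
    "roads": ["r1"],
    "calibration_target": None,   # present in the world, absent in the data
}

def visible_objects(scene):
    """Return only the layers the image actually shows."""
    return {name: objs for name, objs in scene.items() if objs is not None}

def invisible_objects(scene):
    """Return the layers that exist as keys but carry no content."""
    return [name for name, objs in scene.items() if objs is None]

print(invisible_objects(scene))  # the absence itself is addressable
```

The point is that invisibility here is explicit and queryable: the data structure records what is missing, much like Deep Angel records an object precisely by removing it.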

How do you plan to make it?

I want to start by researching the history of algorithmic image and form generation and connect it to recent technological developments. I want to draw connections from the technical to the cultural, which means outlining machine learning in relation to visual culture. In a comparison between existing image-generation techniques and new A.I.-enhanced forms, I hope to find parallels and differences in aesthetics, use and function.

I will learn about the inner workings of machine learning, and especially of architectures like Generative Adversarial Networks: pairs of neural networks in which a generator learns to produce new images resembling a dataset while a discriminator learns to tell them apart from real ones. Then I can, on the one hand, create my own algorithms to produce visual material, and on the other hand use existing ones critically, informed by that knowledge.
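The adversarial mechanic can be shown at the smallest possible scale. The sketch below is not a real image GAN — the "data" are just numbers drawn from a Gaussian, and both networks are single affine functions with hand-derived gradients — but the training loop is the genuine GAN recipe: alternate a discriminator ascent step with a generator ascent step on the non-saturating objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" samples come from N(4, 1); the generator maps noise z ~ N(0, 1)
# through an affine map G(z) = a*z + b and must learn to imitate them.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    x_real = sample_real(n)
    x_fake = a * rng.normal(size=n) + b

    # Discriminator: gradient ascent on  log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on the non-saturating objective  log D(fake)
    z = rng.normal(size=n)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(round(b, 2))  # the generator offset b should have drifted toward the real mean of 4
```

Making this toy fail is instructive too: raise the learning rate or stop updating the discriminator and the equilibrium collapses — exactly the kind of bending and misleading proposed above.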

What is your timetable?

Alongside the writing of the thesis, which will be based on careful reading and research, I want to make experiments with image-generation algorithms.

Basic outline of the planning (very sketchy)

  1. October / November: Framework + outline, research on the history of algorithmic image generation & machine learning, small sketches in image generation
  2. December: Further research on machine learning in relation to images and image culture. Start of more elaborate prototyping on images using machine learning
  3. January: Connecting machine learning with image culture. Deeper research and writing on image culture and the implications of new machine learning images
  4. February: Further research and writing + prototyping
  5. March: Finish and fine-tune the writing. Translate prototypes into the final project
  6. April / May / June: Finish & fine-tune the final project

Why do you want to make it?

Online you constantly see news about new machine learning algorithms. In the past I encountered a lot of technical papers in which technicians talk about images (quite strangely). But instead of talking about the technical improvements that can be made to these images, I want to think about what these images mean for the fields traditionally concerned with image-making, especially art and design.

Generating images from text descriptions

New advanced algorithms allow for new methods of visual form production. We see ourselves confronted with a strange set of new phenomena: algorithms that generate endless new photo-realistic images from a certain dataset, deeply weird shapes emerging from Deep Dream and computer vision, or images that manipulate themselves based on text input.

Even before computers were invented, artists were working on algorithmic form creation. Later, with the use of program code, images were generated in ever more diverse forms. Only recently have programmers and technicians developed machine-learning-generated images that imitate photo-realistic material. Images that follow more than just simple rules, yet are far from pure randomness. Images that are created from a model that is not readable by humans, a model fed by a database of existing images. How can these images be categorised? Are they photos, drawings, renderings or collages? I want to investigate the implications of this new kind of image.

Insertion: I recently came across the link between Software Art and Generative Art, and how Software Art emphasises a deeper engagement with software than was the case with generative approaches (Cox, 2007). I think reviewing the methods of Software Art can be a useful approach for me.


I’m looking to answer the question: What does algorithmic image generation mean for cultural production in times of machine learning?

In order to answer this question I need to ask: How does machine learning change the generation of images? Why and how is it different from other forms of image generation? What are the implications for culture and art production of using those generated forms? I want to understand how these images come to be and how I, as an artist / designer, can make use of them. These kinds of algorithms have been used to generate images in the style of Van Gogh and other famous artists, but I want to challenge them to generate new, more »natively digital« images. Why would you try to replicate traditional art forms when the nature of these techniques is so different?

Who can help you and how?

  • PZI Tutors > Research / Prototyping
  • AI Now Institute https://ainowinstitute.org/ — I reached out to them and hope to get some insight into their research on the social impact of artificial intelligence
  • XPUB Gang
  • Interaction Station: Javier (they are currently researching machine learning in creative practices)
  • V2 AI-Lab? (Although their understanding of AI in the field of Arts seems different)

Relation to previous practice

I am trained as a graphic designer and have long been fascinated by different kinds of visual material. In addition, I became interested in new media and technologies that would enhance humans, and in understanding these new phenomena and their effects. Furthermore, I have been researching databases and database art. The models of machine learning algorithms take this idea of the database to another level: a level at which only machines can still read these databases. »Machine is talking to machine – the keyboard or user can be plugged in if needed« (Kittler, 2001). I have also already done a few experiments in the creative use of machine learning: without prior knowledge of machine learning, I generated new images of coastlines from an accumulated dataset.

My previous work on machine learning


Relation to a larger context

Nowadays machine learning is embedded in many of the contemporary digital systems that drive our world. These systems and models are very hard to understand, to the point where even the creators of such algorithms cannot explain why the machines make certain decisions. New methods of machine learning have also found their way into the arts, which are trying to make sense of these new algorithms. Artists tend to focus on the outcomes and consequences of machine learning, but there is little interaction with the inner workings of these algorithms, as we see it in the realm of Software Art, for instance.

While Walter Benjamin saw himself confronted with the mechanical reproducibility of art through the rise of photography, we are now facing endless digital self-production and are even challenged by machines in the way we see (Benjamin, 2008). So I think images are becoming a more important tool to make visible what these algorithms are doing, and also to make visible where they fail. It is important to understand these systems and their implications in order to be able to influence them.


References

Literature (most important selection)

  • Benjamin, W. (2008). The Work of Art in the Age of Mechanical Reproduction. London: Penguin.
  • Berger, J. (2008). Ways of Seeing (1st ed.). London: Penguin Classics.
  • Berry, D. M., & Dieter, M. (Eds.). (2015). Postdigital Aesthetics: Art, Computation and Design. Basingstoke: Palgrave Macmillan. Retrieved from //www.palgrave.com/gp/book/9781137437198
  • Bridle, J. (2018). New Dark Age: Technology and the End of the Future. London; Brooklyn, NY: Verso.
  • Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
  • Charly Onrop. (n.d.). TV Doku Spiegel Unberechenbarkeit 1/3. Retrieved from https://www.youtube.com/watch?v=AavTap5FgSQ&feature=youtu.be&t=275
  • Cox, G. (2007). Generator: The value of software art. In J. Rugg & M. Sedgwick (Eds.), Issues in Curating Contemporary Art and Performance (pp. 147–162). Bristol: Intellect Books.
  • Deep Angel, The Artificial Intelligence of Absence. (n.d.). Retrieved 1 November 2018, from http://deepangel.media.mit.edu/
  • Finn, E. (2017). What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: The MIT Press.
  • Flusser, V. (2011). Into the Universe of Technical Images (N. A. Roth, Trans.) (1st ed.). Minneapolis: University of Minnesota Press.
  • Gere, C. (2008). Digital Culture (2nd rev. ed.). London: Reaktion Books.
  • Gerstner, K. (2007). Designing Programmes (3rd, revised and enlarged ed.). Baden: Lars Müller.
  • Haridy, R. (2018). Adobe’s new AI can identify altered images. Retrieved 1 November 2018, from https://newatlas.com/adobe-ai-detect-image-manipulation/55179/
  • Hoelzl, I., & Marie, R. (2015). Softimage: Towards a New Theory of the Digital Image. Bristol: Intellect Ltd.
  • Hu, T.-H. (2016). A Prehistory of the Cloud (reprint ed.). Cambridge, MA: The MIT Press.
  • Johnston, J. (2008). The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. Cambridge, MA: The MIT Press. Retrieved 15 October 2018, from https://mitpress.mit.edu/books/allure-machinic-life
  • Mitchell, W. J. T. (1980). The Language of Images (reprint ed.). Chicago: University of Chicago Press.
  • Müller, A. C., & Guido, S. (2016). Introduction to Machine Learning with Python: A Guide for Data Scientists (1st ed.). Sebastopol, CA: O’Reilly Media.

Relations

  • In 2012 James Bridle established the term »New Aesthetic«, an ongoing collection of images on a Tumblr blog.

Code