User:Alexander Roidl/algorithmicthesis


Set of experiments on generating the thesis topic by a set of rules / with a game.

20181010 143308.jpg 20181010 143714.jpg


Method: Print | What: Collection | Action: transform

There is a huge collection of images at http://image-net.org/. It is an attempt to classify and categorize images into a huge database.

I picked a random set of images:


Clouds & sky | 1399 pictures

A visible mass of water or ice particles suspended at a considerable altitude


Screen Shot 2018-10-10 at 16.32.48.png
Screen Shot 2018-10-10 at 16.34.05.png
20181010 162612.jpg

Interestingly, the set also features some cloudy explosions.

As a first attempt at transformation, I overprinted explosion images with those of clouds.

Scan MFP-529 0915 001.jpg


But I wondered if transformation can also happen on a more cognitive level, by putting these images into a certain context. So I decided to run them through an algorithm, a piece of image-detection software. As a result, I found that the software would predict quite similar values.

Screen Shot 2018-10-10 at 16.36.49.png
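The wiki does not name the image-detection software used here, so the following is only a minimal sketch of how such predictions could be produced, assuming a pretrained ImageNet classifier (MobileNetV2 via Keras) stands in for it; the file name cloud.jpg is a placeholder.

from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image
import numpy as np

# load a pretrained classifier (stand-in for the unnamed detection software)
model = MobileNetV2(weights="imagenet")

# load and preprocess a single image (placeholder file name)
img = image.load_img("cloud.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# print the top predicted concepts with their probabilities
for _, concept, probability in decode_predictions(model.predict(x), top=5)[0]:
    print("{}: {:.3f}".format(concept, probability))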

This is interesting because, in my personal (human) opinion, these images feature radically different objects.

Screen Shot 2018-10-10 at 16.56.20.png

Screen Shot 2018-10-10 at 16.56.41.png

So what would the machine see if I labeled the images myself?

It didn’t change its opinion a lot, but the image detection still wouldn’t output the same result.

PREDICTED CONCEPT | PROBABILITY
nature | 0.989
landscape | 0.988
sky | 0.987
desert | 0.962
travel | 0.952
outdoors | 0.942
no person | 0.933
summer | 0.923
volcano | 0.922
sun | 0.917
sand | 0.916
tree | 0.913
soil | 0.909
cloud | 0.907
fair weather | 0.902
hill | 0.893
hot | 0.891
mountain | 0.884
heat | 0.875
eruption | 0.863


PREDICTED CONCEPT | PROBABILITY
nature | 0.993
landscape | 0.991
sky | 0.990
travel | 0.954
desert | 0.951
summer | 0.948
outdoors | 0.943
sand | 0.940
soil | 0.939
cloud | 0.935
volcano | 0.935
tree | 0.926
sun | 0.921
fair weather | 0.916
mountain | 0.915
hill | 0.911
rock | 0.910
no person | 0.899
eruption | 0.885
lava | 0.882


Thoughts and questions on print, collection, transform

What do I see when I look at something?

Why is an explosion so different from a cloud, and why are both connected for a machine? What do they have in common (outdoors, no person, dramatic)?

Also, these labels, like no person, remind me a little bit of fortune tellers who tell you things that most probably fit every person but are still relatable to your personal situation. That there is no person in this image is something I wouldn’t have thought of when looking at it.

Thinking about: who classifies those images? How does the algorithm classify the images?

> So this took me somewhat far from printing :D


Method: Program | What: Data | Action: generate

So how do you generate data?

Program that generates random data:

from random import randint

# open a plain-text file to write the random numbers into
f = open("file.txt", "w+")

# write 100,000 random values between 0 and 255, one after another
for i in range(100000):
    f.write("{}".format(randint(0, 255)))

f.close()

This generates random numbers between 0 and 255 and appends them to a file. To make it readable, I use the .txt extension, so the file can be opened by any text editor. You can generate gigabytes of data with this script and crash your computer, so these are quite powerful lines of code.

Screen Shot 2018-10-10 at 18.31.37.png

Screen Shot 2018-10-10 at 18.10.39.png

But what does this mean? To visualize it, I mapped all these values to pixels, so this random data looks like the following:

Result image.png
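The mapping code itself is not shown here; a minimal sketch of how values between 0 and 255 could be mapped to grayscale pixels with Pillow might look like this (image size and file name are assumptions):

from random import randint
from PIL import Image

WIDTH, HEIGHT = 256, 256  # hypothetical image dimensions

# one random value per pixel, interpreted as 8-bit grayscale
img = Image.new("L", (WIDTH, HEIGHT))
img.putdata([randint(0, 255) for _ in range(WIDTH * HEIGHT)])
img.save("result_image.png")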

Outputting it as a gif:

Random.gif
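The GIF-making code is also not included here; one way to animate a series of such random frames with Pillow could be the following sketch (frame count, size and timing are assumptions):

from random import randint
from PIL import Image

WIDTH, HEIGHT = 128, 128  # hypothetical frame dimensions

# build a list of random grayscale frames
frames = []
for _ in range(20):
    frame = Image.new("L", (WIDTH, HEIGHT))
    frame.putdata([randint(0, 255) for _ in range(WIDTH * HEIGHT)])
    frames.append(frame)

# Pillow writes an animated GIF when save_all / append_images are given
frames[0].save("random.gif", save_all=True,
               append_images=frames[1:], duration=100, loop=0)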