User:Alexander Roidl/everything

From XPUB & Lens-Based wiki
Revision as of 14:24, 14 October 2018

Sorting ideas out

the computational model

The computer is trained from the database. While feeding these datasets into huge models we lose sight of their connections, which from now on make sense only to the computer. A model that tries to describe reality in order to generate, to analyze or to predict.

I want to put a special focus on generative models and discriminative models of the world.
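The distinction can be made concrete with a toy sketch (the data, names and threshold rule below are my own illustration, not anything from these notes): a generative model fits what each class *looks like* and can also produce new samples, while a discriminative model learns only the boundary between classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes, each a 1D Gaussian.
x0 = rng.normal(-2.0, 1.0, 500)   # class 0
x1 = rng.normal(+2.0, 1.0, 500)   # class 1

# Generative view: model p(x | class) per class, classify via Bayes' rule.
mu0, mu1 = x0.mean(), x1.mean()

def generative_predict(x):
    # Equal priors, equal variance: pick the class whose mean is closer.
    return int(abs(x - mu1) < abs(x - mu0))

# A generative model can also *produce* a new sample of class 1:
new_sample = rng.normal(mu1, 1.0)

# Discriminative view: learn only the decision boundary p(class | x),
# here simply a threshold halfway between the two class means.
threshold = (mu0 + mu1) / 2

def discriminative_predict(x):
    return int(x > threshold)

print(generative_predict(3.0), discriminative_predict(3.0))
```

Both models classify identically here, but only the generative one carries a description of the data it could sample from — the discriminative one keeps nothing but the boundary.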

While these models simplify reality by calculating probabilities and reducing it to features, at the same time they make reality more complicated. These computed models are black boxes. Often it is even impossible to see the data a model was trained on.

From features to reduced reality

So an image is reduced to its most contrasting points, to pixels that hold a certain array of colour values. But what is it that makes an image? If I were to describe an image, I wouldn't say: oh, there is some contrast going on in the left corner, lots of brightness in the middle and …

A sentence is reduced to its words and their connections. But can we describe the value of a sentence by these features alone?

Training against myself

It becomes even weirder when we look at algorithms that create models by learning from themselves. We then face not only the problem of an abstract model: even the database, which otherwise lets us gather insights into why models act the way they act, is incomplete.
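One concrete thing that can happen in such a loop (a toy illustration under my own assumptions — a Gaussian repeatedly refit to its own samples, sometimes called "model collapse", not a method from these notes): once the model only ever sees its own output, the original database is gone after the first round, and the model's picture of the world quietly degenerates.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a small sample from a standard normal distribution.
data = rng.normal(0.0, 1.0, 10)
original_std = data.std()

# Each round: fit a Gaussian to the current data, then replace the data
# entirely with samples drawn from the fitted model. After round one the
# original database no longer exists anywhere in the loop.
for _ in range(300):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, 10)

print(data.std())  # close to zero: the variance has collapsed
```

The small-sample fit systematically underestimates the spread, and with no real data left to correct it, the error compounds until almost nothing of the original distribution remains.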

How models are built

  • model is built by computer scientists
  • often incomplete / lacking / biased databases
  • selecting features > generalizing

unsemantic everything

Semantic sorting can be filtered through models > unsemantic web

Models turn chaos into sense (for a machine)

Bending the model - an experimental approach in understanding

How can we understand machine learning models through their mistakes and misunderstandings?

> what can we learn from that different view on reality?

How can we bring models to fail or push them to their limits? How can we abuse them?
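One well-known way of bending a model to its limits is an adversarial perturbation. A minimal sketch (the weights and the oversized step size below are assumptions for illustration, and the sign-of-gradient step is in the spirit of the fast gradient sign method, not anything described in these notes):

```python
import numpy as np

# A "trained" model: logistic regression with hand-picked weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    # Probability of class 1.
    return 1 / (1 + np.exp(-(x @ w + b)))

x = np.array([2.0, -1.0, 1.0])
print(predict(x))  # confidently class 1

# "Bending" the model: nudge the input against the gradient of its score.
# For a linear model the gradient w.r.t. x is just w, so stepping by
# -eps * sign(w) is the worst-case perturbation per coordinate.
eps = 3.0  # deliberately large, to make the flip obvious
x_adv = x - eps * np.sign(w)
print(predict(x_adv))  # the same model now leans to class 0
```

The interesting part is not that the model can be fooled, but what the direction of the perturbation reveals: it traces exactly which features the model leans on — its particular, non-human view of reality.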


new aesthetics

Language of images

• Flusser > image reduced to 0 dimensions

new visual forms

computer vision

How visual material becomes more important in order to understand these technologies

Errors in images > what do they mean?

Things that we cannot explain with words alone anymore; there is no language for them. (drone shadows, making it visible)

Image manipulation: small things that we cannot see, but computers can. (cloned image zones) https://theblog.adobe.com/spotting-image-manipulation-ai/


Invisible for humans

Main question: what makes these aesthetics so different from



Prototyping ideas

Every image ever

a constantly changing image that would, over time, generate every image that is possible with that set of pixels

(an image recognition algorithm could run over it and save all images that contain something)
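The combinatorics of this idea can be sketched directly (a toy version under my own assumptions: a 2×2 binary image instead of a real one, and a trivial stand-in "recognizer" that only checks for any lit pixel):

```python
import itertools

# Every possible image for a tiny set of pixels: a 2x2 binary image
# has 2**4 = 16 possible states. (A 1000x1000 8-bit grayscale image
# would have 256**1_000_000 — enumerable in principle, never in practice.)
w, h = 2, 2
images = list(itertools.product([0, 1], repeat=w * h))
print(len(images))  # 16

# Stand-in "image recognition": save every image that contains anything,
# i.e. at least one lit pixel. Only the all-dark image is discarded.
def contains_something(img):
    return any(img)

saved = [img for img in images if contains_something(img)]
print(len(saved))  # 15
```

Even this toy makes the scale of the prototype tangible: the space of "every image ever" explodes exponentially with the pixel count, so any real version could only ever sample it, never exhaust it.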

new earth

Generate new images from satellite imagery

endless production

producing endless material (removing it afterwards to free resources)