User:Alexander Roidl/neuralnetsanddeeplearning

Machine Learning

Machine Learning is a term used to describe certain algorithms that »learn«. Learning, however, is quite a broad term; for some applications it really comes down to a statistical calculation.

Common Terms:

  • Generalisation
    • How well the algorithm performs on unknown data: how well can the algorithm generalise?
  • Overfitting
    • Too many features (the model is too specific), so the algorithm will only work on the sample data.
  • Underfitting
    • Too few features (the model is too general), so the algorithm will apply to any data.
  • Features
    • Mostly the input data: the aspects of the investigated object that are specific to the problem.
  • Error
    • How wrong the algorithm is, connected to:
  • Loss
    • The function that calculates the error.
  • Feed-forward
    • One run through the network.
  • Back-propagation
    • Adjusting the weights of the network according to the derivative of the error between the estimated and the actual output.

K–Nearest Neighbour

K = a variable: the number of nearest neighbours to consider

Plots the features as vectors and sorts them in space > clusters them

The new data point is also placed in this space and the K nearest points are used to categorise it.
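
A minimal sketch of this idea in plain Python (hypothetical 2D feature vectors and labels), just to make the distance-and-vote step concrete:

```python
import math
from collections import Counter

def knn_classify(point, samples, k=3):
    # samples: list of (feature_vector, label) pairs
    # sort the known points by their distance to the new point
    by_distance = sorted(samples, key=lambda s: math.dist(point, s[0]))
    # take the labels of the K nearest points and vote for the most common one
    nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# hypothetical feature vectors belonging to two classes
samples = [((1.0, 1.2), "a"), ((0.9, 1.0), "a"),
           ((3.0, 3.1), "b"), ((3.2, 2.9), "b")]
print(knn_classify((1.1, 1.1), samples))  # -> "a"
```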

Linear models

Draws a linear border (decision boundary) between two or more classes.
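
A minimal sketch of what such a linear border looks like in code, with hypothetical, already-trained weights: a point is classified by which side of the line w·x + b = 0 it falls on.

```python
# hypothetical weights and bias defining the linear border
w = (0.8, -0.5)
b = 0.1

def classify(x):
    # which side of the line w·x + b = 0 does the point fall on?
    score = w[0] * x[0] + w[1] * x[1] + b
    return "class 1" if score > 0 else "class 2"

print(classify((2.0, 1.0)))
```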

Decision Tree

Chain of if / else statements
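
A decision tree is essentially such a learned chain of conditions. A hand-written (hypothetical) example of what the learned tree amounts to:

```python
def classify_fruit(weight_g, colour):
    # each if/else corresponds to one split in the tree
    if weight_g > 120:
        if colour == "green":
            return "apple"
        return "orange"
    return "plum"

print(classify_fruit(150, "green"))  # -> "apple"
```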


Neural Networks

Also known as Deep Learning

Uses several layers and a networked structure to find the best values for the wanted result. It works in »loops« of feed-forward -> back-propagation -> feed-forward -> back-propagation … for x many epochs.

Neurons

A network consists of many so-called neurons. Each neuron holds a value that is calculated by its activation function.

There are different kinds of neurons, depending on their activation function (a quick sketch of these functions follows below):

  • Perceptron: outputs 0 or 1
  • Sigmoid: outputs a value between 0 and 1: f(x) = 1/(1+e^(-x))
  • ReLU: outputs f(x) = max(0, x), i.e. negative values are cut to 0
  • … other functions that map a value to the range -1 to +1 (e.g. tanh)
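
A quick sketch of those activation functions in plain Python (just the maths, no framework):

```python
import math

def perceptron(x):   # step function: outputs 0 or 1
    return 1 if x > 0 else 0

def sigmoid(x):      # squashes any value into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def relu(x):         # cuts negative values off at 0
    return max(0.0, x)

def tanh(x):         # squashes any value into the range (-1, 1)
    return math.tanh(x)

for x in (-2.0, 0.0, 2.0):
    print(x, perceptron(x), sigmoid(x), relu(x), tanh(x))
```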

Feed-forward

The calculation of a neuron (a small sketch follows after these steps):

  1. take all input weights and multiply them with their inputs
  2. take the sum of these products and add the bias
  3. run that sum through the activation function
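
A minimal sketch of these three steps for a single neuron, with hypothetical inputs, weights and bias, and sigmoid as the activation function:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # 1. + 2. multiply every input with its weight, sum it up and add the bias
    total = sum(w * i for w, i in zip(weights, inputs)) + bias
    # 3. run that sum through the activation function
    return sigmoid(total)

print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))
```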

Back-propagation

  1. Take the current output of the network and subtract the wished value (from the dataset) to get the error
  2. calculate the derivative of the activation function at that sum (see feed-forward) and multiply it with the error
  3. map that onto the weights of the previous layer and subtract the resulting difference (delta) from them
  4. do the same for all the previous layers (see the sketch below)
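
A minimal sketch of one such back-propagation step for a single sigmoid neuron and a single (hypothetical) training example; real networks repeat this for every weight in every layer:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

inputs  = [0.5, 0.8]
weights = [0.4, -0.6]
bias    = 0.1
target  = 1.0    # wished value from the dataset
lr      = 0.5    # learning rate (how big a step to take)

# feed-forward
total  = sum(w * i for w, i in zip(weights, inputs)) + bias
output = sigmoid(total)

# 1. difference between current output and wished value
error = output - target
# 2. derivative of the activation at that sum, multiplied with the error
delta = error * output * (1 - output)
# 3. adjust every weight (and the bias) by its share of the delta
weights = [w - lr * delta * i for w, i in zip(weights, inputs)]
bias    = bias - lr * delta

print(weights, bias)
```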


Architectures

So if you combine several of these networks, or variations of them, you end up with one of the following popular concepts:

Feed forward networks

Simple implementation of a neural network, see above.

Recurrent Neural Networks

Feeds the output of neurons back into the network to implement some sort of memory.

LSTM: Long Short-Term Memory

Stores previous results in the network itself. Good for text generation (prevents repetition).

CNN: Convolutional Neural Networks

Especially for images. Divides the data into smaller patches to recognise patterns in different contexts.
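
A minimal sketch of the underlying idea: a small filter (kernel) slides over the image and is multiplied with every patch. Hypothetical 4×4 »image« and 2×2 kernel in plain Python:

```python
image = [
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
kernel = [
    [0, 1],
    [1, 0],
]

def convolve(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            # multiply the current patch of the image with the kernel and sum it up
            row.append(sum(image[y + j][x + i] * kernel[j][i]
                           for j in range(kh) for i in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)
```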

GAN: Generative Adversarial Network

Uses two neural networks that oppose each other: one (the generator) generates data from random noise, the other (the discriminator) figures out whether that data is fake or real. The generator is trained to fool the discriminator.


Frameworks

Resources

Databases:

Artworld:

Books: