Single Layer Perceptron

What is it?

The Perceptron is an algorithm for supervised learning of binary classifiers. This kind of classifier decides whether an input, represented by a vector of numbers, belongs to some specific class (1) or not (0). It is a type of linear classifier, meaning that it makes its predictions with a linear predictor function (the activation function) that combines a set of weights with the feature vector, much as in linear regression. The algorithm allows for online learning, in that it processes the elements of the training set one at a time [1].
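
Written out, for an input vector x, a weight vector w, and a bias b (symbols introduced here only for illustration, since none are named above), the usual form of this decision rule is:

```math
f(x) = \begin{cases} 1 & \text{if } w \cdot x + b > 0 \\ 0 & \text{otherwise} \end{cases}
```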

How does it work?

The Perceptron is inspired by the information processing of a single neural cell, called a neuron. A neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body. In a similar way, the Perceptron receives input signals from examples of training data, which we weight and combine in the activation function. The activation is then transformed into an output value, or prediction, by a transfer function such as the step function or the sigmoid function. The Perceptron is therefore a classification algorithm for problems with two classes (0 and 1) in which a linear equation (a hyperplane) can separate the two classes. It is closely related to linear regression and logistic regression, which make predictions in a similar way (as a weighted sum of inputs). The weights of the Perceptron must be estimated from the training data, using stochastic gradient descent [2].
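
One common form of the per-example update is the perceptron learning rule, shown here with a learning rate η (a small, hand-chosen constant) and the prediction ŷ for the current example; these symbols are introduced here for illustration:

```math
w_i \leftarrow w_i + \eta \,(y - \hat{y})\, x_i, \qquad b \leftarrow b + \eta \,(y - \hat{y})
```

The update changes the weights only when the prediction ŷ is wrong, nudging the hyperplane towards misclassified examples.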

How is it implemented?
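
A minimal sketch in plain Python, following the decision rule and update rule above. The function names (predict, train_perceptron), the toy OR dataset, and the hyper-parameter values are illustrative assumptions, not taken from any particular library:

```python
import random

def predict(weights, bias, x):
    # Weighted sum of the inputs plus the bias, passed through a step transfer function
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

def train_perceptron(examples, learning_rate=0.1, epochs=20):
    # examples: list of (feature_vector, label) pairs with labels 0 or 1
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)  # online learning: one example at a time, in random order
        for x, y in examples:
            error = y - predict(weights, bias, x)  # 0 when correct, +1 or -1 when wrong
            # Perceptron update rule: adjust the hyperplane only on misclassified examples
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    # Toy, linearly separable problem: logical OR of two binary inputs
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    w, b = train_perceptron(data)
    print([predict(w, b, x) for x, _ in data])  # expected output: [0, 1, 1, 1]
```

A library such as scikit-learn offers an equivalent ready-made classifier (sklearn.linear_model.Perceptron), which is usually preferable outside of learning exercises.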

What is it for?

  • The backbone of Neural Networks
  • Edge detection

References

  1. Perceptron - Wikipedia
  2. The perceptron algorithm in Python
  3. The most basic form of Neural Network
  4. Further reading: Weighted Networks – The Perceptron, Perceptron Learning