
Artificial neurons – a brief glimpse into the early history of machine learning
Before we discuss the perceptron and related algorithms in more detail, let us take a brief tour through the early beginnings of machine learning. Trying to understand how the biological brain works in order to design artificial intelligence, Warren McCulloch and Walter Pitts published the first concept of a simplified brain cell, the so-called McCulloch-Pitts (MCP) neuron, in 1943 (W. S. McCulloch and W. Pitts. A Logical Calculus of the Ideas Immanent in Nervous Activity. The bulletin of mathematical biophysics, 5(4):115–133, 1943). Neurons are interconnected nerve cells in the brain that are involved in the processing and transmitting of chemical and electrical signals, which is illustrated in the following figure:

McCulloch and Pitts described such a nerve cell as a simple logic gate with binary outputs; multiple signals arrive at the dendrites, are then integrated in the cell body, and, if the accumulated signal exceeds a certain threshold, an output signal is generated that will be passed on by the axon.
Only a few years later, Frank Rosenblatt published the first concept of the perceptron learning rule based on the MCP neuron model (F. Rosenblatt, The Perceptron, a Perceiving and Recognizing Automaton. Cornell Aeronautical Laboratory, 1957). With his perceptron rule, Rosenblatt proposed an algorithm that would automatically learn the optimal weight coefficients that are then multiplied with the input features in order to make the decision of whether a neuron fires or not. In the context of supervised learning and classification, such an algorithm could then be used to predict if a sample belonged to one class or the other.
More formally, we can pose this problem as a binary classification task where we refer to our two classes as 1 (positive class) and -1 (negative class) for simplicity. We can then define an activation function $\phi(z)$ that takes a linear combination of certain input values $\mathbf{x}$ and a corresponding weight vector $\mathbf{w}$, where $z$ is the so-called net input ($z = w_1 x_1 + \dots + w_m x_m$):

$$\mathbf{w} = \begin{bmatrix} w_1 \\ \vdots \\ w_m \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix}$$
Now, if the activation of a particular sample $\mathbf{x}^{(i)}$, that is, the output of $\phi(z)$, is greater than a defined threshold $\theta$, we predict class 1, and class -1 otherwise. In the perceptron algorithm, the activation function $\phi(\cdot)$ is a simple unit step function, which is sometimes also called the Heaviside step function:

$$\phi(z) = \begin{cases} 1 & \text{if } z \geq \theta \\ -1 & \text{otherwise} \end{cases}$$
For simplicity, we can bring the threshold $\theta$ to the left side of the equation and define a weight-zero as $w_0 = -\theta$ and $x_0 = 1$, so that we can write $z$ in a more compact form

$$z = w_0 x_0 + w_1 x_1 + \dots + w_m x_m = \mathbf{w}^T \mathbf{x}$$

and

$$\phi(z) = \begin{cases} 1 & \text{if } z \geq 0 \\ -1 & \text{otherwise} \end{cases}$$
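As a quick sanity check of these definitions, here is a minimal sketch of the compact form $z = \mathbf{w}^T \mathbf{x}$ and the unit step function (assuming NumPy is available; the names net_input and unit_step, and the example numbers, are chosen only for this illustration and are not the implementation we will develop later):

```python
import numpy as np

def net_input(x, w):
    """Net input z = w^T x; x is expected to already include x_0 = 1."""
    return np.dot(w, x)

def unit_step(z):
    """Unit (Heaviside) step function with threshold 0: returns 1 or -1."""
    return 1 if z >= 0.0 else -1

# Hypothetical example: x_0 = 1 is prepended, and w_0 plays the role of -theta
x = np.array([1.0, 2.0, 3.0])      # x_0, x_1, x_2
w = np.array([-4.0, 1.0, 0.5])     # w_0, w_1, w_2
print(unit_step(net_input(x, w)))  # z = -4.0 + 2.0 + 1.5 = -0.5 -> prints -1
```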
Note
In the following sections, we will often make use of basic notations from linear algebra. For example, we will abbreviate the sum of the products of the values in $\mathbf{x}$ and $\mathbf{w}$ using a vector dot product, where superscript $T$ stands for transpose, which is an operation that transforms a column vector into a row vector and vice versa:

$$\mathbf{w}^T \mathbf{x} = \sum_{j=0}^{m} w_j x_j = w_0 x_0 + w_1 x_1 + \dots + w_m x_m$$

For example:

$$\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \times \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = 1 \times 4 + 2 \times 5 + 3 \times 6 = 32$$

Furthermore, the transpose operation can also be applied to a matrix to reflect it over its diagonal, for example:

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}$$
In this book, we will only use the very basic concepts from linear algebra. However, if you need a quick refresher, please take a look at Zico Kolter's excellent Linear Algebra Review and Reference, which is freely available at http://www.cs.cmu.edu/~zkolter/course/linalg/linalg_notes.pdf.
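For readers who want to verify the examples in this note numerically, the dot product and the matrix transpose can be reproduced with NumPy; this small snippet is purely illustrative:

```python
import numpy as np

# Dot product of a row vector and a column vector: 1*4 + 2*5 + 3*6 = 32
print(np.dot(np.array([1, 2, 3]), np.array([4, 5, 6])))   # 32

# Transposing a matrix reflects it over its diagonal
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
print(A.T)
# [[1 3 5]
#  [2 4 6]]
```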
The following figure illustrates how the net input is squashed into a binary output (-1 or 1) by the activation function of the perceptron (left subfigure) and how it can be used to discriminate between two linearly separable classes (right subfigure):

The whole idea behind the MCP neuron and Rosenblatt's thresholded perceptron model is to use a reductionist approach to mimic how a single neuron in the brain works: it either fires or it doesn't. Thus, Rosenblatt's initial perceptron rule is fairly simple and can be summarized by the following steps:
- Initialize the weights to 0 or small random numbers.
- For each training sample $\mathbf{x}^{(i)}$, perform the following steps:
  - Compute the output value $\hat{y}$.
  - Update the weights.
Here, the output value is the class label predicted by the unit step function that we defined earlier, and the simultaneous update of each weight $w_j$ in the weight vector $\mathbf{w}$ can be more formally written as:

$$w_j := w_j + \Delta w_j$$

The value of $\Delta w_j$, which is used to update the weight $w_j$, is calculated by the perceptron learning rule:

$$\Delta w_j = \eta \left( y^{(i)} - \hat{y}^{(i)} \right) x_j^{(i)}$$
Here, $\eta$ is the learning rate (a constant between 0.0 and 1.0), $y^{(i)}$ is the true class label of the $i$th training sample, and $\hat{y}^{(i)}$ is the predicted class label. It is important to note that all weights in the weight vector are updated simultaneously, which means that we do not recompute $\hat{y}^{(i)}$ before all of the weights $\Delta w_j$ have been updated. Concretely, for a two-dimensional dataset, we would write the update as follows:

$$\Delta w_0 = \eta \left( y^{(i)} - \hat{y}^{(i)} \right)$$

$$\Delta w_1 = \eta \left( y^{(i)} - \hat{y}^{(i)} \right) x_1^{(i)}$$

$$\Delta w_2 = \eta \left( y^{(i)} - \hat{y}^{(i)} \right) x_2^{(i)}$$
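As an illustration of how such a simultaneous update might look in code, here is a rough sketch of a single perceptron update for one two-dimensional training sample (assuming NumPy; the variable names eta, x_i, and y_i and the example numbers are only illustrative, not the implementation we will develop later):

```python
import numpy as np

eta = 0.1                           # learning rate
w = np.array([0.0, -1.0, 1.0])      # current weights: w_0, w_1, w_2
x_i = np.array([2.0, 0.5])          # one 2D training sample (x_1, x_2)
y_i = 1                             # true class label

# Predict with the current weights: unit step over the net input
z = w[0] + np.dot(w[1:], x_i)       # z = 0 + (-1 * 2.0 + 1 * 0.5) = -1.5
y_hat = 1 if z >= 0.0 else -1       # -> -1, a misclassification

# Perceptron rule: all weights share the same error term and are
# updated simultaneously
update = eta * (y_i - y_hat)        # 0.1 * (1 - (-1)) = 0.2
w[0] += update                      # delta w_0
w[1:] += update * x_i               # delta w_1, delta w_2
print(w)                            # [ 0.2 -0.6  1.1]
```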
Before we implement the perceptron rule in Python, let us conduct a simple thought experiment to illustrate how beautifully simple this learning rule really is. In the two scenarios where the perceptron predicts the class label correctly, the weights remain unchanged:

$$\Delta w_j = \eta \left( -1 - (-1) \right) x_j^{(i)} = 0$$

$$\Delta w_j = \eta \left( 1 - 1 \right) x_j^{(i)} = 0$$
However, in the case of a wrong prediction, the weights are pushed in the direction of the positive or negative target class, respectively:

$$\Delta w_j = \eta \left( 1 - (-1) \right) x_j^{(i)} = \eta \, (2) \, x_j^{(i)}$$

$$\Delta w_j = \eta \left( -1 - 1 \right) x_j^{(i)} = \eta \, (-2) \, x_j^{(i)}$$
To get a better intuition for the multiplicative factor $x_j^{(i)}$, let us go through another simple example, where:

$$y^{(i)} = +1, \qquad \hat{y}^{(i)} = -1, \qquad \eta = 1$$

Let's assume that $x_j^{(i)} = 0.5$ and we misclassify this sample as -1. In this case, we would increase the corresponding weight by 1 so that the activation $x_j^{(i)} \times w_j$ will be more positive the next time we encounter this sample, and thus will be more likely to be above the threshold of the unit step function to classify the sample as +1:

$$\Delta w_j = \left( 1 - (-1) \right) 0.5 = (2) \, 0.5 = 1$$
The weight update is proportional to the value of $x_j^{(i)}$. For example, if we have another sample $x_j^{(i)} = 2$ that is incorrectly classified as -1, we would push the decision boundary by an even larger extent to classify this sample correctly the next time:

$$\Delta w_j = \left( 1 - (-1) \right) 2 = (2) \, 2 = 4$$
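The arithmetic of this thought experiment can be checked with a few lines of Python (purely illustrative, using the learning rate $\eta = 1$ assumed above):

```python
eta, y_true, y_pred = 1, 1, -1          # learning rate 1; sample misclassified as -1

for x_j in (0.5, 2):
    delta_w_j = eta * (y_true - y_pred) * x_j
    print(x_j, delta_w_j)               # 0.5 -> 1.0,  2 -> 4
```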
It is important to note that the convergence of the perceptron is only guaranteed if the two classes are linearly separable and the learning rate is sufficiently small. If the two classes can't be separated by a linear decision boundary, we can set a maximum number of passes over the training dataset (epochs) and/or a threshold for the number of tolerated misclassifications—the perceptron would never stop updating the weights otherwise:

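One way to make this concrete is to cap the number of passes over the training data, as in the following rough sketch (assuming NumPy arrays X and y with class labels in {-1, 1}; names such as train_perceptron and n_epochs are illustrative only, and the proper implementation follows in the next section):

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, n_epochs=10):
    """Fit perceptron weights on X (n_samples x n_features) and labels y in {-1, 1},
    stopping after at most n_epochs passes over the training data."""
    w = np.zeros(1 + X.shape[1])                # w_0 plus one weight per feature
    for _ in range(n_epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            y_hat = 1 if (w[0] + np.dot(w[1:], x_i)) >= 0.0 else -1
            update = eta * (y_i - y_hat)
            w[0] += update                      # update the bias weight w_0
            w[1:] += update * x_i               # update w_1, ..., w_m
            errors += int(update != 0.0)
        if errors == 0:                         # no misclassifications: converged
            break
    return w
```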
Now, before we jump into the implementation in the next section, let us summarize what we just learned in a simple figure that illustrates the general concept of the perceptron:

The preceding figure illustrates how the perceptron receives the inputs of a sample $\mathbf{x}$ and combines them with the weights $\mathbf{w}$ to compute the net input. The net input is then passed on to the activation function (here: the unit step function), which generates a binary output of -1 or +1, the predicted class label of the sample. During the learning phase, this output is used to calculate the error of the prediction and to update the weights.