Backpropagation (BP), short for Error Backward Propagation, is a core technique used in training models for graph embeddings.
The BP algorithm encompasses two main stages:
- Forward Propagation: Input data is fed into the input layer of a neural network or model. It then passes through one or multiple hidden layers before generating output from the output layer.
- Backpropagation: The generated output is compared with the actual or expected value. Subsequently, the error is conveyed from the output layer through the hidden layers and back to the input layer. During this process, the weights of the model are adjusted using the gradient descent technique.
These iterative weight adjustments constitute the training process of the neural network. We will explain this further with a concrete example.
Preparations
Neural Network
Neural networks are typically composed of several essential components: an input layer, one or multiple hidden layers, and an output layer. Here, we present a simple example of a neural network architecture:
In this illustration, $x = (x_1, x_2, x_3)$ is the input vector containing 3 features, and $\hat{y}$ is the output. We have two neurons $h_1$ and $h_2$ in the hidden layer. The sigmoid activation function is applied in the output layer.
Furthermore, the connections between layers are characterized by the weights: $w_1$~$w_6$ are the weights between the input layer and the hidden layer, and $w_7$, $w_8$ are the weights between the hidden layer and the output layer. These weights are pivotal in the computations performed within the neural network.
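To make this layout concrete, below is a minimal sketch in Python of the forward computation it implies. The pairing of $w_1$~$w_3$ with $h_1$ and $w_4$~$w_6$ with $h_2$ matches the formulas given later in this article; the input and weight values in the demo call are arbitrary placeholders, not the values of this example.

```python
import math

def sigmoid(z):
    """Sigmoid activation used by the output neuron."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w):
    """Forward pass of the 3-2-1 network: x is a 3-feature input, w is a dict
    keyed w1..w8. The hidden neurons are linear; only the output neuron
    applies the sigmoid activation."""
    h1 = w["w1"] * x[0] + w["w2"] * x[1] + w["w3"] * x[2]
    h2 = w["w4"] * x[0] + w["w5"] * x[1] + w["w6"] * x[2]
    z = w["w7"] * h1 + w["w8"] * h2      # weighted sum entering the output neuron
    return h1, h2, z, sigmoid(z)         # sigmoid produces the prediction y_hat

# Placeholder input and weights, for illustration only.
x = [1.0, 0.5, 2.0]
w = {"w1": 0.3, "w2": 0.1, "w3": 0.9, "w4": 0.5, "w5": 0.2, "w6": 0.4,
     "w7": 0.8, "w8": 0.2}
print(forward(x, w))
```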
Activation Function
Activation functions empower the neural network to conduct non-linear modeling. Without activation functions, the model can only express linear mappings, which limits its capability. A diverse range of activation functions exists, each serving a unique purpose. The sigmoid function used in this context is defined by the following formula:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

It squashes any real-valued input into the range $(0, 1)$.
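As a small, self-contained sketch, the sigmoid and its derivative $\sigma'(z) = \sigma(z)\,(1-\sigma(z))$, which backpropagation will reuse later, can be written as:

```python
import math

def sigmoid(z):
    """Sigmoid activation: squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    """Derivative of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z)),
    which backpropagation reuses later."""
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(0.0))     # 0.5 -- the curve's midpoint
print(sigmoid(2.28))    # ~0.907, matching the first sample in the table below
```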
Initial Weights
The weights are initialized with random values. Let's assume the initial weights are as follows:
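As a minimal sketch of such an initialization: the uniform range and the seed below are arbitrary illustrative choices, not the actual starting values of this example.

```python
import random

random.seed(42)  # fixed seed just to make the sketch reproducible

# Randomly initialize the eight weights of the 3-2-1 network.
# The uniform range is an arbitrary illustrative choice.
weights = {f"w{i}": round(random.uniform(-1.0, 1.0), 2) for i in range(1, 9)}
print(weights)
```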
Training Samples
Let's consider three sets of training samples as outlined below, where the superscript $(k)$ indicates the order of the sample:
- Inputs: $x^{(1)}$, $x^{(2)}$, $x^{(3)}$
- Outputs: $y^{(1)} = 0.64$, $y^{(2)} = 0.52$, $y^{(3)} = 0.36$
The primary objective of the training process is to adjust the model's parameters (weights) so that the predicted/computed output ($\hat{y}$) closely aligns with the actual output ($y$) when the input ($x$) is provided.
Forward Propagation
Input Layer → Hidden Layer
Neurons $h_1$ and $h_2$ are calculated by:

$$h_1 = w_1 x_1 + w_2 x_2 + w_3 x_3$$
$$h_2 = w_4 x_1 + w_5 x_2 + w_6 x_3$$
Hidden Layer → Output Layer
The output $\hat{y}$ is calculated by:

$$z = w_7 h_1 + w_8 h_2, \qquad \hat{y} = \sigma(z)$$
Below is the calculation of the 3 samples:
| Sample | $h_1$ | $h_2$ | $z$ | $\hat{y}$ | $y$ |
|---|---|---|---|---|---|
| $x^{(1)}$ | 2.4 | 1.8 | 2.28 | 0.907 | 0.64 |
| $x^{(2)}$ | 0.75 | 1.2 | 0.84 | 0.698 | 0.52 |
| $x^{(3)}$ | 1.35 | 1.4 | 1.36 | 0.796 | 0.36 |
Apparently, the three computed outputs ($\hat{y}$) are very different from the expected outputs ($y$).
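The hidden-to-output part of this table can be reproduced with a short sketch. The $h_1$, $h_2$, and $y$ values are taken from the table; $w_7 = 0.8$ and $w_8 = 0.2$ are the output-layer weights implied by the $z$ column (inferred here for illustration, not quoted from the original weight list).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# (h1, h2, expected y) for the three samples, taken from the table above.
samples = [(2.4, 1.8, 0.64), (0.75, 1.2, 0.52), (1.35, 1.4, 0.36)]

# Output-layer weights inferred from the z column of the table.
w7, w8 = 0.8, 0.2

for h1, h2, y in samples:
    z = w7 * h1 + w8 * h2     # weighted sum entering the output neuron
    y_hat = sigmoid(z)        # predicted output
    print(f"z = {z:.2f}, y_hat = {y_hat:.3f}, y = {y}")
# Prints z = 2.28/0.84/1.36 and y_hat ~ 0.907/0.698/0.796, matching the table.
```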
Backpropagation
Loss Function
A loss function is used to quantify the error or disparity between the model's outputs and the expected outputs. It is also referred to as the objective function or cost function. Let's use the mean squared error (MSE) as the loss function here:

$$E = \frac{1}{n}\sum_{k=1}^{n}\left(y^{(k)} - \hat{y}^{(k)}\right)^2$$

where $n$ is the number of samples. Calculate the error of this round of forward propagation as:

$$E = \frac{1}{3}\left[(0.64-0.907)^2 + (0.52-0.698)^2 + (0.36-0.796)^2\right] \approx 0.098$$
A smaller value of the loss function corresponds to higher model accuracy. The fundamental goal of model training is to minimize the value of the loss function to the greatest extent possible.
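Given the predicted and expected outputs from the table, the error of this round can be checked with a few lines (assuming the plain MSE above, with no extra 1/2 factor):

```python
# Predicted vs. expected outputs from the forward-propagation table.
y_hat = [0.907, 0.698, 0.796]
y     = [0.64, 0.52, 0.36]

# Mean squared error over the n = 3 samples.
mse = sum((yi - yhi) ** 2 for yi, yhi in zip(y, y_hat)) / len(y)
print(round(mse, 3))   # ~0.098
```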
Consider the inputs and outputs as constants, while regarding the weights as variables within the loss function. The objective is then to find the weights that result in the lowest value of the loss function - this is where the gradient descent technique comes into play.
In this example, batch gradient descent (BGD) is used, i.e., all samples are involved in the calculation of the gradient, and the weights are adjusted with a fixed learning rate $\eta$.
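A minimal sketch of one BGD update for a single weight: the starting weight, the three per-sample gradients, and the learning rate of 0.5 below are all illustrative placeholders, since the actual learning rate of this example is not shown above.

```python
def bgd_update(weight, per_sample_grads, learning_rate):
    """Batch gradient descent: average the gradient over all samples,
    then step the weight against that average gradient."""
    grad = sum(per_sample_grads) / len(per_sample_grads)
    return weight - learning_rate * grad

# Illustrative numbers only: a weight of 0.8, three per-sample gradients,
# and a placeholder learning rate of 0.5.
print(bgd_update(0.8, [0.108, 0.056, 0.191], 0.5))
```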
Output Layer → Hidden Layer
Adjust the weights $w_7$ and $w_8$ respectively.
Calculate the partial derivative of $E$ with respect to $w_7$ with the chain rule:

$$\frac{\partial E}{\partial w_7} = \frac{\partial E}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial z} \cdot \frac{\partial z}{\partial w_7}$$

where, for each sample,

$$\frac{\partial E}{\partial \hat{y}} = 2(\hat{y} - y), \qquad \frac{\partial \hat{y}}{\partial z} = \hat{y}(1 - \hat{y}), \qquad \frac{\partial z}{\partial w_7} = h_1$$

Substitute the values of $\hat{y}$, $y$, and $h_1$ of each sample (from the table above) into these expressions to obtain the per-sample derivatives.

Since all samples are involved in computing the partial derivative, when calculating $\frac{\partial E}{\partial w_7}$ and $\frac{\partial E}{\partial w_8}$, we take the sum of these per-sample derivatives across all samples and then obtain the average.

Therefore, $w_7$ is updated to $w_7 - \eta \cdot \frac{\partial E}{\partial w_7}$.
The weight $w_8$ can be adjusted in a similar way by calculating the partial derivative of $E$ with respect to $w_8$, for which $\frac{\partial z}{\partial w_8} = h_2$. In this round, $w_8$ is likewise updated to $w_8 - \eta \cdot \frac{\partial E}{\partial w_8}$.
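The per-sample derivatives and the resulting updates for $w_7$ and $w_8$ can be worked out numerically from the table. The sketch below assumes the plain MSE above (so the per-sample factor is $2(\hat{y}-y)$), the inferred starting weights $w_7 = 0.8$, $w_8 = 0.2$, and a placeholder learning rate of 0.5; the last value is not quoted from the original example.

```python
# (h1, h2, y_hat, y) for the three samples, taken from the forward-pass table.
rows = [(2.4, 1.8, 0.907, 0.64),
        (0.75, 1.2, 0.698, 0.52),
        (1.35, 1.4, 0.796, 0.36)]

eta = 0.5          # placeholder learning rate (the article's value is not shown)
w7, w8 = 0.8, 0.2  # output-layer weights inferred from the table's z column

# Chain rule per sample: dE/dw7 = 2(y_hat - y) * y_hat(1 - y_hat) * h1
#                        dE/dw8 = 2(y_hat - y) * y_hat(1 - y_hat) * h2
grads_w7, grads_w8 = [], []
for h1, h2, y_hat, y in rows:
    delta = 2 * (y_hat - y) * y_hat * (1 - y_hat)  # shared part of the chain rule
    grads_w7.append(delta * h1)
    grads_w8.append(delta * h2)

# Batch gradient descent: average over samples, then update.
dE_dw7 = sum(grads_w7) / len(grads_w7)
dE_dw8 = sum(grads_w8) / len(grads_w8)
print(round(w7 - eta * dE_dw7, 3), round(w8 - eta * dE_dw8, 3))
```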
Hidden Layer → Input Layer
Adjust the weights $w_1$~$w_6$ respectively.
Calculate the partial derivative of $E$ with respect to $w_1$ with the chain rule:

$$\frac{\partial E}{\partial w_1} = \frac{\partial E}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial z} \cdot \frac{\partial z}{\partial h_1} \cdot \frac{\partial h_1}{\partial w_1}$$

We already computed $\frac{\partial E}{\partial \hat{y}}$ and $\frac{\partial \hat{y}}{\partial z}$; below are the latter two:

$$\frac{\partial z}{\partial h_1} = w_7, \qquad \frac{\partial h_1}{\partial w_1} = x_1$$

Substitute the values of each sample into these expressions, then sum and average the per-sample derivatives as before.

Therefore, $w_1$ is updated to $w_1 - \eta \cdot \frac{\partial E}{\partial w_1}$.
The remaining weights $w_2$~$w_6$ can be adjusted in a similar way by calculating the partial derivative of $E$ with respect to each of them; for $w_4$~$w_6$ the path runs through the neuron $h_2$, so $\frac{\partial z}{\partial h_2} = w_8$ is used instead and the last factor is the corresponding input feature. In this round, each of them is updated by the same rule $w \leftarrow w - \eta \cdot \frac{\partial E}{\partial w}$.
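For a hidden-layer weight, the chain simply gains these two extra factors. Below is a sketch for $w_1$: the $\hat{y}$ and $y$ values come from the table and $w_7 = 0.8$ is the inferred output weight, but the $x_1$ values are purely hypothetical placeholders, since the sample inputs are not listed in this article.

```python
# Sketch of the chain rule for one hidden-layer weight (w1):
#   dE/dw1 = dE/dy_hat * dy_hat/dz * dz/dh1 * dh1/dw1
#          = 2(y_hat - y) * y_hat(1 - y_hat) * w7 * x1
# y_hat and y come from the forward-pass table; w7 = 0.8 is the inferred
# output weight; the x1 values below are HYPOTHETICAL placeholders.
rows = [(0.907, 0.64, 1.0),    # (y_hat, y, hypothetical x1) for sample 1
        (0.698, 0.52, 0.5),
        (0.796, 0.36, 1.5)]
w7 = 0.8
eta = 0.5  # placeholder learning rate

grads = [2 * (y_hat - y) * y_hat * (1 - y_hat) * w7 * x1
         for y_hat, y, x1 in rows]
dE_dw1 = sum(grads) / len(grads)   # batch average over the three samples
print(round(dE_dw1, 4))            # would then be applied as w1 -= eta * dE_dw1
```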
Training Iterations
Apply the adjusted weights to the model and proceed with forward propagation using the same three samples. In this iteration, the resulting error is reduced.
The backpropagation algorithm iteratively performs the forward and backward propagation steps to train the model. This process continues until the designated number of training rounds or the time limit is reached, or the error decreases below a predefined threshold.
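To see the whole loop end to end, here is a compact sketch that repeats forward propagation and weight updates for a few rounds and prints the error round by round. To stay close to the numbers shown above, it trains only the two output-layer weights, treating the $h_1$, $h_2$ values from the table as fixed inputs; the full algorithm would update all eight weights each round. The starting weights are the inferred $w_7 = 0.8$, $w_8 = 0.2$, and the learning rate is a placeholder.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# (h1, h2, y): hidden-layer values and expected outputs from the table above.
samples = [(2.4, 1.8, 0.64), (0.75, 1.2, 0.52), (1.35, 1.4, 0.36)]

w7, w8 = 0.8, 0.2   # output-layer weights inferred from the table
eta = 0.5           # placeholder learning rate

for round_no in range(1, 11):        # a few illustrative rounds
    # Forward propagation (hidden layer -> output layer only in this sketch).
    preds = [(h1, h2, sigmoid(w7 * h1 + w8 * h2), y) for h1, h2, y in samples]
    error = sum((y - y_hat) ** 2 for _, _, y_hat, y in preds) / len(preds)
    print(f"round {round_no}: error = {error:.4f}")

    # Backpropagation for w7 and w8, averaged over the batch (BGD).
    g7 = sum(2 * (y_hat - y) * y_hat * (1 - y_hat) * h1
             for h1, _, y_hat, y in preds) / len(preds)
    g8 = sum(2 * (y_hat - y) * y_hat * (1 - y_hat) * h2
             for _, h2, y_hat, y in preds) / len(preds)
    w7 -= eta * g7
    w8 -= eta * g8
# In practice the loop stops at a round limit, a time limit, or once the
# error falls below a predefined threshold.
```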