When defining a layer of an artificial neural network (ANN), it is essential to understand how weights and biases interact with input data to produce the desired outputs. Defining such a layer does not involve multiplying the input data by the sum of the weights and biases. Instead, it involves a matrix multiplication by the weights followed by the addition of a separate bias term.
To elucidate this, consider a single layer in an ANN. This layer performs a linear transformation followed by a non-linear activation function. The linear transformation can be represented mathematically as:
\[ Z = W \cdot X + b \]

where:
– \( Z \) is the resulting matrix after the linear transformation.
– \( W \) is the weight matrix.
– \( X \) is the input data matrix.
– \( b \) is the bias vector.
The weight matrix \( W \) contains the parameters that the model learns during training. Each element in \( W \) represents the strength of the connection between a neuron in the previous layer and a neuron in the current layer. The input data matrix \( X \) consists of the features of the input data. The bias vector \( b \) allows the model to fit the data better by providing each neuron with a trainable offset.

The operation \( W \cdot X \) denotes the matrix multiplication between the weight matrix \( W \) and the input data matrix \( X \). This multiplication results in a new matrix where each element is a weighted sum of the input features. After this multiplication, the bias vector \( b \) is added to each column of the resulting matrix \( W \cdot X \). This addition is performed element-wise, meaning that each element of the bias vector \( b \) is added to the corresponding element in each column of the matrix \( W \cdot X \).
It is important to note that the bias is not multiplied by the input data matrix. Instead, it is added after the matrix multiplication. This distinction is important because the bias term allows each neuron to have a baseline value that it can adjust independently of the input data. Without the bias term, the model would be constrained to pass through the origin, limiting its flexibility and potentially reducing its ability to fit the data accurately.
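A minimal numeric illustration of this point (the values are chosen arbitrarily for demonstration): without a bias, a zero input always produces a zero output, so the mapping is forced through the origin; the bias lifts it off the origin.

```python
import numpy as np

W = np.array([[0.5]])    # a single weight
b = np.array([[0.3]])    # a single bias
x0 = np.array([[0.0]])   # zero input

# Without a bias, the output at x = 0 is forced to be 0
print(W @ x0)            # [[0.]]

# With a bias, the neuron has a trainable offset even at x = 0
print(W @ x0 + b)        # [[0.3]]
```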
To provide a concrete example, consider a simple neural network layer with the following parameters:
– Weight matrix \( W \) of shape (3, 2):
![Rendered by QuickLaTeX.com \[ W = \begin{bmatrix} 0.2 & 0.4 \\ 0.6 & 0.8 \\ 1.0 & 1.2 \\ \end{bmatrix} \]](https://dev-temp3.eitca.eu/wp-content/ql-cache/quicklatex.com-d74f746cfef7a1b38a461512011fb8aa_l3.png)
– Input data matrix \( X \) of shape (2, 4):

\[ X = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ \end{bmatrix} \]
– Bias vector \( b \) of shape (3, 1):
![Rendered by QuickLaTeX.com \[ b = \begin{bmatrix} 0.1 \\ 0.2 \\ 0.3 \\ \end{bmatrix} \]](https://dev-temp3.eitca.eu/wp-content/ql-cache/quicklatex.com-a20d6c882786d3d9d5115615b2a67c61_l3.png)
First, we perform the matrix multiplication \( W \cdot X \):
![Rendered by QuickLaTeX.com \[ W \cdot X = \begin{bmatrix} 0.2 & 0.4 \\ 0.6 & 0.8 \\ 1.0 & 1.2 \\ \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ \end{bmatrix} = \begin{bmatrix} 0.2 \cdot 1 + 0.4 \cdot 5 & 0.2 \cdot 2 + 0.4 \cdot 6 & 0.2 \cdot 3 + 0.4 \cdot 7 & 0.2 \cdot 4 + 0.4 \cdot 8 \\ 0.6 \cdot 1 + 0.8 \cdot 5 & 0.6 \cdot 2 + 0.8 \cdot 6 & 0.6 \cdot 3 + 0.8 \cdot 7 & 0.6 \cdot 4 + 0.8 \cdot 8 \\ 1.0 \cdot 1 + 1.2 \cdot 5 & 1.0 \cdot 2 + 1.2 \cdot 6 & 1.0 \cdot 3 + 1.2 \cdot 7 & 1.0 \cdot 4 + 1.2 \cdot 8 \\ \end{bmatrix} = \begin{bmatrix} 2.2 & 2.8 & 3.4 & 4.0 \\ 4.6 & 6.0 & 7.4 & 8.8 \\ 7.0 & 9.2 & 11.4 & 13.6 \\ \end{bmatrix} \]](https://dev-temp3.eitca.eu/wp-content/ql-cache/quicklatex.com-5068e560d57e8518b029990ddbccd97d_l3.png)
Next, we add the bias vector \( b \) to each column of the resulting matrix:
![Rendered by QuickLaTeX.com \[ Z = \begin{bmatrix} 2.2 & 2.8 & 3.4 & 4.0 \\ 4.6 & 6.0 & 7.4 & 8.8 \\ 7.0 & 9.2 & 11.4 & 13.6 \\ \end{bmatrix} + \begin{bmatrix} 0.1 \\ 0.2 \\ 0.3 \\ \end{bmatrix} = \begin{bmatrix} 2.3 & 2.9 & 3.5 & 4.1 \\ 4.8 & 6.2 & 7.6 & 9.0 \\ 7.3 & 9.5 & 11.7 & 13.9 \\ \end{bmatrix} \]](https://dev-temp3.eitca.eu/wp-content/ql-cache/quicklatex.com-2f09b28c798e7a8a1e246b1b28fe3942_l3.png)
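The worked example above can be reproduced in a few lines of NumPy, whose broadcasting rules add the (3, 1) bias vector to every column of the product automatically:

```python
import numpy as np

# Weight matrix W, shape (3, 2): 3 neurons, 2 input features
W = np.array([[0.2, 0.4],
              [0.6, 0.8],
              [1.0, 1.2]])

# Input data X, shape (2, 4): 2 features, 4 samples (one per column)
X = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]], dtype=float)

# Bias b, shape (3, 1): one trainable offset per neuron
b = np.array([[0.1], [0.2], [0.3]])

# Linear transformation: matrix product, then the bias is
# broadcast across (added to) each column of the result
Z = W @ X + b
print(Z)
```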
This final matrix \( Z \)
represents the output of the linear transformation for the given input data. The next step in a neural network layer typically involves applying an activation function to this output matrix. Common activation functions include the sigmoid function, the hyperbolic tangent (tanh) function, and the rectified linear unit (ReLU) function. These functions introduce non-linearity into the model, enabling it to learn more complex patterns.
For instance, applying the ReLU activation function to the matrix \( Z \) would result in:

\[ A = \mathrm{ReLU}(Z) = \max(0, Z) \]

Since every entry of \( Z \) in this example is positive, ReLU leaves the matrix unchanged.
Thus, the output matrix \( A \) would be:
![Rendered by QuickLaTeX.com \[ A = \begin{bmatrix} 2.3 & 2.9 & 3.5 & 4.1 \\ 4.8 & 6.2 & 7.6 & 9.0 \\ 7.3 & 9.5 & 11.7 & 13.9 \\ \end{bmatrix} \]](https://dev-temp3.eitca.eu/wp-content/ql-cache/quicklatex.com-d4e82957c18635573f88e975d1112d9a_l3.png)
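In code, ReLU is simply an element-wise maximum with zero. A short NumPy sketch, reusing the \( Z \) computed above, confirms that all-positive entries pass through unchanged while negative entries would be zeroed:

```python
import numpy as np

# Pre-activation output Z from the worked example
Z = np.array([[2.3, 2.9, 3.5, 4.1],
              [4.8, 6.2, 7.6, 9.0],
              [7.3, 9.5, 11.7, 13.9]])

# ReLU zeroes negative entries and leaves positive ones unchanged
A = np.maximum(0.0, Z)

# Every entry of Z is positive here, so A equals Z
print(np.array_equal(A, Z))  # True

# With a negative entry, the clipping effect becomes visible
print(np.maximum(0.0, np.array([-1.5, 0.0, 2.0])))  # [0. 0. 2.]
```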
In TensorFlow, defining a layer with biases included is straightforward. TensorFlow provides high-level APIs such as `tf.keras.layers.Dense` that handle the creation of weights and biases, as well as their application to the input data. For example, to define a dense layer with 3 units and biases included, one would use the following code:
```python
import tensorflow as tf

# Define a dense layer with 3 units and a bias vector
dense_layer = tf.keras.layers.Dense(units=3, use_bias=True)

# Example input data: 2 samples with 2 features each
input_data = tf.constant([[1.0, 2.0], [3.0, 4.0]], dtype=tf.float32)

# Applying the dense layer to the input data
output = dense_layer(input_data)
print(output)
```
In this example, the `Dense` layer automatically initializes the weight matrix and bias vector. When the `input_data` is passed through the layer, TensorFlow performs the matrix multiplication and bias addition internally. The resulting output is the transformed data, ready for further processing or for use in subsequent layers of the neural network.
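One convention worth noting: `tf.keras.layers.Dense` takes inputs with samples as rows, of shape (batch, features), and computes `inputs @ kernel + bias` with a kernel of shape (features, units). That is the transpose of the \( Z = W \cdot X + b \) layout used above, where samples are columns. A NumPy sketch of the equivalence:

```python
import numpy as np

W = np.array([[0.2, 0.4],
              [0.6, 0.8],
              [1.0, 1.2]])                   # (units, features), as in the text
X = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]], dtype=float)    # (features, samples)
b = np.array([0.1, 0.2, 0.3])                # (units,)

# Dense-style layout: samples as rows, kernel = W.T of shape (features, units)
inputs = X.T                                 # (samples, features)
kernel = W.T                                 # (features, units)
outputs = inputs @ kernel + b                # what Dense computes internally

# Same numbers as Z = W @ X + b, just transposed
Z = W @ X + b.reshape(-1, 1)
print(np.allclose(outputs, Z.T))             # True
```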
Understanding the role of weights and biases is fundamental to grasping how neural networks learn and make predictions. The weight matrix captures the relationships between input features and the output, while the bias vector allows each neuron to adjust its output independently of the input. This combination of weights and biases enables neural networks to approximate complex functions and perform tasks such as classification, regression, and more.