
Manually Calculate a Neural Network Output and Weights: A Step-by-Step Guide Using the Neural Net Formula

Understanding the intricacies of neural networks is essential for anyone diving into the world of AI. One of the best ways to grasp how a neural network functions is to manually calculate the output and weights. While software tools like TensorFlow and PyTorch automate these processes, doing it by hand gives you a clearer understanding of the neural net formula and how different elements interact.

In this post, we’ll walk you through the steps to manually calculate a simple neural network’s output and update its weights using the basic neural net formula. By the end of this guide, you’ll have a better understanding of how the formula works, which will serve as a foundation for more complex neural network models.

1. The Basics: What is a Neural Network?

Before diving into the calculations of the Neural Net Formula, it’s essential to understand what a neural network is. In essence, a neural network is a series of algorithms that attempt to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. This process involves layers of neurons (or nodes), each connected by weights. The output of each neuron is determined by applying an activation function to a weighted sum of its inputs.
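In symbols, a single neuron with inputs x_i, weights w_i, and bias b computes its output a as:

a = \sigma\left(\sum_i w_i x_i + b\right)

where \sigma is the activation function. This is exactly the pattern we will apply, number by number, in the example below.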

If you’re new to neural networks, you can check out our beginner’s guide to neural networks on The Robot Camp. Additionally, this Wikipedia page on neural networks provides a comprehensive overview.

2. A Simple Neural Network Example for Understanding the Neural Net Formula

Let’s consider a basic neural network with:

  • 2 input neurons
  • 1 hidden layer with 2 neurons
  • 1 output neuron

We’ll assume the following values (collected in the short code sketch after this list):

  • Inputs: x_1 = 0.5, x_2 = 0.2
  • Weights between the input and hidden layer: w_{11} = 0.4, w_{12} = 0.3, w_{21} = 0.6, w_{22} = 0.7
  • Weights between the hidden and output layer: w_{h1} = 0.2, w_{h2} = 0.5
  • Biases: b_1 = 0.1, b_2 = 0.2, b_o = 0.3
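To make the arithmetic easy to follow along with, here is a minimal Python sketch that simply stores these given values in variables (the names are just a convenient choice that mirrors the subscripts above):

```python
# Given values for the worked example
x1, x2 = 0.5, 0.2            # inputs
w11, w12 = 0.4, 0.3          # weights from input x1 to hidden neurons 1 and 2
w21, w22 = 0.6, 0.7          # weights from input x2 to hidden neurons 1 and 2
wh1, wh2 = 0.2, 0.5          # weights from hidden neurons 1 and 2 to the output
b1, b2, bo = 0.1, 0.2, 0.3   # biases for hidden neuron 1, hidden neuron 2, and the output
```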

3. Step-by-Step Calculation Using the Neural Net Formula

Step 1: Calculate the Weighted Sum for the Hidden Layer Using the Neural Net Formula

For each neuron in the hidden layer, the weighted sum is calculated as:

z_1 = (x_1 \times w_{11}) + (x_2 \times w_{21}) + b_1

z_2 = (x_1 \times w_{12}) + (x_2 \times w_{22}) + b_2

Substituting the values:

z_1 = (0.5 \times 0.4) + (0.2 \times 0.6) + 0.1 = 0.42

z_2 = (0.5 \times 0.3) + (0.2 \times 0.7) + 0.2 = 0.49
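Using the variables from the sketch above, the same two sums in Python (results shown up to floating-point rounding):

```python
# Weighted sums for the two hidden neurons
z1 = x1 * w11 + x2 * w21 + b1   # 0.5*0.4 + 0.2*0.6 + 0.1 = 0.42
z2 = x1 * w12 + x2 * w22 + b2   # 0.5*0.3 + 0.2*0.7 + 0.2 = 0.49
print(round(z1, 2), round(z2, 2))   # 0.42 0.49
```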

Step 2: Apply the Activation Function

Let’s use the sigmoid activation function, which is defined as:

\sigma(z) = \frac{1}{1 + e^{-z}}

Applying this to each neuron in the hidden layer:

h_1 = \sigma(z_1) = \frac{1}{1 + e^{-0.42}} \approx 0.603

h_2 = \sigma(z_2) = \frac{1}{1 + e^{-0.49}} \approx 0.620
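A small helper function for the sigmoid keeps the running sketch tidy; continuing with the variables defined earlier:

```python
import math

def sigmoid(z):
    """Logistic sigmoid: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

h1 = sigmoid(z1)   # sigmoid(0.42) ≈ 0.603
h2 = sigmoid(z2)   # sigmoid(0.49) ≈ 0.620
```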

Step 3: Calculate the Output Neuron’s Weighted Sum

Now, we calculate the weighted sum for the output neuron:

z_o = (h_1 \times w_{h1}) + (h_2 \times w_{h2}) + b_o

Substituting the values:

z_o = (0.603 \times 0.2) + (0.620 \times 0.5) + 0.3 \approx 0.731
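In code, this is simply one more weighted sum, this time over the hidden activations:

```python
# Weighted sum at the output neuron
zo = h1 * wh1 + h2 * wh2 + bo   # 0.603*0.2 + 0.620*0.5 + 0.3 ≈ 0.731
```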

Step 4: Apply the Activation Function to the Output

Finally, apply the sigmoid function to the output neuron:

y = \sigma(z_o) = \frac{1}{1 + e^{-0.731}} \approx 0.675

This is the final output of the neural network.
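The last line of the running sketch reproduces the same number:

```python
y = sigmoid(zo)      # sigmoid(0.731) ≈ 0.675
print(round(y, 3))   # ≈ 0.675
```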

4. Updating Weights Using Gradient Descent with the Neural Net Formula

Once you have the output, the next step is to adjust the weights to minimize the error. This process is known as backpropagation, and it uses gradient descent to update the weights. For a detailed guide on how to implement gradient descent manually, check out our advanced tutorial on backpropagation.
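As a small taste of what that involves, here is a minimal, illustrative sketch of a single gradient-descent update for the hidden-to-output weights, assuming a squared-error loss 0.5 * (y - target)**2 and the sigmoid output computed above; the target value and learning rate are made up purely for illustration:

```python
# One illustrative gradient-descent step for the hidden -> output weights
target = 1.0   # assumed desired output (hypothetical)
lr = 0.1       # assumed learning rate (hypothetical)

# For a sigmoid output with squared-error loss E = 0.5 * (y - target)**2,
# the error signal at the output neuron is dE/dzo:
delta_o = (y - target) * y * (1 - y)

# Gradients with respect to the output-layer parameters, then the update
wh1 -= lr * delta_o * h1   # dE/dwh1 = delta_o * h1
wh2 -= lr * delta_o * h2   # dE/dwh2 = delta_o * h2
bo  -= lr * delta_o        # dE/dbo  = delta_o
```

Extending this to the input-to-hidden weights means propagating delta_o backwards through the hidden layer, which is what the full backpropagation tutorial walks through.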

5. Conclusion: Mastering the Neural Net Formula

Understanding the neural net formula by manually calculating the output and adjusting the weights is a powerful exercise for anyone looking to deepen their understanding of AI. Although most of this process is automated in real-world applications, having a solid grasp of the fundamentals will enable you to better understand and troubleshoot complex neural network models.

If you’re interested in learning more about neural networks, AI, and robotics, explore our full range of tutorials. To stay updated on the latest developments in AI, don’t forget to check our news section.

Learn, build, and innovate at The Robot Camp, where the future of technology meets passion.
