Introduction to Neural Networks

the way machines think...

Your brain doesn't manufacture thoughts. Your thoughts shape neural networks.

It was fun predicting house prices from the area, wasn't it? But the real fun is in cooking the food rather than eating it. We've already predicted Housing Prices from the area, and now let's learn how Neural Networks work. Here's my previous post: Linear Regression: Housing Prices Prediction.

What is it?

A neural network is a collection of layers, where each layer holds a specific number of neurons that help in making predictions. The following is a basic diagram of a neural network. Neural Networks fall under the category of Supervised Machine Learning.

It contains an input layer, a hidden layer, and an output layer. The circular shapes in every layer are the neurons. The input layer takes the input, makes some calculations, and outputs the result, which is given as input to the hidden layer. The hidden layer takes that input, makes its own calculations, and outputs a result, which is in turn fed to the output layer to finally get the output. There are two important points to notice here (a rough code sketch of this flow follows the list):

  1. Though there are multiple lines going out of one neuron, they all carry the same value: the neuron's single output is fed to multiple neurons in the following layer.

  2. The hidden layer is called the hidden layer because you don't interact with it directly, i.e. it is neither the input layer nor the output layer.
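
To make that flow concrete, here's a rough sketch in Python. The layer sizes and the placeholder sum() calculation are made up for illustration; the real per-neuron math with weights and a bias comes in the next section.

```python
# A rough sketch of the layer-to-layer flow described above.
# The layer sizes and the placeholder "calculation" are made up;
# real neurons use weights and a bias (see the next section).

def layer(inputs, n_neurons):
    # Every neuron sees ALL of the previous layer's outputs (point 1)
    # and produces a single output of its own.
    return [sum(inputs) for _ in range(n_neurons)]  # placeholder calculation

x = [0.0, 1.0, 1.0]        # input layer: the raw inputs
hidden = layer(x, 4)       # hidden layer: 4 neurons, each fed every input
output = layer(hidden, 1)  # output layer: 1 neuron, fed every hidden output
print(output)              # [8.0]
```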

Let's zoom into the network.

The Neuron

The neuron, also called a perceptron, is a very important component of a Neural Net.

Assume we have a situation, "You want to go shopping 🛍️", and it depends on three factors: you are accompanied by your friends; you own a vehicle; the weather is good.

Now, these three factors contribute to your decision about going shopping. If the weather is bad, you don't walk out of your house, which means the weather carries more weight in your decision than the other two factors. These are called weights. A weight is a parameter that, in simple terms, measures the extent of importance an input has in making the decision.

Now, not everyone feels the same about shopping... some find it exciting, some find it boring. This is where the bias comes in. A person who is more interested in shopping will have a greater bias, and someone who finds it boring will have a smaller bias.

Since we already have historical data to work with, it helps us understand the user's interest in shopping, and so the bias is a very helpful element in getting the model's prediction to match the actual value.

💡
Here the weights fall between 0 and 1, whereas the bias can be any real number (in general, weights can take any real value too).

Let's take an example where the weights are as follows.

Weight of the decision that

  • you are accompanied by your friends: 0.7

  • you own a vehicle: 0.9

  • the weather is good: 0.6

and the inputs are as follows:

  • you are not accompanied by your friends, so: 0

  • you own a vehicle, so: 1

  • the weather is awesome, so: 1

Now you calculate the neuron's weighted sum:

$$\sum (\text{weight} \times \text{input}) + \text{bias}$$

which leaves us,

$$[ 0(0.7) + 1(0.9) + 1(0.6) ] - 1.3$$

I personally am not much into shopping, so I arbitrarily gave a bias of -1.3. Upon solving the above, we get a value of 0.2. I only go shopping if I run out of chips, which is very rare, so let's say I have a threshold of 3.9 to go shopping. As my weighted sum (which is 0.2) is less than my threshold, I will not go shopping.
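
Here's the same decision written out in Python; the weights, inputs, bias, and threshold are exactly the made-up numbers from the example above.

```python
# The shopping decision above, as code.

weights = [0.7, 0.9, 0.6]  # friends, vehicle, weather
inputs = [0, 1, 1]         # no friends, have a vehicle, weather is good
bias = -1.3                # not much into shopping
threshold = 3.9            # it takes a lot to get me out of the house

weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
print(weighted_sum)        # ~0.2
print("Go shopping!" if weighted_sum >= threshold else "Stay home.")  # Stay home.
```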

While training your model, the weights and biases are automatically adjusted to make predictions close to the actual values; how close they get is what we call accuracy. The threshold is sometimes defined by the model and sometimes by the programmer. You might be wondering how these weights are defined in the first place and how they are manipulated, so let's see how.

The Hidden Magic

The magic lies in the hidden layers of your neural network, which are simply the layers standing between your input and output layers. Once you have given the input to the network, it starts off with randomly assigned weights and biases, calculates the weighted sums, and passes them to the next layer. In this way, deep through your layers, the data becomes narrower and decisions become easier. The model adjusts the weights and biases with the help of Activators & Loss Functions, and this way of adjusting them is called Back Propagation.
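
As a minimal sketch of that forward pass (assuming NumPy, with made-up layer sizes), the weights and biases start out random, and training would then adjust them:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_inputs, n_neurons):
    # random starting weights and biases, to be adjusted by training
    return rng.normal(size=(n_inputs, n_neurons)), rng.normal(size=n_neurons)

def forward(x, w, b):
    # each neuron computes sum(weight * input) + bias
    return x @ w + b

x = np.array([0.0, 1.0, 1.0])  # the shopping inputs
w1, b1 = init_layer(3, 4)      # input layer -> hidden layer (4 neurons)
w2, b2 = init_layer(4, 1)      # hidden layer -> output layer (1 neuron)

output = forward(forward(x, w1, b1), w2, b2)
print(output)  # meaningless until Back Propagation adjusts the weights
```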

Wrapping Up

Neural Nets are simply mathematical fellows that decide the output based on how much you care about each input. I think that's enough knowledge for today, so I'll talk about Activators and Loss Functions in my next article.

Until next time, Sree Teja Dusi.
