
BLOG · 16/6/2025

Neural Networks

The very basics!!

Nilima Sharma

NEURAL NETWORKS

Neural networks are inspired by the structure of the human brain and form the basis of Deep Learning, a subset of Machine Learning. Given a particular type of data, they train on it repeatedly to recognize patterns and then predict outputs, or even generate new data of the same kind. Neural networks are made up of many neurons, which act as the basic processing units of the network.

A network has an input layer that takes in the data and an output layer that produces the prediction. The layers in between, known as the ‘hidden layers’, carry out most of the inner computations required by the network. The input is first fed to the first layer of neurons, and these neurons are connected to the next layer through connections called ‘channels’.

Every channel is assigned a numerical value known as its ‘weight’. The inputs are multiplied by their corresponding weights, and these products are summed and fed to the neurons of the next hidden layer. To this weighted sum, each neuron adds its own ‘bias’. The result is then passed through a threshold function called the ‘activation function’, whose output determines whether the neuron fires or not. An activated layer of neurons then transmits its values over the channels to the next layer. This mechanism, in which data moves forward through the layers of neurons over the channels, is called ‘forward propagation’.
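The steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular library's API: the weights, bias, and input values below are made up for the example.

```python
import numpy as np

def relu(z):
    # A common activation function: pass positive values, zero out negatives.
    return np.maximum(0.0, z)

def forward_layer(x, W, b, activation):
    # Multiply inputs by their weights, sum, add the bias, then activate.
    return activation(W @ x + b)

# Hypothetical 2-input, 3-neuron layer (illustrative values).
x = np.array([1.0, 2.0])
W = np.array([[0.5, -0.2],
              [0.1,  0.4],
              [-0.3, 0.8]])
b = np.array([0.1, 0.0, -0.1])

h = forward_layer(x, W, b, relu)   # the activations passed to the next layer
```

Each row of `W` holds the weights of the channels feeding one neuron, so one matrix-vector product computes the whole layer at once.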

In the output layer, the neuron with the highest value determines the prediction. These values are effectively the probabilities the network assigns to each possible output. An untrained network can easily make wrong predictions, so during training the correct output is fed in along with the input and compared with the predicted output. The magnitude and sign of the error tell us how far, and in which direction, the prediction is from the actual output.
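Turning the output layer's raw values into probabilities is commonly done with a softmax, and the prediction is then the neuron with the highest probability. A small sketch with made-up scores:

```python
import numpy as np

def softmax(z):
    # Shift by the maximum for numerical stability, then normalize
    # so the values sum to 1 and can be read as probabilities.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical raw scores from three output neurons (illustrative values).
scores = np.array([2.0, 0.5, 1.0])
probs = softmax(scores)
prediction = int(np.argmax(probs))   # index of the neuron with highest probability
```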

Next, ‘Back Propagation’ occurs: the error values and their signs are sent backward through the neural network, and the weights are adjusted based on them. This cycle of forward and backward propagation is performed iteratively over the set of training inputs, and continues until the predictions are close enough to the actual outcomes, at which point the training process ends.
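The forward/backward training cycle can be sketched for the simplest possible case: a single linear neuron trained with gradient descent on squared error. All data and values below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))       # synthetic inputs
true_w = np.array([2.0, -1.0])
y = X @ true_w                      # the "actual outputs" fed in during training

w = np.zeros(2)                     # the weights start untrained
lr = 0.1                            # learning rate: how strongly to adjust
for _ in range(200):
    pred = X @ w                    # forward propagation
    error = pred - y                # signed error vs. the actual output
    grad = X.T @ error / len(X)     # send the error backward as a gradient
    w -= lr * grad                  # adjust the weights

# After enough cycles, w is close to true_w and predictions match the targets.
```

Real networks repeat the same loop, but the backward step applies the chain rule through every layer rather than one matrix.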

Neural networks may take months or even years to train, depending on how accurately the outcome must be predicted.

Mathematical formulation: a feedforward neural network with L layers can be written as:

                f(x) = f_L(f_{L−1}(…f_1(x)))

Each layer is:

                f_l(x) = σ(W_l x + b_l)

Where:

-   W_l: weight matrix
-   b_l: bias vector
-   σ: activation function (e.g., ReLU, sigmoid)

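The composition f(x) = f_L(…f_1(x)) maps directly to a loop that applies one layer after another. A minimal sketch, with made-up weights and a sigmoid activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def network(x, layers):
    # f(x) = f_L(f_{L-1}(... f_1(x))): apply each layer in order.
    for W, b in layers:
        x = sigmoid(W @ x + b)      # f_l(x) = sigma(W_l x + b_l)
    return x

# Hypothetical 2 -> 3 -> 1 network (illustrative weights and biases).
layers = [
    (np.full((3, 2), 0.5), np.zeros(3)),
    (np.full((1, 3), 0.5), np.zeros(1)),
]
out = network(np.array([1.0, -1.0]), layers)   # a single value in (0, 1)
```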
TYPES OF NEURAL NETWORKS

ARTIFICIAL NEURAL NETWORK [ANN]:

It is a type of neural network consisting of multiple perceptrons, or neurons, at each layer. It is also called a Feed-Forward Neural Network because inputs are processed only in the ‘forward’ direction. It is the simplest variant of all, since information flows in one direction through the input nodes until it reaches the output node. An ANN may or may not have hidden layers, and fewer layers make its functioning easier to interpret.

-Advantages

  • Information is stored across the entire network.
  • They have the ability to work with incomplete knowledge.
  • They offer fault tolerance and have distributed memory.

-Disadvantages

  • They are heavily dependent on hardware.
  • They can sometimes behave in unexplained ways, making their results hard to trust.
  • There is no specific rule for determining the structure of an artificial neural network; an appropriate structure is found through experience and trial and error.

CONVOLUTIONAL NEURAL NETWORK [CNN]:

This type of neural network uses a variation of multilayered neurons and contains one or more convolutional layers, which can be either fully connected or pooled. These convolutional layers create feature maps that record how each rectangular region of the image responds to a filter; the feature maps are then passed on to non-linear processing.
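How a convolutional layer builds a feature map can be shown by hand: slide a small filter (kernel) over the image and record one value per position. The image and kernel below are tiny, made-up examples; the kernel responds to a vertical edge.

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image (stride 1, no padding) and record
    # the filter response at each position, producing a feature map.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Hypothetical 4x4 "image" with a vertical edge between columns 1 and 2.
image = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [1, 1, 0, 0]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)

fmap = conv2d(image, kernel)   # large values mark where the edge sits
```

Because the same kernel is reused at every position, the layer shares its weights across the whole image.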

-Advantages

  • They offer very high accuracy in image recognition problems.
  • They are capable of automatically detecting important features without human intervention.
  • They share weights across the image, reducing the number of parameters.

-Disadvantages

  • CNNs do not encode the position or orientation of objects.
  • They struggle to remain invariant to spatial transformations of the input data.
  • A lot of training data is required for them to work efficiently.

RECURRENT NEURAL NETWORK [RNN]:

This kind of neural network is more complex than the other two. RNNs save the output of their processing nodes and feed it back into the model, so information does not flow in one direction only. Each node acts as a ‘memory cell’ that carries its state forward through the sequence, and if the network makes an incorrect prediction, it self-corrects during back propagation and continues working towards the correct prediction.
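The ‘memory cell’ idea can be sketched with the classic recurrence h_t = tanh(W_x·x_t + W_h·h_{t-1} + b), where the previous hidden state is fed back in at every step. The sizes and weights below are arbitrary, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.5, size=(4, 3))   # input-to-hidden weights
W_h = rng.normal(scale=0.5, size=(4, 4))   # hidden-to-hidden (feedback) weights
b = np.zeros(4)

def rnn_step(x_t, h_prev):
    # The previous hidden state h_prev is fed back into the model,
    # so each step remembers what came before.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(4)                            # the memory starts empty
sequence = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.0, 1.0])]
for x_t in sequence:
    h = rnn_step(x_t, h)                   # state carries through time
```

Repeatedly multiplying by `W_h` is also where the vanishing and exploding gradient problems noted below come from.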

-Advantages

  • This type of neural network remembers information through time.
  • An RNN can also be used with convolutional layers to extend the effective pixel neighbourhood.

-Disadvantages

  • They suffer from vanishing and exploding gradient problems.
  • Training RNNs is quite hard.
  • They cannot process very long sequences.

APPLICATIONS

Facial Recognition: These days, cameras can actually recognize a person’s age based on their facial features. They can distinguish a face from the background and draw lines on the face through multiple points.
Forecasting: Neural Networks are trained to predict the probability of rainfall, stock rise or fall as well.
Music Composition: They can even be trained on music to recognize musical patterns, helping a neural network compose an entirely new tune.

UVCE,
K. R Circle,
Bengaluru 01