BLOG · 14/9/2025
The ML tasks were updated during my Level 0 coursework, so I have written up a separate report on them.
Objective: To learn how to build the simplest ML model, Linear Regression.
We are given a number of data points such that no single line can pass through all of them, but we can find a best-fit line: one that keeps the distance between each data point's value and the line's value at that point as small as possible.
So regression is all about reducing that error as far as possible. A well-fitted line can then be used for prediction, and it does a pretty good job of it.
Regression is the simplest prediction algorithm.
In order to truly understand it, we must dive into the math.
We already said:
We want to draw a straight line through a bunch of points.
That line is y = ωx + b.
We first just guess some slope (ω₁) and intercept (b₁).
Now here’s the problem:
When we put that line on the graph, the line won’t pass exactly through every point. Some points are above the line, some are below it.
The gap between where the line predicts a point should be and where it actually is: that is the error for that point.
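For a single point, the error is just that gap between the line's prediction and the actual value. A minimal sketch (the guessed slope, intercept, and data point are made up for illustration):

```python
# Hypothetical guessed line: y = w*x + b with w = 2, b = 1
w, b = 2.0, 1.0

x, actual_y = 3.0, 8.5            # one made-up data point
predicted_y = w * x + b           # the line's prediction at x: 7.0
error = actual_y - predicted_y    # positive -> the point lies above the line
print(error)                      # 1.5
```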
But we don’t just want to know the error of one point. We want to know: "How bad is my line overall?"
That’s where the Loss Function comes in.
Small loss → line fits well.
Big loss → line fits poorly.
Loss Function = a “mistake calculator” that tells us how far off our line is from the real data, measured as the gap between the predicted y and the actual y.
There are two commonly used loss functions: Mean Absolute Error (MAE) and Mean Squared Error (MSE).
So why is Mean Squared Error used more commonly? The reason is simple and will be explained below.
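To make the “overall badness” concrete, here is a sketch of both losses on made-up data, assuming a guessed line y = 2x + 1:

```python
# Made-up data points and a guessed line y = 2x + 1
xs    = [1.0, 2.0, 3.0, 4.0]
ys    = [3.5, 4.5, 7.5, 8.5]          # actual values
preds = [2 * x + 1 for x in xs]       # line's predictions: 3, 5, 7, 9

n = len(xs)
mae = sum(abs(y - p) for y, p in zip(ys, preds)) / n    # Mean Absolute Error
mse = sum((y - p) ** 2 for y, p in zip(ys, preds)) / n  # Mean Squared Error
print(mae, mse)  # 0.5 0.25
```

Both numbers summarize the fit in one value; the difference between them only matters once we start differentiating, as explained below.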
An optimizer algorithm is used to reduce the error. It is built entirely on the loss function.
What does it optimize? It optimizes the parameters, i.e. ω and b in this case.
Let us go with the simplest optimizer algorithm, Gradient Descent.
In math, whenever we want to find the lowest or highest point of a function, we use differentiation. The same idea applies in linear regression: we want to find the lowest point of the loss function so that our line fits the data as well as possible. Since both the slope (ω) and the intercept (b) affect the line, we use partial differentiation, which lets us see how the loss changes with respect to each of them.

For the loss function, we prefer Mean Squared Error (MSE) over Mean Absolute Error (MAE), because MAE involves a modulus that makes differentiation messy, while MSE is smooth and easy to differentiate.

Once we know how to move toward the minimum, we need an optimizer. The optimizer uses two important settings: the learning rate, which decides how big a step we take toward the minimum each time (small steps are careful but slow, large steps are fast but risky), and the number of epochs, which tells us how many times the model will go through the entire dataset during training.
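Putting it all together, here is a minimal gradient-descent sketch for fitting y = ωx + b with MSE. The data, learning rate, and epoch count are made up for illustration:

```python
# Made-up data lying roughly on y = 2x + 1
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

w, b = 0.0, 0.0   # initial guesses for slope and intercept
lr = 0.01         # learning rate: step size toward the minimum
epochs = 5000     # passes over the entire dataset

n = len(xs)
for _ in range(epochs):
    # Partial derivatives of the MSE loss with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw  # step each parameter against its gradient
    b -= lr * db

print(round(w, 2), round(b, 2))  # 1.94 1.15 -- close to the true 2 and 1
```

Notice how a smaller learning rate would need more epochs to reach the same fit, while a much larger one could overshoot the minimum entirely.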
The code, along with the result, is linked here: Click here
Objective: To solve the puzzle using NumPy and Matplotlib.pyplot and reveal the image.
NumPy is a Python library used for array manipulation: reshaping, resizing, transposing, and more.
Matplotlib is a Python library used to plot graphs and waves, using plotting functions like plt.imshow(), plt.show(), etc. Matplotlib also has features for labelling the X and Y axes.
Both libraries are important for Machine Learning: NumPy is widely used for array manipulation, and Matplotlib for plotting graphs.
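The actual puzzle lives in the linked code, but as an illustration of the two libraries working together, here is a minimal sketch: a tiny made-up "image" is scrambled by a transpose, repaired with NumPy, and rendered with Matplotlib (the array and the scramble are assumptions for illustration, not the real puzzle):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Made-up 4x4 "image": a gradient, stored as a flat 1-D array
flat = np.arange(16)        # 16 pixel values
img = flat.reshape(4, 4)    # reshape into a 4x4 grid
scrambled = img.T           # a transpose "scrambles" rows and columns

recovered = scrambled.T     # transposing again undoes the scramble
assert np.array_equal(recovered, img)

plt.imshow(recovered, cmap="gray")  # render the 2-D array as an image
plt.xlabel("column")                # label the X axis
plt.ylabel("row")                   # label the Y axis
plt.savefig("puzzle.png")           # plt.show() would open a window instead
```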