5 / 10 / 2024
Level 3 Report
Task 1: Understanding the OSI Model
The OSI (Open Systems Interconnection) model is a conceptual framework that standardizes the functions of a telecommunication or computing system into seven distinct layers. It enables interoperability between diverse communication systems, irrespective of their underlying technology. The term "open" indicates that it is non-proprietary.
The 7-layer OSI model consists of:
- Physical Layer
- Data Link Layer
- Network Layer
- Transport Layer
- Session Layer
- Presentation Layer
- Application Layer
In this task, I focused on three crucial layers:
- Application Layer (Layer 7): The interface between the network and end-user applications. This layer provides services such as email (SMTP), web browsing (HTTP), and file transfer (FTP).
- Transport Layer (Layer 4): Responsible for end-to-end delivery of messages. TCP (Transmission Control Protocol) provides reliable, in-sequence delivery with no losses or duplications, while UDP (User Datagram Protocol) offers faster, connectionless transmission without those guarantees.
- Network Layer (Layer 3): This layer is responsible for determining the best logical path for data transfer, using IP (Internet Protocol) addresses to route and forward packets across network nodes.
Understanding these layers makes troubleshooting network issues more systematic, since a problem can be isolated to a specific layer before efficient communication between systems is restored.
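As a small illustration of the layer assignments above, the protocols mentioned in this task can be mapped to their OSI layers in a simple lookup. The helper below is purely illustrative, not part of any real library:

```javascript
// Hypothetical helper: map the protocols discussed above to their OSI layer.
const osiLayer = {
  HTTP: 7, SMTP: 7, FTP: 7, // Application layer services
  TCP: 4, UDP: 4,           // Transport layer protocols
  IP: 3,                    // Network layer protocol
};

function layerOf(protocol) {
  // Returns the layer number, or null for protocols not in the table
  return osiLayer[protocol.toUpperCase()] ?? null;
}

console.log(layerOf("tcp")); // → 4
```

During troubleshooting, this kind of mental lookup helps decide whether a failure is, say, a routing problem (Layer 3) or an application misconfiguration (Layer 7).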
Task 2: Exploring Serverless Architecture
Serverless computing is a cloud-based model where developers focus solely on writing code while the cloud provider automatically manages the infrastructure. It removes the need to manually provision and maintain servers. Functions are executed in response to events, such as HTTP requests, and resources are only used when necessary.
Key Concepts:
- Deployment: Code is packaged and deployed to cloud providers such as Vercel or AWS Lambda, which automatically execute it upon receiving events.
- Cold Starts: Serverless environments are stateless and boot on invocation. A "cold start" occurs when a function has been idle for an extended period, so the platform needs additional time to initialize it before handling the request.
- Termination: Once the function completes, its instance is shut down to save resources. These temporary instances run only for the duration of the event.
- Automatic Booting: When a new event occurs, the serverless platform dynamically boots a fresh instance to handle the request, minimizing cost and scaling resources efficiently.
This model is ideal for infrequent workloads and enables businesses to scale applications seamlessly without server management.
Task 3: Real-Time Chat Application using Sockets
In this task, I developed a real-time chat application using Socket.io, a JavaScript library that enables real-time, bi-directional communication between clients and servers. The server-side framework used was Express, and MongoDB served as the database to persist chat messages.
Workflow:
- Users can join a single chat room, where they can communicate with each other in real time.
- Socket.io provides real-time updates by maintaining an open connection between the client and the server.
- MongoDB stores chat history, ensuring that previous conversations are not lost.
This project gave me valuable insights into real-time communication protocols, database integration, and user interface design.
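The workflow above can be sketched as the chat room's core message-handling logic, with the Socket.io, Express, and MongoDB wiring deliberately omitted. All names here are illustrative; in the real application, `persist` would insert into a MongoDB collection and `broadcast` would call `io.emit(...)`:

```javascript
// Sketch of the single chat room's logic (framework wiring omitted).
function createChatRoom({ persist, broadcast }) {
  const history = []; // in-memory copy of the chat history
  return {
    join(user) {
      // New users receive the stored history so previous conversations aren't lost
      return history.slice();
    },
    send(user, text) {
      const message = { user, text, at: Date.now() };
      history.push(message);
      persist(message);   // e.g. write to MongoDB
      broadcast(message); // e.g. io.emit("chat message", message)
      return message;
    },
  };
}
```

Separating this logic from the transport layer keeps it testable on its own, independent of the open Socket.io connection.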
Task 4: Docker Fundamentals
Docker is a platform that simplifies the process of creating, deploying, and managing applications within isolated environments called containers. Containers are lightweight, portable, and run consistently across various environments (local machines, cloud servers, etc.).
In this task, I explored Docker's core functionalities, focusing on:
- Building Docker Images: I created a Docker image for an existing Node.js web application. Docker packages the application and its dependencies into a single image, ensuring portability and consistency.
- Running Docker Containers: I successfully ran the web application within a Docker container on my local machine. This containerized approach ensures that the application behaves the same in any environment.
This task helped me grasp the fundamental concepts of containerization, which improves deployment speed and reliability.
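A Dockerfile for a Node.js web application of this kind typically looks like the sketch below. The file paths, port, and entry point (`server.js`) are assumptions for illustration, not the exact files from this task:

```dockerfile
# Illustrative Dockerfile for a Node.js web app (paths and port are assumptions)
FROM node:18-alpine
WORKDIR /app
# Copy manifests first so dependency installation is cached between builds
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

The image is then built and run locally with `docker build -t my-app .` followed by `docker run -p 3000:3000 my-app`.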
Task 5: Creating a Docker-based File Monitoring Service
In this task, I developed a Docker container that monitors a specific folder for changes, particularly focusing on the addition of new images. To achieve this, I used:
- Node.js as the runtime framework.
- Chokidar, a Node package for watching file system changes.
- Axios for sending HTTP requests when a new image is detected in the monitored folder.
The program continuously watches a directory (specified within the Docker container), and when a new image is uploaded, it triggers a notification or sends data to another service. This kind of setup is useful for monitoring critical directories or automating tasks like uploading images to a cloud storage service.
Task 6: Web Scraping and Instagram Automation Bot
In this task, I built a bot to automate interactions on Instagram. While Selenium is a popular tool for browser automation, I opted for Puppeteer, a Node.js library for controlling Chrome browsers programmatically, as it integrates smoothly with my Node.js expertise.
Key Features of the Bot:
- Web Scraping: The bot can scrape Instagram profiles, retrieve user data, and analyze post engagement.
- Automation: It automates interactions, such as liking followers' posts and sending messages. Additionally, if the account owner does not respond to an incoming message within a given timeframe, the bot sends an automatic reply indicating that the owner is unavailable.
This project required a deep understanding of browser automation, DOM manipulation, and Node.js event handling. Puppeteer provided an efficient alternative to Selenium and offered faster, smoother interactions for this specific use case.
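The auto-reply rule described above can be sketched as a small decision function, kept separate from the Puppeteer browser automation. The timeout value is an assumption for illustration; the real bot would call this from its message-polling loop:

```javascript
// Sketch of the bot's auto-reply rule: reply automatically if the account
// owner has not responded within a given timeframe.
const REPLY_TIMEOUT_MS = 15 * 60 * 1000; // 15 minutes (assumed threshold)

function shouldAutoReply(lastOwnerReplyAt, now = Date.now()) {
  // Auto-reply if the owner never replied, or the last reply is too old
  return lastOwnerReplyAt === null || now - lastOwnerReplyAt > REPLY_TIMEOUT_MS;
}
```

Isolating this timing logic from the DOM-manipulation code means it can be unit-tested without launching a browser.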