13 / 1 / 2026
TASK 8: Cloud Computing
Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying and maintaining physical servers and data centers, you can access computing power, storage, databases, and other services from a cloud provider like Amazon Web Services (AWS) whenever you need them.
The three main types of cloud computing include Infrastructure as a Service, Platform as a Service, and Software as a Service.
1. Infrastructure as a Service (IaaS)
- IaaS provides virtualized computing resources, such as servers, storage, and networking, over the internet.
- You can rent infrastructure on demand, paying only for what you use.
- Users are responsible for managing the operating system, applications, and data, while the provider handles the physical hardware.
- IaaS is highly flexible and scalable, making it suitable for testing, development, or hosting websites and applications. Example: Amazon EC2
2. Platform as a Service (PaaS)
- PaaS provides a ready-to-use platform for building, testing, and deploying applications.
- Developers can focus on coding and app functionality while the provider manages the underlying infrastructure, OS, and runtime environment.
- PaaS is ideal for web application development, API integrations, and automated deployment.
- Example: AWS Elastic Beanstalk
3. Software as a Service (SaaS)
- SaaS delivers fully functional software applications over the internet.
- Users can access software through a web browser without worrying about installation, updates, or maintenance.
- The provider manages everything, including infrastructure, platform, security, and application updates.
- SaaS is perfect for email, collaboration tools, CRM, and office productivity applications. Example: AWS WorkMail, Google Workspace, Microsoft 365
TASK 7: OSI Model
What is the OSI Model?
The OSI Model is a conceptual framework that standardizes and defines how data is transferred from one computer to another in a computer network.
It was introduced by the International Organization for Standardization (ISO) in 1984 to create a common reference for successful communication between network components, especially computers with different operating systems or architectures (for example, Windows and macOS).
It breaks down the process of network communication into seven distinct layers.
The 7 Layers of the OSI Model
1. Application Layer
This is the layer that the user directly interacts with. It provides the interface and services for user applications to access the network. Examples of protocols at this layer include HTTP (web browsing), FTP (file transfer), and SMTP (email).
2. Presentation Layer
Often called the “Syntax Layer,” this layer ensures that data sent from the application layer of one system is readable by the application layer of another system. It handles:
- Translation: Converting data into a common format.
- Encryption/Decryption: Securing data for transmission.
- Compression: Reducing the size of the data.
3. Session Layer
This layer establishes, manages, and terminates the connection between the applications on the two communicating devices. It is responsible for synchronization and checkpointing the communication, ensuring that if a connection fails, the session can be resumed from the last checkpoint.
4. Transport Layer
This layer is responsible for the reliable delivery of data segments between applications. It handles:
- Segmentation: Breaking down data from the session layer into smaller units (segments).
- Reassembly: Putting the segments back together at the destination.
- Error Control: Ensuring all segments arrive correctly.
Protocols:
- TCP (Transmission Control Protocol, which is connection oriented and reliable)
- UDP (User Datagram Protocol, which is connectionless and faster)
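The contrast between connectionless UDP and connection-oriented TCP can be sketched with Python's standard socket module. This is a minimal loopback illustration (not part of the original task): a UDP datagram is sent with no handshake at all.

```python
import socket

# UDP is connectionless: each sendto() is an independent datagram,
# with no handshake and no delivery guarantee.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # OS picks a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", addr)   # no connect() needed

data, _ = receiver.recvfrom(1024)
print(data.decode())

# TCP (socket.SOCK_STREAM) would instead require connect()/accept()
# to establish a connection before any data could be exchanged.
sender.close()
receiver.close()
```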
5. Network Layer
The core function of this layer is logical addressing and path determination (routing) to move packets from the source network to the destination network.
- Logical Addressing: It uses IP addresses (Internet Protocol) to uniquely identify devices across different networks.
- Routing: Devices like routers operate at this layer to select the best path for data to travel.
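Logical addressing at this layer can be explored with Python's built-in ipaddress module. A quick illustration (an addition, not from the original task) of how an IP address relates to its network:

```python
import ipaddress

# A network in CIDR notation: 24 network bits, 8 host bits.
net = ipaddress.ip_network("192.168.1.0/24")

host = ipaddress.ip_address("192.168.1.42")
in_net = host in net          # membership test: does the host belong to this network?
count = net.num_addresses     # total addresses in a /24
mask = str(net.netmask)       # the dotted-decimal subnet mask

print(in_net, count, mask)
```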
6. Data Link Layer
This layer is responsible for the reliable transfer of data frames between two devices on the same network segment. It performs two basic functions:
- Framing: Encapsulates packets from the network layer into frames, adding headers and trailers so the receiving device can tell where each frame begins and ends.
- Media Access Control (MAC): It uses MAC addresses for physical addressing and controls how data is placed on and received from the media, including error detection.
7. Physical Layer
This is the lowest layer and deals with the physical connection between devices. It is responsible for transmitting raw bit streams over the physical medium (cable, air, etc.).
- Media: The physical medium can be a copper cable (LAN cable) with electrical signals, optical fiber with light signals, or air with radio signals.
- Devices: Hardware devices like cables, hubs, and network interface cards (NICs) primarily operate at this layer.

TASK 1: Working with Version Control
Version control is the practice of tracking and managing changes to a set of files over time. In this task, I learned various Git commands and how to use them. This also taught me how to manage the complete software development cycle, from initial setup through to maintenance.
- git branch: Used to create, view, or delete different versions (branches) of your project.
- git merge: Joins two branches together, combining their changes.
- git revert: Safely undoes a specific change by creating a new "fixing" commit.
- git cherry-pick: Takes one specific commit from another branch and adds it to your current branch.
- git push: Sends your changes from your computer to the online repository (such as GitHub).
- git pull: Brings the latest changes from the online repository to your computer and updates your branch.
Task 10 – Web Scraping and IP Addressing
Web scraping is the technique of automatically extracting information from websites using programming tools. It is commonly used to collect data such as text, links, and job listings without manually copying content from web pages.
In this task, I performed web scraping using Python. I used the Requests library to send an HTTP request to a website and retrieve its HTML content. The BeautifulSoup library was then used to parse the HTML structure and locate specific elements from which the required data was extracted. The extracted information was displayed as output, helping me understand how web data can be programmatically accessed and processed. This task also helped me gain basic knowledge of how web protocols and IP-based communication are involved in fetching web content.
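The task itself used Requests and BeautifulSoup; as a self-contained sketch of the same parsing step, Python's standard html.parser can extract links from HTML. The hard-coded HTML string here stands in for what requests.get(url).text would return:

```python
from html.parser import HTMLParser

# Stand-in for HTML that would normally come from requests.get(url).text
html = """
<html><body>
  <a href="https://example.com/jobs">Job listings</a>
  <a href="https://example.com/about">About</a>
</body></html>
"""

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag it encounters."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkExtractor()
parser.feed(html)
print(parser.links)   # the two href values, in document order
```

BeautifulSoup does essentially this, but with a much more convenient search API (find_all, CSS selectors, and so on).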

Task 6 – Socket.IO
Socket.IO is a library used to enable real-time communication between a client and a server. It allows messages to be sent and received instantly, making it useful for applications like chat systems and live notifications.
In this task, I created a simple real-time chat application using Node.js and Socket.IO. The application allows multiple users to connect at the same time and exchange messages instantly. When a user sends a message, it is sent to the server and immediately shared with all connected users. Through this task, I understood how real-time communication works, how client–server interaction happens without page refreshes, and how Socket.IO makes building interactive applications easier.
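Under the hood, this message flow is ordinary client–server socket communication. The chat app itself is Node.js, but the idea can be sketched in Python's stdlib: a tiny server that echoes a message straight back, the way a Socket.IO server relays incoming messages to connected clients (a simplified stand-in, not the task's actual code):

```python
import socket
import threading

# A one-connection echo server standing in for the chat server:
# it receives a message and immediately sends it back, much as a
# Socket.IO server broadcasts incoming messages to its clients.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

def serve_once():
    conn, _ = server.accept()
    msg = conn.recv(1024)
    conn.sendall(msg)      # relay the message straight back
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)
client.sendall(b"hi everyone")
reply = client.recv(1024)
print(reply.decode())
client.close()
server.close()
```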
LINK: https://github.com/AnanyaRamesh23/socket-io-chat.git

Task 3 – EC2 Instance
Amazon EC2 (Elastic Compute Cloud) is a service that provides virtual servers in the cloud, allowing applications to be deployed and run without the need for physical hardware. It offers flexibility and scalability, making it suitable for hosting both simple and complex applications.
In this task, I created an EC2 instance using the AWS console and configured it to host a dynamic application. After launching the instance, I set up the required environment by installing the necessary software and adjusting security group rules.
Through this task, I learned how EC2 instances function, how applications are hosted on cloud servers, and how AWS services can be used to deploy and manage dynamic web applications.
LINK : http://13.234.110.211:3000
Task 4 – AWS S3 Bucket
Amazon S3 (Simple Storage Service) is an object storage service provided by AWS that is used to store and retrieve data such as files, images, and web content. It is highly scalable, secure, and commonly used for storing static resources.
In this task, I created an S3 bucket using the AWS console and uploaded files into it. The bucket was configured to store and manage data efficiently, and appropriate permissions were set to control access to the stored objects.
This task helped me understand how cloud storage works and how data can be uploaded and managed in AWS. Through this activity, I gained hands-on experience with AWS storage services and learned how S3 plays an important role in cloud-based applications.

Task 2 – DynamoDB Login System
In this task, I implemented a simple user login system using Amazon DynamoDB, a fully managed NoSQL database service. I created a table named Users with username as the partition key and configured the AWS CLI to connect my Python application to DynamoDB. Using the boto3 library, I developed functionality to register users by storing their credentials and to validate logins by retrieving user data and comparing passwords. This task helped me understand how to work with DynamoDB and the differences between relational databases like MySQL and NoSQL databases.
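The register/login logic can be sketched without AWS at all. In this sketch an in-memory dict stands in for the DynamoDB Users table (with boto3 the marked lines would be table.put_item and table.get_item calls); hashing the password with SHA-256 is an added good practice, not necessarily what the original task did:

```python
import hashlib

# In-memory stand-in for the DynamoDB "Users" table keyed on username.
users = {}

def register(username, password):
    """Store a new user; the password is hashed rather than kept in plain text."""
    if username in users:
        return False                      # partition key (username) already taken
    # boto3 equivalent: table.put_item(Item={"username": ..., "pw_hash": ...})
    users[username] = hashlib.sha256(password.encode()).hexdigest()
    return True

def login(username, password):
    """Validate a login by comparing stored and submitted password hashes."""
    # boto3 equivalent: table.get_item(Key={"username": username})
    stored = users.get(username)
    if stored is None:
        return False
    return stored == hashlib.sha256(password.encode()).hexdigest()

registered = register("ananya", "s3cret")
ok = login("ananya", "s3cret")
bad = login("ananya", "wrong")
print(registered, ok, bad)
```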
SQL
SQL (Structured Query Language) is a standard language used to manage and manipulate data in relational databases. It is used to perform operations like inserting, updating, deleting, and retrieving data.
MySQL
MySQL is a relational database management system (RDBMS) that uses SQL to store and manage data. It organizes data in tables with rows and columns and follows a fixed schema. It is commonly used for web applications and structured data systems.
NoSQL
NoSQL refers to non-relational databases that store data in flexible formats such as key-value, document, or graph structures. Unlike MySQL, it does not require a fixed schema and is designed for high scalability and distributed systems. Examples include DynamoDB and MongoDB.

Task 9: Encryption Techniques – Secure Messaging App
In this task, I learned different encryption techniques to understand how secure communication works. I started with classical ciphers like the Caesar cipher and Vigenère cipher to learn how plaintext is converted into ciphertext using key-based shifting methods. These techniques helped me understand the basic working principle of encryption.
For implementation, I used the Vigenère cipher, which is a polyalphabetic substitution cipher that uses a repeating keyword. Unlike the Caesar cipher, it does not use a fixed shift, making it comparatively stronger. The same key is used for both encryption and decryption, which demonstrates symmetric encryption.
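A minimal repeating-key Vigenère sketch (uppercase letters only; the actual task implementation may differ in detail) shows both the keyword shifting and the symmetric use of the same key for decryption:

```python
def vigenere(text, key, decrypt=False):
    """Encrypt or decrypt letters with a repeating-key Vigenère cipher."""
    result = []
    key = key.upper()
    for i, ch in enumerate(text.upper()):
        if ch.isalpha():
            shift = ord(key[i % len(key)]) - ord("A")
            if decrypt:
                shift = -shift            # decryption reverses the shift
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)             # non-letters pass through unchanged
    return "".join(result)

cipher = vigenere("ATTACKATDAWN", "LEMON")
plain = vigenere(cipher, "LEMON", decrypt=True)
print(cipher, plain)   # LXFOPVEFRNHR ATTACKATDAWN
```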
I also understood the difference between encryption and hashing. Encryption is reversible and is used to securely transmit messages, whereas hashing (such as SHA-256) is a one-way function mainly used for password storage and data integrity verification.
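The one-way, deterministic nature of hashing is easy to demonstrate with Python's standard hashlib module (a small illustration added here):

```python
import hashlib

# Hashing is one-way: the digest cannot be reversed to recover the input.
digest = hashlib.sha256(b"my password").hexdigest()

# SHA-256 always produces 256 bits = 64 hex characters.
length = len(digest)

# Any change to the input produces a completely different digest...
differs = digest == hashlib.sha256(b"my passw0rd").hexdigest()

# ...while the same input always gives the same digest, which is how
# stored password hashes are checked at login.
matches = digest == hashlib.sha256(b"my password").hexdigest()

print(length, differs, matches)   # 64 False True
```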
Additionally, I studied asymmetric encryption (RSA) using the PyCryptodome library. RSA works with a public key and a private key, and its security depends on large prime numbers and the difficulty of factoring them.
Finally, I developed a Python-based secure messaging application. Messages are encrypted before being sent and decrypted upon receipt, allowing multiple secure messages to be exchanged. This implementation helped me connect cryptography concepts with practical communication.
LINK: https://github.com/AnanyaRamesh23/Encryption-Chat.git

Task 5: Kali Linux
In this task, I performed a basic penetration test using Kali Linux on a vulnerable virtual machine. I began by scanning the target system using Nmap with advanced scanning options to identify open ports, running services, and the operating system. During the scan, I discovered that an FTP service was running, which is known to have a backdoor vulnerability.
After identifying this vulnerability, I used the Metasploit Framework to load the corresponding exploit module and simulate an attack on the vulnerable service. This helped me understand the practical steps involved in penetration testing, including information gathering, vulnerability identification, and exploitation, while highlighting the importance of securing systems against known threats.
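At its simplest, the port-scanning step Nmap performs amounts to probing each port for a TCP response. A tiny Python illustration of that idea on the loopback interface (an addition for clarity, not a substitute for Nmap's far richer scans):

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """Report whether a TCP connection to (host, port) succeeds --
    the same basic check a connect scan performs, one port at a time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Bind a listening socket so there is a known-open port to probe.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

result_open = port_is_open("127.0.0.1", open_port)
listener.close()
result_closed = port_is_open("127.0.0.1", open_port)
print(result_open, result_closed)
```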

