What did I learn?
I learnt that Docker is used to package, distribute, and run applications in isolated environments called containers. It ensures that an application runs the same way on different machines, avoiding the classic "it works on my machine but not on yours" problem.
To help understand Docker, this is the analogy I thought of: Docker packs the application and all its dependencies into a box and parcels it out to different clients, who can simply unbox it and use it.
I also learned the difference between VMs and containers: a VM virtualises the hardware and boots a full guest operating system, while a container shares the host's kernel and only isolates the application and its dependencies, which makes containers much lighter and faster to start.
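A quick way to see the "shared kernel" idea in practice (a minimal sketch, assuming Docker is installed and the alpine image is pulled from Docker Hub):

# Kernel version reported by the host
uname -r

# Kernel version reported inside a container - it is the same,
# because the container shares the host's kernel instead of booting its own OS
docker run --rm alpine uname -r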
Commands I learnt for using Docker

docker ps → List running containers.
docker ps -a → List all containers (including stopped ones).
docker run → Run a container from an image.
docker start → Start a stopped container.
docker stop → Stop a running container.
docker restart → Restart a container.
docker kill → Forcefully stop a running container.
docker rm → Remove a stopped container.
docker logs → View the logs of a container.
docker images → List all available images.
docker pull → Download an image from Docker Hub.
docker push → Push an image to Docker Hub.
docker build -t <image-name> . → Build an image from a Dockerfile.
docker tag → Tag an image with a new name.
docker rmi → Remove an image.
docker history → Show the history of an image.
docker inspect → Get detailed information about an image.
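Putting a few of these together, a typical container lifecycle looks roughly like this (a small sketch; the nginx image and the name my-web are just examples I picked, not part of my project):

# Download an image from Docker Hub
docker pull nginx

# Run a container from it in the background, with a name to refer to later
docker run -d --name my-web -p 8080:80 nginx

# See what is running and what it printed
docker ps
docker logs my-web

# Stop and clean up the container, then remove the image
docker stop my-web
docker rm my-web
docker rmi nginx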
I learnt about Docker Compose:

Docker Compose lets you describe a multi-container application in a single YAML file (docker-compose.yml). Instead of running multiple docker run commands manually, you can define all services, networks, and volumes in one file and start them with a single command.
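For example, a minimal docker-compose.yml might look like this (a sketch only; the web and db services and the images used are assumptions, not from my project):

version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example

With that file in place, a single docker-compose up starts both services together.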
docker-compose up → Start services defined in docker-compose.yml.
docker-compose down → Stop and remove services.
docker-compose ps → List running services.
docker-compose logs → View logs of services.
docker-compose restart → Restart all services.

I learnt the difference between images and containers: an image is a read-only template (the application, its dependencies, and its filesystem), while a container is a running instance of that image. One image can be used to start many containers.
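To make this concrete, here is a small sketch (the alpine image and the container names c1 and c2 are just examples):

# One image ...
docker pull alpine
docker images

# ... can back several independent containers
docker run -d --name c1 alpine sleep 300
docker run -d --name c2 alpine sleep 300

# docker images lists the templates, docker ps lists the running instances
docker ps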
I learnt what a Dockerfile is and its use cases: a Dockerfile is a text file containing the instructions used to build an image. Images built from a Dockerfile can be pushed to Docker Hub with the docker push command and downloaded on another machine with the docker pull command. Instead of manually setting up software dependencies, a Dockerfile automates this process.
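The workflow I have in mind looks roughly like this (a sketch; the image name yourname/myapp is a placeholder, not a real repository):

# Build an image from the Dockerfile in the current directory
docker build -t yourname/myapp:1.0 .

# Push it to Docker Hub (requires docker login)
docker push yourname/myapp:1.0

# On any other machine, pull and run the same image
docker pull yourname/myapp:1.0
docker run yourname/myapp:1.0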
What did I learn?
I learnt how to use SSH and scp.
I created two EC2 instances, first_server and second_server, using the AWS website.
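Connecting to an instance and copying a file to it looks like this (a sketch; jeethan-key.pem is the key pair from my setup, and <public-ip> stands in for the instance's public address):

# Log in to the server over SSH using the key pair downloaded from AWS
ssh -i jeethan-key.pem ubuntu@<public-ip>

# Copy a local file to the server over SSH
scp -i jeethan-key.pem ./photo.jpg ubuntu@<public-ip>:/home/ubuntu/images/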
I then connected to first_server, installed Docker, and created a Dockerfile that runs the monitor script and automates the process:

# Use a lightweight Linux base image
FROM ubuntu:latest
# Install necessary dependencies
RUN apt-get update && apt-get install -y \
inotify-tools \
openssh-client \
&& rm -rf /var/lib/apt/lists/*
# Set environment variables
ENV WATCH_DIR=/watched_folder
ENV REMOTE_USER=ubuntu
ENV REMOTE_HOST=34.224.174.148
ENV REMOTE_PATH=/home/ubuntu/images
# Create necessary directories
RUN mkdir -p $WATCH_DIR
# Copy the monitor script into the container
COPY monitor.sh /monitor.sh
RUN chmod +x /monitor.sh
# Copy the SSH private key (for SCP)
COPY jeethan-key.pem /root/.ssh/jeethan-key.pem
RUN chmod 400 /root/.ssh/jeethan-key.pem
# Start monitoring when the container runs
CMD ["/bin/bash", "/monitor.sh"]
I created the directories the setup needs (watched_folder on first_server and images on second_server):

mkdir -p ~/watched_folder
mkdir -p ~/images
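With the directories in place, the image can be built and run, mounting the host's watched_folder onto the path the container watches (a sketch; file-monitor is just a name I chose, and it assumes the Dockerfile, monitor.sh, and jeethan-key.pem sit in the current directory):

# Build the image from the Dockerfile above
docker build -t file-monitor .

# Run it in the background, mounting the host folder onto the container's watch path
# and pointing the script at the key that the Dockerfile copied into the image
docker run -d --name file-monitor \
  -e KEY_PATH=/root/.ssh/jeethan-key.pem \
  -v ~/watched_folder:/watched_folder \
  file-monitor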
The monitor.sh script

This script monitors the watched_folder for new images and automatically transfers them to second_server using scp.

#!/bin/bash
WATCH_DIR="/home/ubuntu/watched_folder"
REMOTE_USER="ubuntu"
REMOTE_HOST=""
REMOTE_PATH="/home/ubuntu/images"
KEY_PATH="/home/ubuntu/jeethan-key.pem"
# Ensure the watched directory exists
mkdir -p "$WATCH_DIR"
echo "Monitoring $WATCH_DIR for new files..."
# Start monitoring directory for new files
inotifywait -m -e create "$WATCH_DIR" --format "%f" |
while read FILE; do
echo "New file detected: $FILE"
# Disable strict host-key checking so the first transfer from a fresh
# container does not stop at the host-key prompt
scp -o StrictHostKeyChecking=no -i "$KEY_PATH" "$WATCH_DIR/$FILE" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH"
if [ $? -eq 0 ]; then
echo "File successfully transferred: $FILE"
else
echo "Error transferring file: $FILE"
fi
done
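To check the whole pipeline, I can drop a file into the watched folder on first_server and then look for it on second_server (a sketch using the paths from the script; test.jpg is just an example file):

# On first_server: place a test file in the watched folder
cp test.jpg ~/watched_folder/

# On second_server: the file should appear in the images folder
ls ~/images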