📚 Welcome to Cloud Computing
Cloud Computing is the delivery of computing services — servers, storage, databases, networking, software and analytics — over the internet. It is the backbone of every modern enterprise, startup and government IT system. Cloud skills are the #1 most requested technical competency in IT job postings globally in 2026.
India's cloud market is growing at roughly 30% annually and is projected to reach ₹3.2 lakh crore by 2026. AWS, Azure and GCP certified professionals typically earn ₹12–45 LPA. This course takes you from zero to job-ready.
💡 How to use this page: Click any topic in the left sidebar. All notes load instantly — no page reloads. Use Prev / Next buttons to go in order from beginner to advanced.
☁️ Cloud Computing Fundamentals
Beginner
Before AWS or Azure, you must understand the fundamental concepts that ALL cloud platforms are built on. These are the building blocks that appear in every cloud interview and certification exam.
Cloud Service Models — IaaS, PaaS, SaaS
Cloud Deployment Models
| Model | Where Hosted | Who Uses It | Examples | Pros |
|---|---|---|---|---|
| Public Cloud | Provider's data centres | Any organisation | AWS, Azure, GCP | Low cost, infinite scale, no maintenance |
| Private Cloud | Your own data centre or dedicated hardware | Large enterprises, banks, government | VMware, OpenStack, on-prem SAP | Full control, data sovereignty, compliance |
| Hybrid Cloud | Mix of public + private | Most large enterprises | AWS Outposts + on-prem | Best of both — sensitive data on-prem, burst on cloud |
| Multi-Cloud | Multiple public clouds | Enterprises avoiding vendor lock-in | AWS + Azure together | Resilience, best-of-breed services, negotiating power |
| Community Cloud | Shared private cloud for specific industry | Healthcare, government, finance | Government clouds | Shared costs + compliance for same industry |
Key Cloud Benefits — The 6 Advantages
📺 AWS — Amazon Web Services Core Services
Intermediate
AWS is the world's largest cloud provider with 33% market share. Launched in 2006, it has 200+ services across 31 geographic regions. Understanding AWS core services is the foundation of the most popular cloud certification — AWS Solutions Architect Associate (SAA-C03).
AWS Global Infrastructure
AWS Core Services — Must Know for Every Interview
AWS Shared Responsibility Model
| Layer | AWS Responsible "OF the Cloud" | Customer Responsible "IN the Cloud" |
|---|---|---|
| Physical | Data centre security, hardware, network infrastructure | Nothing — AWS owns all physical assets |
| Hypervisor/Compute | EC2 hypervisor, host OS patching | EC2 guest OS patching, instance security |
| Storage | Physical storage infrastructure | Encryption settings, bucket policies, access controls |
| Database (RDS) | Physical DB infrastructure, patching | DB user accounts, password management, encryption |
| Networking | Physical network, global infrastructure | VPC config, Security Groups, NACLs, firewall rules |
| IAM | IAM service availability | User accounts, MFA, strong passwords, access key rotation |
💡 Interview Gold: The Shared Responsibility Model is asked in almost EVERY cloud interview and certification exam. Remember: AWS secures the cloud infrastructure. You secure everything you put IN the cloud — your data, your OS patches, your IAM configuration, your application security.
💻 Microsoft Azure Core Services
Intermediate
Azure is Microsoft's cloud platform with 23% market share — the #1 choice for enterprises already using Microsoft products (Office 365, Active Directory, SQL Server). Azure has 200+ services across 60+ regions — the largest global footprint of any cloud provider.
Azure Core Services — Must Know
Azure vs AWS — Service Comparison
| Service Category | AWS | Azure | GCP |
|---|---|---|---|
| Virtual Machines | AWS EC2 | Azure VMs | GCP Compute Engine |
| Object Storage | AWS S3 | Azure Blob Storage | GCP Cloud Storage |
| Managed Kubernetes | AWS EKS | Azure AKS | GCP GKE |
| Serverless Functions | AWS Lambda | Azure Functions | GCP Cloud Functions |
| Managed SQL DB | AWS RDS / Aurora | Azure SQL Database | GCP Cloud SQL |
| NoSQL Database | AWS DynamoDB | Azure Cosmos DB | GCP Firestore / Bigtable |
| CDN | AWS CloudFront | Azure CDN | GCP Cloud CDN |
| DNS | AWS Route 53 | Azure DNS | GCP Cloud DNS |
| Identity & Access | AWS IAM | Azure AD + RBAC | GCP Cloud IAM |
| Monitoring | AWS CloudWatch | Azure Monitor | GCP Cloud Monitoring |
| Data Warehouse | AWS Redshift | Azure Synapse | GCP BigQuery |
| Message Queue | AWS SQS | Azure Service Bus | GCP Pub/Sub |
🌏 Google Cloud Platform (GCP)
Intermediate
GCP is Google's cloud platform with 11% market share — growing fastest of the three. Google's strength is in data analytics, AI/ML and Kubernetes (Google invented Kubernetes). GCP runs on the same infrastructure that powers Google Search, Gmail and YouTube.
GCP Core Services
🎯 Which Cloud to Learn First? AWS for job market breadth (largest number of job postings). Azure if your company/clients use Microsoft stack. GCP if targeting data engineering, AI/ML or Kubernetes roles. Best strategy: get AWS certified first, then add Azure or GCP. Most enterprise projects use 2+ clouds.
🔒 Cloud Security
Advanced
Security is the #1 concern in cloud adoption. Understanding cloud security architecture is critical for every cloud professional — whether you are a developer, architect, DevOps engineer or security specialist.
IAM Best Practices — The 10 Golden Rules
Network Security in Cloud — VPC Architecture Best Practice
| Layer | Component | Purpose | Rule |
|---|---|---|---|
| Public Subnet | Internet Gateway + ALB | Accept internet traffic — load balancer only | Never put databases or backend servers here |
| Private Subnet | Application Servers (EC2) | Run your backend application code | Access internet via NAT Gateway only (outbound) |
| Database Subnet | RDS / ElastiCache | Store sensitive data | No direct internet access — ever. Only App subnet can connect |
| Security Groups | Stateful firewall | Instance-level traffic control | Whitelist only specific IPs and ports. Deny all by default |
| Network ACLs | Stateless firewall | Subnet-level traffic control | Additional defence layer — block known malicious IPs |
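The Security Group behaviour in the table (allow rules only, everything else implicitly denied) can be made concrete with a toy simulation. This is plain Python for illustration, not the AWS API, and the rule format is invented for the sketch:

```python
import ipaddress

def sg_allows(rules, port, source_ip):
    """Toy Security Group check: ANY matching allow rule admits the
    traffic; if no rule matches, the implicit default is deny."""
    src = ipaddress.ip_address(source_ip)
    for rule in rules:
        if rule["port"] == port and src in ipaddress.ip_network(rule["cidr"]):
            return True
    return False  # no allow rule matched: implicit deny

# Rules modelled on the VPC layout above: HTTPS open, SSH VPC-internal only
web_sg = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # HTTPS from anywhere
    {"port": 22,  "cidr": "10.0.0.0/16"},  # SSH only from inside the VPC
]

print(sg_allows(web_sg, 443, "203.0.113.7"))  # True
print(sg_allows(web_sg, 22,  "203.0.113.7"))  # False (internet SSH denied)
print(sg_allows(web_sg, 22,  "10.0.1.5"))     # True  (VPC-internal SSH)
```

Notice there is no way to express a DENY rule here; that is exactly the Security Group model, and explicit denies belong in a Network ACL instead.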
⚙️ DevOps & CI/CD
Intermediate → Advanced
DevOps is the union of development and operations — a culture, set of practices and tools that increases an organisation's ability to deliver applications faster and more reliably. CI/CD (Continuous Integration / Continuous Delivery) is the technical backbone of DevOps.
CI/CD Pipeline — Complete Flow
Infrastructure as Code (IaC) — Terraform
# main.tf — Provision an AWS EC2 instance with Terraform
provider "aws" {
  region = "ap-south-1" # Mumbai region
}

resource "aws_instance" "web_server" {
  ami                    = "ami-0f5ee92e2d63afc18"
  instance_type          = "t3.micro"
  key_name               = "cuesys-keypair"
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  tags = {
    Name        = "CuesysWebServer"
    Environment = "Production"
    Owner       = "Cuesys Infotech"
  }
}

resource "aws_security_group" "web_sg" {
  name = "web-sg"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Commands:
# terraform init    — download provider plugins
# terraform plan    — preview what will be created
# terraform apply   — create the resources
# terraform destroy — delete all resources
GitHub Actions — CI/CD Pipeline Example
# .github/workflows/deploy.yml
name: Deploy to AWS

on:
  push:
    branches: [main]

jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Run unit tests
        run: |
          npm install
          npm test

      - name: Build Docker image
        run: docker build -t cuesys-app:latest .

      - name: Push to Amazon ECR
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # e.g. <account-id>.dkr.ecr.ap-south-1.amazonaws.com
          ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}
        run: |
          aws ecr get-login-password --region ap-south-1 | \
            docker login --username AWS --password-stdin $ECR_REGISTRY
          docker tag cuesys-app:latest $ECR_REGISTRY/cuesys-app:latest
          docker push $ECR_REGISTRY/cuesys-app:latest

      - name: Deploy to ECS
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws ecs update-service --cluster prod --service cuesys-svc \
            --force-new-deployment
💫 Docker — Containers Complete Guide
Intermediate
Docker packages applications and all their dependencies into a container — a lightweight, portable, self-sufficient unit that runs identically everywhere. "Works on my machine" becomes a thing of the past. Docker is the foundation of modern cloud-native development.
Virtual Machine vs Container
| Aspect | Virtual Machine | Container |
|---|---|---|
| Size | GBs (includes full OS) | MBs (shares host OS kernel) |
| Startup Time | Minutes | Seconds or milliseconds |
| Isolation | Complete OS-level isolation | Process-level isolation |
| Resource Usage | Heavy — dedicated CPU/RAM | Lightweight — shares host resources |
| Portability | Moderate — hypervisor dependent | Excellent — runs anywhere Docker is installed |
| Use Case | Full environment isolation, legacy apps | Microservices, CI/CD, cloud-native apps |
Dockerfile — Build Your Own Image
# Dockerfile for a Python Flask application
# (Docker treats "#" as a comment only at line start — trailing comments break instructions)
FROM python:3.11-slim
WORKDIR /app
# Copy the dependencies file first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy application code
COPY . .
# Document the port the app listens on
EXPOSE 5000
ENV FLASK_ENV=production
# Default command
CMD ["python", "app.py"]
Essential Docker Commands
# Build image from Dockerfile
docker build -t cuesys-app:1.0 .
# Run container
docker run -d -p 8080:5000 --name my-app cuesys-app:1.0
# -d = detached (background), -p host:container port mapping
# View running containers
docker ps
# View logs
docker logs my-app -f # -f = follow (like tail -f)
# Execute command inside container
docker exec -it my-app bash
# Stop and remove
docker stop my-app && docker rm my-app
# Push to Docker Hub / ECR
docker tag cuesys-app:1.0 username/cuesys-app:1.0
docker push username/cuesys-app:1.0
Docker Compose — Multi-Container Applications
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8080:5000"
    environment:
      - DB_HOST=db
      - DB_PASSWORD=secret123
    depends_on:
      - db

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret123
      MYSQL_DATABASE: cuesys_db
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data: # persistent named volume

# Run everything: docker-compose up -d
# Stop everything: docker-compose down
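One practical gotcha with depends_on: Compose only waits for the db container to start, not for MySQL inside it to be ready for connections. A common workaround is a short retry loop at the web service's startup. A standard-library sketch (the host name "db" and port 3306 are the Compose values above; the helper name is our own):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=1.0):
    """Poll until a TCP port accepts connections; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError until something is listening
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# In the web service's entrypoint, before opening DB connections:
#   if not wait_for_port("db", 3306, timeout=60):
#       raise SystemExit("database never became reachable")
```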
☸️ Kubernetes (K8s) — Container Orchestration
Advanced
Kubernetes is the open-source system for automating deployment, scaling and management of containerised applications. Originally developed by Google — now the industry standard for running containers at scale. Every major cloud has a managed K8s service: EKS (AWS), AKS (Azure), GKE (GCP).
Core Kubernetes Concepts
| Object | Definition | Example |
|---|---|---|
| Pod | Smallest deployable unit in K8s. Contains 1+ containers that share network and storage. Ephemeral — if it dies, K8s creates a new one. | One Pod running your Flask API container |
| Deployment | Manages a set of identical Pods. Ensures desired number are running. Handles rolling updates and rollbacks. | deployment.yaml with replicas: 3 for high availability |
| Service | Stable network endpoint for a set of Pods. Provides load balancing and service discovery. Types: ClusterIP (internal), NodePort (external), LoadBalancer (cloud LB). | Service exposes your Flask Pods on port 80 |
| Namespace | Virtual cluster within a cluster. Isolates resources by team or environment. | namespaces: dev, staging, production |
| ConfigMap | Store non-sensitive configuration data as key-value pairs. Inject into Pods as env vars or files. | Database hostname, API endpoint URLs |
| Secret | Store sensitive data (passwords, API keys) encrypted. Base64 encoded (not encrypted by default — use Sealed Secrets or Vault). | DB password, AWS access keys |
| Ingress | HTTP/HTTPS routing rules to Services. Allows multiple services behind one load balancer with path/host-based routing. | route /api to backend-service, / to frontend-service |
| PersistentVolumeClaim | Request for storage. Pod claims storage; K8s provisions it from available PersistentVolumes. | MySQL database needs persistent disk storage |
Kubernetes Deployment YAML
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cuesys-api
  namespace: production
spec:
  replicas: 3 # 3 identical pods
  selector:
    matchLabels:
      app: cuesys-api
  template:
    metadata:
      labels:
        app: cuesys-api
    spec:
      containers:
        - name: api
          image: cuesys/api:2.1.0
          ports:
            - containerPort: 5000
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password
---
# service.yaml — expose the deployment
apiVersion: v1
kind: Service
metadata:
  name: cuesys-api-svc
spec:
  type: LoadBalancer
  selector:
    app: cuesys-api
  ports:
    - port: 80
      targetPort: 5000
💡 kubectl Essential Commands: kubectl get pods — list pods | kubectl describe pod <name> — debug | kubectl logs <pod> -f — live logs | kubectl apply -f deployment.yaml — deploy | kubectl rollout undo deployment/cuesys-api — rollback
⚡ Serverless Computing
Intermediate
Serverless means you write and deploy code without managing servers, OS, capacity, scaling or availability. The cloud provider handles all of it. You pay only when your code runs — zero cost when idle. Perfect for event-driven, intermittent or unpredictable workloads.
AWS Lambda — Complete Guide
# Lambda function — triggered by API Gateway (REST API)
import json
import boto3

def lambda_handler(event, context):
    """
    event: the trigger data (API request, S3 event etc.)
    context: runtime info (function name, timeout remaining)
    """
    # Extract data from the API Gateway event
    body = json.loads(event.get('body') or '{}')
    student_name = body.get('name', 'Unknown')

    # Business logic
    message = f"Welcome to Cuesys, {student_name}!"

    # Store to DynamoDB
    dynamodb = boto3.resource('dynamodb', region_name='ap-south-1')
    table = dynamodb.Table('Students')
    table.put_item(Item={'name': student_name, 'enrolled': True})

    # Return HTTP response
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': message})
    }
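Before wiring up API Gateway, the request/response logic can be exercised locally with a hand-built event. Below is a stripped-down version of the handler (the DynamoDB write removed so it runs without AWS credentials or boto3) plus a fake API Gateway proxy event — a local sketch, not the deployed code:

```python
import json

def lambda_handler(event, context):
    """Same request/response shape as the full handler,
    minus the DynamoDB write, so it runs anywhere."""
    body = json.loads(event.get('body') or '{}')
    student_name = body.get('name', 'Unknown')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': f"Welcome to Cuesys, {student_name}!"})
    }

# API Gateway proxy integration delivers the request body as a JSON string
fake_event = {'body': json.dumps({'name': 'Priya'})}
response = lambda_handler(fake_event, context=None)
print(response['statusCode'])                   # 200
print(json.loads(response['body'])['message'])  # Welcome to Cuesys, Priya!
```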
Serverless — When to Use and When NOT to
| ✅ Use Serverless When | ❌ Avoid Serverless When |
|---|---|
| Event-driven tasks — process S3 uploads, handle webhooks | Long-running tasks (>15 min for Lambda — use EC2 or Fargate) |
| Unpredictable or spiky traffic — scales from 0 to millions instantly | High-performance computing needing consistent GPU/CPU resources |
| Minimal ongoing requests — pay per use saves cost vs 24/7 servers | Applications needing persistent connections (WebSockets, long polling) |
| Background processing — image resize, email sending, data transform | Cold start latency is unacceptable — first request after idle adds ~100ms-3s |
| APIs with low to medium traffic | Complex stateful workflows — use AWS Step Functions instead |
📦 Cloud Migration Strategy
Advanced
Cloud migration is the process of moving applications, data and infrastructure from on-premises to the cloud. The AWS "6 R's" framework is the industry standard for planning migration — used by architects at every major enterprise.
The 6 R's of Cloud Migration
1. Rehost ("Lift and Shift")
Move the application as-is to the cloud with no code changes. Fastest migration. Use AWS Server Migration Service or Azure Migrate.
80% of migrations start here. Good for getting to the cloud quickly, but no cloud-native benefits yet.
2. Replatform ("Lift, Tinker and Shift")
Minor optimisations without changing the core architecture. Example: move on-prem MySQL to AWS RDS (managed) without changing app code.
Some cloud benefit — reduced ops. Still relatively fast. Database migrations often use this.
3. Repurchase ("Drop and Shop")
Move from a custom/legacy application to a cloud-native SaaS product. Example: replace an on-prem CRM with Salesforce.
High cloud benefit. Reduces maintenance. Requires change management and data migration.
4. Refactor / Re-architect
Redesign the application to use cloud-native features — microservices, serverless, containers. Most expensive but highest cloud ROI.
Best long-term outcomes. Improves scalability, performance and cost. Requires significant investment.
5. Retire
Identify applications that are no longer needed and decommission them. Typically 10–20% of an estate can be retired.
Pure cost saving. Always do a thorough audit before migration — you may be surprised what nobody uses.
6. Retain
Keep applications on-premises that are not ready for migration — compliance, latency, recent capital investment.
Hybrid is the reality. Not everything moves to the cloud. Revisit every 12–18 months.
🎯 Migration Best Practice: Don't try to migrate everything at once. Start with low-risk, non-critical applications (the "easy wins"). Build cloud skills and confidence. Then tackle tier-1 critical systems. Most successful migrations take 18–36 months for large enterprises. Run a Proof of Concept (PoC) for your most complex application before committing the full budget.
❓ Cloud Computing Interview Q&A
Most frequently asked questions in cloud architect, DevOps engineer, cloud administrator and developer interviews. Click to reveal model answers.
Q1. Explain the difference between IaaS, PaaS and SaaS.
Model Answer:
IaaS (Infrastructure as a Service) provides virtualised computing infrastructure over the internet. You manage the OS, runtime, applications and data. The provider manages virtualisation, servers, storage and networking. Examples: AWS EC2, Azure VMs, GCP Compute Engine — you get virtual servers and control everything above the hypervisor. PaaS (Platform as a Service) provides a platform for developers to build, run and manage applications without dealing with infrastructure. You manage only your application code and data. Examples: AWS Elastic Beanstalk, Heroku, Azure App Service — deploy your code and the platform handles the rest. SaaS (Software as a Service) delivers complete, ready-to-use applications over the internet. You manage nothing technical. Examples: Gmail, Salesforce, Microsoft 365, Zoom — you just use the software. The key question: "How much do you want to manage?" IaaS=most control, SaaS=least control, PaaS=in between.
Q2. What is the AWS Shared Responsibility Model?
Model Answer:
The Shared Responsibility Model divides security duties between AWS and the customer. AWS is responsible "OF" the cloud — the physical data centres, hardware, network infrastructure, and the managed services themselves (the hypervisor for EC2, the storage hardware for S3, etc.). The customer is responsible "IN" the cloud — everything they build and configure on top: their data and its encryption, their IAM users and policies, their EC2 operating system patching, their Security Group rules, their application security, and their network configurations. The split changes by service type: for EC2 (IaaS), you manage the OS and above. For RDS (managed service), AWS manages the database engine — you manage user accounts and data. For S3, AWS manages infrastructure, you manage bucket policies and encryption settings. A common interview follow-up: "Who is responsible for patching an EC2 instance?" Answer: The customer — it is IaaS, so OS patching is your responsibility.
Q3. What is a VPC and what are its key components?
Model Answer:
A VPC (Virtual Private Cloud) is your logically isolated private network within AWS (the equivalents are Azure VNet and GCP VPC). Key components:
- Subnets — divisions of the VPC IP range. Public subnets have a route to the Internet Gateway (internet-facing resources like load balancers); private subnets have no direct internet route (databases, application servers).
- Internet Gateway — connects your VPC to the internet; attach one per VPC for public subnet internet access.
- NAT Gateway — allows private subnet resources to make outbound internet requests (for patches, API calls) without being directly reachable from the internet.
- Route Tables — control where network traffic is directed; every subnet is associated with one.
- Security Groups — stateful virtual firewall at the instance level; you specify allow rules only, deny is implicit.
- Network ACLs — stateless firewall at the subnet level; support both allow and deny rules, processed in number order.
- VPC Peering — connects two VPCs privately without internet transit.
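The subnet carving described in the answer can be worked through with Python's standard ipaddress module — here splitting a hypothetical 10.0.0.0/16 VPC into /24 subnets for the public, private and database tiers:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # 65,536 addresses
subnets = list(vpc.subnets(new_prefix=24))  # 256 subnets of 256 addresses each

public_subnet   = subnets[0]  # 10.0.0.0/24 — internet-facing (ALB)
private_subnet  = subnets[1]  # 10.0.1.0/24 — app servers
database_subnet = subnets[2]  # 10.0.2.0/24 — RDS, no internet route

print(public_subnet, private_subnet, database_subnet)
print(ipaddress.ip_address("10.0.1.57") in private_subnet)  # True
```

Note that in a real AWS subnet, 5 addresses are reserved (network address, VPC router, DNS, one reserved for future use, and broadcast), so a /24 gives 251 usable hosts, not 254.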
Q4. What is the difference between Security Groups and Network ACLs?
Model Answer:
Security Groups are stateful — if you allow inbound traffic, the return traffic is automatically allowed. They operate at the instance/resource level. You can only create ALLOW rules. They evaluate all rules before deciding. Use Security Groups for instance-level firewall control. Network ACLs (NACLs) are stateless — inbound and outbound rules are evaluated separately; you must explicitly allow both directions. They operate at the subnet level and apply to all resources in the subnet. They support both ALLOW and DENY rules. Rules are evaluated in number order — lowest first, and processing stops at the first match. Use NACLs as an additional subnet-level layer, typically to block known malicious IPs. In practice: Security Groups cover 95% of security needs. NACLs are an extra layer of defence for network perimeter security.
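The evaluation difference can be made concrete: a NACL walks its numbered rules lowest-first and stops at the first match, whether that match is ALLOW or DENY. A toy simulation in plain Python (the rule format is invented for illustration, not an AWS API):

```python
import ipaddress

def nacl_evaluate(rules, source_ip, port):
    """Toy NACL: evaluate numbered rules lowest-first; first match wins.
    AWS applies an implicit final DENY (the '*' rule) if nothing matches."""
    src = ipaddress.ip_address(source_ip)
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["port"] == port and src in ipaddress.ip_network(rule["cidr"]):
            return rule["action"]  # first match wins — stop here
    return "DENY"                  # the implicit catch-all

nacl = [
    {"number": 100, "action": "DENY",  "port": 443, "cidr": "198.51.100.0/24"},  # block a bad subnet
    {"number": 200, "action": "ALLOW", "port": 443, "cidr": "0.0.0.0/0"},        # allow everyone else
]

print(nacl_evaluate(nacl, "198.51.100.9", 443))  # DENY  — rule 100 matches first
print(nacl_evaluate(nacl, "203.0.113.7", 443))   # ALLOW — falls through to rule 200
```

Swapping the rule numbers would flip the outcome for the blocked subnet, which is exactly why rule ordering matters in NACLs and not in Security Groups.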
Q5. What is Kubernetes and why is it needed?
Model Answer:
Kubernetes (K8s) is an open-source container orchestration platform originally created by Google. It automates deployment, scaling, load balancing, self-healing and management of containerised applications. Why it is needed: Docker can run containers on one machine, but production systems need containers running across many machines. When a container crashes, something must restart it. When traffic increases, something must scale up more containers. When you deploy a new version, something must do a rolling update without downtime. Kubernetes handles all of this. Key capabilities: Self-healing (restarts failed containers, replaces and reschedules them), Horizontal scaling (scale based on CPU/memory metrics), Rolling updates and rollbacks, Service discovery and load balancing, Secret and configuration management. AWS EKS, Azure AKS and GCP GKE provide managed Kubernetes where the control plane is managed for you.
Q6. What is serverless computing? What are its benefits and limitations?
Model Answer:
Serverless computing allows you to run code without provisioning or managing servers. You write functions, deploy them, and the cloud provider handles execution, scaling, and availability. You pay only per invocation and compute time — zero cost when idle. AWS Lambda, Azure Functions and GCP Cloud Functions are the main offerings. Key benefits: No server management, automatic scaling to zero (and back), pay per use (very cost effective for low/intermittent traffic), event-driven architecture support. Limitations: (1) Cold start latency — first invocation after idle period can take 100ms to 3 seconds depending on runtime and code size. (2) Maximum execution time — AWS Lambda has a 15-minute maximum. Not suitable for long-running processes. (3) Limited local state — functions are stateless; use DynamoDB, S3 or ElastiCache for state. (4) Debugging complexity — distributed tracing (AWS X-Ray) needed for debugging. (5) Vendor lock-in — Lambda code is often tightly coupled to AWS services.
Q7. What is Infrastructure as Code and why does it matter?
Model Answer:
Infrastructure as Code means managing and provisioning infrastructure (servers, networks, databases) through machine-readable configuration files rather than manual clicks in a console. Tools: Terraform (vendor-agnostic, most popular), AWS CloudFormation (AWS-specific JSON/YAML), Azure ARM/Bicep templates, Pulumi (code in Python/TypeScript). Why it matters: (1) Version control — your infrastructure is in Git; you can see every change, who made it, and when. (2) Repeatability — spin up identical environments for dev, staging and production consistently. (3) Speed — provision complex infrastructure in minutes, not days. (4) Documentation — the code IS the documentation. (5) Disaster recovery — rebuild your entire infrastructure from scratch if needed. (6) Cost control — destroy non-production environments at night and recreate them in the morning. IaC is not optional in modern cloud engineering — it is the foundation of DevOps and GitOps practices.
Q8. How would you design a highly available architecture on AWS?
Model Answer:
A highly available architecture on AWS uses multiple layers of redundancy. (1) Multi-AZ deployment: deploy EC2 instances in at least 2 Availability Zones. An AZ failure will not bring down the application. (2) Auto Scaling Group: automatically add instances when CPU exceeds threshold, remove when traffic drops. Ensures capacity without manual intervention. (3) Application Load Balancer: distributes traffic across healthy instances in multiple AZs. Health checks remove unhealthy instances automatically. (4) Multi-AZ RDS: primary database in one AZ with a synchronous standby in another. Automatic failover in 1–2 minutes if primary fails. (5) ElastiCache (Redis/Memcached): cache database query results to reduce DB load and improve response times. (6) CloudFront: CDN to serve static assets globally — reduces latency and origin server load. (7) S3 for static assets: 99.999999999% durable, automatically multi-AZ, no management needed. (8) Route 53 health checks: route traffic to a backup region if the primary region fails. Target: 99.99% availability (52 minutes downtime per year maximum).
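The availability numbers in the answer can be sanity-checked with a few lines of arithmetic: the downtime budget for a given number of nines, and the composite availability of two AZs (treating AZ failures as independent, which is an idealising assumption):

```python
minutes_per_year = 365 * 24 * 60  # 525,600

def downtime_minutes(availability):
    """Annual downtime budget for a given availability target."""
    return minutes_per_year * (1 - availability)

print(round(downtime_minutes(0.9999), 1))  # 52.6 min/year at 99.99%

# Two AZs, each 99.9% available, assumed to fail independently:
single_az = 0.999
both_down = (1 - single_az) ** 2  # outage only if both fail at once
composite = 1 - both_down
print(round(composite, 6))        # 0.999999 — roughly "six nines"
```

This is why Multi-AZ is the first step in every HA design: stacking two merely-good components yields a far better composite, as long as their failures really are independent.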
🏅 Certification Tip: For AWS SAA-C03: focus on VPC architecture, IAM, EC2/S3/RDS/Lambda, high availability patterns, and the Shared Responsibility Model. For AZ-104: focus on Azure VMs, Azure AD, VNet, Storage Accounts, App Service and Azure Monitor. For GCP ACE: focus on Compute Engine, GKE, Cloud Storage, IAM and BigQuery.