Most content in this document is based on "Kubernetes In Action, Second Edition" by Marko Lukša and Kevin Conner.
Chapter 3 transitions from the theoretical underpinnings of containers to the practical reality of operating a Kubernetes cluster. It guides the reader through the process of setting up a development environment, whether locally (using Minikube, Docker Desktop, or kind) or in the cloud (GKE, EKS). The chapter introduces kubectl as the primary interface for cluster management and walks through the entire lifecycle of a first deployment: from creating a Deployment object and Pods to exposing the application via a Service and horizontally scaling the workload. This chapter establishes the “Declarative Model” of Kubernetes—where the user specifies a desired state and Kubernetes works to maintain it.
The chapter evaluates several ways to get a cluster running: local options such as Minikube, kind, and Docker Desktop; managed cloud offerings such as GKE and EKS; and manual installation with kubeadm.

kubectl is the command-line tool that communicates with the Kubernetes API Server. It reads its configuration from a kubeconfig file (usually ~/.kube/config), allowing you to switch between multiple clusters.

```shell
# Basic cluster inspection commands
kubectl cluster-info
kubectl get nodes
kubectl describe node <node-name>
```
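Because a single kubeconfig file can hold credentials for several clusters, switching between them is a matter of changing the active context. A brief sketch (the context name is a placeholder, not from the book):

```shell
# List all contexts defined in ~/.kube/config
kubectl config get-contexts

# Show the currently active context
kubectl config current-context

# Switch to another cluster (context name is hypothetical)
kubectl config use-context my-gke-cluster
```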
The chapter demonstrates deploying the kiada demo application.
A Deployment is a high-level object that manages Pods.
```shell
kubectl create deployment kiada --image=luksa/kiada:0.1
```
kubectl sends an HTTP POST request to the API Server. The chain of events that follows is illustrated below:

```mermaid
sequenceDiagram
    participant U as User (kubectl)
    participant A as API Server
    participant E as etcd
    participant C as Controllers
    participant S as Scheduler
    participant K as Kubelet
    participant R as Container Runtime
    U->>A: kubectl create deployment
    A->>E: Store Deployment
    C->>A: Watch: New Deployment?
    C->>A: Create ReplicaSet & Pod objects
    A->>E: Store Pods (unscheduled)
    S->>A: Watch: Unscheduled Pods?
    S->>A: Assign Pod to Node X
    A->>E: Update Pod status (scheduled)
    K->>A: Watch: Pods for Node X?
    K->>R: Pull Image & Start Container
    R-->>K: Container Started
    K->>A: Update Pod status (Running)
```
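Once the controllers, scheduler, and kubelet have done their work, the resulting objects can be inspected. A quick sketch (column contents vary by cluster):

```shell
# The Deployment, its ReplicaSet, and the Pod it spawned
kubectl get deployments
kubectl get replicasets
kubectl get pods -o wide   # -o wide also shows Pod IPs and assigned nodes
```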
Pods are ephemeral and have internal IP addresses. To make the app reachable from the outside, you must create a Service.
```shell
kubectl expose deployment kiada --type=LoadBalancer --port=8080
```
```mermaid
graph LR
    subgraph External_World [External Clients]
        Client[Web Browser]
    end
    subgraph Cluster [Kubernetes Cluster]
        LB[Load Balancer]
        SVC[Service: kiada]
        P1[Pod 1: IP 10.1.1.2]
        P2[Pod 2: IP 10.1.1.3]
        P3[Pod 3: IP 10.1.1.4]
    end
    Client -->|Req to Public IP| LB
    LB -->|Forward to NodePort| SVC
    SVC -->|Load Balance| P1
    SVC -->|Load Balance| P2
    SVC -->|Load Balance| P3
```
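Provisioning the cloud load balancer takes a moment; until then the Service's external IP shows as `<pending>`. A sketch of checking and calling the exposed app (the IP is a placeholder):

```shell
# Wait until EXTERNAL-IP is no longer <pending>
kubectl get svc kiada

# Hit the app through the load balancer (substitute the real IP)
curl http://<external-ip>:8080
```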
Scaling is trivial in Kubernetes because of the declarative model.
```shell
kubectl scale deployment kiada --replicas=3
```
Kubernetes detects the difference between the “desired state” (3 replicas) and “current state” (1 replica) and automatically creates 2 more Pods.
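You can watch this reconciliation happen live. A sketch (the READY values shown are what you would expect, not output from the book):

```shell
# READY moves from 1/3 to 3/3 as the controller converges on the desired state
kubectl get deployment kiada

# Watch the two new Pods go from Pending to Running
kubectl get pods --watch
```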
| Object | Role |
| :--- | :--- |
| Pod | The smallest unit; contains one or more containers that share Network/UTS namespaces. |
| ReplicaSet | Ensures exactly N copies of a Pod are running. |
| Deployment | Manages ReplicaSets; handles rolling updates and rollbacks. |
| Service | Provides a single stable entry point (IP/DNS) for a group of Pods. |
While the chapter uses imperative commands such as kubectl create, the real power of K8s lies in the declarative kubectl apply -f manifest.yaml: you don’t tell K8s how to scale; you tell it what the final count should be.

Chapter 3 successfully demonstrates that while the underlying machinery is complex, the user experience is streamlined through kubectl. The fundamental workflow—Deploy -> Expose -> Scale—forms the basis for almost all Kubernetes operations. Understanding the relationship between Deployments, Pods, and Services is the “Aha!” moment for any new Kubernetes engineer.
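The declarative model can be sketched as a minimal manifest. The image and port mirror the chapter's example; the apiVersion, labels, and selector follow standard Kubernetes conventions and are not taken from the book:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kiada
spec:
  replicas: 3                 # desired state: Kubernetes creates/removes Pods to match
  selector:
    matchLabels:
      app: kiada
  template:
    metadata:
      labels:
        app: kiada
    spec:
      containers:
      - name: kiada
        image: luksa/kiada:0.1
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f kiada-deploy.yaml` declares the desired state; editing `replicas` and re-applying rescales the app without any imperative command.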