Kubernetes Pod Networking

Introduction to Pod Communication

So far we have seen how networking works within a pod and how its containers are created. Now, what about inter-pod communication, i.e. the network that lets pods talk to each other? Kubernetes does not come with a built-in solution for this. Instead, it expects you to deploy a networking solution for pod-to-pod communication.

Container Network Interface (CNI)

🔌 Kubernetes Networking Rules

  • Kubernetes doesn’t implement pod networking itself
  • Instead, it provides a clear set of rules that define:
    • How pod networking should work in the cluster
    • What networking solutions must implement to be compatible
    • Requirements for being pluggable into Kubernetes

Container Network Interface Architecture

💡 Understanding CNI

  • CNI (Container Networking Interface) is this set of rules and definitions
  • It works similarly to the Container Runtime Interface (CRI) concept:
    • Just as CRI lets you plug in any container runtime
    • As long as it implements Kubernetes’ interface
    • CNI lets you deploy any networking plugin
    • As long as it implements the CNI specifications

📝 In Practice

  • Any networking solution that implements CNI can be:
    • Deployed in your cluster
    • Used as your networking plugin
    • Integrated seamlessly with Kubernetes

Requirements for Pod Networking

Now let's look at the requirements for a pod networking solution, i.e. what a network plugin must implement.

First of all, Kubernetes expects:

  1. Every pod gets its own unique IP address across the whole cluster
  2. Pods on the same node can talk to each other using those IP addresses
  3. Pods on different nodes can talk to each other without NAT (network address translation)

In short, Kubernetes expects the network plugin to implement a flat pod network spanning the whole cluster, so that pods on all nodes can talk to each other as if they were on the same physical network.

Kubernetes Networking Model

One thing Kubernetes neither defines nor cares about is which IP address range this network uses or which IP addresses the pods get. That is up to the network plugin to decide. This model of how pod networking should look is called the Kubernetes networking model.

Does the service join the same pod network?

The service cannot join the pod network, because a service is not an actual running entity. It is not a container like a pod, so it has no network interfaces or actively listening process. It is a virtual component that lives only in Kubernetes' memory.

There are many networking solutions, or network plugins, that implement this model and fulfill all these requirements. Some of the most common and popular ones are:

  • Flannel
  • Cilium
  • Weavenet from Weaveworks

How CNI Plugins Implement It

Same-Node Pod Communication

🌐 Node Network Setup

  • Each node in the cluster:
    • Has its own IP address
    • Belongs to a private network/LAN
    • In AWS: Part of a VPC with defined CIDR block
    • IP addresses assigned from VPC’s range

🔄 Pod Network Configuration

  • On each node:
    • Network plugin creates private network
    • Separate IP address range for pods
    • ⚠️ Important: Pod IP range must not overlap with node IP range
    • Pods run as isolated machines in this network
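The non-overlap requirement above can be checked programmatically. Below is a minimal sketch using Python's standard `ipaddress` module; the specific ranges (a 10.0.0.0/16 VPC and a 10.244.1.0/24 pod subnet) are illustrative assumptions, not values mandated by Kubernetes.

```python
from ipaddress import ip_network

# Example node network (e.g. an AWS VPC CIDR block) -- illustrative value
node_network = ip_network("10.0.0.0/16")

# Example pod IP range the network plugin picked for this node -- illustrative value
pod_network = ip_network("10.244.1.0/24")

# overlaps() returns True if the two ranges share any addresses
if node_network.overlaps(pod_network):
    raise ValueError("pod CIDR collides with node CIDR")

print("no overlap:", node_network, "vs", pod_network)
```

Real plugins perform an equivalent validation when configuring a node, since an overlapping range would make routing between the node network and the pod network ambiguous.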

Cross-Node Pod Communication

IP Address Range Management

  • How is the IP address range for the virtual network defined on each node?

  • How do we make sure that each node gets a different set of IP addresses so that pods across the nodes will all have unique IPs?

  • Well, as I mentioned, Kubernetes does not care what range you use, so it’s up to the network plugin to define that range.

  • So the network plugin defines a CIDR block for the whole cluster, and from that range an equal-sized subset (a subnet) of IP addresses is assigned to each node.
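The carve-up described above can be sketched in a few lines with Python's `ipaddress` module. The 10.244.0.0/16 cluster CIDR and the /24-per-node split are common defaults in some plugins, used here purely as an illustrative assumption:

```python
from ipaddress import ip_network

# Cluster-wide pod CIDR chosen by the plugin -- illustrative value
cluster_cidr = ip_network("10.244.0.0/16")

# Split the /16 into /24 subnets: 256 subnets, each with 254 usable pod IPs
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

# Hand one subnet to each node, so pod IPs are unique across the cluster
nodes = ["node-1", "node-2", "node-3"]
allocation = dict(zip(nodes, node_subnets))

for node, subnet in allocation.items():
    print(f"{node}: {subnet}")
# node-1: 10.244.0.0/24
# node-2: 10.244.1.0/24
# node-3: 10.244.2.0/24
```

Because every node draws from a disjoint slice of the same cluster CIDR, no two pods anywhere in the cluster can end up with the same IP address.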

Gateway Communication

  • So we have virtual private networks on each node with their own sets of IP addresses.

  • Now the question is: how does my app pod on node 1 talk to my db pod on node 3 using the db pod's IP address?

  • The pods are in their own private, isolated networks, so they can't reach each other directly; they have to communicate through gateways.

  • Let's see what that means. Basically, route rules are defined in the route table of each server, mapping each node's IP address as the gateway to the pod network on that specific node.

  • To forward a packet, a node looks at the destination IP address, checks its route table to identify which node owns that subnet (IP range), and then forwards the packet to that node.
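The lookup described above can be sketched as a simple table mapping each node's pod subnet to that node's IP, which acts as the gateway. All addresses here are illustrative assumptions:

```python
from ipaddress import ip_address, ip_network

# Destination pod subnet -> gateway (node) IP -- illustrative values
route_table = {
    ip_network("10.244.1.0/24"): "192.168.1.11",  # pod network on node 1
    ip_network("10.244.2.0/24"): "192.168.1.12",  # pod network on node 2
    ip_network("10.244.3.0/24"): "192.168.1.13",  # pod network on node 3
}

def next_hop(dst_ip: str) -> str:
    """Decide which node to forward a packet to, based on the destination pod IP."""
    dst = ip_address(dst_ip)
    for subnet, gateway in route_table.items():
        if dst in subnet:
            return gateway
    raise LookupError(f"no route to {dst_ip}")

# A packet for a pod on node 3 is forwarded to node 3's IP
print(next_hop("10.244.3.7"))  # -> 192.168.1.13
```

Real kernels use a longest-prefix-match routing table rather than a Python dictionary, but the decision being made is the same: which node is the gateway for this pod subnet?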


Scaling Pod Networks

🔍 Challenge: With just three nodes, route table management is straightforward. But what happens at scale?

  • When dealing with thousands of nodes in a cluster:
    • Route table management becomes complex
    • Tracking all routes becomes difficult
    • Manual management becomes impractical

💡 Solution: Network plugins provide automated, scalable solutions

  • How network plugins handle scaling:
    • Network plugin (e.g., Weave) deploys as a pod on each node
    • These pods:
      • Form a self-discovering group
      • Enable direct communication between each other
      • Share real-time pod location information across nodes

Weavenet Implementation

💡 Overview: Weavenet is our chosen CNI plugin for Kubernetes networking

  • Key features of Weavenet:
    • Easy deployment process
    • Runs as a DaemonSet
      • Automatically schedules one Weavenet pod per node
      • Ensures cluster-wide network coverage

▶️ Next Step: We’ll proceed with installing Weavenet to establish our cluster’s pod network