Verifying Network IP Ranges
Checking for IP Range Overlap
[!IMPORTANT] The pod network CIDR block must not overlap with the node IP address range.
Node IP Range
[!INFO] Nodes get IP addresses from the VPC (private network) in AWS:
- Check VPC service to see IP address range
- Example: `172.16.x.x` (may vary by network)
Pod Network Range
[!INFO] Weavenet default configuration:
- Default range: `10.32.0.0/12`
- Capacity: ~1 million IP addresses
- Range is evenly divided between cluster nodes
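The "~1 million" capacity figure is easy to sanity-check with a bit of arithmetic: a /12 prefix leaves 32 − 12 = 20 host bits, i.e. 2^20 addresses:

```shell
# A /12 prefix leaves 32 - 12 = 20 host bits, so the range holds 2^20 addresses
echo $((2 ** 20))   # prints: 1048576 (roughly one million)
```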
Validation
[!NOTE] In this case, the pod network CIDR (`10.32.0.0/12`) and the node CIDR do not overlap, so we can use Weavenet's default IP range.
[!TIP] To verify CIDR ranges:
- Use any CIDR block calculator
- Input the CIDR block
- Review full IP address range
- Confirm no overlap between pod and node ranges
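Instead of a web-based CIDR calculator, the overlap check can also be scripted. This is a minimal sketch that assumes a `/16` node range, since the notes only show `172.16.x.x`; substitute your VPC's actual CIDR block:

```shell
# Pod CIDR vs. node CIDR overlap check.
# 172.16.0.0/16 is an assumed node range based on the 172.16.x.x example;
# replace it with the real CIDR shown in your VPC console.
python3 -c "
import ipaddress
pods  = ipaddress.ip_network('10.32.0.0/12')
nodes = ipaddress.ip_network('172.16.0.0/16')
print('overlap' if pods.overlaps(nodes) else 'no overlap')
"   # prints: no overlap
```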
Installing Weavenet CNI Plugin
Finding the Installation Guide
[!INFO] Weavenet can be found in the official Kubernetes documentation under network add-ons, along with other CNI plugin options.
Installation Methods
Quick Install
[!NOTE] Weavenet provides a one-line installation command that:
- Uses `kubectl apply` to deploy the plugin
- Applies a Kubernetes manifest containing all required components
Manual Installation
[!TIP] For better version control and configuration:
- Download the manifest file locally: `wget <weavenet-url> -O weave.yaml`
- Review the manifest contents before applying
Understanding the Manifest
[!INFO] The manifest creates several Kubernetes components:
- Main component is a DaemonSet
- Weavenet application runs on port 6784
Configuring the CIDR Block
Default Configuration
[!NOTE] Weavenet uses `10.32.0.0/12` as the default CIDR block.
Custom Configuration
[!TIP] To override the default CIDR block:
- Locate the DaemonSet component in the YAML
- Find the main `weave-kube` container
- Modify the launch command parameters
[!IMPORTANT] When modifying the YAML:
- Commands are specified as a list in YAML format
- Each command option should be on a new line
- Example format:

```yaml
command:
  - /home/weave/launch.sh
  - --ipalloc-range=10.32.1.0/12
  - <additional-options>
And now we can apply this manifest, which will install all these components, including the DaemonSet.
First, our master node should be in a ready state, so let's check that with `kubectl get node`. And as you see, we have status Ready for the master, so that one's fixed. The second issue was that the CoreDNS pods were not starting; they were stuck in a Pending state. That should be fixed as well, so let's get the pods in the `kube-system` namespace.
And there you go. We have both CoreDNS pods in a Running status, as well as a new pod here, weave-net, on our master node. And since we just have one node in the cluster, we have one Weave pod, which basically manages the pod network in our cluster.
And finally, we've talked about the pod network and pod IPs. So how do we even see the IP addresses of the pods in the cluster? Well, one way of doing it is by getting the detailed output of a pod. So let's take the CoreDNS pod and describe it in detail with the `kubectl describe pod` command, passing the pod name as well as the namespace.
And this will give us a bunch of output about the containers creating and starting inside the pod, as well as some metadata of the pod. And as you see, we have an IP address for the CoreDNS pod which is in the range of the CIDR block configured for Weavenet (`10.32.0.0/12`). So that's one way to check the IP address of a pod.
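If you want to double-check that a reported address really falls inside Weavenet's CIDR block, that membership test can be scripted as well (`10.32.0.3` here is a hypothetical pod IP; substitute whatever `kubectl describe` reported):

```shell
# 10.32.0.3 is a hypothetical pod IP; substitute the address kubectl reported
python3 -c "
import ipaddress
print(ipaddress.ip_address('10.32.0.3') in ipaddress.ip_network('10.32.0.0/12'))
"   # prints: True
```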
However, doing this for every pod in the cluster is too much work, and you also don't get a nice overview of the list of pods with their IP addresses. To do that, we can display the pods using the output option `-o wide`, which gives us an extended output of the pods. So let's execute that. And there you go: we have additional columns with extra information, and one of them is the IP address.
And as you see, the CoreDNS pods, which started after Weavenet got deployed, both have IP addresses from the range of Weavenet's pod network CIDR block. You'll also notice that all the other pods, the static pods as well as kube-proxy and weave-net itself, have a different IP address, which is the IP address of the control plane node they are running on. That's because these pods are not part of the regular pod network like CoreDNS is, or like any other application pods or database pods that we're going to deploy later.
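The mechanism behind this is the `hostNetwork` field: static control-plane pods and kube-proxy run in the node's network namespace rather than on the pod network. A minimal pod spec illustrating the field (the name and image here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-example   # hypothetical name
spec:
  hostNetwork: true        # pod shares the node's network namespace,
                           # so its pod IP equals the node IP
  containers:
    - name: app
      image: nginx
```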
And that's why they get the static IP address of the node. To check that, we can also do `kubectl get node` with wide output and see the internal IP address of the master node, which is the same as the one right here. So now we have our cluster completely set up: our control plane processes are running and we have a pod network deployed in the cluster. However, right now we just have a one-node cluster with only a master node, so it's time to join the worker nodes and make them part of the cluster. Installing the pod network plugin was a prerequisite for adding the worker nodes; otherwise we wouldn't be able to join them and create pod networks on them. So now that we have everything prepared, let's configure and join the worker nodes.