Install and Setup Flannel #
In the previous network setup, we manually added routing tables to direct Pod traffic.
In this section, we’ll install Flannel to establish connectivity for the Pod network.
INFO
If you created Pods previously, it’s necessary to clear them before adjusting the network setup; otherwise, errors may occur.
You can delete the previous Pods by running kubectl delete deployment nginx.
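For example, from wherever kubectl is configured (typically the Master):
kubectl delete deployment nginx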
Download Flannel #
First, download the Flannel daemon, flanneld, on each Node.
# Same on both nodes
wget https://github.com/flannel-io/flannel/releases/download/v0.25.7/flannel-v0.25.7-linux-amd64.tar.gz
Extract the package and move the flanneld file to /usr/local/bin.
# Same on both nodes
tar xzvf flannel-v0.25.7-linux-amd64.tar.gz
mv flanneld /usr/local/bin
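To confirm the binary is in place and executable, you can print its version; it should report v0.25.7.
# Same on both nodes
flanneld --version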
Next, install the Flannel CNI plugin.
# Same on both nodes
wget https://github.com/flannel-io/cni-plugin/releases/download/v1.5.1-flannel3/cni-plugin-flannel-linux-amd64-v1.5.1-flannel3.tgz
Extract the package, move the flannel-amd64 file to /opt/cni/bin, and rename it to flannel.
# Same on both nodes
tar xzvf cni-plugin-flannel-linux-amd64-v1.5.1-flannel3.tgz
mv flannel-amd64 /opt/cni/bin/flannel
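The flannel CNI plugin doesn't configure the network by itself: it delegates to the standard bridge and host-local plugins, and the configuration below also uses portmap. These should already be present in /opt/cni/bin from the earlier CNI setup; you can quickly check:
# Same on both nodes
ls /opt/cni/bin
# Expect to see at least: bridge, flannel, host-local, portmap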
Configure Flannel #
Before switching to Flannel, you’ll need to clear the previous routing and CNI configurations we set up earlier.
# Same on both nodes
rm -f /etc/cni/net.d/*
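If you also added static inter-Node routes in the earlier setup, remove them as well so they don't conflict with the routes Flannel will manage. Assuming the subnets used in that chapter, that would be something like:
# On node01 (adjust to the routes you actually added)
ip route del 10.244.2.0/24
# On node02
ip route del 10.244.1.0/24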
Next, create /etc/cni/net.d/10-flannel.conflist and add the following content. This configuration file is for the container runtime, indicating that we'll be using Flannel.
Same on both nodes
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
Next, prepare a configuration file for flanneld. Create /etc/kube-flannel/net-conf.json and add the following content. The configuration is the same for both Nodes.
Same on both nodes
{
"Network": "10.244.0.0/16",
"EnableNFTables": false,
"Backend": {
"Type": "vxlan"
}
}
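Once flanneld is running, it writes the subnet lease it obtains to /run/flannel/subnet.env, which the flannel CNI plugin reads to configure the delegated bridge. On node01 it will look roughly like this (the exact subnet depends on what the cluster allocates):
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true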
Create /etc/systemd/system/flanneld.service for flanneld.
[Unit]
Description=Flannel
Documentation=https://github.com/flannel-io/flannel/
[Service]
EnvironmentFile=/etc/kubernetes/flanneld.env
ExecStart=/usr/local/bin/flanneld $FLANNELD_ARGS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
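Reload systemd so it picks up the new unit, and enable flanneld so it starts on boot. Don't start it yet; the remaining configuration still needs to be in place.
# Same on both nodes
systemctl daemon-reload
systemctl enable flanneld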
Prepare /etc/kubernetes/flanneld.env and add the following contents.
On node01 (the contents differ between the two Nodes)
NODE_NAME="node01"
FLANNELD_ARGS="-kube-subnet-mgr \
-kubeconfig-file=/etc/kubernetes/admin.kubeconfig \
-ip-masq \
-public-ip=192.168.56.11 \
-iface=eth1"
On node02
NODE_NAME="node02"
FLANNELD_ARGS="-kube-subnet-mgr \
-kubeconfig-file=/etc/kubernetes/admin.kubeconfig \
-ip-masq \
-public-ip=192.168.56.12 \
-iface=eth1"
Flannel can run as a standalone daemon or in containerized mode as a DaemonSet. In the former case, it typically stores network information in etcd, while in the latter, it uses the Kubernetes API. Here, we've made a slight adjustment: by setting NODE_NAME and the kube-subnet-mgr and kubeconfig-file parameters, flanneld stores network information via the Kubernetes API even when running standalone.
Among these parameters, NODE_NAME must be set per Node: node01 on the first Node and node02 on the second. Likewise, adjust public-ip to the main IP used for inter-Node communication on each Node.
After finalizing the flanneld configuration files, we need to make a small adjustment to the kube-controller-manager.
In the chapter Configure Network, we manually assigned subnets to each Node: Node01 uses 10.244.1.0/24 and Node02 uses 10.244.2.0/24. Once Flannel is in use, subnet assignment for each Node can be handled by the Kubernetes cluster itself.
In the startup configuration of the kube-controller-manager on the Master, add the following parameters. This is the /etc/kubernetes/kube-controller-manager.env file on the Master node.
KUBE_CONTROLLER_MANAGER_ARGS="... \
--allocate-node-cidrs \
--cluster-cidr=10.244.0.0/16 \
..."
This configuration means that the subnet for each Node will be allocated by kube-controller-manager. Note that --cluster-cidr must match the Network value in net-conf.json (10.244.0.0/16); --allocate-node-cidrs has no effect without it, so add it here if it wasn't already set.
Save the changes, then restart kube-controller-manager on the Master and flanneld on each Node.
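Assuming both run as systemd services as set up in the earlier chapters, that amounts to:
# On the Master
systemctl restart kube-controller-manager
# Same on both nodes
systemctl restart flanneld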
At this point, when you launch new Pods, their network connectivity will be managed by Flannel.
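To verify, check that each Node was allocated a subnet from 10.244.0.0/16, then launch test Pods and confirm they receive addresses from those subnets:
# On the Master
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
kubectl create deployment nginx --image=nginx --replicas=2
kubectl get pods -o wide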