Install Control Plane Components #
On the control node, we need to install four services: etcd, kube-apiserver, kube-controller-manager, and kube-scheduler. Additionally, we’ll install two tools, etcdctl and kubectl, which are used to manage etcd and Kubernetes, respectively.
All of these are binary executables, and we’ll place them in /usr/local/bin. If you haven’t done so already, you can add /usr/local/bin to the PATH in ~/.bash_profile now.
echo "export PATH=$PATH:/usr/local/bin" >> ~/.bash_profile
source ~/.bash_profile
Download #
etcd
Download etcd here:
https://github.com/etcd-io/etcd/releases/
wget https://github.com/etcd-io/etcd/releases/download/v3.4.34/etcd-v3.4.34-linux-amd64.tar.gz
Extract the files, and you’ll find etcd and etcdctl inside. Place both in /usr/local/bin.
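For example, assuming the tarball was downloaded to the current directory, one way to do this is:
tar -xzf etcd-v3.4.34-linux-amd64.tar.gz
cp etcd-v3.4.34-linux-amd64/etcd etcd-v3.4.34-linux-amd64/etcdctl /usr/local/bin/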
Kubernetes control plane components
Kubernetes control plane components can be downloaded here:
https://kubernetes.io/releases/download/#binaries
Download kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl, and place them in /usr/local/bin as well.
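As a sketch, the binaries can be fetched directly from the official download site. The version below is an assumption; substitute whichever release you want to install:
VERSION=v1.31.0   # assumed version; pick the release you want
cd /usr/local/bin
for BIN in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
  wget https://dl.k8s.io/release/$VERSION/bin/linux/amd64/$BIN
done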
At this point, there should be a total of 8 files in /usr/local/bin (including the 2 cfssl tools from the previous section). Adjust their ownership and permissions accordingly.
chown root:root /usr/local/bin/*
chmod +x /usr/local/bin/*
Configure etcd #
Create a folder for etcd to store its data.
mkdir /var/lib/etcd
Create /etc/systemd/system/etcd.service and add the following configuration:
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --name master \
  --data-dir /var/lib/etcd \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://0.0.0.0:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://0.0.0.0:2380 \
  --initial-cluster master=http://0.0.0.0:2380 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster-state new
Restart=always
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
The above configuration starts a single-node etcd instance that listens on all addresses and does not enforce authentication. Do not use this setup in a production environment.
Start etcd and enable it to start on boot.
systemctl start etcd
systemctl enable etcd
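To verify that etcd is up, you can query the health of the client endpoint we configured above; it should report the endpoint as healthy:
etcdctl --endpoints=http://127.0.0.1:2379 endpoint health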
Configure kube-apiserver #
Create /etc/systemd/system/kube-apiserver.service and add the following configuration:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-apiserver.env
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_ARGS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Create /etc/kubernetes/kube-apiserver.env and add the following startup parameters:
KUBE_APISERVER_ARGS="--allow-privileged=true \
  --apiserver-count=1 \
  --authorization-mode=Node,RBAC \
  --bind-address=192.168.56.10 \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --etcd-servers=http://127.0.0.1:2379 \
  --event-ttl=1h \
  --runtime-config='api/all=true' \
  --service-account-key-file=/etc/kubernetes/pki/sa-pub.pem \
  --service-account-signing-key-file=/etc/kubernetes/pki/sa-key.pem \
  --service-account-issuer=https://master:6443 \
  --service-cluster-ip-range=10.96.0.0/24 \
  --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --v=2"
These configurations specify the kube-apiserver authentication method, certificate locations, the etcd address, the Service network range (service-cluster-ip-range), and other relevant details.
Start kube-apiserver and enable it to start automatically on boot.
systemctl start kube-apiserver
systemctl enable kube-apiserver
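As a quick sanity check, the default RBAC rules allow unauthenticated requests to the health endpoints, so a plain curl against /healthz (validating the server certificate against our CA) should print ok. This assumes the hostname master resolves to the control node, as in the certificates we generated earlier:
curl --cacert /etc/kubernetes/pki/ca.pem https://master:6443/healthz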
Prepare kubeconfig #
Next, we’ll prepare a kubeconfig file. As mentioned earlier, this file serves as the “key” to accessing the Kubernetes API.
Create /etc/kubernetes/admin.kubeconfig and add the following configuration:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.pem
    server: https://master:6443
  name: kubernetes
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/client.pem
    client-key: /etc/kubernetes/pki/client-key.pem
contexts:
- context:
    cluster: kubernetes
    user: admin
    namespace: default
  name: admin@kubernetes
current-context: admin@kubernetes
This file specifies the API access address, the CA certificate, and the client certificate/key that we’ll use as a client.
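If you’d rather not write the YAML by hand, an equivalent file can be generated with kubectl config commands:
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --server=https://master:6443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/pki/client.pem \
  --client-key=/etc/kubernetes/pki/client-key.pem \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context admin@kubernetes \
  --cluster=kubernetes --user=admin --namespace=default \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig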
Add the KUBECONFIG environment variable to ~/.bash_profile, so kubectl will know to use this configuration file.
echo "export KUBECONFIG=/etc/kubernetes/admin.kubeconfig" >> ~/.bash_profile
source ~/.bash_profile
Let’s test the connectivity between kubectl and kube-apiserver by running the following command:
kubectl version
If you see the Server Version displayed, it means the connection between kubectl and kube-apiserver is successful, and there are no issues with the certificate generation and configuration. If Server Version is not displayed, stop and review the previous steps to ensure everything is set up correctly.
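A good first step when troubleshooting is to check the kube-apiserver service itself and its recent log output, for example:
systemctl status kube-apiserver
journalctl -u kube-apiserver --no-pager | tail -n 50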
Configure kube-controller-manager #
Create /etc/systemd/system/kube-controller-manager.service and add the following configuration:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.env
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Create /etc/kubernetes/kube-controller-manager.env and add the following startup parameters:
KUBE_CONTROLLER_MANAGER_ARGS="--bind-address=192.168.56.10 \
  --cluster-cidr=10.244.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/sa-key.pem \
  --service-cluster-ip-range=10.96.0.0/24 \
  --use-service-account-credentials=true \
  --v=2"
These configurations specify the cluster’s Pod address range, Service address range, certificate locations, and the kubeconfig path, among other settings.
Start kube-controller-manager and enable it to start automatically on boot.
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
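Once the controller manager is running, its service account controller should create a default ServiceAccount in each namespace, which gives a quick way to confirm it is working:
kubectl get serviceaccounts --all-namespaces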
Configure kube-scheduler #
Create /etc/systemd/system/kube-scheduler.service and add the following configuration:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-scheduler.env
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Create /etc/kubernetes/kube-scheduler.env and add the following startup parameters:
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/admin.kubeconfig \
  --leader-elect=true \
  --v=0"
The scheduler mainly needs to connect to the API Server, so its configuration is relatively simple. Start kube-scheduler and enable it to start automatically on boot.
systemctl start kube-scheduler
systemctl enable kube-scheduler
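As a final sanity check, on recent Kubernetes versions both the controller manager and the scheduler take leader-election leases in the kube-system namespace once they are running, so you should see one lease for each:
kubectl -n kube-system get leases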
Next, let’s install the Node components.