Deploy highly available kube-controller-manager clusters

The cluster consists of 3 nodes. Once started, a leader node is chosen through an election, and the other nodes block. When the leader becomes unavailable, the blocked nodes elect a new leader, which guarantees the availability of the service.
Communicates with the secure port of kube-apiserver;
Outputs metrics in Prometheus format on the secure port (https, 10257);
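Once the service is running, the metrics endpoint can be spot-checked with a manual curl scrape; a sketch, assuming a bearer token authorized to read /metrics is available in `$TOKEN`:

```shell
# Secure metrics endpoint from this guide (https, port 10257).
METRICS_URL="https://127.0.0.1:10257/metrics"
# -k skips server-certificate verification for a quick manual check;
# $TOKEN is an assumption: any token with metrics-read permission works.
curl -sk -H "Authorization: Bearer ${TOKEN:-}" "$METRICS_URL" | head -n 5
```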
Note: Unless otherwise indicated, all operations in this document are performed on the qist node.

12.1 Create a kube-controller-manager certificate and private key

To create a certificate signing request:

cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-controller-manager.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "$CERT_ST",
      "L": "$CERT_L",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

The hosts list contains all kube-controller-manager node IPs;
CN and O are both system:kube-controller-manager; Kubernetes' built-in ClusterRoleBinding
system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.
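The built-in binding mentioned above can be inspected directly with kubectl; a sketch, assuming a working admin kubeconfig on the node:

```shell
# Name of the built-in ClusterRoleBinding referenced above
BINDING="system:kube-controller-manager"
# Show which subjects it binds and which ClusterRole it grants
kubectl get clusterrolebinding "$BINDING" -o wide
# List the first rules of the matching ClusterRole
kubectl describe clusterrole "$BINDING" | head -n 20
```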
Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert \
-ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
-ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
-config=/opt/k8s/cfssl/ca-config.json \
-profile=kubernetes \
/opt/k8s/cfssl/k8s/k8s-controller-manager.json | \
cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-controller-manager
root@Qist work# ll /opt/k8s/cfssl/pki/k8s/k8s-controller-manager*
-rw------- 1 root root 1679 Dec  3  2020 /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-key.pem
-rw-r--r-- 1 root root 1127 Dec  3  2020 /opt/k8s/cfssl/pki/k8s/k8s-controller-manager.csr
-rw-r--r-- 1 root root 1505 Dec  3  2020 /opt/k8s/cfssl/pki/k8s/k8s-controller-manager.pem
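Before distributing the files it is worth confirming the certificate subject; a sketch using openssl on the path generated above — CN and O should both show system:kube-controller-manager:

```shell
# Path of the certificate generated above
CERT=/opt/k8s/cfssl/pki/k8s/k8s-controller-manager.pem
# Print subject and expiry; expect CN and O of system:kube-controller-manager
openssl x509 -in "$CERT" -noout -subject -enddate
```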
Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-* root@192.168.2.175:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-* root@192.168.2.176:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager-* root@192.168.2.177:/apps/k8s/ssl/k8s

12.2 Creating and Distributing Kubeconfig Files

kube-controller-manager accesses the apiserver using a kubeconfig file, which provides the
apiserver address, the embedded CA certificate, and the kube-controller-manager client certificate:

cd /opt/k8s/kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/opt/k8s/cfssl/pki/k8s/k8s-controller-manager.pem \
--embed-certs=true \
--client-key=/opt/k8s/cfssl/pki/k8s/k8s-controller-manager-key.pem \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context kubernetes --kubeconfig=kube-controller-manager.kubeconfig

kube-controller-manager runs on the same nodes as kube-apiserver, so kube-apiserver is accessed
directly through the local node address. Distribute the kubeconfig to all master nodes:
cd /opt/k8s/kubeconfig
scp kube-controller-manager.kubeconfig root@192.168.2.175:/apps/k8s/config/
scp kube-controller-manager.kubeconfig root@192.168.2.176:/apps/k8s/config/
scp kube-controller-manager.kubeconfig root@192.168.2.177:/apps/k8s/config/
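The generated kubeconfig can be sanity-checked before distribution; a sketch — the `version` call only succeeds once kube-apiserver is reachable at the configured address:

```shell
# Kubeconfig generated above
KCFG=/opt/k8s/kubeconfig/kube-controller-manager.kubeconfig
# Inspect the cluster/user/context entries (certificate data is elided by default)
kubectl config view --kubeconfig="$KCFG"
# Exercise the embedded client certificate against the apiserver
kubectl --kubeconfig="$KCFG" version
```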

 

12.3 Create a kube-controller-manager startup configuration

cd /opt/k8s/work
cat >kube-controller-manager <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--profiling \
--concurrent-service-syncs=2 \
--concurrent-deployment-syncs=10 \
--concurrent-gc-syncs=30 \
--leader-elect=true \
--bind-address=0.0.0.0 \
--service-cluster-ip-range=10.66.0.0/16 \
--cluster-cidr=10.80.0.0/12 \
--node-cidr-mask-size=24 \
--cluster-name=kubernetes \
--allocate-node-cidrs=true \
--kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--authentication-kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--authorization-kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--use-service-account-credentials=true \
--client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-allowed-names=aggregator \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--node-monitor-grace-period=30s \
--node-monitor-period=5s \
--pod-eviction-timeout=1m0s \
--node-startup-grace-period=20s \
--terminated-pod-gc-threshold=50 \
--alsologtostderr=true \
--cluster-signing-cert-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--cluster-signing-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
--deployment-controller-sync-period=10s \
--experimental-cluster-signing-duration=876000h0m0s \
--root-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--service-account-private-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
--enable-garbage-collector=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/apps/k8s/ssl/k8s/k8s-controller-manager.pem \
--tls-private-key-file=/apps/k8s/ssl/k8s/k8s-controller-manager-key.pem \
--kube-api-qps=100 \
--kube-api-burst=100 \
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
--log-dir=/apps/k8s/log \
--v=2"
EOF

--port=0: turns off listening on the insecure (http) port; the --address parameter is then ignored and --bind-address takes effect;
--secure-port=10257: the https port that serves /metrics requests;
--kubeconfig: path to the kubeconfig file that kube-controller-manager uses to connect to and authenticate against kube-apiserver;
--authentication-kubeconfig and --authorization-kubeconfig: kube-controller-manager uses them to connect to the apiserver to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to validate the client certificates of https /metrics requests. If these two parameters are not configured, a client's request to the kube-controller-manager https port is denied (with an insufficient-permissions message);
--cluster-signing-*-file: signs the certificates created by TLS bootstrap;
Distribute the kube-controller-manager configuration file to all master nodes:

cd /opt/k8s/work
scp kube-controller-manager root@192.168.2.175:/apps/k8s/conf/
scp kube-controller-manager root@192.168.2.176:/apps/k8s/conf/
scp kube-controller-manager root@192.168.2.177:/apps/k8s/conf/

12.4 Create a kube-controller-manager systemd unit file

cd /opt/k8s/work
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-controller-manager
ExecStart=/apps/k8s/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

12.5 Create and distribute kube-controller-manager systemd unit files for each node

Distribute to all master nodes:

cd /opt/k8s/work
scp kube-controller-manager.service root@192.168.2.175:/usr/lib/systemd/system/
scp kube-controller-manager.service root@192.168.2.176:/usr/lib/systemd/system/
scp kube-controller-manager.service root@192.168.2.177:/usr/lib/systemd/system/

12.6 Start the kube-controller-manager service

# Reload all systemd unit files
systemctl daemon-reload
# Enable kube-controller-manager to start on boot
systemctl enable kube-controller-manager
# (Re)start kube-controller-manager
systemctl restart kube-controller-manager

12.7 Check the status of the Service

systemctl status kube-controller-manager|grep Active
Kube-controller-manager listens on port 10257 and receives https requests:

[root@k8s-master-1 conf]# netstat -lnpt | grep kube-cont
tcp6       0      0 :::10257       :::*       LISTEN      24078/kube-controll
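By default the delegating authorizer always allows /healthz (via --authorization-always-allow-paths), so liveness can be checked without a token; a sketch:

```shell
# Health endpoint on the secure port from this guide
HEALTHZ_URL="https://127.0.0.1:10257/healthz"
# -k skips certificate verification; a healthy instance should answer "ok"
curl -sk "$HEALTHZ_URL"
```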

12.8 View the current leader

kubectl -n kube-system get leases kube-controller-manager
NAME HOLDER AGE
kube-controller-manager k8s-master-2_c445a762-adc1-4623-a9b5-4d8ea3d34933 1d

12.9 Test the high availability of the kube-controller-manager cluster

Stop the kube-controller-manager service on one or two nodes, then observe the logs of the other nodes to see whether one of them has acquired the leader lock.
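The failover test above can be sketched as follows; the node in step 2 is whichever holder the first command reports:

```shell
# Lease object used for leader election (kube-system namespace)
LEASE=kube-controller-manager
# 1) Record the current leader
kubectl -n kube-system get lease "$LEASE" -o jsonpath='{.spec.holderIdentity}{"\n"}'
# 2) On that node, stop the service:
#      systemctl stop kube-controller-manager
# 3) After the lease expires (--leader-elect-lease-duration, 15s by default),
#    the holder should change to another node:
kubectl -n kube-system get lease "$LEASE" -o jsonpath='{.spec.holderIdentity}{"\n"}'
```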
