Installing a Highly Available Kubernetes v1.11.0 Cluster with Kubeadm (Stacked Control Plane Nodes for Bare Metal)

I. Approaches to High-Availability Deployment

The official documentation describes two different ways to set up a highly available Kubernetes cluster with kubeadm:

1. With stacked masters

This approach requires less infrastructure: the control plane nodes and etcd members are co-located.

2. With an external etcd cluster

This approach requires more infrastructure: the control plane nodes and etcd members are kept separate.

This article focuses on the first approach, stacked masters. See the references at the end for the link to the official documentation.

II. Environment and Version Information

docker 17.03.1-ce
kubeadm v1.11.0
kubelet v1.11.0
kubectl v1.11.0
calico v3.1.3

III. Deployment Architecture

1. Kubernetes Masters (Control Plane)

172.16.170.128 server01 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico-node
172.16.170.129 server02 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico-node
172.16.170.130 server03 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico-node

2. Kubernetes Node

172.16.170.134 server07 -> docker kubelet kube-proxy calico-node
172.16.170.135 server08 -> docker kubelet kube-proxy calico-node
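
The steps below address the machines by hostname (server01-server03, server07, server08). A simple way to make those names resolvable on every host, assuming no internal DNS is available, is to append the cluster hosts to /etc/hosts on each machine:

cat <<EOF >> /etc/hosts
172.16.170.128 server01
172.16.170.129 server02
172.16.170.130 server03
172.16.170.134 server07
172.16.170.135 server08
EOF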

IV. Deployment Walkthrough

1. Deploy HAProxy as a load balancer on every Kubernetes control plane node (run as a static Pod managed by the kubelet)

mkdir -p /etc/haproxy/
cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0 err
    maxconn 50000
    uid 99
    gid 99
    #daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode tcp
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 10s
    timeout client 10m
    timeout server 10m

listen stats
    mode http
    bind 0.0.0.0:9090
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth admin:12345678
    stats hide-version
    stats admin if TRUE

frontend kube-apiserver-https
    mode tcp
    bind :8443
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    server server01 172.16.170.128:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
    server server02 172.16.170.129:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
    server server03 172.16.170.130:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
EOF

cat <<EOF > /etc/kubernetes/manifests/haproxy.yaml
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: haproxy
    tier: control-plane
  name: kube-haproxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-haproxy
    image: haproxy:1.7-alpine
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: haproxy-cfg
      readOnly: true
      mountPath: /usr/local/etc/haproxy/haproxy.cfg
  volumes:
  - name: haproxy-cfg
    hostPath:
      path: /etc/haproxy/haproxy.cfg
EOF
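
The kubelet does not start this static Pod until kubeadm init (step 3 below) has been run on the node. After that, a minimal sanity check against the stats endpoint configured above (the admin:12345678 credentials are the ones set in haproxy.cfg; adjust if you changed them):

# The stats page should answer on port 9090 of the local node
curl -u admin:12345678 http://127.0.0.1:9090/haproxy-status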

2. Deploy Keepalived on every Kubernetes control plane node (run as a static Pod managed by the kubelet)

cat <<EOF > /etc/kubernetes/manifests/keepalived.yaml
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: keepalived
    tier: control-plane
  name: kube-keepalived
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-keepalived
    image: osixia/keepalived:1.4.5
    env:
    - name: KEEPALIVED_VIRTUAL_IPS
      value: 172.16.170.151
    - name: KEEPALIVED_INTERFACE
      value: ens33
    - name: KEEPALIVED_UNICAST_PEERS
      value: "#PYTHON2BASH:['172.16.170.128', '172.16.170.129', '172.16.170.130']"
    - name: KEEPALIVED_PASSWORD
      value: docker
    - name: KEEPALIVED_PRIORITY
      value: "100"
    - name: KEEPALIVED_ROUTER_ID
      value: "51"
    resources:
      requests:
        cpu: 100m
    securityContext:
      privileged: true
      capabilities:
        add:
        - NET_ADMIN
EOF
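
Once the keepalived static Pods are running, exactly one of the three masters should hold the virtual IP. A minimal check (the interface name ens33 and the VIP 172.16.170.151 are the values configured above; adjust if yours differ):

# Shows whether the current node holds the VIP
ip addr show ens33 | grep 172.16.170.151
# The VIP should answer pings from any host on the subnet
ping -c 3 172.16.170.151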

3. On the first control plane node (server01):

# Generate the configuration file used by kubeadm init
mkdir -p kubeadm/config/
cat <<EOF > kubeadm/config/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServerCertSANs:
- "172.16.170.151"
api:
  controlPlaneEndpoint: "172.16.170.151:8443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://172.16.170.128:2379"
      advertise-client-urls: "https://172.16.170.128:2379"
      listen-peer-urls: "https://172.16.170.128:2380"
      initial-advertise-peer-urls: "https://172.16.170.128:2380"
      initial-cluster: "server01=https://172.16.170.128:2380"
    serverCertSANs:
    - server01
    - 172.16.170.128
    peerCertSANs:
    - server01
    - 172.16.170.128
controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.211.0.0/16
  serviceSubnet: 10.96.0.0/16

kubeProxy:
  config:
    mode: iptables
EOF

# Pull the Docker images required for kubeadm initialization
kubeadm config images pull --config kubeadm/config/kubeadm-config.yaml

# Run the kubeadm initialization (make a note of the node join command printed in the output)
kubeadm init --config kubeadm/config/kubeadm-config.yaml

# Configure kubectl access on the current node
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Save the command from the output that looks like the one below; it is needed later when joining nodes
kubeadm join 172.16.170.151:8443 --token lt0o7j.ayxwcqr8v88spzjj --discovery-token-ca-cert-hash sha256:1ad613cf114281af6eca0afeebae7185ed69218ff92b73ebe248b90cc74353a3

# Set up passwordless SSH from server01 to server02 and server03
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@server02
ssh-copy-id -i .ssh/id_rsa.pub root@server03

# Verify passwordless SSH from server01 to server02 and server03
ssh server02
ssh server03

# Distribute the PKI certificates and the admin.conf file
ssh server02 'mkdir -p /etc/kubernetes/pki/etcd/'
ssh server03 'mkdir -p /etc/kubernetes/pki/etcd/'
cat <<EOF > kubeadm/config/scp-config.sh
USER=root
CONTROL_PLANE_IPS="172.16.170.129 172.16.170.130"
for host in \${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt \${USER}@\$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key \${USER}@\$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key \${USER}@\$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub \${USER}@\$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt \${USER}@\$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key \${USER}@\$host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt \${USER}@\$host:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key \${USER}@\$host:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf \${USER}@\$host:/etc/kubernetes/
done
EOF
chmod 0755 kubeadm/config/scp-config.sh
./kubeadm/config/scp-config.sh
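
A quick, optional check that the files were distributed correctly (not part of the original procedure):

ssh server02 'ls -l /etc/kubernetes/admin.conf /etc/kubernetes/pki/ /etc/kubernetes/pki/etcd/'
ssh server03 'ls -l /etc/kubernetes/admin.conf /etc/kubernetes/pki/ /etc/kubernetes/pki/etcd/'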

4. On the second control plane node (server02):

# Generate the configuration file used by kubeadm
mkdir -p kubeadm/config/
cat <<EOF > kubeadm/config/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServerCertSANs:
- "172.16.170.151"
api:
  controlPlaneEndpoint: "172.16.170.151:8443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://172.16.170.129:2379"
      advertise-client-urls: "https://172.16.170.129:2379"
      listen-peer-urls: "https://172.16.170.129:2380"
      initial-advertise-peer-urls: "https://172.16.170.129:2380"
      initial-cluster: "server01=https://172.16.170.128:2380,server02=https://172.16.170.129:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - server02
    - 172.16.170.129
    peerCertSANs:
    - server02
    - 172.16.170.129
controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.211.0.0/16
  serviceSubnet: 10.96.0.0/16

kubeProxy:
  config:
    mode: iptables
EOF

# Pull the Docker images required by kubeadm
kubeadm config images pull --config kubeadm/config/kubeadm-config.yaml

# Bring up the kubelet on server02 using kubeadm alpha phases
kubeadm alpha phase certs all --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config kubeadm/config/kubeadm-config.yaml
systemctl restart kubelet.service
systemctl status kubelet.service

# Add the etcd member on this node to the etcd cluster
CP0_IP=172.16.170.128
CP0_HOSTNAME=server01
CP1_IP=172.16.170.129
CP1_HOSTNAME=server02
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
kubeadm alpha phase etcd local --config kubeadm/config/kubeadm-config.yaml

# Deploy the Kubernetes control plane components and mark this node as a master
kubeadm alpha phase kubeconfig all --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase mark-master --config kubeadm/config/kubeadm-config.yaml
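
At this point the etcd cluster should report two healthy members. An optional check, reusing the same etcdctl flags as the member add command above (run from a node with admin.conf in place):

CP0_HOSTNAME=server01
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://172.16.170.128:2379 cluster-health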

5. On the third control plane node (server03):

# Generate the configuration file used by kubeadm
mkdir -p kubeadm/config/
cat <<EOF > kubeadm/config/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServerCertSANs:
- "172.16.170.151"
api:
  controlPlaneEndpoint: "172.16.170.151:8443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://172.16.170.130:2379"
      advertise-client-urls: "https://172.16.170.130:2379"
      listen-peer-urls: "https://172.16.170.130:2380"
      initial-advertise-peer-urls: "https://172.16.170.130:2380"
      initial-cluster: "server01=https://172.16.170.128:2380,server02=https://172.16.170.129:2380,server03=https://172.16.170.130:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - server03
    - 172.16.170.130
    peerCertSANs:
    - server03
    - 172.16.170.130
controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.211.0.0/16
  serviceSubnet: 10.96.0.0/16

kubeProxy:
  config:
    mode: iptables
EOF

# Pull the Docker images required by kubeadm
kubeadm config images pull --config kubeadm/config/kubeadm-config.yaml

# Bring up the kubelet on server03 using kubeadm alpha phases
kubeadm alpha phase certs all --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config kubeadm/config/kubeadm-config.yaml
systemctl restart kubelet.service
systemctl status kubelet.service

# Add the etcd member on this node to the etcd cluster
CP0_IP=172.16.170.128
CP0_HOSTNAME=server01
CP2_IP=172.16.170.130
CP2_HOSTNAME=server03
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
kubeadm alpha phase etcd local --config kubeadm/config/kubeadm-config.yaml

# Deploy the Kubernetes control plane components and mark this node as a master
kubeadm alpha phase kubeconfig all --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config kubeadm/config/kubeadm-config.yaml
kubeadm alpha phase mark-master --config kubeadm/config/kubeadm-config.yaml
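
The etcd cluster should now list all three members. An optional check, using the same etcdctl flags as above:

CP0_HOSTNAME=server01
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://172.16.170.128:2379 member list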

6. Configure kubectl access on the other two control plane nodes

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

7. Verify the stacked HA deployment

# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-node-ff5cv 2/2 Running 0 47s 172.16.170.130 server03
kube-system calico-node-hb782 2/2 Running 0 8m 172.16.170.128 server01
kube-system calico-node-zpwcp 2/2 Running 0 4m 172.16.170.129 server02
kube-system coredns-777d78ff6f-5n8bg 1/1 Running 0 10m 10.211.0.4 server01
kube-system coredns-777d78ff6f-wfm7d 1/1 Running 0 10m 10.211.0.5 server01
kube-system etcd-server01 1/1 Running 0 9m 172.16.170.128 server01
kube-system etcd-server02 1/1 Running 0 3m 172.16.170.129 server02
kube-system etcd-server03 1/1 Running 0 27s 172.16.170.130 server03
kube-system kube-apiserver-server01 1/1 Running 0 9m 172.16.170.128 server01
kube-system kube-apiserver-server02 1/1 Running 0 2m 172.16.170.129 server02
kube-system kube-apiserver-server03 1/1 Running 0 16s 172.16.170.130 server03
kube-system kube-controller-manager-server01 1/1 Running 0 9m 172.16.170.128 server01
kube-system kube-controller-manager-server02 1/1 Running 0 2m 172.16.170.129 server02
kube-system kube-controller-manager-server03 1/1 Running 0 16s 172.16.170.130 server03
kube-system kube-haproxy-server01 1/1 Running 0 9m 172.16.170.128 server01
kube-system kube-haproxy-server02 1/1 Running 0 4m 172.16.170.129 server02
kube-system kube-haproxy-server03 1/1 Running 0 27s 172.16.170.130 server03
kube-system kube-keepalived-server01 1/1 Running 0 9m 172.16.170.128 server01
kube-system kube-keepalived-server02 1/1 Running 0 4m 172.16.170.129 server02
kube-system kube-keepalived-server03 1/1 Running 0 27s 172.16.170.130 server03
kube-system kube-proxy-88b55 1/1 Running 0 4m 172.16.170.129 server02
kube-system kube-proxy-9n9vv 1/1 Running 0 9m 172.16.170.128 server01
kube-system kube-proxy-j7lqz 1/1 Running 0 47s 172.16.170.130 server03
kube-system kube-scheduler-server01 1/1 Running 0 9m 172.16.170.128 server01
kube-system kube-scheduler-server02 1/1 Running 0 2m 172.16.170.129 server02
kube-system kube-scheduler-server03 1/1 Running 0 16s 172.16.170.130 server03

# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
server01 Ready master 10m v1.11.0 172.16.170.128 <none> CentOS Linux 7 (Core) 3.10.0-862.11.6.el7.x86_64 docker://17.3.1
server02 Ready master 4m v1.11.0 172.16.170.129 <none> CentOS Linux 7 (Core) 3.10.0-862.11.6.el7.x86_64 docker://17.3.1
server03 Ready master 1m v1.11.0 172.16.170.130 <none> CentOS Linux 7 (Core) 3.10.0-862.11.6.el7.x86_64 docker://17.3.1
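
Because controlPlaneEndpoint was set to 172.16.170.151:8443, the kubeconfigs generated by kubeadm point at the HAProxy/Keepalived virtual IP rather than at any single apiserver. A quick way to confirm that client traffic goes through the VIP:

# Should report the Kubernetes master running at https://172.16.170.151:8443
kubectl cluster-info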

8. Add two worker nodes to the HA cluster

# Run the node join command on server07
# kubeadm join 172.16.170.151:8443 --token lt0o7j.ayxwcqr8v88spzjj --discovery-token-ca-cert-hash sha256:1ad613cf114281af6eca0afeebae7185ed69218ff92b73ebe248b90cc74353a3
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0123 16:10:01.668746 17689 kernel_validator.go:81] Validating kernel version
I0123 16:10:01.668820 17689 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "172.16.170.151:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.16.170.151:8443"
[discovery] Requesting info from "https://172.16.170.151:8443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.170.151:8443"
[discovery] Successfully established connection with API Server "172.16.170.151:8443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "server07" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

# Run the node join command on server08
# kubeadm join 172.16.170.151:8443 --token lt0o7j.ayxwcqr8v88spzjj --discovery-token-ca-cert-hash sha256:1ad613cf114281af6eca0afeebae7185ed69218ff92b73ebe248b90cc74353a3
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0123 16:10:29.832899 17706 kernel_validator.go:81] Validating kernel version
I0123 16:10:29.833038 17706 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "172.16.170.151:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.16.170.151:8443"
[discovery] Requesting info from "https://172.16.170.151:8443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.170.151:8443"
[discovery] Successfully established connection with API Server "172.16.170.151:8443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "server08" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

# Run on any one of the master nodes
# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
server01 Ready master 16m v1.11.0 172.16.170.128 <none> CentOS Linux 7 (Core) 3.10.0-862.11.6.el7.x86_64 docker://17.3.1
server02 Ready master 10m v1.11.0 172.16.170.129 <none> CentOS Linux 7 (Core) 3.10.0-862.11.6.el7.x86_64 docker://17.3.1
server03 Ready master 7m v1.11.0 172.16.170.130 <none> CentOS Linux 7 (Core) 3.10.0-862.11.6.el7.x86_64 docker://17.3.1
server07 Ready <none> 40s v1.11.0 172.16.170.134 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 docker://17.3.1
server08 Ready <none> 12s v1.11.0 172.16.170.135 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 docker://17.3.1

# Run on any one of the master nodes
# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-node-c8j7r 2/2 Running 0 1m 172.16.170.134 server07
kube-system calico-node-chngv 1/2 Running 0 32s 172.16.170.135 server08
kube-system calico-node-ff5cv 2/2 Running 0 7m 172.16.170.130 server03
kube-system calico-node-hb782 2/2 Running 0 15m 172.16.170.128 server01
kube-system calico-node-zpwcp 2/2 Running 0 11m 172.16.170.129 server02
kube-system coredns-777d78ff6f-5n8bg 1/1 Running 0 16m 10.211.0.4 server01
kube-system coredns-777d78ff6f-wfm7d 1/1 Running 0 16m 10.211.0.5 server01
kube-system etcd-server01 1/1 Running 0 16m 172.16.170.128 server01
kube-system etcd-server02 1/1 Running 0 10m 172.16.170.129 server02
kube-system etcd-server03 1/1 Running 0 7m 172.16.170.130 server03
kube-system kube-apiserver-server01 1/1 Running 0 16m 172.16.170.128 server01
kube-system kube-apiserver-server02 1/1 Running 0 9m 172.16.170.129 server02
kube-system kube-apiserver-server03 1/1 Running 0 7m 172.16.170.130 server03
kube-system kube-controller-manager-server01 1/1 Running 0 16m 172.16.170.128 server01
kube-system kube-controller-manager-server02 1/1 Running 0 9m 172.16.170.129 server02
kube-system kube-controller-manager-server03 1/1 Running 0 7m 172.16.170.130 server03
kube-system kube-haproxy-server01 1/1 Running 0 16m 172.16.170.128 server01
kube-system kube-haproxy-server02 1/1 Running 0 10m 172.16.170.129 server02
kube-system kube-haproxy-server03 1/1 Running 0 7m 172.16.170.130 server03
kube-system kube-keepalived-server01 1/1 Running 0 16m 172.16.170.128 server01
kube-system kube-keepalived-server02 1/1 Running 0 10m 172.16.170.129 server02
kube-system kube-keepalived-server03 1/1 Running 0 7m 172.16.170.130 server03
kube-system kube-proxy-88b55 1/1 Running 0 11m 172.16.170.129 server02
kube-system kube-proxy-9n9vv 1/1 Running 0 16m 172.16.170.128 server01
kube-system kube-proxy-g8lsj 1/1 Running 0 1m 172.16.170.134 server07
kube-system kube-proxy-j7lqz 1/1 Running 0 7m 172.16.170.130 server03
kube-system kube-proxy-qdhpj 1/1 Running 0 32s 172.16.170.135 server08
kube-system kube-scheduler-server01 1/1 Running 0 16m 172.16.170.128 server01
kube-system kube-scheduler-server02 1/1 Running 0 9m 172.16.170.129 server02
kube-system kube-scheduler-server03 1/1 Running 0 7m 172.16.170.130 server03

V. References

https://v1-11.docs.kubernetes.io/docs/setup/independent/high-availability/
https://my.oschina.net/u/3433152/blog/1935402
https://www.jianshu.com/p/49a48752c1a3?utm_source=oschina-app
https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/
https://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part2/
https://blog.csdn.net/liu_qingbo/article/details/78383892