Installing a Highly Available Kubernetes v1.17.0 Cluster with Kubeadm (Stacked Control Plane Nodes on Bare Metal)

I. Approaches to High-Availability Deployment

The official documentation describes two different ways to set up a highly available Kubernetes cluster with kubeadm:

1. With stacked masters

This approach requires less infrastructure: the control plane nodes and the etcd members are co-located on the same machines.

2. With an external etcd cluster

This approach requires more infrastructure: the control plane nodes and the etcd members run on separate machines.
This article focuses on the first approach, the stacked-masters topology. See the references at the end for the official documentation.

II. Version Information for the Test Environment

1. High-availability tooling versions (recorded here as Docker image tags)

haproxy:1.7-alpine
osixia/keepalived:1.4.5

2. Kubernetes component versions

etcd v3.4.3
kube-apiserver v1.17.0
kube-controller-manager v1.17.0
kube-scheduler v1.17.0
kubectl v1.17.0
coredns 1.6.5

docker 18.09.9
kube-proxy v1.17.0
kubelet v1.17.0
calico v3.11.1 (calico/node:v3.11.1 calico/pod2daemon-flexvol:v3.11.1 calico/cni:v3.11.1 calico/kube-controllers:v3.11.1)

III. Deployment Architecture

(Figure: stacked etcd topology)

1. Kubernetes Masters (Control Plane)

192.168.112.128 master01 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico
192.168.112.129 master02 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico
192.168.112.130 master03 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico

2. Kubernetes Node

192.168.112.131 node01 -> docker kubelet kube-proxy calico(calico-node)
192.168.112.132 node02 -> docker kubelet kube-proxy calico(calico-node)
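For convenience, the host names above should resolve to the listed IPs on every machine. A minimal /etc/hosts sketch matching this environment (an assumption; any name-resolution scheme that maps these names to these IPs works equally well):

# append on every master and worker node
cat <<EOF >> /etc/hosts
192.168.112.128 master01
192.168.112.129 master02
192.168.112.130 master03
192.168.112.131 node01
192.168.112.132 node02
EOF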

IV. Implementation Walkthrough

1. Deploy HAProxy as the load balancer on every control plane node (run as a static Pod managed by the kubelet)

## Run on every control plane node
mkdir -p /etc/haproxy/
cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0 err
    maxconn 50000
    uid 99
    gid 99
    #daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode tcp
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 10s
    timeout client 10m
    timeout server 10m

listen stats
    mode http
    bind 0.0.0.0:9090
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth admin:12345678
    stats hide-version
    stats admin if TRUE

frontend kube-apiserver-https
    mode tcp
    bind :8443
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    server server01 192.168.112.128:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
    server server02 192.168.112.129:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
    server server03 192.168.112.130:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
EOF

## Run on master01 only
mkdir -p /etc/kubernetes/manifests/

## On master01 this must be created before kubeadm init; on master02 and master03 create it only after kubeadm join has completed
cat <<EOF > /etc/kubernetes/manifests/haproxy.yaml
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: haproxy
    tier: control-plane
  name: kube-haproxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-haproxy
    image: haproxy:1.7-alpine
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: haproxy-cfg
      readOnly: true
      mountPath: /usr/local/etc/haproxy/haproxy.cfg
  volumes:
  - name: haproxy-cfg
    hostPath:
      path: /etc/haproxy/haproxy.cfg
EOF
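
## Optional sanity check: the same image can syntax-check the HAProxy configuration before the kubelet ever starts the static Pod
docker run --rm -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    haproxy:1.7-alpine haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg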

2. Deploy Keepalived on every control plane node (run as a static Pod managed by the kubelet)

## Run on master01 only
mkdir -p /etc/kubernetes/manifests/

## On master01 this must be created before kubeadm init; on master02 and master03 create it only after kubeadm join has completed
cat <<EOF > /etc/kubernetes/manifests/keepalived.yaml
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: keepalived
    tier: control-plane
  name: kube-keepalived
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-keepalived
    image: osixia/keepalived:1.4.5
    env:
    - name: KEEPALIVED_VIRTUAL_IPS
      value: 192.168.112.136
    - name: KEEPALIVED_INTERFACE
      value: ens33
    - name: KEEPALIVED_UNICAST_PEERS
      value: "#PYTHON2BASH:['192.168.112.128', '192.168.112.129', '192.168.112.130']"
    - name: KEEPALIVED_PASSWORD
      value: docker
    - name: KEEPALIVED_PRIORITY
      value: "200"
    - name: KEEPALIVED_ROUTER_ID
      value: "51"
    resources:
      requests:
        cpu: 100m
    securityContext:
      privileged: true
      capabilities:
        add:
        - NET_ADMIN
EOF
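
## Once the static Pod is running on master01, the virtual IP should be bound to the configured interface.
## A quick check (assuming the interface is ens33, as configured above):
ip addr show ens33 | grep 192.168.112.136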

3. On the first control plane node (master01):

(1) Generate the kubeadm configuration file and pull the required Docker images

## Generate the configuration file used by kubeadm init
mkdir -p kubeadm/config/
cat <<EOF > kubeadm/config/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.112.128
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: 192.168.112.136:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  podSubnet: 10.211.0.0/16
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/16
scheduler: {}

EOF

## Pull the Docker images required by kubeadm init
kubeadm config images pull --config kubeadm/config/kubeadm-config.yaml
------------------------------------------------------------------------------------------------------------------------------------------------
W0315 10:52:16.188454 5239 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0315 10:52:16.188503 5239 validation.go:28] Cannot validate kubelet config - no validator is available
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5
------------------------------------------------------------------------------------------------------------------------------------------------
。。。。。。
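
## Optionally, the exact image list kubeadm will use for this configuration can be printed first,
## which is a convenient way to double-check the imageRepository override:
kubeadm config images list --config kubeadm/config/kubeadm-config.yaml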

(2) Initialize the cluster (important: use either method A or method B, never both)

A. Initialize with automatic distribution of the root certificates

# Run kubeadm init (root certificates distributed automatically)
kubeadm init --config kubeadm/config/kubeadm-config.yaml --upload-certs
------------------------------------------------------------------------------------------------------------------------------------------------
W0315 10:53:05.509978 5340 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0315 10:53:05.510016 5340 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.128 192.168.112.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.112.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.112.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0315 10:53:08.283605 5340 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0315 10:53:08.284727 5340 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.019337 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
16f06d3321fce089cad4b229da9b5d3ef94c08a246943e0f375b977f18bbab8e
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69 \
--control-plane --certificate-key 16f06d3321fce089cad4b229da9b5d3ef94c08a246943e0f375b977f18bbab8e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69
------------------------------------------------------------------------------------------------------------------------------------------------

# Save the commands like the following from the output; they are needed later when adding nodes
。。。。。。
You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69 \
--control-plane --certificate-key 16f06d3321fce089cad4b229da9b5d3ef94c08a246943e0f375b977f18bbab8e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69
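
# If the bootstrap token or the uploaded certificates expire before the remaining nodes are joined
# (the token defaults to 24h, the uploaded certs to 2h), fresh values can be generated on master01.
# A sketch of the usual commands:
kubeadm token create --print-join-command        # prints a new worker join command with a new token
kubeadm init phase upload-certs --upload-certs   # re-uploads the control plane certs and prints a new certificate key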

B. Initialize with manual distribution of the root certificates

# Run kubeadm init (root certificates distributed manually)
kubeadm init --config kubeadm/config/kubeadm-config.yaml
------------------------------------------------------------------------------------------------------------------------------------------------
W0315 11:37:50.200933 2834 validation.go:28] Cannot validate kubelet config - no validator is available
W0315 11:37:50.201021 2834 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.128 192.168.112.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.112.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.112.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0315 11:37:52.884008 2834 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0315 11:37:52.885218 2834 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.521431 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:05945a0dc7d9c5e45e196d8582de19a3df559d1f9f4e4cb52c77d3051db923b4 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:05945a0dc7d9c5e45e196d8582de19a3df559d1f9f4e4cb52c77d3051db923b4
------------------------------------------------------------------------------------------------------------------------------------------------

# Save the commands like the following from the output; they are needed later when adding nodes
# (additional master nodes must run the first command only after the root certificates have been copied to them).
# Note: masters join with the first command, worker nodes join with the second command.
。。。。。。
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:05945a0dc7d9c5e45e196d8582de19a3df559d1f9f4e4cb52c77d3051db923b4 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:05945a0dc7d9c5e45e196d8582de19a3df559d1f9f4e4cb52c77d3051db923b4


## Configure passwordless SSH from master01 to master02 and master03
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@master02
ssh-copy-id -i .ssh/id_rsa.pub root@master03

## Verify passwordless SSH from master01 to master02 and master03
ssh master02
ssh master03

## Distribute the PKI certificates and the admin.conf file
cat <<EOF > kubeadm/config/scp-config.sh
USER=root
CONTROL_PLANE_IPS="192.168.112.129 192.168.112.130"
for host in \${CONTROL_PLANE_IPS}; do
    ssh \${USER}@\$host 'mkdir -p /etc/kubernetes/pki/etcd/'
    scp /etc/kubernetes/pki/ca.crt \${USER}@\$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key \${USER}@\$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key \${USER}@\$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub \${USER}@\$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt \${USER}@\$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key \${USER}@\$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt \${USER}@\$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/pki/etcd/ca.key \${USER}@\$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf \${USER}@\$host:/etc/kubernetes/
done
EOF
chmod 0755 kubeadm/config/scp-config.sh
./kubeadm/config/scp-config.sh
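
## Optional: confirm the files landed where the join on master02 and master03 expects them
ssh root@192.168.112.129 'ls -l /etc/kubernetes/pki/ /etc/kubernetes/pki/etcd/ /etc/kubernetes/admin.conf'
ssh root@192.168.112.130 'ls -l /etc/kubernetes/pki/ /etc/kubernetes/pki/etcd/ /etc/kubernetes/admin.conf'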

4. On the second control plane node (master02): (important: use either method A or method B, never both, matching the choice made during init)

# A. Join using the automatically distributed root certificates
kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69 \
--control-plane --certificate-key 16f06d3321fce089cad4b229da9b5d3ef94c08a246943e0f375b977f18bbab8e
------------------------------------------------------------------------------------------------------------------------------------------------
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master02 localhost] and IPs [192.168.112.129 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master02 localhost] and IPs [192.168.112.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.129 192.168.112.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0315 10:59:52.640333 1546 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0315 10:59:52.645116 1546 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0315 10:59:52.646387 1546 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-15T11:00:28.875+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.112.129:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

------------------------------------------------------------------------------------------------------------------------------------------------

# B. Join using the manually distributed root certificates
kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:05945a0dc7d9c5e45e196d8582de19a3df559d1f9f4e4cb52c77d3051db923b4 \
--control-plane
------------------------------------------------------------------------------------------------------------------------------------------------
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.129 192.168.112.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master02 localhost] and IPs [192.168.112.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master02 localhost] and IPs [192.168.112.129 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0315 11:48:00.712980 2760 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0315 11:48:00.717833 2760 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0315 11:48:00.718658 2760 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-15T11:48:38.856+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.112.129:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet-check] Initial timeout of 40s passed.
[mark-control-plane] Marking the node master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

------------------------------------------------------------------------------------------------------------------------------------------------
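
# As noted in steps 1 and 2, the HAProxy and Keepalived static Pod manifests are only created on master02
# (and later master03) after the join has completed. A minimal sketch, run from master01, that simply reuses
# master01's manifests over the passwordless SSH set up earlier (you may want to lower KEEPALIVED_PRIORITY on
# the backup nodes so master01 stays the preferred VIP holder):
scp /etc/kubernetes/manifests/haproxy.yaml /etc/kubernetes/manifests/keepalived.yaml \
    root@master02:/etc/kubernetes/manifests/
# repeat for master03 once it has joined in the next step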

5. On the third control plane node (master03): (important: use either method A or method B, never both, matching the choice made during init)

# A. Join using the automatically distributed root certificates
kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69 \
--control-plane --certificate-key 16f06d3321fce089cad4b229da9b5d3ef94c08a246943e0f375b977f18bbab8e
------------------------------------------------------------------------------------------------------------------------------------------------
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master03 localhost] and IPs [192.168.112.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master03 localhost] and IPs [192.168.112.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.130 192.168.112.136]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0315 11:02:05.176831 1648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0315 11:02:05.182344 1648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0315 11:02:05.183197 1648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-15T11:02:32.084+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.112.130:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master03 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

------------------------------------------------------------------------------------------------------------------------------------------------

# B. Join using the manually distributed root certificates
kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:05945a0dc7d9c5e45e196d8582de19a3df559d1f9f4e4cb52c77d3051db923b4 \
--control-plane
------------------------------------------------------------------------------------------------------------------------------------------------
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.130 192.168.112.136]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master03 localhost] and IPs [192.168.112.130 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master03 localhost] and IPs [192.168.112.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0315 11:49:29.220424 2807 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0315 11:49:29.225217 2807 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0315 11:49:29.226261 2807 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-15T11:49:56.765+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.112.130:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master03 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

------------------------------------------------------------------------------------------------------------------------------------------------

6. Configure kubectl access on each of the three control plane nodes

# Run on master01, master02 and master03
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
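
# With the kubeconfig in place, a quick check that each node talks to the API server through the virtual IP:
kubectl cluster-info
# expected to report the control plane at https://192.168.112.136:8443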

7. Verify the stacked HA deployment

# Can be run on any one of master01, master02 and master03
kubectl get pod --all-namespaces -o wide
------------------------------------------------------------------------------------------------------------------------------------------------
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-648f4868b8-gszmn 1/1 Running 0 2m36s 10.211.235.1 master03 <none> <none>
kube-system calico-node-dk4s6 1/1 Running 0 2m36s 192.168.112.128 master01 <none> <none>
kube-system calico-node-lhj5p 1/1 Running 0 2m36s 192.168.112.129 master02 <none> <none>
kube-system calico-node-tscpz 1/1 Running 0 2m36s 192.168.112.130 master03 <none> <none>
kube-system coredns-7f9c544f75-9w4kn 1/1 Running 0 12m 10.211.59.193 master02 <none> <none>
kube-system coredns-7f9c544f75-xvsbn 1/1 Running 0 12m 10.211.59.194 master02 <none> <none>
kube-system etcd-master01 1/1 Running 0 12m 192.168.112.128 master01 <none> <none>
kube-system etcd-master02 1/1 Running 0 5m58s 192.168.112.129 master02 <none> <none>
kube-system etcd-master03 1/1 Running 0 3m46s 192.168.112.130 master03 <none> <none>
kube-system kube-apiserver-master01 1/1 Running 0 12m 192.168.112.128 master01 <none> <none>
kube-system kube-apiserver-master02 1/1 Running 0 5m59s 192.168.112.129 master02 <none> <none>
kube-system kube-apiserver-master03 1/1 Running 0 3m46s 192.168.112.130 master03 <none> <none>
kube-system kube-controller-manager-master01 1/1 Running 1 12m 192.168.112.128 master01 <none> <none>
kube-system kube-controller-manager-master02 1/1 Running 0 5m59s 192.168.112.129 master02 <none> <none>
kube-system kube-controller-manager-master03 1/1 Running 0 3m46s 192.168.112.130 master03 <none> <none>
kube-system kube-haproxy-master01 1/1 Running 0 12m 192.168.112.128 master01 <none> <none>
kube-system kube-keepalived-master01 1/1 Running 0 12m 192.168.112.128 master01 <none> <none>
kube-system kube-proxy-6fw8x 1/1 Running 0 12m 192.168.112.128 master01 <none> <none>
kube-system kube-proxy-7hkv7 1/1 Running 0 6m 192.168.112.129 master02 <none> <none>
kube-system kube-proxy-9trwk 1/1 Running 0 3m47s 192.168.112.130 master03 <none> <none>
kube-system kube-scheduler-master01 1/1 Running 1 12m 192.168.112.128 master01 <none> <none>
kube-system kube-scheduler-master02 1/1 Running 0 5m59s 192.168.112.129 master02 <none> <none>
kube-system kube-scheduler-master03 1/1 Running 0 3m46s 192.168.112.130 master03 <none> <none>
------------------------------------------------------------------------------------------------------------------------------------------------

# Can be run on any one of master01, master02 and master03
kubectl get node -o wide
------------------------------------------------------------------------------------------------------------------------------------------------
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready master 13m v1.17.0 192.168.112.128 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
master02 Ready master 6m40s v1.17.0 192.168.112.129 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
master03 Ready master 4m27s v1.17.0 192.168.112.130 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
------------------------------------------------------------------------------------------------------------------------------------------------
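
# A rough failover test, for a test cluster only (a sketch assuming master01 currently holds the VIP;
# container and interface names come from the manifests above):
# on master01: stop the kubelet, then stop the keepalived container so the VIP is released
systemctl stop kubelet
docker stop $(docker ps -q --filter name=keepalived)
# on master02 or master03: the VIP should move over and the API should stay reachable
ip addr show ens33 | grep 192.168.112.136
kubectl get nodes
# afterwards, on master01: systemctl start kubelet (the static Pods are recreated automatically)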

8. Check etcd health

# Run on master01, master02 and master03; master01 is used as the example here
kubectl exec -it etcd-master01 /bin/sh -n kube-system

etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list
------------------------------------------------------------------------------------------------------------------------------------------------
ade36780a0899522, started, master01, https://192.168.112.128:2380, https://192.168.112.128:2379, false
b4a6061544dbd63b, started, master03, https://192.168.112.130:2380, https://192.168.112.130:2379, false
ecaa91fc374ff6f0, started, master02, https://192.168.112.129:2380, https://192.168.112.129:2379, false
------------------------------------------------------------------------------------------------------------------------------------------------

etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint health
------------------------------------------------------------------------------------------------------------------------------------------------
https://127.0.0.1:2379 is healthy: successfully committed proposal: took = 9.338525ms
------------------------------------------------------------------------------------------------------------------------------------------------

etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint status
------------------------------------------------------------------------------------------------------------------------------------------------
https://127.0.0.1:2379, ade36780a0899522, 3.4.3, 2.6 MB, false, false, 21, 53251, 53251,
------------------------------------------------------------------------------------------------------------------------------------------------
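
# The same etcdctl client can also query all three members at once, which makes it easier to see the whole
# quorum and the current leader (a sketch using the same certificates, run from inside any of the etcd Pods):
etcdctl --endpoints=https://192.168.112.128:2379,https://192.168.112.129:2379,https://192.168.112.130:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint status -w table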

9. Add the two worker nodes to the HA cluster

# Run on node01 (it makes no difference here whether the root certificates were distributed automatically or manually)
kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69
------------------------------------------------------------------------------------------------------------------------------------------------
W0315 11:12:27.853703 9587 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

------------------------------------------------------------------------------------------------------------------------------------------------

# Run on node02 (it makes no difference here whether the root certificates were distributed automatically or manually)
kubeadm join 192.168.112.136:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:33a6370b4bb4a9385c1d878e9a7a085ad969d521e4b309b01be797c0d7867d69
------------------------------------------------------------------------------------------------------------------------------------------------
W0315 11:13:18.680949 9561 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

------------------------------------------------------------------------------------------------------------------------------------------------

# Can be run on any one of master01, master02 and master03
kubectl get node -o wide
------------------------------------------------------------------------------------------------------------------------------------------------
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready master 23m v1.17.0 192.168.112.128 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
master02 Ready master 17m v1.17.0 192.168.112.129 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
master03 Ready master 15m v1.17.0 192.168.112.130 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
node01 Ready <none> 4m59s v1.17.0 192.168.112.131 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
node02 Ready <none> 4m8s v1.17.0 192.168.112.132 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
------------------------------------------------------------------------------------------------------------------------------------------------

# Can be run on any one of master01, master02 and master03
kubectl get pod --all-namespaces -o wide
------------------------------------------------------------------------------------------------------------------------------------------------
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-648f4868b8-gszmn 1/1 Running 0 14m 10.211.235.1 master03 <none> <none>
kube-system calico-node-dk4s6 1/1 Running 0 14m 192.168.112.128 master01 <none> <none>
kube-system calico-node-lhj5p 1/1 Running 0 14m 192.168.112.129 master02 <none> <none>
kube-system calico-node-lkl66 1/1 Running 0 4m43s 192.168.112.132 node02 <none> <none>
kube-system calico-node-ncjc4 1/1 Running 0 5m34s 192.168.112.131 node01 <none> <none>
kube-system calico-node-tscpz 1/1 Running 0 14m 192.168.112.130 master03 <none> <none>
kube-system coredns-7f9c544f75-9w4kn 1/1 Running 0 24m 10.211.59.193 master02 <none> <none>
kube-system coredns-7f9c544f75-xvsbn 1/1 Running 0 24m 10.211.59.194 master02 <none> <none>
kube-system etcd-master01 1/1 Running 0 24m 192.168.112.128 master01 <none> <none>
kube-system etcd-master02 1/1 Running 0 18m 192.168.112.129 master02 <none> <none>
kube-system etcd-master03 1/1 Running 0 15m 192.168.112.130 master03 <none> <none>
kube-system kube-apiserver-master01 1/1 Running 0 24m 192.168.112.128 master01 <none> <none>
kube-system kube-apiserver-master02 1/1 Running 0 18m 192.168.112.129 master02 <none> <none>
kube-system kube-apiserver-master03 1/1 Running 0 15m 192.168.112.130 master03 <none> <none>
kube-system kube-controller-manager-master01 1/1 Running 1 24m 192.168.112.128 master01 <none> <none>
kube-system kube-controller-manager-master02 1/1 Running 0 18m 192.168.112.129 master02 <none> <none>
kube-system kube-controller-manager-master03 1/1 Running 0 15m 192.168.112.130 master03 <none> <none>
kube-system kube-haproxy-master01 1/1 Running 0 24m 192.168.112.128 master01 <none> <none>
kube-system kube-keepalived-master01 1/1 Running 0 24m 192.168.112.128 master01 <none> <none>
kube-system kube-proxy-6fw8x 1/1 Running 0 24m 192.168.112.128 master01 <none> <none>
kube-system kube-proxy-7hkv7 1/1 Running 0 18m 192.168.112.129 master02 <none> <none>
kube-system kube-proxy-96cz5 1/1 Running 0 5m34s 192.168.112.131 node01 <none> <none>
kube-system kube-proxy-9trwk 1/1 Running 0 15m 192.168.112.130 master03 <none> <none>
kube-system kube-proxy-pwslt 1/1 Running 0 4m43s 192.168.112.132 node02 <none> <none>
kube-system kube-scheduler-master01 1/1 Running 1 24m 192.168.112.128 master01 <none> <none>
kube-system kube-scheduler-master02 1/1 Running 0 18m 192.168.112.129 master02 <none> <none>
kube-system kube-scheduler-master03 1/1 Running 0 15m 192.168.112.130 master03 <none> <none>
------------------------------------------------------------------------------------------------------------------------------------------------

V. Resetting Any Node (Master or Worker)

kubeadm reset
rm -rf /etc/kubernetes/ /var/lib/etcd/ /etc/cni/ $HOME/.kube/
reboot
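
# kubeadm reset does not clean up iptables or IPVS rules and prints a reminder to that effect.
# Optional extra cleanup before the reboot (run ipvsadm only if kube-proxy was in IPVS mode and the tool is installed):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear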

VI. References

1. Official documentation (current official release at the time of writing: v1.17)

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

2. Third-party material (from Kubernetes v1.15 through v1.17 the kubeadm and binary installation procedures have stayed essentially the same, so the v1.15 write-ups below remain useful references for v1.17)

https://www.cnblogs.com/lingfenglian/p/11753590.html
https://blog.51cto.com/fengwan/2426528?source=dra
https://my.oschina.net/beyondken/blog/1935402
https://www.cnblogs.com/shenlinken/p/9968274.html