Installing a Highly Available Kubernetes v1.17.0 Cluster from Binaries (Stacked Control Plane Nodes for Bare Metal)

I. Overview of the High-Availability Approach

This scheme is adapted from the Kubeadm Highly Available v1.17.0 (stacked etcd topology) deployment.

II. Version Information for the Test Environment

1. Versions of the high-availability tools (these are the RPM packages installed via yum)

keepalived-1.3.5-16.el7
haproxy-1.5.18-9.el7

2. Versions of the Kubernetes components

etcd v3.4.3
kube-apiserver v1.17.0
kube-controller-manager v1.17.0
kube-scheduler v1.17.0
kubectl v1.17.0
coredns 1.6.5

docker 18.09.9
kube-proxy v1.17.0
kubelet v1.17.0
calico v3.11.1 (calico/node:v3.11.1 calico/pod2daemon-flexvol:v3.11.1 calico/cni:v3.11.1 calico/kube-controllers:v3.11.1)

III. Deployment Architecture

(Figure: stacked etcd topology)

1. Kubernetes Master (Control Plane)

192.168.112.128 master01 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico
192.168.112.129 master02 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico
192.168.112.130 master03 -> docker kubelet keepalived haproxy etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy calico

2. Kubernetes Node

192.168.112.131 node01 -> docker kubelet kube-proxy calico(calico-node)
192.168.112.132 node02 -> docker kubelet kube-proxy calico(calico-node)

IV. Deployment Walkthrough

1. Deploy HAProxy as the load balancer on all control-plane nodes (run as binaries managed by systemd)

## Run on all control-plane nodes: master01, master02, and master03
yum install -y haproxy
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0 err
    maxconn 50000
    uid 99
    gid 99
    #daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode tcp
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 10s
    timeout client 10m
    timeout server 10m

listen stats
    mode http
    bind 0.0.0.0:9090
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth admin:12345678
    stats hide-version
    stats admin if TRUE

frontend kube-apiserver-https
    mode tcp
    bind :8443
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    server master01 192.168.112.128:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
    server master02 192.168.112.129:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
    server master03 192.168.112.130:6443 weight 3 minconn 100 maxconn 50000 check inter 5000 rise 2 fall 5
EOF

systemctl daemon-reload
systemctl enable haproxy.service
systemctl start haproxy.service
systemctl status haproxy.service
------------------------------------------------------------------------------------------------------------------------------------------------
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2020-03-15 12:42:17 CST; 34s ago
Main PID: 3273 (haproxy-systemd)
Tasks: 3
Memory: 2.3M
......
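
An optional sanity check, assuming the ports and stats credentials configured above: the kube-apiserver frontend should be listening on 8443 and the stats page should answer on 9090.

ss -tlnp | grep -E ':8443|:9090'
curl -s -u admin:12345678 http://127.0.0.1:9090/haproxy-status | head -n 5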

2. Deploy Keepalived on all control-plane nodes (run as binaries managed by systemd)

# Run on all control-plane nodes: master01, master02, and master03
yum install -y keepalived
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

# Run on master01 only
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id k8s-1
}

vrrp_script CheckK8sMaster {
    script "curl -k https://127.0.0.1:6443/api"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 200
    advert_int 1
    mcast_src_ip 192.168.112.128
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 378378
    }
    unicast_peer {
        192.168.112.129
        192.168.112.130
    }
    virtual_ipaddress {
        192.168.112.136
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

# Run on master02 only
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id k8s-2
}

vrrp_script CheckK8sMaster {
    script "curl -k https://127.0.0.1:6443/api"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 150
    advert_int 1
    mcast_src_ip 192.168.112.129
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 378378
    }
    unicast_peer {
        192.168.112.128
        192.168.112.130
    }
    virtual_ipaddress {
        192.168.112.136
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

# Run on master03 only
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id k8s-3
}

vrrp_script CheckK8sMaster {
    script "curl -k https://127.0.0.1:6443/api"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 150
    advert_int 1
    mcast_src_ip 192.168.112.130
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 378378
    }
    unicast_peer {
        192.168.112.128
        192.168.112.129
    }
    virtual_ipaddress {
        192.168.112.136
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

# Run on all control-plane nodes: master01, master02, and master03
systemctl daemon-reload
systemctl enable keepalived.service
systemctl start keepalived.service
systemctl status keepalived.service
------------------------------------------------------------------------------------------------------------------------------------------------
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2020-03-15 12:49:45 CST; 16s ago
Process: 3632 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 3633 (keepalived)
Tasks: 3
Memory: 6.5M
......
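
To confirm Keepalived is working, check that the VIP 192.168.112.136 is held by exactly one node (master01, given the priorities above). Note that the CheckK8sMaster script probes https://127.0.0.1:6443/api, so the health check fails and the VIP may float until kube-apiserver is started in step 7.

ip addr show ens33 | grep 192.168.112.136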

3. Copy all binaries into /usr/bin/

# Run on master01, master02, and master03
tar -zxvf etcd-v3.4.3-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp etcd-v3.4.3-linux-amd64/etcd /usr/bin/
cp etcd-v3.4.3-linux-amd64/etcdctl /usr/bin/
cp kubernetes/server/bin/kube-apiserver /usr/bin/
cp kubernetes/server/bin/kube-controller-manager /usr/bin/
cp kubernetes/server/bin/kube-scheduler /usr/bin/
cp kubernetes/server/bin/kubectl /usr/bin/


# Run on node01 and node02
tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp kubernetes/server/bin/kubelet /usr/bin/
cp kubernetes/server/bin/kube-proxy /usr/bin/
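
Optionally confirm the binaries are on PATH and report the expected versions:

## On the masters
etcd --version
kube-apiserver --version
kubectl version --client

## On the nodes
kubelet --version
kube-proxy --version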

4. Generate the root certificates, the RSA key pair, and the access kubeconfig files on the first control-plane node

# Create the directories for certificates and config files
mkdir -p /etc/kubernetes/pki/etcd/

# Generate the etcd CA certificate
cd /etc/kubernetes/pki/etcd/
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=etcd-ca" -days 5000 -out ca.crt

# Generate the RSA key pair for service accounts
cd /etc/kubernetes/pki/
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub

# Generate the cluster root CA
cd /etc/kubernetes/pki/
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes" -days 5000 -out ca.crt

openssl genrsa -out front-proxy-ca.key 2048
openssl req -x509 -new -nodes -key front-proxy-ca.key -subj "/CN=front-proxy-ca" -days 5000 -out front-proxy-ca.crt

# Generate the certificate and kubeconfig for kubectl
openssl genrsa -out kubectl.key 2048
openssl req -new -key kubectl.key -subj "/O=system:masters/CN=kubernetes-admin" -out kubectl.csr
openssl x509 -req -in kubectl.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubectl.crt -days 5000

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.136:8443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/kubectl.crt --client-key=/etc/kubernetes/pki/kubectl.key --embed-certs=true
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin
kubectl config use-context kubernetes-admin@kubernetes
unset KUBECONFIG

## Generate the certificate and kubeconfig for kube-proxy
## Kubernetes ships a built-in ClusterRole for kube-proxy; view it with: kubectl get clusterrole system:node-proxier -o yaml
## The matching built-in ClusterRoleBinding binds it to the user system:kube-proxy; view it with: kubectl get clusterrolebinding system:node-proxier -o yaml
openssl genrsa -out proxy.key 2048
openssl req -new -key proxy.key -subj "/CN=system:kube-proxy" -out proxy.csr
openssl x509 -req -in proxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out proxy.crt -days 5000

export KUBECONFIG=/etc/kubernetes/proxy.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.136:8443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:kube-proxy --client-certificate=/etc/kubernetes/pki/proxy.crt --client-key=/etc/kubernetes/pki/proxy.key --embed-certs=true
kubectl config set-context system:kube-proxy@kubernetes --cluster=kubernetes --user=system:kube-proxy
kubectl config use-context system:kube-proxy@kubernetes
unset KUBECONFIG

## Generate the kubeconfig for the Bootstrap Token. If the --token value below is changed, the Secret that enables the Bootstrap Token later must be updated to match, according to the following rules:
## 1. The token abcdef.0123456789abcdef corresponds to <token-id>.<token-secret> in that Secret.
## 2. That Secret is named bootstrap-token-abcdef, which strictly follows the format bootstrap-token-<token-id>.
export KUBECONFIG=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.136:8443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:bootstrap:abcdef --token=abcdef.0123456789abcdef
kubectl config set-context system:bootstrap:abcdef@kubernetes --cluster=kubernetes --user=system:bootstrap:abcdef
kubectl config use-context system:bootstrap:abcdef@kubernetes
unset KUBECONFIG
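
A quick way to spot-check the results: the admin client certificate should show O=system:masters, CN=kubernetes-admin, and the generated kubeconfig should point at the VIP on the HAProxy port (https://192.168.112.136:8443).

openssl x509 -in /etc/kubernetes/pki/kubectl.crt -noout -subject -dates
kubectl config view --kubeconfig=/etc/kubernetes/admin.conf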

5. Distribute the root certificates, the RSA key pair, and the access kubeconfig files to the other two control-plane nodes

# Run on master01
# Set up passwordless SSH from master01 to master02 and master03
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@master02
ssh-copy-id -i .ssh/id_rsa.pub root@master03

## Verify passwordless SSH from master01 to master02 and master03
ssh master02
ssh master03

cat <<EOF > kubernetes-master-transfer.sh
USER=root
CONTROL_PLANE_IPS="192.168.112.129 192.168.112.130"
for host in \${CONTROL_PLANE_IPS}; do
  ssh \${USER}@\$host 'mkdir -p /etc/kubernetes/pki/etcd/'
  scp /etc/kubernetes/pki/ca.crt \${USER}@\$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/ca.key \${USER}@\$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.key \${USER}@\$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.pub \${USER}@\$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/front-proxy-ca.crt \${USER}@\$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/front-proxy-ca.key \${USER}@\$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.crt \${USER}@\$host:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/pki/etcd/ca.key \${USER}@\$host:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf \${USER}@\$host:/etc/kubernetes/
  scp /etc/kubernetes/proxy.conf \${USER}@\$host:/etc/kubernetes/
  scp /etc/kubernetes/bootstrap-kubelet.conf \${USER}@\$host:/etc/kubernetes/
done
EOF
chmod 0755 kubernetes-master-transfer.sh
./kubernetes-master-transfer.sh
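
An optional check that the files actually landed on the other two masters:

for host in 192.168.112.129 192.168.112.130; do
  ssh root@$host 'ls /etc/kubernetes/ /etc/kubernetes/pki/ /etc/kubernetes/pki/etcd/'
done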

6. Sign all the certificates each master needs with the root CAs

# Run on master01 only
## Generate the etcd-related certificates
cd /etc/kubernetes/pki/etcd/

cat <<EOF > server_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master01
DNS.2 = localhost
IP.1 = 192.168.112.128
IP.2 = 127.0.0.1
EOF
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=master01" -config server_ssl.cnf -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile server_ssl.cnf -out server.crt

cat <<EOF > peer_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master01
DNS.2 = localhost
IP.1 = 192.168.112.128
IP.2 = 127.0.0.1
EOF
openssl genrsa -out peer.key 2048
openssl req -new -key peer.key -subj "/CN=master01" -config peer_ssl.cnf -out peer.csr
openssl x509 -req -in peer.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile peer_ssl.cnf -out peer.crt

openssl genrsa -out healthcheck-client.key 2048
openssl req -new -key healthcheck-client.key -subj "/O=system:masters/CN=kube-etcd-healthcheck-client" -out healthcheck-client.csr
openssl x509 -req -in healthcheck-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out healthcheck-client.crt -days 5000

cd /etc/kubernetes/pki/
openssl genrsa -out apiserver-etcd-client.key 2048
openssl req -new -key apiserver-etcd-client.key -subj "/O=system:masters/CN=kube-apiserver-etcd-client" -out apiserver-etcd-client.csr
openssl x509 -req -in apiserver-etcd-client.csr -CA /etc/kubernetes/pki/etcd/ca.crt -CAkey /etc/kubernetes/pki/etcd/ca.key -CAcreateserial -out apiserver-etcd-client.crt -days 5000

## Generate the kube-apiserver certificates
cat <<EOF > master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master
DNS.2 = kubernetes
DNS.3 = kubernetes.default
DNS.4 = kubernetes.default.svc
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.112.128
IP.3 = 192.168.112.136
EOF

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -config master_ssl.cnf -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

openssl genrsa -out front-proxy-client.key 2048
openssl req -new -key front-proxy-client.key -subj "/CN=front-proxy-client" -out front-proxy-client.csr
openssl x509 -req -in front-proxy-client.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial -out front-proxy-client.crt -days 5000

openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key -subj "/O=system:masters/CN=kube-apiserver-kubelet-client" -out apiserver-kubelet-client.csr
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver-kubelet-client.crt -days 5000


# Run on master02 only
## Generate the etcd-related certificates
cd /etc/kubernetes/pki/etcd/

cat <<EOF > server_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master02
DNS.2 = localhost
IP.1 = 192.168.112.129
IP.2 = 127.0.0.1
EOF
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=master02" -config server_ssl.cnf -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile server_ssl.cnf -out server.crt

cat <<EOF > peer_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master02
DNS.2 = localhost
IP.1 = 192.168.112.129
IP.2 = 127.0.0.1
EOF
openssl genrsa -out peer.key 2048
openssl req -new -key peer.key -subj "/CN=master02" -config peer_ssl.cnf -out peer.csr
openssl x509 -req -in peer.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile peer_ssl.cnf -out peer.crt

openssl genrsa -out healthcheck-client.key 2048
openssl req -new -key healthcheck-client.key -subj "/O=system:masters/CN=kube-etcd-healthcheck-client" -out healthcheck-client.csr
openssl x509 -req -in healthcheck-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out healthcheck-client.crt -days 5000

cd /etc/kubernetes/pki/
openssl genrsa -out apiserver-etcd-client.key 2048
openssl req -new -key apiserver-etcd-client.key -subj "/O=system:masters/CN=kube-apiserver-etcd-client" -out apiserver-etcd-client.csr
openssl x509 -req -in apiserver-etcd-client.csr -CA /etc/kubernetes/pki/etcd/ca.crt -CAkey /etc/kubernetes/pki/etcd/ca.key -CAcreateserial -out apiserver-etcd-client.crt -days 5000

## Generate the kube-apiserver certificates
cat <<EOF > master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master
DNS.2 = kubernetes
DNS.3 = kubernetes.default
DNS.4 = kubernetes.default.svc
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.112.129
IP.3 = 192.168.112.136
EOF

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -config master_ssl.cnf -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

openssl genrsa -out front-proxy-client.key 2048
openssl req -new -key front-proxy-client.key -subj "/CN=front-proxy-client" -out front-proxy-client.csr
openssl x509 -req -in front-proxy-client.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial -out front-proxy-client.crt -days 5000

openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key -subj "/O=system:masters/CN=kube-apiserver-kubelet-client" -out apiserver-kubelet-client.csr
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver-kubelet-client.crt -days 5000


# Run on master03 only
## Generate the etcd-related certificates
cd /etc/kubernetes/pki/etcd/

cat <<EOF > server_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master03
DNS.2 = localhost
IP.1 = 192.168.112.130
IP.2 = 127.0.0.1
EOF
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=master03" -config server_ssl.cnf -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile server_ssl.cnf -out server.crt

cat <<EOF > peer_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master03
DNS.2 = localhost
IP.1 = 192.168.112.130
IP.2 = 127.0.0.1
EOF
openssl genrsa -out peer.key 2048
openssl req -new -key peer.key -subj "/CN=master03" -config peer_ssl.cnf -out peer.csr
openssl x509 -req -in peer.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile peer_ssl.cnf -out peer.crt

openssl genrsa -out healthcheck-client.key 2048
openssl req -new -key healthcheck-client.key -subj "/O=system:masters/CN=kube-etcd-healthcheck-client" -out healthcheck-client.csr
openssl x509 -req -in healthcheck-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out healthcheck-client.crt -days 5000

cd /etc/kubernetes/pki/
openssl genrsa -out apiserver-etcd-client.key 2048
openssl req -new -key apiserver-etcd-client.key -subj "/O=system:masters/CN=kube-apiserver-etcd-client" -out apiserver-etcd-client.csr
openssl x509 -req -in apiserver-etcd-client.csr -CA /etc/kubernetes/pki/etcd/ca.crt -CAkey /etc/kubernetes/pki/etcd/ca.key -CAcreateserial -out apiserver-etcd-client.crt -days 5000

## Generate the kube-apiserver certificates
cat <<EOF > master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = master
DNS.2 = kubernetes
DNS.3 = kubernetes.default
DNS.4 = kubernetes.default.svc
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.112.130
IP.3 = 192.168.112.136
EOF

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -config master_ssl.cnf -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

openssl genrsa -out front-proxy-client.key 2048
openssl req -new -key front-proxy-client.key -subj "/CN=front-proxy-client" -out front-proxy-client.csr
openssl x509 -req -in front-proxy-client.csr -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -CAcreateserial -out front-proxy-client.crt -days 5000

openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key -subj "/O=system:masters/CN=kube-apiserver-kubelet-client" -out apiserver-kubelet-client.csr
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver-kubelet-client.crt -days 5000

# Run on master01, master02, and master03
## Generate the certificate and kubeconfig for kube-controller-manager
openssl genrsa -out controller-manager.key 2048
openssl req -new -key controller-manager.key -subj "/CN=system:kube-controller-manager" -out controller-manager.csr
openssl x509 -req -in controller-manager.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out controller-manager.crt -days 5000

export KUBECONFIG=/etc/kubernetes/controller-manager.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.136:8443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.crt --client-key=/etc/kubernetes/pki/controller-manager.key --embed-certs=true
kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager
kubectl config use-context system:kube-controller-manager@kubernetes
unset KUBECONFIG

## Generate the certificate and kubeconfig for kube-scheduler
openssl genrsa -out scheduler.key 2048
openssl req -new -key scheduler.key -subj "/CN=system:kube-scheduler" -out scheduler.csr
openssl x509 -req -in scheduler.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out scheduler.crt -days 5000

export KUBECONFIG=/etc/kubernetes/scheduler.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.136:8443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.crt --client-key=/etc/kubernetes/pki/scheduler.key --embed-certs=true
kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler
kubectl config use-context system:kube-scheduler@kubernetes
unset KUBECONFIG
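
Before starting the services, it is worth confirming that each apiserver certificate carries the expected SANs (the local node IP, the VIP 192.168.112.136, and the cluster service IP 10.96.0.1), for example:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'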

7. Configure and start all control-plane components on every master

# Run on master01 only
## Configure and start the etcd service
mkdir -p /etc/etcd/
mkdir -p /var/lib/etcd/

cat <<EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.env
ExecStart=/usr/bin/etcd \$ETCD_ARGS

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/etcd/etcd.env
ETCD_ARGS="--advertise-client-urls=https://192.168.112.128:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.112.128:2380 --initial-cluster=master01=https://192.168.112.128:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.112.128:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.112.128:2380 --name=master01 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"
EOF

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service

## Register the other two members with the etcd cluster
etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member add master02 --peer-urls="https://192.168.112.129:2380"

etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member add master03 --peer-urls="https://192.168.112.130:2380"

## Configure the kube-apiserver service
cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.env
ExecStart=/usr/bin/kube-apiserver \$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-apiserver.env
KUBE_API_ARGS="--advertise-address=192.168.112.128 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key"
EOF

# Run on master02 only
## Configure and start the etcd service
mkdir -p /etc/etcd/
mkdir -p /var/lib/etcd/

cat <<EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.env
ExecStart=/usr/bin/etcd \$ETCD_ARGS

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/etcd/etcd.env
ETCD_ARGS="--advertise-client-urls=https://192.168.112.129:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.112.129:2380 --initial-cluster=master01=https://192.168.112.128:2380,master02=https://192.168.112.129:2380 --initial-cluster-state=existing --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.112.129:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.112.129:2380 --name=master02 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"
EOF

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service

## Configure the kube-apiserver service
cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.env
ExecStart=/usr/bin/kube-apiserver \$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-apiserver.env
KUBE_API_ARGS="--advertise-address=192.168.112.129 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key"
EOF


# Run on master03 only
## Configure and start the etcd service
mkdir -p /etc/etcd/
mkdir -p /var/lib/etcd/

cat <<EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.env
ExecStart=/usr/bin/etcd \$ETCD_ARGS

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/etcd/etcd.env
ETCD_ARGS="--advertise-client-urls=https://192.168.112.130:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.112.130:2380 --initial-cluster=master01=https://192.168.112.128:2380,master03=https://192.168.112.130:2380,master02=https://192.168.112.129:2380 --initial-cluster-state=existing --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.112.130:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.112.130:2380 --name=master03 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"
EOF

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service

## Configure the kube-apiserver service
cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.env
ExecStart=/usr/bin/kube-apiserver \$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-apiserver.env
KUBE_API_ARGS="--advertise-address=192.168.112.130 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key"
EOF


# Run on master01, master02, and master03

## Start the kube-apiserver service
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl status kube-apiserver.service

## Configure and start the kube-controller-manager service
cat <<EOF > /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.env
ExecStart=/usr/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-controller-manager.env
KUBE_CONTROLLER_MANAGER_ARGS="--allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.211.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/16 --use-service-account-credentials=true"
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager.service

## Configure and start the kube-scheduler service
cat <<EOF > /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.env
ExecStart=/usr/bin/kube-scheduler \$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-scheduler.env
KUBE_SCHEDULER_ARGS=" --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true"
EOF

systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
systemctl status kube-scheduler.service
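
At this point the control plane should be fully up. A minimal health check, run from any master:

## All three etcd members should report healthy
etcdctl --endpoints=https://192.168.112.128:2379,https://192.168.112.129:2379,https://192.168.112.130:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint health

## scheduler, controller-manager, and etcd should all report Healthy
kubectl get componentstatuses --kubeconfig=/etc/kubernetes/admin.conf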

8. Enable the Bootstrap Token in the cluster

# Run on master01 only
## Note: expiration must be a date in the future; otherwise Kubernetes automatically deletes the token right after it is created
export KUBECONFIG=/etc/kubernetes/admin.conf
cat <<EOF > /etc/kubernetes/bootstrap-token-abcdef.yaml
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  auth-extra-groups: system:bootstrappers:default-node-token
  expiration: 2020-12-31T00:00:00+08:00
  token-id: abcdef
  token-secret: 0123456789abcdef
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
EOF
kubectl create -f /etc/kubernetes/bootstrap-token-abcdef.yaml

cat <<EOF > /etc/kubernetes/create-csrs-for-bootstrapping.yaml
# enable bootstrapping nodes to create CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f /etc/kubernetes/create-csrs-for-bootstrapping.yaml

cat <<EOF > /etc/kubernetes/auto-approve-csrs-for-group.yaml
# Approve all CSRs for the group "system:bootstrappers"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f /etc/kubernetes/auto-approve-csrs-for-group.yaml

cat <<EOF > /etc/kubernetes/auto-approve-renewals-for-nodes.yaml
# Approve renewal CSRs for the group "system:nodes"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f /etc/kubernetes/auto-approve-renewals-for-nodes.yaml
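
Verify that the token Secret and the three bindings were created (KUBECONFIG is still pointing at admin.conf here):

kubectl get secret bootstrap-token-abcdef -n kube-system
kubectl get clusterrolebinding create-csrs-for-bootstrapping auto-approve-csrs-for-group auto-approve-renewals-for-nodes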

9. Distribute bootstrap-kubelet.conf and proxy.conf to all masters and nodes

# Run on master01
# Set up passwordless SSH from master01 to node01 and node02
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@node01
ssh-copy-id -i .ssh/id_rsa.pub root@node02

## Verify passwordless SSH from master01 to node01 and node02
ssh node01
ssh node02

cat <<EOF > kubernetes-node-transfer.sh
USER=root
CONTROL_PLANE_IPS="192.168.112.129 192.168.112.130 192.168.112.131 192.168.112.132"
for host in \${CONTROL_PLANE_IPS}; do
  # Make sure the target directory exists before copying
  ssh \${USER}@\$host 'mkdir -p /etc/kubernetes/pki/'
  scp /etc/kubernetes/pki/ca.crt \${USER}@\$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/bootstrap-kubelet.conf \${USER}@\$host:/etc/kubernetes/
  scp /etc/kubernetes/proxy.conf \${USER}@\$host:/etc/kubernetes/
done
EOF
chmod 0755 kubernetes-node-transfer.sh
./kubernetes-node-transfer.sh

10. Configure and start all node components on every node

# Run on node01 and node02; if master01, master02, and master03 should also act as nodes, run this on them as well
## Create the config and working directories
mkdir -p /etc/kubernetes/manifests
mkdir -p /etc/kubernetes/pki/
mkdir -p /var/lib/kubelet/
mkdir -p /var/lib/kube-proxy/

## Create the kubelet configuration file
cat <<EOF > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: true
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF

## Configure and start the kubelet service
cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet.env
ExecStart=/usr/bin/kubelet \$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kubelet.env
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service


## Create the kube-proxy configuration file
cat <<EOF > /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /etc/kubernetes/proxy.conf
  qps: 0
clusterCIDR: 10.211.0.0/16
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: ""
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
udpIdleTimeout: 0s
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""
EOF

## Configure and start the kube-proxy service
cat <<EOF > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy.env
ExecStart=/usr/bin/kube-proxy \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

## Note: set --hostname-override to the current node's name (node01 here; use node02 on node02)
cat <<EOF > /etc/kubernetes/kube-proxy.env
KUBE_PROXY_ARGS="--config=/var/lib/kube-proxy/config.conf --hostname-override=node01"
EOF

yum install -y conntrack

systemctl daemon-reload
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
systemctl status kube-proxy.service
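
Once kubelet starts, it authenticates with the bootstrap token and submits a CSR, which the bindings from step 8 approve automatically. On master01, the CSRs should show as Approved,Issued and the new nodes should register:

kubectl get csr
kubectl get node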

11. Give master01, master02, and master03 the node role as well

## If master01, master02, and master03 should also act as nodes, first complete the kubelet and kube-proxy installation on each of them as described in step 10, then apply the labels and taints below on each master
kubectl label node master01 node-role.kubernetes.io/master=
kubectl taint node master01 node-role.kubernetes.io/master=:NoSchedule

kubectl label node master02 node-role.kubernetes.io/master=
kubectl taint node master02 node-role.kubernetes.io/master=:NoSchedule

kubectl label node master03 node-role.kubernetes.io/master=
kubectl taint node master03 node-role.kubernetes.io/master=:NoSchedule
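
Verify the labels and taints took effect; each master should show the master role and a node-role.kubernetes.io/master:NoSchedule taint:

kubectl get node --show-labels
kubectl describe node master01 | grep -i taint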

12. Configure and install the network add-ons (Calico and CoreDNS)

Refer to the configuration and installation procedure for the single-master binary Kubernetes cluster; it is not repeated here.

13. Verify the health of the cluster components

# Check etcd health
etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list
------------------------------------------------------------------------------------------------------------------------------------------------
70b95c7dc2a3de1e, started, master03, https://192.168.112.130:2380, https://192.168.112.130:2379, false
71611ba7f1e4ff79, started, master02, https://192.168.112.129:2380, https://192.168.112.129:2379, false
ade36780a0899522, started, master01, https://192.168.112.128:2380, https://192.168.112.128:2379, false
------------------------------------------------------------------------------------------------------------------------------------------------

# Check the health of all nodes (masters and workers)
kubectl get node -o wide
------------------------------------------------------------------------------------------------------------------------------------------------
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready master 130m v1.17.0 192.168.112.128 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
master02 Ready master 130m v1.17.0 192.168.112.129 <none> CentOS Linux 7 (Core) 3.10.0-1062.12.1.el7.x86_64 docker://18.9.9
master03 Ready master 130m v1.17.0 192.168.112.130 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://18.9.9
------------------------------------------------------------------------------------------------------------------------------------------------

# Check that Calico and CoreDNS are running
kubectl get pod --all-namespaces -o wide
------------------------------------------------------------------------------------------------------------------------------------------------
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-648f4868b8-ldgrw 1/1 Running 1 121m 10.211.241.66 master01 <none> <none>
kube-system calico-node-2dtf2 1/1 Running 0 121m 192.168.112.130 master03 <none> <none>
kube-system calico-node-2z8nv 1/1 Running 1 121m 192.168.112.128 master01 <none> <none>
kube-system calico-node-tvs2j 1/1 Running 1 121m 192.168.112.129 master02 <none> <none>
kube-system coredns-7f9c544f75-s26rt 1/1 Running 1 115m 10.211.59.194 master02 <none> <none>
kube-system coredns-7f9c544f75-zfst9 1/1 Running 0 115m 10.211.235.1 master03 <none> <none>
------------------------------------------------------------------------------------------------------------------------------------------------

14. Add the two worker nodes to the HA cluster

Adding a node to the HA binary Kubernetes cluster is exactly the same as adding one to a single-master binary cluster, so refer to that procedure.

V. References

1. Official documentation (latest official version, v1.17)

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

2. Third-party references (from v1.15 through v1.17 the kubeadm and binary installation procedures are essentially the same, so v1.15 material also applies to v1.17)

https://www.cnblogs.com/lingfenglian/p/11753590.html
https://blog.51cto.com/fengwan/2426528?source=dra
https://my.oschina.net/beyondken/blog/1935402
https://www.cnblogs.com/shenlinken/p/9968274.html