Installing a Kubernetes v1.11.0 Cluster from Binaries (Part 1)

I. Deployment Approach

1. All control-plane components are deployed from plain binaries and managed by systemd (a quick status check is sketched at the end of this section).

The control plane consists of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.

2. The network plugin is still deployed as a Kubernetes add-on and managed by Kubernetes itself.

The network plugin consists of calico-node and its related components.
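
As a quick sanity check once everything is up, the following sketch (assuming the systemd unit names used later in this guide) confirms on the Master that systemd is managing every control-plane component:

# Sketch: verify that all control-plane units are active (run on the Master after step 5).
for unit in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    echo -n "${unit}: "; systemctl is-active "${unit}.service"
done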

II. Environment and Version Information

1. Operating system version

CentOS Linux release 7.6.1810 (Core)

2. Component versions

etcd v3.2.18
kube-apiserver v1.11.0
kube-controller-manager v1.11.0
kube-scheduler v1.11.0
kubectl v1.11.0

docker 17.03.1-ce
kubelet v1.11.0
calico v3.1.3

III. Deployment Architecture

1. Kubernetes Master (Control Plane)

192.168.112.128 master -> etcd kube-apiserver kube-controller-manager kube-scheduler

2. Kubernetes Nodes (a quick reachability check is sketched after the list)

192.168.112.129 node01 -> docker kubelet kube-proxy calico-node
192.168.112.130 node02 -> docker kubelet kube-proxy calico-node
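
Before starting, it can help to confirm that the Master can reach both Nodes. This is only a sketch and assumes the addresses above are final:

# Sketch: basic reachability check from the Master.
for host in 192.168.112.129 192.168.112.130; do
    ping -c 1 -W 2 "$host" > /dev/null && echo "$host is reachable" || echo "$host is NOT reachable"
done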

IV. Preparing Binaries and Docker Images

1. Download the binary release tarballs (one way to fetch them is sketched after the links)

https://github.com/etcd-io/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
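
One way to fetch these packages is shown below; the URLs are simply the ones listed above, so any mirror hosting the same versions works equally well:

wget https://github.com/etcd-io/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz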

2. Pull the required Docker images (a pull sketch follows the list)

Calico networking images:
calico/cni:v3.1.3
calico/node:v3.1.3
calico/typha:v0.7.4

kube-dns (CoreDNS) image:
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3

Infra (pause) container image:
registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
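
The images can be pulled ahead of time on every Node so the add-on Pods start without waiting on downloads; this sketch simply pulls the images listed above:

docker pull calico/cni:v3.1.3
docker pull calico/node:v3.1.3
docker pull calico/typha:v0.7.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1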

V. Deployment Walkthrough

1. Prepare the base environment (run on the Master and all Nodes)

# Update the system
yum update -y

# Set the correct time zone and time
yum install -y ntpdate
timedatectl set-timezone Asia/Shanghai
ntpdate cn.ntp.org.cn

# Disable the firewall
systemctl disable firewalld.service
systemctl stop firewalld.service

# Turn off the swap partition
swapoff -a
sed -i 's#/dev/mapper/cl-swap#\# /dev/mapper/cl-swap#' /etc/fstab

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Set the hostname on each node
## 192.168.112.128
hostnamectl set-hostname master

## 192.168.112.129
hostnamectl set-hostname node01

## 192.168.112.130
hostnamectl set-hostname node02

# Map hostnames to IP addresses
cat <<EOF >> /etc/hosts

# For Kubernetes Cluster
192.168.112.128 master
192.168.112.129 node01
192.168.112.130 node02
EOF

# Tune kernel parameters
cat <<EOF > /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_recycle = 1

net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
EOF

sysctl --system

# Raise the ulimit limits
cat <<EOF > /etc/security/limits.d/kubernetes.conf
* hard nofile 65535
* soft nofile 65535
EOF
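
A quick verification sketch for the settings above (the bridge sysctl may report "No such file or directory" until the bridge/br_netfilter module is loaded, e.g. after Docker starts, and the new ulimit only applies to new login sessions):

swapon -s              # prints nothing once swap is off
getenforce             # Permissive now, Disabled after a reboot
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
ulimit -n              # 65535 in a new session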

2. Install Docker (run on all Nodes)

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.1.ce-1.el7.centos.noarch.rpm

yum install -y docker-ce-17.03.1.ce-1.el7.centos

systemctl enable docker.service
systemctl start docker.service
systemctl status docker.service

docker version
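
Since the kubelet is started later with --cgroup-driver=cgroupfs, it is worth confirming that Docker reports the same cgroup driver; a mismatch keeps the kubelet from starting (sketch):

docker info 2>/dev/null | grep -i "cgroup driver"
# Expected output: Cgroup Driver: cgroupfs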

3. Copy the binaries into /usr/bin/ on each host

# 192.168.112.128 (run on the Master)
tar -zxvf etcd-v3.2.18-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp etcd-v3.2.18-linux-amd64/etcd /usr/bin/
cp etcd-v3.2.18-linux-amd64/etcdctl /usr/bin/
cp kubernetes/server/bin/kube-apiserver /usr/bin/
cp kubernetes/server/bin/kube-controller-manager /usr/bin/
cp kubernetes/server/bin/kube-scheduler /usr/bin/
cp kubernetes/server/bin/kubectl /usr/bin/


# 192.168.112.129 and 192.168.112.130 (run on the Nodes)
tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp kubernetes/server/bin/kubelet /usr/bin/
cp kubernetes/server/bin/kube-proxy /usr/bin/
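
A short sketch to confirm the binaries are in place and report the expected versions:

# On the Master:
etcd --version
kube-apiserver --version
kubectl version --client

# On the Nodes:
kubelet --version
kube-proxy --version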

4. Generate certificates and kubeconfig files for all components on the Master

# Create directories for certificates and configuration files
mkdir -p /etc/kubernetes/pki/

# Generate the RSA key pair used for service account tokens
cd /etc/kubernetes/
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub

# Enter the certificate directory
cd /etc/kubernetes/pki/

# Generate the root CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=master" -days 5000 -out ca.crt

# Generate the certificate and SAN config for kube-apiserver
cat <<EOF > master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation,digitalSignature,keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master
IP.1 = 10.96.0.1
IP.2 = 192.168.112.128
EOF

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -subj "/CN=master" -config master_ssl.cnf -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

# Generate the certificate and kubeconfig for kube-controller-manager
openssl genrsa -out controller-manager.key 2048
openssl req -new -key controller-manager.key -subj "/CN=master" -out controller-manager.csr
openssl x509 -req -in controller-manager.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out controller-manager.crt -days 5000

export KUBECONFIG=/etc/kubernetes/controller-manager.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.128:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.crt --client-key=/etc/kubernetes/pki/controller-manager.key --embed-certs=true
kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager
kubectl config use-context system:kube-controller-manager@kubernetes
unset KUBECONFIG

# Generate the certificate and kubeconfig for kube-scheduler
openssl genrsa -out scheduler.key 2048
openssl req -new -key scheduler.key -subj "/CN=master" -out scheduler.csr
openssl x509 -req -in scheduler.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out scheduler.crt -days 5000

export KUBECONFIG=/etc/kubernetes/scheduler.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.128:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.crt --client-key=/etc/kubernetes/pki/scheduler.key --embed-certs=true
kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler
kubectl config use-context system:kube-scheduler@kubernetes
unset KUBECONFIG

# Generate the certificate and kubeconfig for kubectl
openssl genrsa -out kubectl.key 2048
openssl req -new -key kubectl.key -subj "/CN=master" -out kubectl.csr
openssl x509 -req -in kubectl.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubectl.crt -days 5000

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.128:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:admin --client-certificate=/etc/kubernetes/pki/kubectl.crt --client-key=/etc/kubernetes/pki/kubectl.key --embed-certs=true
kubectl config set-context system:admin@kubernetes --cluster=kubernetes --user=system:admin
kubectl config use-context system:admin@kubernetes
unset KUBECONFIG

# Generate the certificate and kubeconfig for kubelet
openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key -subj "/CN=node" -out kubelet.csr
openssl x509 -req -in kubelet.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet.crt -days 5000

export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.128:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:kubelet --client-certificate=/etc/kubernetes/pki/kubelet.crt --client-key=/etc/kubernetes/pki/kubelet.key --embed-certs=true
kubectl config set-context system:kubelet@kubernetes --cluster=kubernetes --user=system:kubelet
kubectl config use-context system:kubelet@kubernetes
unset KUBECONFIG

# Generate the certificate and kubeconfig for kube-proxy
openssl genrsa -out proxy.key 2048
openssl req -new -key proxy.key -subj "/CN=node" -out proxy.csr
openssl x509 -req -in proxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out proxy.crt -days 5000

export KUBECONFIG=/etc/kubernetes/proxy.conf
kubectl config set-cluster kubernetes --server=https://192.168.112.128:6443 --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
kubectl config set-credentials system:proxy --client-certificate=/etc/kubernetes/pki/proxy.crt --client-key=/etc/kubernetes/pki/proxy.key --embed-certs=true
kubectl config set-context system:proxy@kubernetes --cluster=kubernetes --user=system:proxy
kubectl config use-context system:proxy@kubernetes
unset KUBECONFIG
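
Before moving on, the generated certificates can be checked against the CA, and the apiserver certificate inspected for the expected SAN entries (a verification sketch, run from /etc/kubernetes/pki/):

openssl verify -CAfile ca.crt apiserver.crt controller-manager.crt scheduler.crt kubectl.crt kubelet.crt proxy.crt
openssl x509 -noout -text -in apiserver.crt | grep -A 1 "Subject Alternative Name"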

5. Configure and start all components on the Master

# Configure and start the etcd service
mkdir -p /var/lib/etcd/

cat <<EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service

# Configure and start the kube-apiserver service
cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver
ExecStart=/usr/bin/kube-apiserver \$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-apiserver
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --service-account-key-file=/etc/kubernetes/sa.pub --advertise-address=192.168.112.128 --secure-port=6443 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=10.96.0.0/16 --service-node-port-range=30000-32767 --authorization-mode=Node,RBAC --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --enable-admission-plugins=ResourceQuota,LimitRanger"
EOF

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl status kube-apiserver.service

# Bind the cluster-admin role to the master and node users used by the certificates from step 4
kubectl create clusterrolebinding system:component:master --clusterrole=cluster-admin --user=master
kubectl create clusterrolebinding system:component:node --clusterrole=cluster-admin --user=node

# Configure and start the kube-controller-manager service
cat <<EOF > /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager
ExecStart=/usr/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/sa.key --leader-elect=true --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --root-ca-file=/etc/kubernetes/pki/ca.crt --kubeconfig=/etc/kubernetes/controller-manager.conf --allocate-node-cidrs=true --cluster-cidr=10.211.0.0/16 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager.service

# Configure and start the kube-scheduler service
cat <<EOF > /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler
ExecStart=/usr/bin/kube-scheduler \$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
EOF

systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
systemctl status kube-scheduler.service
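
At this point the control plane should be healthy. A verification sketch (kubectl talks to the insecure port on localhost here, just as the clusterrolebinding commands above do; etcdctl in etcd 3.2 defaults to the v2 API):

systemctl is-active etcd kube-apiserver kube-controller-manager kube-scheduler
etcdctl cluster-health
kubectl get componentstatuses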

6. Configure and start all components on the Nodes

# Create the configuration and working directories
mkdir -p /etc/kubernetes
mkdir -p /var/lib/kubelet

# Copy the kubeconfig files from the Master to this node
scp root@192.168.112.128:/etc/kubernetes/kubelet.conf /etc/kubernetes/
scp root@192.168.112.128:/etc/kubernetes/proxy.conf /etc/kubernetes/

# Configure and start the kubelet service
cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.conf --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service

# Configure and start the kube-proxy service
mkdir -p /var/lib/kube-proxy/
cat <<EOF > /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/proxy.conf
  qps: 5
clusterCIDR: 10.211.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: iptables
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
EOF

cat <<EOF > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF > /etc/kubernetes/kube-proxy
KUBE_PROXY_ARGS="--config=/var/lib/kube-proxy/config.conf --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
EOF

systemctl daemon-reload
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
systemctl status kube-proxy.service
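
Back on the Master, the node should now be registered. It will show NotReady until the CNI plugin is installed in the next step (verification sketch):

kubectl get nodes -o wide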

7. Install and configure the network plugin (run on the Master)

mkdir -p calico

cat <<EOF > calico/rbac-kdd.yaml
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - patch
  - apiGroups: [""]
    resources:
      - services
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
EOF

cat <<EOF > calico/calico.yaml
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
# This manifest includes the following component versions:
#   calico/node:v3.1.3
#   calico/cni:v3.1.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas
  # below. We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is
  # essential.
  typha_service_name: "none"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": 1500,
          "ipam": {
            "type": "host-local",
            "subnet": "usePodCidr"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---

# This manifest creates a Service, which will be backed by Calico's Typha daemon.
# Typha sits in between Felix and the API server, reducing Calico's load on the API server.

apiVersion: v1
kind: Service
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha
spec:
  ports:
    - port: 5473
      protocol: TCP
      targetPort: calico-typha
      name: calico-typha
  selector:
    k8s-app: calico-typha

---

# This manifest creates a Deployment of Typha to back the above service.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha
spec:
  # Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the
  # typha_service_name variable in the calico-config ConfigMap above.
  #
  # We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential
  # (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In
  # production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade.
  replicas: 0
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        k8s-app: calico-typha
      annotations:
        # This, along with the CriticalAddonsOnly toleration below, marks the pod as a critical
        # add-on, ensuring it gets priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
      # Since Calico can't network a pod until Typha is up, we need to run Typha itself
      # as a host-networked pod.
      serviceAccountName: calico-node
      containers:
        - image: calico/typha:v0.7.4
          name: calico-typha
          ports:
            - containerPort: 5473
              name: calico-typha
              protocol: TCP
          env:
            # Enable "info" logging by default. Can be set to "debug" to increase verbosity.
            - name: TYPHA_LOGSEVERITYSCREEN
              value: "info"
            # Disable logging to file and syslog since those don't make sense in Kubernetes.
            - name: TYPHA_LOGFILEPATH
              value: "none"
            - name: TYPHA_LOGSEVERITYSYS
              value: "none"
            # Monitor the Kubernetes API to find the number of running instances and rebalance
            # connections.
            - name: TYPHA_CONNECTIONREBALANCINGMODE
              value: "kubernetes"
            - name: TYPHA_DATASTORETYPE
              value: "kubernetes"
            - name: TYPHA_HEALTHENABLED
              value: "true"
            # Uncomment these lines to enable prometheus metrics. Since Typha is host-networked,
            # this opens a port on the host, which may need to be secured.
            #- name: TYPHA_PROMETHEUSMETRICSENABLED
            #  value: "true"
            #- name: TYPHA_PROMETHEUSMETRICSPORT
            #  value: "9093"
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9098
            periodSeconds: 30
            initialDelaySeconds: 30
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9098
            periodSeconds: 10
          volumeMounts:
            - name: tz-config
              mountPath: /etc/localtime
              readOnly: true
      volumes:
        - name: tz-config
          hostPath:
            path: /etc/localtime
---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
        # Make sure calico/node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.1.3
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Enable felix info logging.
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so \`kubectl logs\` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPV6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "1440"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within \`--cluster-cidr\`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.211.0.0/16"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "CrossSubnet"
            # Enable IP-in-IP within Felix.
            - name: FELIX_IPINIPENABLED
              value: "true"
            # Typha support: controlled by the ConfigMap.
            - name: FELIX_TYPHAK8SSERVICENAME
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: typha_service_name
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: tz-config
              mountPath: /etc/localtime
              readOnly: true
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.1.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - name: tz-config
              mountPath: /etc/localtime
              readOnly: true
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        - name: tz-config
          hostPath:
            path: /etc/localtime

# Create all the CustomResourceDefinitions needed for
# Calico policy and networking mode.
---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
EOF

kubectl create -f calico/
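
A verification sketch: once the calico-node Pods are Running and Ready, the nodes should switch to Ready as well.

kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl get nodes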

8. Install and configure the DNS add-on (run on the Master)

mkdir -p kube-dns/

cat <<EOF > kube-dns/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
spec:
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
        - mountPath: /etc/localtime
          name: tz-config
          readOnly: true
      dnsPolicy: Default
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
      - hostPath:
          path: /etc/localtime
          type: ""
        name: tz-config
EOF

cat <<EOF > kube-dns/service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
EOF

cat <<EOF > kube-dns/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        reload
    }
EOF

cat <<EOF > kube-dns/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
EOF

cat <<EOF > kube-dns/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
EOF

cat <<EOF > kube-dns/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
EOF

kubectl create -f kube-dns/
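
A verification sketch for the DNS add-on: the CoreDNS Pods carry the k8s-app=kube-dns label used in the manifests above, and the kube-dns Service should own the 10.96.0.10 cluster IP.

kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system get svc kube-dns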

9. Verify Service access

mkdir -p nginx/

cat <<EOF > nginx/01-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

cat <<EOF > nginx/02-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - nodePort: 31073
    protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
EOF

# Create the Deployment and Service, then request the NodePort on each node;
# a response means the Service is working, no response means it is not.
kubectl create -f nginx/

curl -XGET http://192.168.112.129:31073
curl -XGET http://192.168.112.130:31073
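
If the requests above fail, a quick way to narrow the problem down is to check whether the Service has any endpoints at all (sketch):

kubectl get svc nginx
kubectl get endpoints nginx
kubectl get pods -l app=nginx -o wide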

10. Verify Pod networking and DNS

# Run on both node01 and node02
mkdir -p network/
cd network/

cat <<EOF > Dockerfile
FROM alpine:3.8

MAINTAINER wangxin_0611@126.com

RUN apk add --no-cache ca-certificates bind-tools iputils iproute2 net-tools tcpdump
EOF

docker build -t alpine:3.8-network .

# Run on the master node
mkdir -p network/
cd network/

cat <<EOF > network.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: network
  namespace: default
spec:
  selector:
    matchLabels:
      app: network
  template:
    metadata:
      labels:
        app: network
    spec:
      containers:
      - name: network
        image: alpine:3.8-network
        imagePullPolicy: IfNotPresent
        command:
        - sleep
        - "3600"
      restartPolicy: Always
EOF

kubectl create -f network/

[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
network-bl7jx 1/1 Running 0 6m 10.211.1.4 node02
network-m2vp6 1/1 Running 0 6m 10.211.0.4 node01

# Verify from the pod running on node01
[root@master ~]# kubectl exec -it network-m2vp6 /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 72:01:49:fa:fb:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.211.0.4/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::7001:49ff:fefa:fbf3/64 scope link
valid_lft forever preferred_lft forever
/ # ping -c 4 10.211.1.4
PING 10.211.1.4 (10.211.1.4) 56(84) bytes of data.
64 bytes from 10.211.1.4: icmp_seq=1 ttl=62 time=0.314 ms
64 bytes from 10.211.1.4: icmp_seq=2 ttl=62 time=0.490 ms
64 bytes from 10.211.1.4: icmp_seq=3 ttl=62 time=0.415 ms
64 bytes from 10.211.1.4: icmp_seq=4 ttl=62 time=0.491 ms

--- 10.211.1.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.314/0.427/0.491/0.075 ms
/ # nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1

/ # nslookup kubernetes
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1

/ # exit

# Verify from the pod running on node02
[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
network-bl7jx 1/1 Running 0 9m 10.211.1.4 node02
network-m2vp6 1/1 Running 0 9m 10.211.0.4 node01
[root@master ~]# kubectl exec -it network-bl7jx /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 5a:a6:51:22:9d:2b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.211.1.4/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::58a6:51ff:fe22:9d2b/64 scope link
valid_lft forever preferred_lft forever
/ # ping -c 4 10.211.0.4
PING 10.211.0.4 (10.211.0.4) 56(84) bytes of data.
64 bytes from 10.211.0.4: icmp_seq=1 ttl=62 time=0.450 ms
64 bytes from 10.211.0.4: icmp_seq=2 ttl=62 time=0.685 ms
64 bytes from 10.211.0.4: icmp_seq=3 ttl=62 time=0.726 ms
64 bytes from 10.211.0.4: icmp_seq=4 ttl=62 time=0.707 ms

--- 10.211.0.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.450/0.642/0.726/0.111 ms
/ # nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1

/ # nslookup kubernetes
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1

/ # exit