Connecting a Kubernetes Cluster to a Ceph Cluster: Building a Kubernetes Lab Environment That Can Connect to a Ceph Lab Environment

I. Lab Environment Overview

1. Purpose of this environment

This article helps readers build a Kubernetes lab environment that can be connected to a Ceph lab environment. Setting up the basic Kubernetes environment itself is not covered here; please refer to material available online, or to another article on this blog, "Installing a Kubernetes Cluster with kubeadm (Part 1)" (the link is listed under "References").

2. Key points about this environment

Upgrade the kernel on all nodes to the mainline version, including the master node and all worker nodes.
Install the ceph-common and python-cephfs packages on all nodes, including the master node and all worker nodes.

II. Lab Environment Version Information

1. Operating system version

CentOS Linux release 7.7.1908 (Core)

2. Core component versions

ceph-common and python-cephfs from Ceph Luminous
Kubernetes v1.16.0
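
A quick optional check of the node list confirms the cluster version and, after step 1 of the procedure below, the kernel each node is actually running (node names in your environment will differ):

kubectl get nodes -o wide
kubectl version --short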

III. Procedure

1. Upgrade the kernel on all nodes (all master and worker nodes) to the mainline version (currently 5.3.6-1.el7.elrepo.x86_64)

# Check the current OS and kernel versions
yum install -y yum-plugin-fastestmirror
cat /etc/redhat-release
cat /etc/os-release
uname -snr

# Add the ELRepo repository, which provides mainline kernels for CentOS 7
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum repolist

# Install the mainline kernel (kernel-ml) from the elrepo-kernel repository
yum --enablerepo=elrepo-kernel install -y kernel-ml
yum repolist all

# List the GRUB menu entries, make the new kernel (entry 0) the default, and reboot into it
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

# After the reboot, confirm the node is running the new kernel
uname -snr
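
If everything went well, the node should now be on the mainline kernel; a minimal extra check such as the following (the expected version is the one from the heading above) confirms that the package is installed and booted:

rpm -qa | grep kernel-ml
uname -r
# expected: 5.3.6-1.el7.elrepo.x86_64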

2. Install ceph-common and python-cephfs from Ceph Luminous on all nodes (all master and worker nodes)

cat <<EOF >> /etc/yum.repos.d/ceph.repo
[ceph]
# x86_64 section: ceph-common and python-cephfs are arch-specific packages and live here, not in the noarch repo
name=Ceph x86_64 packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
EOF

yum makecache fast
yum install -y ceph-common python-cephfs
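
To confirm that the Luminous client packages were picked up from the new repository rather than some other source, checks along these lines should report a 12.2.x version:

ceph --version
rpm -q ceph-common python-cephfs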

3. Install the ceph rbd and ceph fs provisioner services in the Kubernetes cluster

mkdir -p external-storage/ceph/common/
mkdir -p external-storage/ceph/rbd/
mkdir -p external-storage/ceph/fs/

cat <<EOF >> external-storage/ceph/common/01-namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: storage
EOF

# For ceph rbd
cat <<EOF >> external-storage/ceph/rbd/01-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephrbd-provisioner
  namespace: storage
EOF

cat <<EOF >> external-storage/ceph/rbd/02-clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephrbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "watch", "list", "create", "update", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
EOF

cat <<EOF >> external-storage/ceph/rbd/03-clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephrbd-provisioner
subjects:
  - kind: ServiceAccount
    name: cephrbd-provisioner
    namespace: storage
roleRef:
  kind: ClusterRole
  name: cephrbd-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

cat <<EOF >> external-storage/ceph/rbd/04-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephrbd-provisioner
  namespace: storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephrbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephrbd-provisioner
    spec:
      containers:
        - name: cephrbd-provisioner
          image: wangx/rbd-provisioner:luminous
          imagePullPolicy: IfNotPresent
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: cephrbd-provisioner
EOF


# For ceph fs
cat <<EOF >> external-storage/ceph/fs/01-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: storage
EOF

cat <<EOF >> external-storage/ceph/fs/02-clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "watch", "list", "create", "update", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
EOF

cat <<EOF >> external-storage/ceph/fs/03-clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: storage
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

cat <<EOF >> external-storage/ceph/fs/04-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: wangx/cephfs-provisioner:luminous
          imagePullPolicy: IfNotPresent
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
EOF

# create the namespace for the ceph provisioners
kubectl create -f external-storage/ceph/common/

# create ceph rbd provisioner
kubectl create -f external-storage/ceph/rbd/

# create ceph fs provisioner
kubectl create -f external-storage/ceph/fs/
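
Before creating the StorageClass objects, it is worth confirming that both provisioner Deployments actually come up in the storage namespace; something like the following should eventually show both pods as Running, and the logs are the first place to look if they do not:

kubectl -n storage get deployments,pods
kubectl -n storage logs deployment/cephrbd-provisioner
kubectl -n storage logs deployment/cephfs-provisioner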

4. Create StorageClass objects for the ceph rbd and ceph fs provisioner services in the Kubernetes cluster

# For ceph rbd
## Note: replace the key values of the two Secrets below with the ones from your own environment.
mkdir -p external-storage/ceph/rbd/storageclass/
cat <<EOF >> external-storage/ceph/rbd/storageclass/01-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: cephrbd-admin-secret
  namespace: storage
type: "kubernetes.io/cephrbd"
data:
  # ceph auth get-key client.admin | base64
  key: QVFDTmZxRmRDRmtnT3hBQURwY29VdjltbGJqRmIxMTJ2dzlLdEE9PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: cephrbd-user-secret
  namespace: storage
type: "kubernetes.io/cephrbd"
data:
  # ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
  # ceph auth get-key client.kube | base64
  key: QVFBZ3Q2RmRibnBOTXhBQXkwQkJrdmQxQW5adHlWN0syZWIvSEE9PQ==
EOF
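
The two base64 keys above come from the Ceph side. This walkthrough assumes the kube pool and the client.kube user already exist on the Ceph cluster; if they do not, commands roughly like these (run on a Ceph monitor node; the PG count of 64 is only an example) create them and print the keys to paste into the Secrets:

ceph osd pool create kube 64
ceph osd pool application enable kube rbd
ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.admin | base64
ceph auth get-key client.kube | base64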

cat <<EOF >> external-storage/ceph/rbd/storageclass/02-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephrbd
provisioner: ceph.com/rbd
parameters:
  monitors: ceph-mon.storage.svc.cluster.local:6789 # the ceph monitor address must be given here as a name resolvable by Kubernetes' internal DNS
  pool: kube
  adminId: admin
  adminSecretNamespace: storage
  adminSecretName: cephrbd-admin-secret
  userId: kube
  userSecretNamespace: storage
  userSecretName: cephrbd-user-secret
  imageFormat: "2"
  imageFeatures: layering
---
kind: Service
apiVersion: v1
metadata:
  name: ceph-mon
  namespace: storage
spec:
  type: ExternalName
  externalName: 192.168.112.131.xip.io # the ceph monitor's address
EOF

kubectl create -f external-storage/ceph/rbd/storageclass/
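
Because the StorageClass refers to the monitor through the in-cluster DNS name ceph-mon.storage.svc.cluster.local, it can be reassuring to check that this name actually resolves from inside the cluster; a throwaway busybox pod is one way to do that (the pod name dns-check is arbitrary):

kubectl -n storage run dns-check --image=busybox:1.28 --restart=Never --rm -it -- nslookup ceph-mon.storage.svc.cluster.local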

# For ceph fs
## Note: replace the Secret key value below with the one from your own environment.
mkdir -p external-storage/ceph/fs/storageclass/
cat <<EOF >> external-storage/ceph/fs/storageclass/01-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: cephfs-admin-secret
  namespace: storage
type: "kubernetes.io/cephfs"
data:
  # ceph auth get-key client.admin | base64
  key: QVFDTmZxRmRDRmtnT3hBQURwY29VdjltbGJqRmIxMTJ2dzlLdEE9PQ==
EOF

cat <<EOF >> external-storage/ceph/fs/storageclass/02-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.112.131:6789
  adminId: admin
  adminSecretName: cephfs-admin-secret
  adminSecretNamespace: storage
  claimRoot: /volumes/kubernetes
EOF

kubectl create -f external-storage/ceph/fs/storageclass/
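
At this point both storage classes should be registered, and the Secrets they reference should exist in the storage namespace; a quick listing confirms it:

kubectl get storageclass
kubectl -n storage get secrets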

5. Verify dynamic storage provisioning by the ceph rbd and ceph fs provisioner services together with their StorageClass objects

# Create a PVC and a Pod to verify ceph rbd
mkdir -p external-storage/ceph/rbd/example/
cat <<EOF >> external-storage/ceph/rbd/example/01-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cephrbd
  resources:
    requests:
      storage: 1Gi
EOF

cat <<EOF >> external-storage/ceph/rbd/example/02-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod-1
spec:
  containers:
    - name: test-pod-1
      image: nginx:1.7.9
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: "/data"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: claim1
EOF

kubectl create -f external-storage/ceph/rbd/example/
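
If claim1 (or claim2 below) stays in Pending instead of becoming Bound, watching and describing the claim usually shows whether the provisioner, the Secrets, or the monitor address is the problem:

kubectl get pvc claim1 -w
kubectl describe pvc claim1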

# Create a PVC and a Pod to verify ceph fs
mkdir -p external-storage/ceph/fs/example/
cat <<EOF >> external-storage/ceph/fs/example/01-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim2
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

cat <<EOF >> external-storage/ceph/fs/example/02-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod-2
spec:
  containers:
    - name: test-pod-2
      image: nginx:1.7.9
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: "/data"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: claim2
EOF

kubectl create -f external-storage/ceph/fs/example/

# The verification steps are as follows:
[root@master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim1 Bound pvc-8bcd7a8a-9c72-4c24-a5ce-dda3f72b459c 1Gi RWO cephrbd 5m22s
claim2 Bound pvc-6aeaf23b-c1c0-4654-8fa9-50656b5b7247 1Gi RWX cephfs 3m43s

[root@master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-6aeaf23b-c1c0-4654-8fa9-50656b5b7247 1Gi RWX Delete Bound default/claim2 cephfs 3m55s
pvc-8bcd7a8a-9c72-4c24-a5ce-dda3f72b459c 1Gi RWO Delete Bound default/claim1 cephrbd 5m36s

[root@master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pod-1 1/1 Running 0 6m2s 10.211.196.133 node01 <none> <none>
test-pod-2 1/1 Running 0 4m23s 10.211.140.69 node02 <none> <none>

## Enter the container of each pod
[root@master ~]# kubectl exec -it test-pod-1 /bin/bash
root@test-pod-1:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 17G 3.6G 14G 21% /
tmpfs 64M 0 64M 0% /dev
tmpfs 982M 0 982M 0% /sys/fs/cgroup
/dev/rbd0 976M 2.6M 958M 1% /data
/dev/mapper/cl-root 17G 3.6G 14G 21% /dev/termination-log
/dev/mapper/cl-root 17G 3.6G 14G 21% /etc/resolv.conf
/dev/mapper/cl-root 17G 3.6G 14G 21% /etc/hostname
/dev/mapper/cl-root 17G 3.6G 14G 21% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/mapper/cl-root 17G 3.6G 14G 21% /var/cache/nginx
tmpfs 982M 12K 982M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 982M 0 982M 0% /proc/acpi
tmpfs 64M 0 64M 0% /proc/kcore
tmpfs 64M 0 64M 0% /proc/keys
tmpfs 64M 0 64M 0% /proc/timer_list
tmpfs 64M 0 64M 0% /proc/sched_debug
tmpfs 982M 0 982M 0% /proc/scsi
tmpfs 982M 0 982M 0% /sys/firmware
root@test-pod-1:/# cd /data/
root@test-pod-1:/data# ls -la
total 20
drwxr-xr-x 3 root root 4096 Oct 14 05:43 .
drwxr-xr-x 1 root root 41 Oct 14 05:39 ..
drwx------ 2 root root 16384 Oct 14 05:39 lost+found
root@test-pod-1:/data# echo 'hello ceph rbd.' > readme.md
root@test-pod-1:/data# ls -la
total 24
drwxr-xr-x 3 root root 4096 Oct 14 05:43 .
drwxr-xr-x 1 root root 41 Oct 14 05:39 ..
drwx------ 2 root root 16384 Oct 14 05:39 lost+found
-rw-r--r-- 1 root root 16 Oct 14 05:43 readme.md
root@test-pod-1:/data# cat readme.md
hello ceph rbd.
root@test-pod-1:/data# exit
exit

[root@master ~]# kubectl exec -it test-pod-2 /bin/bash
root@test-pod-2:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 17G 3.6G 14G 21% /
tmpfs 64M 0 64M 0% /dev
tmpfs 982M 0 982M 0% /sys/fs/cgroup
192.168.112.131:6789:/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-41646ced-ee45-11e9-bfd9-eec9a057c13d 18G 0 18G 0% /data
/dev/mapper/cl-root 17G 3.6G 14G 21% /dev/termination-log
/dev/mapper/cl-root 17G 3.6G 14G 21% /etc/resolv.conf
/dev/mapper/cl-root 17G 3.6G 14G 21% /etc/hostname
/dev/mapper/cl-root 17G 3.6G 14G 21% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/mapper/cl-root 17G 3.6G 14G 21% /var/cache/nginx
tmpfs 982M 12K 982M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 982M 0 982M 0% /proc/acpi
tmpfs 64M 0 64M 0% /proc/kcore
tmpfs 64M 0 64M 0% /proc/keys
tmpfs 64M 0 64M 0% /proc/timer_list
tmpfs 64M 0 64M 0% /proc/sched_debug
tmpfs 982M 0 982M 0% /proc/scsi
tmpfs 982M 0 982M 0% /sys/firmware
root@test-pod-2:/# cd /data/
root@test-pod-2:/data# ls -la
total 0
drwxr-xr-x 2 root root 0 Oct 14 05:41 .
drwxr-xr-x 1 root root 29 Oct 14 05:41 ..
root@test-pod-2:/data# echo 'hello ceph fs.' > readme.md
root@test-pod-2:/data# ls -la
total 1
drwxr-xr-x 2 root root 1 Oct 14 05:44 .
drwxr-xr-x 1 root root 29 Oct 14 05:41 ..
-rw-r--r-- 1 root root 15 Oct 14 05:44 readme.md
root@test-pod-2:/data# cat readme.md
hello ceph fs.
root@test-pod-2:/data# exit
exit

# On the node hosting test-pod-1
[root@node01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 969M 0 969M 0% /dev
tmpfs 982M 0 982M 0% /dev/shm
tmpfs 982M 23M 959M 3% /run
tmpfs 982M 0 982M 0% /sys/fs/cgroup
/dev/mapper/cl-root 17G 3.6G 14G 21% /
/dev/sda1 1014M 242M 773M 24% /boot
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/444da5da-6b0b-4eb9-961c-e67620e2790d/volumes/kubernetes.io~secret/calico-node-token-j29td
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/5c6b4dc9-9789-431f-abda-69b791e00852/volumes/kubernetes.io~secret/kube-proxy-token-2xvqz
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/117bf0a0-1de4-4233-8f86-58e3b7e222d9/volumes/kubernetes.io~secret/default-token-lvg9j
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/c0137c8945f708c9a9af29c435006164f1880e62e12d0944558d76fe826cf79e/merged
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/a173aac7b6bd051d9309d31dc98bffc8d8b4601738ad8670981c65e9f206f4d7/merged
shm 64M 0 64M 0% /var/lib/docker/containers/af5a902877155cc3f0d80506e4af572364e1b02b7527ebc272e36cc3537fdaf0/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/779e0a0a9e8e46ba53b976b9e0e0acb133778159c001d314d9fb4a689ce4dd51/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/e0f0e09989f3e152c17f2ccf7e15df86a0844f59277804d409126e7508e892f2/merged
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/6cf53e19aa7a1cabfcc992b492f0c2d7b3a5b7a41a999e3e17c6e85acd432385/merged
tmpfs 197M 0 197M 0% /run/user/0
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/ea9758ae1391c0b51be400d2c716398a6e6a75042c2d266b0fd5fffce49f1856/merged
shm 64M 0 64M 0% /var/lib/docker/containers/78374c2866c9e59956b46e6cf7745743e5366273eae4a28f584167895eff28c3/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/f75360891104dd17f6035b84fc53e2cee9aa49cf4cb99abbc07ea4e53f3e68fa/merged
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/bc337b72-b78d-4f22-9074-0ea1e3c0854b/volumes/kubernetes.io~secret/cephfs-provisioner-token-gnvmf
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/5f2bb0260a595ebf2685f606c8fa57ae312b36d9ed0d2ac1c0226f108dcb4f8a/merged
shm 64M 0 64M 0% /var/lib/docker/containers/2be9a9eab8bf6181da1b00bd6745053236b5b72b4a625244f58664d6cef4056d/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/19c0add3e797d302865131a736187e08b5773ff504cf24baa5ca6dc9e1e2f0bd/merged
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/94567637-87d1-44bf-b40c-4cf8d7249455/volumes/kubernetes.io~secret/default-token-lvg9j
/dev/rbd0 976M 2.6M 958M 1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/kube-image-kubernetes-dynamic-pvc-06ed8402-ee45-11e9-81e1-626518d2252b
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/0f9b3b248dd6fd5db871c9a8340c3b221099ca48e5e44496b0e8ff87d385053e/merged
shm 64M 0 64M 0% /var/lib/docker/containers/7fa70409cd56a31427c89a61db4d102261610678a06a550eecfd1c44092c0d02/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/723eda3646d4eabdb62faa89566f0c6d84a14f7f882acee46af96630d96480e4/merged

# On the node hosting test-pod-2
[root@node02 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 969M 0 969M 0% /dev
tmpfs 982M 0 982M 0% /dev/shm
tmpfs 982M 22M 960M 3% /run
tmpfs 982M 0 982M 0% /sys/fs/cgroup
/dev/mapper/cl-root 17G 3.6G 14G 21% /
/dev/sda1 1014M 242M 773M 24% /boot
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/cde8bcd0-da54-40f8-90e1-a8c53daaca8a/volumes/kubernetes.io~secret/default-token-lvg9j
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/27785efc-053d-41c1-b081-d61056715dce/volumes/kubernetes.io~secret/kube-proxy-token-2xvqz
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/6af3ba89-fd1d-4ca6-9f1f-d1a9d24aab68/volumes/kubernetes.io~secret/calico-node-token-j29td
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/fe5b85b1611ec8ce24761d70dec417d8102f71c13dc3ba0005387999d7f90538/merged
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/a2e0efa3dd1bc8d4725acd7ea3123cf2331cabfbd3f58185de44f81dc61f17b2/merged
shm 64M 0 64M 0% /var/lib/docker/containers/a9a00476e189f794aaa8d448f3fd492ed95d7f285ebe398f2561c3369f98f6fc/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/97c5b22f15b951a1d8c88deae0a34163d729cedc2fd21aef2108e657d3dbd2ed/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/d69d00d1bb9d51e3b2a622edfe213b16069886f6a3507b7d22f8cf35aadb67fb/merged
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/bc97fe5450c049b42c16ed70907ebf759fd16d8aa0f1f1b597828c400a95fec1/merged
tmpfs 197M 0 197M 0% /run/user/0
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/be31ca99791b3a61b34750063296b3f61e0e33f6ca13bd89d09305e29282d93f/merged
shm 64M 0 64M 0% /var/lib/docker/containers/75bf7cd29b86a191287b60b505ca9147aee2590026227b75c108e527c8a42050/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/d424bbdc32abf0a7860ea8cc57138a578d3ac379a2a5a62d467ebdd503f85f12/merged
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/63e075b8-1b82-4ead-924b-e8233217c597/volumes/kubernetes.io~secret/cephrbd-provisioner-token-xx8qj
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/032279366ff9f083d5f54e8653789f77d30962fb74d9c796912462336ba089a8/merged
shm 64M 0 64M 0% /var/lib/docker/containers/8e3f8bcbbb05781e69759fe678fdba68fa6ec0514972b3a399936c90d754ea1d/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/ca226a39a87d3319f974bdbca242ff6621cafe356a24e9080d65b013f26b30ef/merged
tmpfs 982M 12K 982M 1% /var/lib/kubelet/pods/094d7470-5712-4523-8eb8-9994dbfb2cfe/volumes/kubernetes.io~secret/default-token-lvg9j
192.168.112.131:6789:/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-41646ced-ee45-11e9-bfd9-eec9a057c13d 18G 0 18G 0% /var/lib/kubelet/pods/094d7470-5712-4523-8eb8-9994dbfb2cfe/volumes/kubernetes.io~cephfs/pvc-6aeaf23b-c1c0-4654-8fa9-50656b5b7247
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/7615fa1f189797ca2b111fa6122ea3a1d8c2f2c009d052a3e06c6a9e9d0b9ba3/merged
shm 64M 0 64M 0% /var/lib/docker/containers/0a76bdb1043188470fa7a2aa3df3ce9640b658064cb41c872025b90ed46f6bcd/mounts/shm
overlay 17G 3.6G 14G 21% /var/lib/docker/overlay2/49ef6810dedcd675d541f06df84336744a7cbb84a876daf96ffab7bbcfa304f7/merged
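
When you are done experimenting, the test objects can be removed with the same manifests; the PVs are cleaned up automatically because both storage classes use the Delete reclaim policy shown above:

kubectl delete -f external-storage/ceph/rbd/example/
kubectl delete -f external-storage/ceph/fs/example/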

IV. References

https://github.com/kubernetes-incubator/external-storage/tree/v5.2.0/ceph
https://www.howtoforge.com/tutorial/how-to-upgrade-kernel-in-centos-7-server/
https://singhwang.github.io/2019/10/03/kubeadm_kubernetes_cluster_000/