Managing a GlusterFS Cluster with the gluster Command-Line Tool

1. Configuring and Verifying the GlusterFS Cluster

# Add nodes to the GlusterFS cluster
[root@server07 ~]# gluster peer probe server08
peer probe: success.

[root@server07 ~]# gluster peer probe server09
peer probe: success.

# Check the status of the GlusterFS cluster
[root@server07 ~]# gluster peer status
Number of Peers: 2

Hostname: server08
Uuid: 8530c074-760f-4d03-a5a7-f1b3ccaa5cfd
State: Peer in Cluster (Connected)

Hostname: server09
Uuid: 41a4b6df-bcb3-4650-8a21-54afc1e27cbe
State: Peer in Cluster (Connected)
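
In addition to gluster peer status, the trusted storage pool can also be listed with gluster pool list, which includes the local node as well. A quick check (not captured in the original session) would look like this:

# List every node in the trusted storage pool, including the local one
[root@server07 ~]# gluster pool list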

2. Creating and Using Volumes on the GlusterFS Cluster

# List existing volumes
[root@server07 ~]# gluster volume info
No volumes present

# Create the brick directory for the volume (must be created on every host in the cluster)
mkdir -p /opt/gluster/data

# Create the volume
[root@server07 ~]# gluster volume create k8s-volume transport tcp server07:/opt/gluster/data server08:/opt/gluster/data server09:/opt/gluster/data force
volume create: k8s-volume: success: please start the volume to access data
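
# Note (an alternative not run in this session): the volume above is a plain Distribute
# volume, so each file lives on exactly one brick and there is no redundancy. If
# redundancy is required, a replicated volume could be created instead, for example:
# gluster volume create k8s-volume replica 3 transport tcp server07:/opt/gluster/data server08:/opt/gluster/data server09:/opt/gluster/data force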

[root@server07 ~]# gluster volume info k8s-volume

Volume Name: k8s-volume
Type: Distribute
Volume ID: e4974285-1304-4ea7-b60f-ebe8375dba86
Status: Created
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: server07:/opt/gluster/data
Brick2: server08:/opt/gluster/data
Brick3: server09:/opt/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

# Start the volume
[root@server07 ~]# gluster volume start k8s-volume
volume start: k8s-volume: success

# View the volume information
[root@server07 ~]# gluster volume info k8s-volume

Volume Name: k8s-volume
Type: Distribute
Volume ID: e4974285-1304-4ea7-b60f-ebe8375dba86
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: server07:/opt/gluster/data
Brick2: server08:/opt/gluster/data
Brick3: server09:/opt/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
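
# Besides "gluster volume info", "gluster volume status" reports the runtime state of
# each brick process (TCP port, online flag, PID). Running it here is optional and its
# output is not captured in this session:
# gluster volume status k8s-volume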

# Verify mounting the volume and writing data to it
[root@server07 ~]# mount -t glusterfs server07:k8s-volume /mnt
[root@server07 ~]# ls -la /mnt
drwxr-xr-x. 3 root root 4096 Apr 9 23:36 .
dr-xr-xr-x. 17 root root 224 Apr 9 21:59 ..
[root@server07 ~]# echo "hello glusterfs kubernetes." > /mnt/readme.md
[root@server07 ~]# ls -la /mnt
drwxr-xr-x. 3 root root 4096 Apr 10 05:36 .
dr-xr-xr-x. 17 root root 224 Apr 9 21:59 ..
-rw-r--r--. 1 root root 28 Apr 10 05:36 readme.md

[root@server07 ~]# cat /mnt/readme.md
hello glusterfs kubernetes.
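
# Because k8s-volume is a Distribute volume, readme.md is stored on exactly one brick.
# Which node received it can be checked by listing the brick directory on each server
# (output not captured in this session):
# ls -la /opt/gluster/data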

[root@server07 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 8.0G 1.1G 7.0G 14% /
devtmpfs 478M 0 478M 0% /dev
tmpfs 489M 0 489M 0% /dev/shm
tmpfs 489M 6.8M 482M 2% /run
tmpfs 489M 0 489M 0% /sys/fs/cgroup
/dev/sda1 1014M 139M 876M 14% /boot
tmpfs 98M 0 98M 0% /run/user/0
server07:k8s-volume 24G 3.2G 21G 14% /mnt
[root@server07 ~]# umount server07:k8s-volume
[root@server07 ~]# ls -la /mnt/
drwxr-xr-x. 2 root root 6 Nov 5 2016 .
dr-xr-xr-x. 17 root root 224 Apr 9 21:59 ..
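
# Optional (not part of the original session): to mount the volume automatically at
# boot, an /etc/fstab entry of the following form can be used; _netdev delays the
# mount until the network is up:
# echo "server07:/k8s-volume /mnt glusterfs defaults,_netdev 0 0" >> /etc/fstab
# mount -a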

# View the volume information
[root@server07 ~]# gluster volume info k8s-volume

Volume Name: k8s-volume
Type: Distribute
Volume ID: e4974285-1304-4ea7-b60f-ebe8375dba86
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: server07:/opt/gluster/data
Brick2: server08:/opt/gluster/data
Brick3: server09:/opt/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

# Stop the volume
[root@server07 ~]# gluster volume stop k8s-volume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: k8s-volume: success

# View the volume information
[root@server07 ~]# gluster volume info k8s-volume

Volume Name: k8s-volume
Type: Distribute
Volume ID: e4974285-1304-4ea7-b60f-ebe8375dba86
Status: Stopped
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: server07:/opt/gluster/data
Brick2: server08:/opt/gluster/data
Brick3: server09:/opt/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

# Delete the volume
[root@server07 ~]# gluster volume delete k8s-volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: k8s-volume: success
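
# Note (not shown in this session): before the brick directories can be reused for a
# new volume, the GlusterFS metadata left on them must be removed on every node:
# setfattr -x trusted.glusterfs.volume-id /opt/gluster/data
# setfattr -x trusted.gfid /opt/gluster/data   # if present
# rm -rf /opt/gluster/data/.glusterfs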

# View the volume information
[root@server07 ~]# gluster volume info k8s-volume
Volume k8s-volume does not exist

3. How to Reset All Nodes in the GlusterFS Cluster

Assume the cluster has only one volume, named k8s-volume. The cluster can then be reset as follows:

# Reset GlusterFS
gluster volume list
gluster volume stop k8s-volume
gluster volume delete k8s-volume
gluster volume list
gluster peer status
gluster peer help
gluster peer detach server08
gluster peer detach server09
gluster peer status
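
If a detached node should be returned to a completely clean state, glusterd's local state can also be cleared. A sketch of this, assuming the default state directory /var/lib/glusterd (note that this discards the node's UUID and all volume metadata):

# Run on each node that should be fully reset
systemctl stop glusterd
rm -rf /var/lib/glusterd/*
systemctl start glusterd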

4. How to Wipe Disks Used by the GlusterFS Cluster Back to Raw Disks

# WARNING: never run this against the root disk; the same approach also works for disks previously used by other storage systems
dd if=/dev/zero of=/dev/<sd?> bs=1M count=200
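
If util-linux is available, wipefs offers a lighter-weight alternative that removes only the filesystem, RAID, and partition-table signatures instead of overwriting the first 200 MB (again, never point it at the root disk):

# Clear all known signatures so the disk is seen as a blank device again
wipefs -a /dev/<sd?>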