GlusterFS scales out well and is widely used; it can cover needs such as network storage and redundant backup. So how do you install GlusterFS on Linux? This article takes CentOS 6.4 as an example and walks through installing and configuring GlusterFS.

Environment:

OS: CentOS 6.4 x86_64 Minimal
Servers: sc2-log1, sc2-log2, sc2-log3, sc2-log4
Client: sc2-ads15

Steps:

1. Install the GlusterFS packages on sc2-log{1-4}:

The commands are as follows:

# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y glusterfs-3.4.2-1.el6 glusterfs-server-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6
# /etc/init.d/glusterd start
# chkconfig glusterd on

2. Configure the GlusterFS cluster, from sc2-log1:

The commands are as follows:

[root@sc2-log1 ~]# gluster peer probe sc2-log1
peer probe: success: on localhost not needed
[root@sc2-log1 ~]# gluster peer probe sc2-log2
peer probe: success
[root@sc2-log1 ~]# gluster peer probe sc2-log3
peer probe: success
[root@sc2-log1 ~]# gluster peer probe sc2-log4
peer probe: success

[root@sc2-log1 ~]# gluster peer status
Number of Peers: 3

Hostname: sc2-log2
Port: 24007
Uuid: 399973af-bae9-4326-9cbd-b5b05e5d2927
State: Peer in Cluster (Connected)

Hostname: sc2-log3
Port: 24007
Uuid: 833a7b8d-e3b3-4099-baf9-416ee7213337
State: Peer in Cluster (Connected)

Hostname: sc2-log4
Port: 24007
Uuid: 54bf115a-0119-4021-af80-7a6bca137fd9
State: Peer in Cluster (Connected)

3. Create the data directories on sc2-log{1-4}:

The commands are as follows:

# mkdir -p /usr/local/share/{models,geoip,wurfl}
# ls -l /usr/local/share/
total 24
drwxr-xr-x 2 root root 4096 Apr  1 12:19 geoip
drwxr-xr-x 2 root root 4096 Apr  1 12:19 models
drwxr-xr-x 2 root root 4096 Apr  1 12:19 wurfl

4. Create the GlusterFS volumes, from sc2-log1:

The commands are as follows:

[root@sc2-log1 ~]# gluster volume create models replica 4 sc2-log1:/usr/local/share/models sc2-log2:/usr/local/share/models sc2-log3:/usr/local/share/models sc2-log4:/usr/local/share/models force
volume create: models: success: please start the volume to access data
[root@sc2-log1 ~]# gluster volume create geoip replica 4 sc2-log1:/usr/local/share/geoip sc2-log2:/usr/local/share/geoip sc2-log3:/usr/local/share/geoip sc2-log4:/usr/local/share/geoip force
volume create: geoip: success: please start the volume to access data
[root@sc2-log1 ~]# gluster volume create wurfl replica 4 sc2-log1:/usr/local/share/wurfl sc2-log2:/usr/local/share/wurfl sc2-log3:/usr/local/share/wurfl sc2-log4:/usr/local/share/wurfl force
volume create: wurfl: success: please start the volume to access data
[root@sc2-log1 ~]# gluster volume start models
volume start: models: success
[root@sc2-log1 ~]# gluster volume start geoip
volume start: geoip: success
[root@sc2-log1 ~]# gluster volume start wurfl
volume start: wurfl: success

[root@sc2-log1 ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: b29b22bd-6d8c-45c0-b199-91fa5a76801f
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/models
Brick2: sc2-log2:/usr/local/share/models
Brick3: sc2-log3:/usr/local/share/models
Brick4: sc2-log4:/usr/local/share/models

Volume Name: geoip
Type: Replicate
Volume ID: 69b0caa8-7c23-4712-beae-6f536b1cffa3
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/geoip
Brick2: sc2-log2:/usr/local/share/geoip
Brick3: sc2-log3:/usr/local/share/geoip
Brick4: sc2-log4:/usr/local/share/geoip

Volume Name: wurfl
Type: Replicate
Volume ID: c723a99d-eeab-4865-819a-c0926cf7b88a
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/wurfl
Brick2: sc2-log2:/usr/local/share/wurfl
Brick3: sc2-log3:/usr/local/share/wurfl
Brick4: sc2-log4:/usr/local/share/wurfl
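The three volumes above share an identical layout, so step 4 can also be expressed as a short loop. This is a minimal sketch of the same commands, assuming the host names and brick paths used in this article:

# Create and start one 4-way replicated volume per data set
# (the same commands as step 4, collapsed into a loop).
for vol in models geoip wurfl; do
    gluster volume create $vol replica 4 \
        sc2-log1:/usr/local/share/$vol \
        sc2-log2:/usr/local/share/$vol \
        sc2-log3:/usr/local/share/$vol \
        sc2-log4:/usr/local/share/$vol force
    gluster volume start $vol
done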
5. Deploy the client on sc2-ads15 and mount the GlusterFS volumes:

The commands are as follows:

[root@sc2-ads15 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[root@sc2-ads15 ~]# yum install -y glusterfs-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6
[root@sc2-ads15 ~]# mkdir -p /mnt/{models,geoip,wurfl}
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:geoip /mnt/geoip/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:wurfl /mnt/wurfl/

[root@sc2-ads15 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_t-lv_root    59G  7.7G   48G  14% /
tmpfs                      3.9G     0  3.9G   0% /dev/shm
/dev/xvda1                 485M   33M  428M   8% /boot
sc2-log3:models             98G  8.6G   85G  10% /mnt/models
sc2-log3:geoip              98G  8.6G   85G  10% /mnt/geoip
sc2-log3:wurfl              98G  8.6G   85G  10% /mnt/wurfl

6. Data read/write availability tests:

Write data at the mount point on sc2-ads15 (remounting read-write first, since step 5 mounted it read-only):

The commands are as follows:

[root@sc2-ads15 ~]# umount /mnt/models
[root@sc2-ads15 ~]# mount -t glusterfs sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# echo "This is sc2-ads15" >> /mnt/models/hello.txt
[root@sc2-ads15 ~]# mkdir /mnt/models/testdir

Check the data directory on sc2-log1:

[root@sc2-log1 ~]# ls /usr/local/share/models/
hello.txt testdir

Result: the write succeeded.

Write data directly into the data directory on sc2-log1:

The commands are as follows:

[root@sc2-log1 ~]# echo "This is sc2-log1" >> /usr/local/share/models/hello.2.txt
[root@sc2-log1 ~]# mkdir /usr/local/share/models/test2

Check at the mount point on sc2-ads15:

[root@sc2-ads15 ~]# ls /mnt/models
hello.txt testdir

Result: the write failed to propagate; the data written directly into the brick directory is not visible on the client.

Write data at a mount point on sc2-log1:

The commands are as follows:

[root@sc2-log1 ~]# mount -t glusterfs sc2-log1:models /mnt/models/
[root@sc2-log1 ~]# echo "This is sc2-log1" >> /mnt/models/hello.3.txt
[root@sc2-log1 ~]# mkdir /mnt/models/test3

Check at the mount point on sc2-ads15:

[root@sc2-ads15 models]# ls /mnt/models
hello.2.txt hello.3.txt hello.txt test2 test3 testdir

Result: the write succeeded, and the data whose earlier write had failed to propagate now shows up as well.

Conclusion: writing directly into a brick's data directory bypasses GlusterFS, so the other nodes are never notified and the data fails to replicate. The correct practice is to perform all reads and writes through a mount point.

7. Miscellaneous operations:

Delete a GlusterFS volume:

The commands are as follows:

# gluster volume stop models
# gluster volume delete models

Unmount a GlusterFS volume:

The command is as follows:

# umount /mnt/models

ACL access control:

The command is as follows:

# gluster volume set models auth.allow 10.60.1.*,10.70.1.*

Add GlusterFS nodes:

The commands are as follows:

# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster

Migrate GlusterFS volume data (data is rebalanced off the removed bricks before the commit):

The commands are as follows:

# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit

Repair GlusterFS volume data (for example, when sc2-log1 has gone down):

The commands are as follows:

# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full
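After triggering a full heal as above, it helps to watch its progress. A minimal sketch using gluster volume heal <VOLNAME> info, assuming the volume names from this article:

# Show entries still pending self-heal on each volume;
# empty lists mean the replicas are back in sync.
for vol in models geoip wurfl; do
    echo "== $vol =="
    gluster volume heal $vol info
done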
With the walkthrough above, you should now have a solid grasp of installing and configuring GlusterFS on CentOS 6.4, together with the everyday GlusterFS operations covered in step 7.