DRBD: the Distributed Replicated Block Device
Published: 2019-05-03 16:02:50  Editor: 雪饮
Last time we set up cluster resource management with heartbeat v3 + corosync. This time we will set up a distributed replicated block device (DRBD).
Required packages:
drbd83-8.3.15-2.el5.centos.i386.rpm
kmod-drbd83-8.3.15-3.el5.centos.i686.rpm
Install DRBD on each node
This builds directly on the cluster environment from the previous article.
[root@node1 ~]# yum --nogpgcheck localinstall /usr/local/src/*drbd*
Create a new partition on each node
[root@node1 ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 2610.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +1G
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# partprobe /dev/sdb
Do not format the partition yet.
Configure /etc/drbd.d/global_common.conf on each node
Copy the sample configuration:
[root@node1 ~]# cp /usr/share/doc/drbd83-8.3.15/drbd.conf /etc/drbd.conf
cp: overwrite `/etc/drbd.conf'? y
Then uncomment the following three lines under the handlers section:
pri-on-incon-degr
pri-lost-after-sb
local-io-error
Then add the following option under the disk section:
on-io-error detach;
Then add the following options under the net section:
cram-hmac-alg "sha1";
shared-secret "mydrbdjkljll";
Note: the shared-secret value here is arbitrary; in practice a random string is preferable.
Then add the following option under the syncer section:
rate 200M;
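Putting the edits above together, the relevant parts of global_common.conf would look roughly like the sketch below. The handler command strings are elided here; keep the ones already present (commented out) in the sample file.

```
common {
  handlers {
    # the three handler lines uncommented from the sample file,
    # command strings kept as shipped
    pri-on-incon-degr "...";
    pri-lost-after-sb "...";
    local-io-error    "...";
  }
  disk {
    on-io-error detach;              # detach the backing device on I/O errors
  }
  net {
    cram-hmac-alg "sha1";            # authenticate peers with HMAC-SHA1
    shared-secret "mydrbdjkljll";    # arbitrary secret, identical on both nodes
  }
  syncer {
    rate 200M;                       # cap resynchronization bandwidth
  }
}
```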
Create /etc/drbd.d/mydrbd.res on each node
[root@node1 ~]# vi /etc/drbd.d/mydrbd.res
[root@node1 ~]# cat /etc/drbd.d/mydrbd.res
resource mydrbd {
  device /dev/drbd0;
  disk /dev/sdb1;
  meta-disk internal;
  on node1.magedu.com {
    address 192.168.2.173:7789;
  }
  on node2.magedu.com {
    address 192.168.2.191:7789;
  }
}
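The configuration files must be identical on both nodes. One way to copy them over from node1 (assuming root SSH access between the nodes, as in the previous article's setup):

```
[root@node1 ~]# scp /etc/drbd.conf node2:/etc/
[root@node1 ~]# scp -r /etc/drbd.d/ node2:/etc/
```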
Initialize DRBD
Once all of the above has been replicated to every node, initialize DRBD on each node:
[root@node1 ~]# drbdadm create-md mydrbd
--== Thank you for participating in the global usage survey ==--
The server's response is:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
The next command waits for the other node to also run `service drbd start`, so run it on all nodes at roughly the same time:
[root@node1 ~]# service drbd start
Starting DRBD resources: [ ]..........
Check the DRBD replication status
[root@node1 ~]# cat /proc/drbd
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by mockbuild@builder17.centos.org, 2013-03-27 16:04:08
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:987896
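The important fields on that resource line are cs (connection state), ro (roles, local/peer) and ds (disk states, local/peer). A small shell sketch that pulls them out of a sample line copied from the output above; on a live node you could feed it the matching line from /proc/drbd instead:

```shell
# Sample /proc/drbd resource line: both nodes connected,
# neither promoted yet, data not yet synchronized.
status='0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----'

# grep -o prints only the matching field.
cs=$(echo "$status" | grep -o 'cs:[^ ]*')   # connection state
ro=$(echo "$status" | grep -o 'ro:[^ ]*')   # roles: local/peer
ds=$(echo "$status" | grep -o 'ds:[^ ]*')   # disk states: local/peer
echo "$cs $ro $ds"
```

After promotion and the initial sync, the same fields read cs:Connected, ro:Primary/Secondary and ds:UpToDate/UpToDate, as shown further below.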
Promote the current node to DRBD primary
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary mydrbd
[root@node1 ~]# cat /proc/drbd
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by mockbuild@builder17.centos.org, 2013-03-27 16:04:08
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:987896 nr:0 dw:0 dr:987896 al:0 bm:61 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
Formatting DRBD and mount testing
Format
Note: the device must be formatted on the DRBD primary node.
[root@node1 ~]# mke2fs -j /dev/drbd0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
123648 inodes, 246974 blocks
12348 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=255852544
8 block groups
32768 blocks per group, 32768 fragments per group
15456 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Mount test on primary and secondary
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount /dev/drbd0 /mydata
[root@node1 ~]# cp /etc/inittab /mydata/
[root@node1 ~]# ls /mydata
inittab lost+found
To test the secondary, first unmount the DRBD device on the primary and demote that node to secondary; otherwise the other node cannot be promoted.
[root@node1 ~]# umount /dev/drbd0
[root@node1 ~]# drbdadm secondary mydrbd
Then promote the former secondary to primary and mount the device:
[root@node2 ~]# drbdadm primary mydrbd
[root@node2 ~]# mkdir /mydata
[root@node2 ~]# mount /dev/drbd0 /mydata
[root@node2 ~]# ls /mydata
inittab lost+found
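To summarize: with DRBD 8.3 only one node may be primary at a time, so a manual switchover always follows the same pattern as the commands above (this is a sketch of the sequence, to be run on a live cluster):

```
# On the current primary (node1): release the device, then demote.
umount /dev/drbd0
drbdadm secondary mydrbd

# On the node taking over (node2): promote, then mount.
drbdadm primary mydrbd
mount /dev/drbd0 /mydata
```

The file copied on node1 (inittab) is visible on node2, confirming that writes were replicated at the block level.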