Implementing a cluster filesystem with LVM
Published 2019-06-02 16:23:08 · Editor: 雪饮
Last time we built a cluster filesystem over iSCSI on top of a partition of a device; this time the goal is to build the clustered logical-volume filesystem on a whole device rather than one of its partitions.
Preparation on the jump host (the iSCSI target)
Prepare a new disk device and leave it unpartitioned:
[root@nfs ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 2610 20860402+ 8e Linux LVM
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb4 1 2610 20964793+ 5 Extended
/dev/sdb5 1 2433 19543009+ 8e Linux LVM
Disk /dev/sdc: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 1217 9775521 83 Linux
/dev/sdc2 1218 2610 11189272+ 83 Linux
Disk /dev/sdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Configure the target in /etc/tgt/targets.conf as follows:
<target xy22080702>
<backing-store /dev/sdd>
vendor_id MageEdu
lun 8
</backing-store>
initiator-address 192.168.2.0/24
incominguser xy220807 xy220807
</target>
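As a side note, roughly the same target can also be created at runtime with tgtadm instead of editing targets.conf. The following is only a sketch (the target id 1 is my assumption), and anything created this way is lost when tgtd restarts:
# create the target (tid 1 assumed) and attach the whole disk as LUN 8
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname xy22080702
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 8 --backing-store /dev/sdd
# restrict access to the cluster subnet and bind the CHAP account
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address 192.168.2.0/24
tgtadm --lld iscsi --mode account --op new --user xy220807 --password xy220807
tgtadm --lld iscsi --mode account --op bind --tid 1 --user xy220807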
Restart the service and check the result:
[root@nfs ~]# service tgtd restart
Stopping SCSI target daemon: Stopping target framework daemon
[ OK ]
Starting SCSI target daemon: Starting target framework daemon
[root@nfs ~]# tgtadm --lld iscsi --mode target --op show
Target 1: xy22080702
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 8
Type: disk
SCSI ID: IET 00010008
SCSI SN: beaf18
Size: 21475 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/sdd
Backing store flags:
Account information:
xy220807
ACL information:
192.168.2.0/24
Preparation on the cluster nodes
Install clustered LVM (lvm2-cluster) and enable cluster-wide locking on every node:
[root@node1 ~]# yum install lvm2-cluster
[root@node1 ~]# lvmconf --enable-cluster
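As far as I can tell, all lvmconf --enable-cluster does is switch LVM to clustered locking (locking_type = 3, handled by clvmd) in /etc/lvm/lvm.conf, which is easy to confirm on each node:
# locking_type = 3 means LVM metadata operations go through clvmd's cluster-wide locks
grep '^ *locking_type' /etc/lvm/lvm.conf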
Then start the cman service on every node, and start the clvmd service on each of the three nodes:
[root@node1 ~]# service clvmd start
Starting clvmd:
Activating VG(s): 2 logical volume(s) in volume group "VolGroup00" now active
clvmd not running on node node3.magedu.com
clvmd not running on node node2.magedu.com
[ OK ]
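If the nodes may be rebooted, it also makes sense to enable the services at boot time on every node; a sketch for the RHEL/CentOS 5 style init scripts used here:
# cman must come up before clvmd; the init script ordering already takes care of that
chkconfig cman on
chkconfig clvmd on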
Then reconfigure the CHAP login account on each node in /etc/iscsi/iscsid.conf:
node.session.auth.authmethod = CHAP
node.session.auth.username = xy220807
node.session.auth.password = xy220807
Then restart the iscsi service, run discovery, and log in. Note that the login attempted during the restart below fails with an authorization error, presumably because the node record created by an earlier discovery does not yet carry the CHAP credentials; re-running discovery recreates the record with the new defaults, after which the manual login succeeds:
[root@node1 ~]# service iscsi restart
iscsiadm: No matching sessions found
Stopping iSCSI daemon:
iscsid is stopped [ OK ]
Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default, target: xy22080702, portal: 192.168.2.189,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: xy22080702, portal: 192.168.2.189,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
[ OK ]
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.2.189
192.168.2.189:3260,1 xy22080702
[root@node1 ~]# iscsiadm -m node -T xy22080702 -p 192.168.2.189 -l
Logging in to [iface: default, target: xy22080702, portal: 192.168.2.189,3260] (multiple)
Login to [iface: default, target: xy22080702, portal: 192.168.2.189,3260] successful.
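The same discovery and login have to be repeated on the other nodes; assuming password-less ssh from node1 (purely for convenience, not required), something like this would do it:
# run discovery and login on node2 and node3 as well
for n in node2 node3; do
    ssh "$n" 'iscsiadm -m discovery -t sendtargets -p 192.168.2.189; iscsiadm -m node -T xy22080702 -p 192.168.2.189 -l'
done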
After that, every node can see the device that was just exported from the jump host, though the device name will not necessarily match the one on the jump host, since the number and ordering of disks on the jump host and on each node are not necessarily the same.
[root@node3 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 2610 20860402+ 8e Linux LVM
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
Then, on one node, turn the whole device into a physical volume:
[root@node3 ~]# pvcreate /dev/sdb
Writing physical volume data to disk "/dev/sdb"
Physical volume "/dev/sdb" successfully created
At this point every node can see the physical volume (a few nodes may sync a little slowly and need a moment before it shows up):
[root@node2 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup00 lvm2 a-- 19.88G 0
/dev/sdb lvm2 a-- 20.00G 20.00G
Then create the volume group on one node:
[root@node2 ~]# vgcreate clustervg /dev/sdb
Clustered volume group "clustervg" successfully created
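You can double-check that the volume group really is clustered from the VG attribute bits; the trailing 'c' marks a clustered VG:
# a clustered VG shows attributes like wz--nc, where the final 'c' means clustered
vgs -o vg_name,vg_attr clustervg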
Then create a 10G logical volume on one node:
[root@node2 ~]# lvcreate -L 10G -n clusterlv clustervg
Logical volume "clusterlv" created
Then, on one node, format the logical volume as a GFS2 cluster filesystem with two journals, which allows two cluster nodes to mount it at the same time. The cluster name in the lock table passed to -t (here tcluster) has to match the cman cluster's name:
[root@node2 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t tcluster:lktb1 /dev/clustervg/clusterlv
This will destroy any data on /dev/clustervg/clusterlv.
Are you sure you want to proceed? [y/n] y
Device: /dev/clustervg/clusterlv
Blocksize: 4096
Device Size 10.00 GB (2621440 blocks)
Filesystem Size: 10.00 GB (2621438 blocks)
Journals: 2
Resource Groups: 40
Locking Protocol: "lock_dlm"
Lock Table: "tcluster:lktb1"
UUID: 7724F4BA-F338-BB5C-0BE7-0693A6BCD446
The LVM cluster filesystem
Mount the cluster filesystem on one node:
[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
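With two journals the filesystem can be mounted by one more node at the same time; a sketch for node2, assuming the mount point does not exist there yet:
# each node that mounts a GFS2 filesystem needs its own journal
mkdir -p /mydata
mount -t gfs2 /dev/clustervg/clusterlv /mydata
If the mount should persist across reboots, an /etc/fstab entry such as /dev/clustervg/clusterlv /mydata gfs2 defaults 0 0, together with the gfs2 init service enabled, should take care of it, although that is not shown here.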
List all the tunable parameters of the cluster filesystem:
[root@node1 ~]# gfs2_tool gettune /mydata
new_files_directio = 0
new_files_jdata = 0
quota_scale = 1.0000 (1, 1)
logd_secs = 1
recoverd_secs = 60
statfs_quantum = 30
stall_secs = 600
quota_cache_secs = 300
quota_simul_sync = 64
statfs_slow = 0
complain_secs = 10
max_readahead = 262144
quota_quantum = 60
quota_warn_period = 10
jindex_refresh_secs = 60
log_flush_secs = 60
incore_log_blocks = 1024
Note: do not append a trailing slash to the mount-point path here.
Set the filesystem so that newly written files go straight to disk (direct I/O). By default the data sits in memory first; my understanding is that it is then flushed to disk asynchronously rather than synchronously, and with this setting enabled new files are written to disk synchronously:
[root@node1 ~]# gfs2_tool settune /mydata new_files_directio 1
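You can re-run gettune to confirm the setting took effect:
# should now report new_files_directio = 1
gfs2_tool gettune /mydata | grep new_files_directio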
Freeze the cluster filesystem
This freeze seems to be problematic. The first time I went through this whole exercise I spread the operations more or less evenly across the nodes, and when I reached the freeze step it just hung with no result for roughly one to two hours, so in the end I started over from scratch. The second time I did most of the operations on only two nodes, apart from the steps the cluster itself requires.
[root@node1 ~]# gfs2_tool freeze /mydata
Once the cluster filesystem is frozen, the other nodes can no longer touch files on it.
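A quick way to see this, assuming /mydata is also mounted on node2: the write below simply blocks until node1 unfreezes the filesystem:
# run on node2 while node1 holds the freeze; it hangs until the unfreeze
touch /mydata/freeze-test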
Unfreeze the cluster filesystem
gfs2_tool unfreeze /mydata
Increase the number of nodes that may mount the cluster filesystem
This step also hung for a long time, close to an hour. Eventually I could not wait any longer, started over once more, and this time skipped setting new_files_directio for the filesystem, and it succeeded. So my feeling is that the direct-I/O setting generated so much I/O load that it hung one node; I also tested from a second node at the time, and although that node never locked up completely, it kept hanging as well.
[root@node3 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv
Filesystem: /mydata
Old Journals 2
New Journals 3
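gfs2_tool should also be able to list the journals as a cross-check, though I did not capture that output here and the exact format varies between gfs2-utils versions:
# expect three journals (journal0, journal1, journal2) after the add
gfs2_tool journals /mydata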
Extend the LV under the cluster filesystem
[root@node3 ~]# lvextend -L 15G /dev/clustervg/clusterlv
Extending logical volume clusterlv to 15.00 GB
Logical volume clusterlv successfully resized
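Note that lvextend only enlarges the underlying block device; as far as I know, to let GFS2 actually use the extra 5G the filesystem itself still has to be grown, on one node where it is mounted:
# grow the GFS2 filesystem to fill the resized logical volume
gfs2_grow /mydata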
Keywords: lvm, gfs2, cluster, iscsi