RAID and LVM Combined Experiment

I. Objectives: 1. Add hard disks (four 10 GB disks in this experiment) and build RAID 5 arrays from them
2. Create an LVM setup on top of the arrays
3. Grow the LVM capacity

II. Environment: a CentOS guest created in a VMware virtual machine running on a Windows host

III. Procedure (hardware: tasks, circuit or block diagrams, operation steps, etc.; software: tasks, flowcharts, key code with comments, main stages, etc.)
Disk partitioning:
1. First, attach four 10 GB disks to the machine:
[root@DanCentOS67 daniel]# fdisk -l

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdf: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

2. Next, split each disk into two partitions, using /dev/sdc as an example.
Use fdisk to create the first partition, /dev/sdc1, of roughly 5 GB (/dev/sdc2 is created the same way, except the partition number becomes 2 and the cylinder range differs: sdc2 spans cylinders 654-1305):
[root@DanCentOS67 daniel]# fdisk /dev/sdc
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): 653
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
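The interactive session above can also be scripted by piping the answers into fdisk. Below is a minimal sketch of the keystroke sequence for one disk, matching this experiment's layout (two partitions of type fd, cylinder bounds 653/1305 assume the same 10 GB disks); `fdisk_answers` is a hypothetical helper name, not an fdisk feature.

```shell
# Emit the fdisk answers for one disk: partition 1 (cylinders 1-653),
# set its type to fd, partition 2 (cylinders 654-1305), set type fd, write.
fdisk_answers() {
  printf '%s\n' \
    n p 1 1 653 \
    t fd \
    n p 2 654 1305 \
    t 2 fd \
    w
}
# Destructive -- only pipe into fdisk on a scratch disk:
#   fdisk_answers | fdisk /dev/sdX
fdisk_answers
```

Note that after the first `t`, fdisk auto-selects partition 1 (as in the transcript above), while after the second one the partition number `2` must be given explicitly.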

3. Check the partitioning result with fdisk -l:
[root@DanCentOS67 daniel]# fdisk -l
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbd293e5b

Device Boot Start End Blocks Id System
/dev/sdc1 1 653 5245191 fd Linux raid autodetect
/dev/sdc2 654 1305 5237190 fd Linux raid autodetect
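As a quick cross-check of the Blocks column, shell arithmetic over fdisk's cylinder geometry (a sketch; the computed figure comes out slightly above fdisk's 5245191 because the reserved first track of 63 sectors is excluded from the partition):

```shell
# fdisk counts in cylinders of 16065 * 512 bytes; /dev/sdc1 spans
# cylinders 1-653 inclusive.
cyl_bytes=$((16065 * 512))            # 8225280, as fdisk reports
sdc1_kib=$((653 * cyl_bytes / 1024))  # size of 653 whole cylinders in KiB
echo "$sdc1_kib"                      # ~5.0 GiB, close to the 5245191
                                      # Blocks figure above
```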
4. Apply the same procedure to the remaining three disks (/dev/sdd, /dev/sde, /dev/sdf), then run fdisk -l to confirm that all of them have been partitioned:
[root@DanCentOS67 daniel]# fdisk -l

……

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbd293e5b

Device Boot Start End Blocks Id System
/dev/sdc1 1 653 5245191 fd Linux raid autodetect
/dev/sdc2 654 1305 5237190 fd Linux raid autodetect

At this point, the second-to-bottom layer of the structure (the partition layer) is complete.

5. Configuring RAID 5:
RAID 5 is used here to improve I/O performance while still protecting the data.
First load the raid5 kernel module:
[root@DanCentOS67 daniel]# modprobe raid5
Then combine /dev/sdc1, /dev/sdc2, and /dev/sdd1 into /dev/md127 (note that /dev/sdc contributes two of the three members, so a failure of that single disk would still destroy the array; this layout is only acceptable for a practice setup):
[root@DanCentOS67 daniel]# mdadm --create /dev/md127 --level=5 --raid-devices=3 /dev/sd[cd]1 /dev/sdc2
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=5245188K mtime=Thu Jan 1 00:00:00 1970
mdadm: /dev/sdd1 appears to contain an ext2fs file system
size=5245188K mtime=Thu Jan 1 00:00:00 1970
mdadm: /dev/sdc2 appears to contain an ext2fs file system
size=5237188K mtime=Thu Jan 1 00:00:00 1970
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md127 started.
Check the status of /dev/md127:
[root@DanCentOS67 daniel]# mdadm --misc --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Thu Mar 9 12:37:43 2017
Raid Level : raid5
Array Size : 10465280 (9.98 GiB 10.72 GB)
Used Dev Size : 5232640 (4.99 GiB 5.36 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Thu Mar  9 12:41:44 2017
      State : clean 

Active Devices : 3
Working Devices : 3

As shown, the usable size comes to about 10 GB, i.e. 5 GB * (3-1) = 10 GB; one member's worth of capacity holds parity (see how RAID 5 works for the underlying principle).
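The detail output above can be cross-checked with the same (n-1) rule, using the figures mdadm reports:

```shell
# A 3-device RAID 5 keeps (n-1) devices' worth of data; the remaining
# device's worth of capacity is consumed by distributed parity.
used_dev_kib=5232640   # "Used Dev Size" from mdadm --detail, in KiB
n=3                    # "Raid Devices"
echo $(( (n - 1) * used_dev_kib ))   # 10465280, the reported Array Size
```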

6. Likewise, combine /dev/sdd2, /dev/sde1, and /dev/sde2 into /dev/md126:
[root@DanCentOS67 daniel]# mdadm --create /dev/md126 --level=5 --raid-devices=3 /dev/sd[de]2 /dev/sde1
Combine /dev/sdf1 and /dev/sdf2 into /dev/md125 (a two-member RAID 5 degenerates into a mirror):
[root@DanCentOS67 daniel]# mdadm --create /dev/md125 --level=5 --raid-devices=2 /dev/sdf1 /dev/sdf2
At this point all the RAID 5 arrays are in place.
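For review, the three mdadm invocations of this step can be printed in one place as a dry run. This is a sketch: `emit_md` is a hypothetical helper that only echoes the command lines, it does not touch any device.

```shell
# Print (rather than execute) each mdadm --create command.
# $1 = md device, $2 = member count, remaining args = member partitions.
emit_md() {
  md=$1; n=$2; shift 2
  echo "mdadm --create $md --level=5 --raid-devices=$n $*"
}
emit_md /dev/md127 3 /dev/sdc1 /dev/sdd1 /dev/sdc2
emit_md /dev/md126 3 /dev/sdd2 /dev/sde2 /dev/sde1
emit_md /dev/md125 2 /dev/sdf1 /dev/sdf2
```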

7. Configuring LVM:
First, initialize /dev/md127, /dev/md126, and /dev/md125 as Physical Volumes (PVs):
[root@DanCentOS67 daniel]# pvcreate /dev/md127 /dev/md126 /dev/md125
Physical volume "/dev/md127" successfully created
Physical volume "/dev/md126" successfully created
Physical volume "/dev/md125" successfully created
Scan for the Physical Volume changes:
[root@DanCentOS67 daniel]# pvscan
PV /dev/md125 lvm2 [4.99 GiB]
PV /dev/md126 lvm2 [9.98 GiB]
PV /dev/md127 lvm2 [9.98 GiB]
Total: 3 [24.95 GiB] / in use: 0 [0 ] / in no VG: 3 [24.95 GiB]
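The Total line is simply the sum of the three per-PV sizes (awk handles the decimals):

```shell
# 4.99 + 9.98 + 9.98 GiB from the pvscan lines above.
awk 'BEGIN { printf "%.2f\n", 4.99 + 9.98 + 9.98 }'   # 24.95, as reported
```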
Show the details of each Physical Volume:
[root@DanCentOS67 daniel]# pvdisplay
"/dev/md125" is a new physical volume of "4.99 GiB"
--- NEW Physical volume ---
PV Name /dev/md125
VG Name
Here we first add /dev/md127 and /dev/md126 to a Volume Group:
[root@DanCentOS67 daniel]# vgcreate VolGroup1 /dev/md127 /dev/md126
Volume group "VolGroup1" successfully created
Check the Volume Group usage:
[root@DanCentOS67 daniel]# vgdisplay
--- Volume group ---
VG Name VolGroup1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 19.95 GiB
PE Size 4.00 MiB
Total PE 5108
Alloc PE / Size 0 / 0
Free PE / Size 5108 / 19.95 GiB
VG UUID dsTlXU-D3Y5-Fqb3-wCOl-LHmj-7Qyf-hXU186
With the Volume Group created, we carve new Logical Volumes out of it:
[root@DanCentOS67 daniel]# lvcreate -l 2500 -n LogicalVol1 VolGroup1
Logical volume "LogicalVol1" created.
[root@DanCentOS67 daniel]# lvcreate -l 2608 -n LogicalVol2 VolGroup1
Logical volume "LogicalVol2" created.
As shown earlier, the Volume Group holds 5108 PEs in total; each PE (Physical Extent) defaults to 4.00 MiB, for 19.95 GiB overall. We allocated 2500 PEs (10000 MiB, i.e. 9.77 GiB) to LogicalVol1 and 2608 PEs (10432 MiB, i.e. 10.19 GiB) to LogicalVol2.
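The PE bookkeeping for the two lvcreate calls is plain integer arithmetic:

```shell
# 4 MiB per Physical Extent, as reported by vgdisplay.
pe_mib=4
echo "$((2500 * pe_mib)) MiB"   # LogicalVol1: 10000 MiB (~9.77 GiB)
echo "$((2608 * pe_mib)) MiB"   # LogicalVol2: 10432 MiB (~10.19 GiB)
echo "$((5108 * pe_mib)) MiB"   # whole VG:    20432 MiB (~19.95 GiB)
```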

Check how the Logical Volumes were laid out:
[root@DanCentOS67 daniel]# lvdisplay
--- Logical volume ---
LV Path /dev/VolGroup1/LogicalVol1
LV Name LogicalVol1
VG Name VolGroup1
LV UUID 8w2ZhQ-hTSr-Sg0z-OJRv-h9aq-coZw-zeYCtI
LV Write Access read/write
LV Creation host, time DanCentOS67, 2017-03-09 13:37:56 +0000
LV Status available

open 0

LV Size 9.77 GiB
Current LE 2500
Segments 1
Allocation inherit
Read ahead sectors auto

- currently set to 4096
Block device 253:0

--- Logical volume ---
LV Path /dev/VolGroup1/LogicalVol2
LV Name LogicalVol2
VG Name VolGroup1
LV UUID JdQsXc-OMwk-A6ZB-e3vJ-rLPv-hNek-3yNVVg
LV Write Access read/write
LV Creation host, time DanCentOS67, 2017-03-09 13:38:08 +0000
LV Status available

open 0

LV Size 10.19 GiB
Current LE 2608
Segments 2
Allocation inherit
Read ahead sectors auto

- currently set to 4096
Block device 253:1
Check the devices with fdisk -l:
[root@DanCentOS67 daniel]# fdisk -l

……

Disk /dev/mapper/VolGroup1-LogicalVol1: 10.5 GB, 10485760000 bytes
255 heads, 63 sectors/track, 1274 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup1-LogicalVol2: 10.9 GB, 10938744832 bytes
255 heads, 63 sectors/track, 1329 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
Format both Logical Volumes as ext4 with mkfs.ext4:
[root@DanCentOS67 daniel]# mkfs.ext4 /dev/mapper/VolGroup1-LogicalVol1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
640848 inodes, 2560000 blocks
128000 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2621440000
79 block groups
32768 blocks per group, 32768 fragments per group
8112 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
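The Stride/Stripe-width figures in the mkfs output come from the RAID geometry underneath. A sketch of the arithmetic, assuming the md default 512 KiB chunk size (the chunk size is not shown in the output above):

```shell
chunk_kib=512   # assumed md chunk size (mdadm default)
block_kib=4     # ext4 block size from the mkfs output
data_disks=2    # a 3-device RAID 5 has 2 data blocks per stripe
stride=$((chunk_kib / block_kib))
echo "stride=$stride stripe_width=$((stride * data_disks))"
```

This matches the "Stride=128 blocks, Stripe width=256 blocks" line that mke2fs auto-detected.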
[root@DanCentOS67 daniel]# mkfs.ext4 /dev/mapper/VolGroup1-LogicalVol2

After formatting, mount the resulting filesystems onto directories (edit /etc/fstab if they should be mounted automatically at boot):
[root@DanCentOS67 daniel]# mkdir /mnt/LV1
[root@DanCentOS67 daniel]# mkdir /mnt/LV2
[root@DanCentOS67 daniel]# mount /dev/mapper/VolGroup1-LogicalVol1 /mnt/LV1
[root@DanCentOS67 daniel]# mount /dev/mapper/VolGroup1-LogicalVol2 /mnt/LV2
At this point the LVM setup is complete.
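Objective 3 (growing the LVM capacity) can build on what is already in place: /dev/md125 was initialized as a PV but never added to the VG. A sketch of the command sequence, printed here as a dry run by a hypothetical `grow_lvm_plan` helper (the real commands need root and a live VG):

```shell
# Print the plan for growing LogicalVol2 with the spare PV:
# extend the VG, grow the LV into the free space, then resize ext4 online.
grow_lvm_plan() {
  cat <<'EOF'
vgextend VolGroup1 /dev/md125
lvextend -l +100%FREE /dev/VolGroup1/LogicalVol2
resize2fs /dev/mapper/VolGroup1-LogicalVol2
EOF
}
grow_lvm_plan
```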
IV. Results and Analysis (summarize the experimental results, describe problems encountered during the experiment and how they were solved, or point out the experiment's limitations.)

LVM (Logical Volume Manager) is a mechanism for managing disk partitions under Linux. It is a logical layer that sits above the disks and partitions and below the filesystem, added to make partition management more flexible. With LVM, an administrator can manage partitions with ease: for example, joining several partitions into a single volume group (VG) that acts as a storage pool. On top of a volume group the administrator can freely create logical volumes (LVs), and on top of those, filesystems. LVM makes it convenient to resize volume groups, and storage can be named, managed, and allocated in groups by purpose, e.g. "development" and "sales", instead of by physical disk names such as "sda" and "sdb". When a new disk is added to the system, the administrator does not need to move files onto it to make use of the new space; the filesystem can simply be extended across the disks. LVM is highly scalable and convenient to use: volume groups and logical volumes can be resized easily, and the filesystem size adjusted accordingly.


Copyright notice: this article is an original work by qq_41426449, released under the CC 4.0 BY-SA license; when reposting, please include a link to the original and this notice.