RAID On CentOS
First, let's understand the term RAID.
RAID stands for Redundant Array of Inexpensive Disks.
RAID is used to speed up disk reads and writes and to provide redundancy across disks.
RAID types:
1: Software RAID
2: Hardware RAID
3: Fake RAID
With software RAID, CentOS itself manages the RAID array and provides a set of commands (such as mdadm) to set up and manage it.
Software RAID isn’t as fast as hardware RAID, because the operating system has to send the duplicate data itself, rather than having dedicated hardware do it. However, software RAID can be used with any block device, so it is possible to use RAID to link two USB devices together, for example, or even one USB device, one SATA disk, and a SAN volume.
Some people prefer software RAID over hardware RAID for these reasons, but where performance is key (such as a busy database server), software RAID might not be fast enough.
Fake RAID describes a technology that provides the basics for RAID but doesn’t actually implement it. Instead, fake RAID leaves the implementation up to the host operating system. Few people use fake RAID, and it isn’t well supported under Linux, as software RAID is so much better. Avoid fake RAID like the plague and make sure it’s disabled in the BIOS if your motherboard offers it.
There are six different RAID levels. However, here we cover only the three most commonly used with software RAID: RAID0, RAID1, and RAID5.
RAID 0
In RAID0 (also known as striping), disks are logically combined into one big disk, and data is distributed evenly across them. In this level, hard disk space is maximized to the fullest, but there is no redundancy: if one disk fails, the data on all disks in the array is lost. Reading and writing are fast, since reads and writes can be done simultaneously across all the disks.
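For example, a two-disk RAID0 array could be created with mdadm along these lines (just a sketch; /dev/sdb1 and /dev/sdc1 are hypothetical partitions, not the ones used later in this guide):
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1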
RAID 1
RAID1 is also called mirroring, since data on one disk is cloned on another. Data is written to both disks, so generally write operations take longer than on a single disk, but read operations are faster, because CentOS can choose which disk to read from.
If a single disk fails, the system will be able to run from the remaining disk. RAID1 also supports the concept of hot spares: disks that are part of an array but aren’t actively being used. These disks don’t actually do anything until one of the disks in the mirror fails. When this happens, the failed disk is removed, and the hot spare is added to the mirror itself. The data is then copied from the good disk onto the spare, and the array is brought back up to full strength.
The biggest disadvantage of RAID1 is that for every gigabyte of usable space, another gigabyte is used to provide redundancy. Therefore, if you have one 20-GB disk and one 30-GB disk, the maximum size you can allocate to RAID1 is 20GB.
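To illustrate the hot spares mentioned above, a mirror with one spare could be created like this (a sketch; all three partitions here are hypothetical):
# mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1
If either mirror member fails, mdadm automatically rebuilds onto /dev/sdd1.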
RAID 5
RAID5 (also known as “striped with parity”) is an attempt to get the best of both worlds. It aims to get as much of the speed of RAID0 as possible while retaining as much of the redundancy of RAID1. What we end up with is a system that can take the loss of a single hard disk (a RAID5 array needs at least three disks) but isn’t constrained by RAID1’s one-to-one requirement. RAID5 is probably the most common type of RAID found in enterprise environments, as it offers a good compromise between speed and redundancy.
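A minimal three-disk RAID5 array could be created like so (again a sketch with hypothetical partitions):
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1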
Now for the practical approach.
First, check which disks are installed in the server:
# fdisk -l
Disk /dev/hda: 8388 MB, 8388108288 bytes
255 heads, 63 sectors/track, 1019 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start       End    Blocks   Id  System
/dev/hda1   *        1        13    104391   83  Linux
/dev/hda2           14      1019   8080695   8e  Linux LVM
Disk /dev/hdb: 10.4 GB, 10485522432 bytes
16 heads, 63 sectors/track, 20317 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Disk /dev/hdb doesn't contain a valid partition table
Disk /dev/hdd: 10.4 GB, 10485522432 bytes
16 heads, 63 sectors/track, 20317 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Disk /dev/hdd doesn't contain a valid partition table
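Since /dev/hdb and /dev/hdd have no partition tables yet, and the mdadm command below operates on /dev/hdb1 and /dev/hdd1, each disk first needs a single partition of type fd (Linux raid autodetect). One way to do that interactively with fdisk (a rough sketch of the keystrokes; repeat for /dev/hdd):
# fdisk /dev/hdb
n    (create a new primary partition, number 1, accept the default start and end)
t    (change the partition type)
fd   (Linux raid autodetect)
w    (write the table and exit)
With both partitions in place, the mirror can be created: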
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1
mdadm: size set to 10239616K
mdadm: array /dev/md0 started.
The new RAID device is represented by /dev/md0.
Verifying RAID Creation
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdd1[1] hdb1[0]
10239616 blocks [2/2] [UU]
[=======>.............]  resync = 38.5% (3943936/10239616) finish=1.9min speed=53941K/sec
unused devices: <none>
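Here [UU] means both mirror members are up, and the resync line shows the initial synchronization in progress. For a fuller report, you can also query the array directly:
# mdadm --detail /dev/md0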
If there are no problems, we can format md0 using mkfs.ext3, like so:
# mkfs.ext3 /dev/md0
To attach the array to /var/cache, we just add the following line to /etc/fstab:
/dev/md0 /var/cache ext3 defaults 1 2
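The array will then be mounted automatically on every boot. To mount it right away without rebooting:
# mount /var/cache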
Now we need to create a configuration file for mdadm so that the RAID array is assembled correctly on boot:
# mdadm --detail --scan --verbose > /etc/mdadm.conf
The above command queries all active arrays on the system, collects their details, and places them in /etc/mdadm.conf. This information is then used by CentOS during booting to re-create the arrays.
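The resulting file contains ARRAY lines roughly like the following (the UUID is a placeholder; yours will differ):
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<array-uuid>
   devices=/dev/hdb1,/dev/hdd1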
To monitor the health of the RAID array, use the below command:
# mdadm --monitor --mail=info-RAID@emaildomain.com --delay 1800 /dev/md0
The above command polls the array every 1800 seconds, and if there is any problem, a mail report is sent to info-RAID@emaildomain.com.
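To confirm that mail delivery actually works before a real failure happens, mdadm can generate a test alert for each array it finds (--oneshot checks the arrays once and exits; the mail address is the same placeholder as above):
# mdadm --monitor --scan --oneshot --test --mail=info-RAID@emaildomain.com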
DONE :)