How To Create RAID In Linux With mdadm?
Hi, today I am going to show you how to make a RAID (redundant array of inexpensive disks) array. Why do we need it? Think about customer data stored on a server. Say one day a disk dies, and then all of your customer data is gone. This is very bad news. What if the disk dies but the data survives?
That would be very good. You may think that disks do not die frequently. It is not impossible, though, especially on the enterprise side, and it can happen at home too. RAID is a technology that protects disks with extra disks. Say we have 3 disks used for separate projects, and we want to back them up with high availability during working hours. RAID can be implemented in hardware, firmware, or software. Hardware RAID is the best, fastest, and most reliable way, but it may be expensive or overkill for a project. Firmware RAID is generally provided by motherboard manufacturers as a cheap alternative to hardware RAID. Software RAID is the cheapest and least reliable way to make RAID, but it can be suitable for home or non-professional usage. I will not delve into the details of RAID levels, but you can find more about them on Wikipedia. In Linux, software RAID consists of a kernel module and user space programs.
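To give a feel for what the levels trade off, here is a rough sketch of usable capacity per level, given n equally sized disks. The helper function name is my own, for illustration only:

```shell
#!/bin/sh
# Approximate usable capacity for common RAID levels, given
# n equally sized disks of s GB each. Not part of mdadm itself.
usable_gb() {
    level=$1; n=$2; s=$3
    case $level in
        0)   echo $(( n * s )) ;;       # striping: all capacity, no redundancy
        1)   echo "$s" ;;               # mirroring: one disk's worth
        4|5) echo $(( (n - 1) * s )) ;; # one disk's worth goes to parity
        *)   echo "unknown" ;;
    esac
}

usable_gb 4 4 1   # four 1 GB disks in RAID4 -> 3 GB usable
```

With four 1 GB disks in RAID4, this matches the 3 GB array size mdadm reports later in this article.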
In our setup, 3 disks actively store data, and the 4th disk holds parity that protects the other 3. It does not store a full copy of the 3 disks' data, as that would be impossible when the 3 disks are near full.
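How can one disk protect three without holding a full copy? RAID4 stores the XOR of the data blocks on the dedicated parity disk, so any single lost block can be rebuilt from the survivors. A toy sketch of the idea in plain shell arithmetic (single bytes standing in for whole blocks, nothing here touches mdadm):

```shell
#!/bin/sh
# Toy XOR parity: three data "blocks" (single bytes) and one parity byte.
d1=170  # 0xAA
d2=204  # 0xCC
d3=15   # 0x0F
parity=$(( d1 ^ d2 ^ d3 ))

# Pretend the disk holding d2 died; XOR the survivors with the parity.
rebuilt=$(( parity ^ d1 ^ d3 ))
echo "$rebuilt"   # same value as the lost d2
```

This is why the parity disk only needs to be as large as one data disk, however many data disks there are.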
First, list the disks. We will make the array from the vdb, vdc, vdd and vde disks. I am doing this in a VM, so your disk names may be different, like sda, sdb, etc. All disks are 1 GB.
$ ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vdb  /dev/vdc  /dev/vdd  /dev/vde
Install mdadm on Ubuntu with this command:
$ apt-get install mdadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libpam-systemd postfix ssl-cert
Suggested packages:
  procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre sasl2-bin
  dovecot-common postfix-cdb mail-reader postfix-doc openssl-blacklist
Recommended packages:
  default-mta mail-transport-agent
The following NEW packages will be installed:
  mdadm postfix ssl-cert
The following packages will be upgraded:
  libpam-systemd
1 upgraded, 3 newly installed, 0 to remove and 3 not upgraded.
13 not fully installed or removed.
Need to get 1,464 kB/1,490 kB of archives.
After this operation, 4,904 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
If the RAID kernel module is not loaded, load it:
$ modprobe raid456
Here we create the RAID array. We give the number of disks with --raid-devices, the RAID level with --level, and then the disks to be used in the array. The --metadata option is optional; it sets the RAID metadata version. After the command is executed we get a message that says our new disk, which we named md0, is created. If you want to create RAID0 use --level=stripe, and for RAID1 use --level=mirror.
$ mdadm --create /dev/md0 --metadata 1.2 --level=4 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde
mdadm: array /dev/md0 started.
We also got a mail about our RAID operation. As you can see in the message, the physical disks are numbered, and the RAID levels supported by the loaded modules are listed in the Personalities row.
$ cat /var/mail/root
From root@ubuntu Mon Jul 21 13:48:15 2014
Return-Path: <root@ubuntu>
X-Original-To: root
Delivered-To: root@ubuntu
Received: by ubuntu (Postfix, from userid 0)
	id 8E4CC208B9; Mon, 21 Jul 2014 13:48:15 +0300 (EEST)
From: mdadm monitoring <root@ubuntu>
To: root@ubuntu
Subject: DegradedArray event on /dev/md0:ubuntu
Message-Id: <20140721104815.8E4CC208B9@ubuntu>
Date: Mon, 21 Jul 2014 13:48:15 +0300 (EEST)

This is an automatically generated mail message from mdadm
running on ubuntu

A DegradedArray event had been detected on md device /dev/md0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid4 vde[4] vdd[2] vdc[1] vdb[0]
      3142656 blocks super 1.2 level 4, 512k chunk, algorithm 0 [4/3] [UUU_]
      [>....................]  recovery =  0.0% (0/1047552) finish=1091.2min speed=0K/sec

unused devices: <none>
To see the RAID device status, read /proc/mdstat or use mdadm -D. It shows the total disk count and the total usable size, which is 3/4 of the total disk capacity. As you can see, the version is 1.2, which we set while creating the array.
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid4 vde[4] vdd[2] vdc[1] vdb[0]
      3142656 blocks super 1.2 level 4, 512k chunk, algorithm 0 [4/4] [UUUU]

unused devices: <none>

$ mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jul 21 13:48:15 2014
     Raid Level : raid4
     Array Size : 3142656 (3.00 GiB 3.22 GB)
  Used Dev Size : 1047552 (1023.17 MiB 1072.69 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jul 21 13:48:28 2014
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : ubuntu:0  (local to host ubuntu)
           UUID : 515af24c:86bb12f7:015dfebf:6bd9b35b
         Events : 18

    Number   Major   Minor   RaidDevice State
       0     253       16        0      active sync   /dev/vdb
       1     253       32        1      active sync   /dev/vdc
       2     253       48        2      active sync   /dev/vdd
       4     253       64        3      active sync   /dev/vde
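The bracketed field like `[UUUU]` in /proc/mdstat is a quick health indicator: `U` means a member is up, `_` means it is failed or missing. A small sketch of checking it in a script (the function name is mine; it reads from stdin here so it can be fed a sample line, but you would normally pipe in /proc/mdstat):

```shell
#!/bin/sh
# Report whether an mdstat status field shows a degraded array.
# An '_' in the [UUU_] field marks a failed or missing member disk.
check_mdstat() {
    if grep -o '\[[U_]*\]' | grep -q '_'; then
        echo degraded
    else
        echo healthy
    fi
}

# Sample taken from the article's mail output while the array was rebuilding:
echo 'algorithm 0 [4/3] [UUU_]' | check_mdstat
```

Something like `check_mdstat < /proc/mdstat` could run from cron as a poor man's monitor, though mdadm's own --monitor mode (which sent the mail above) is the proper tool.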
You can save the RAID configuration to a file:
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 21 Jul 2014 13:43:42 +0300
# by mkconf $Id$
ARRAY /dev/md/ubuntu:0 metadata=1.2 name=ubuntu:0 UUID=515af24c:86bb12f7:015dfebf:6bd9b35b
To see the status of a single disk in the array:
$ mdadm -E /dev/vdd
/dev/vdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 515af24c:86bb12f7:015dfebf:6bd9b35b
           Name : ubuntu:0  (local to host ubuntu)
  Creation Time : Mon Jul 21 13:48:15 2014
     Raid Level : raid4
   Raid Devices : 4

 Avail Dev Size : 2096128 (1023.67 MiB 1073.22 MB)
     Array Size : 3142656 (3.00 GiB 3.22 GB)
  Used Dev Size : 2095104 (1023.17 MiB 1072.69 MB)
    Data Offset : 1024 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e5cc5bd9:decd247a:16c39662:abeb38bb

    Update Time : Mon Jul 21 14:07:48 2014
       Checksum : c07c9d94 - correct
         Events : 18

     Chunk Size : 512K

    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing)
Create a file system on the new RAID device:
$ mkfs.ext4 /dev/md0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
196608 inodes, 785664 blocks
39283 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=805306368
24 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
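Notice that mkfs.ext4 picked Stride=128 and Stripe width=384 on its own; those follow from the array geometry. Stride is the RAID chunk size divided by the filesystem block size, and stripe width is the stride times the number of data disks (three here, since the fourth RAID4 disk holds parity). The arithmetic, as a sketch:

```shell
#!/bin/sh
# Derive the ext4 stride/stripe-width that mkfs chose for this array.
chunk_kb=512      # mdadm chunk size (512k, from /proc/mdstat)
block_kb=4        # ext4 block size (4096 bytes, from the mkfs output)
data_disks=3      # 4 RAID4 members minus 1 parity disk

stride=$(( chunk_kb / block_kb ))
stripe_width=$(( stride * data_disks ))
echo "stride=$stride stripe_width=$stripe_width"
```

Aligning the filesystem to the chunk size this way lets ext4 spread full stripes across the data disks instead of splitting writes awkwardly across chunk boundaries.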
Mount the new device and create some files. Then show the status of the mounted disks.
$ mount /dev/md0 /mnt
$ mkdir /mnt/ismail
$ ls /mnt/
ismail  lost+found
$ df -lh
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        18G  1.7G   16G  10% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            235M  4.0K  235M   1% /dev
tmpfs            49M  268K   49M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            245M     0  245M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/md0        2.9G  4.6M  2.8G   1% /mnt
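If you want to convince yourself the array stores data intact, write a file and compare checksums before and after a sync. A sketch of that check (using a temporary directory as a stand-in for /mnt, so it can be tried without the array):

```shell
#!/bin/sh
# Write a test file, then verify it reads back bit-identical.
# A temp dir stands in here for the array mounted at /mnt.
dir=$(mktemp -d)
dd if=/dev/urandom of="$dir/testfile" bs=1k count=64 2>/dev/null
before=$(sha256sum "$dir/testfile" | cut -d' ' -f1)
sync                                   # flush writes down to the device
after=$(sha256sum "$dir/testfile" | cut -d' ' -f1)
[ "$before" = "$after" ] && echo "checksums match"
rm -rf "$dir"
```

On the real mount point, running this against /mnt after pulling a member disk is a quick way to see that a degraded RAID4 array still serves correct data.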
Say disk vde is corrupted and we need to remove it. A disk must first be marked as failed before it can be removed from the array:

$ mdadm --fail /dev/md0 /dev/vde
$ mdadm --remove /dev/md0 /dev/vde
Or add a new disk to the array:
$ mdadm --add /dev/md0 /dev/vdf
To use the RAID array after a restart, it must be reassembled:
$ mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

An easier way to reassemble:

$ mdadm --assemble --scan
Resources:
https://raid.wiki.kernel.org/index.php/RAID_setup
Ubuntu 14.04 Server Guide