Hi, today I am going to show you how to create a RAID (redundant array of inexpensive disks) array. Why do we need it? Think about customer data stored on a server. One day a disk dies, and all of the customer data is gone with it. That is very bad news. What if a disk dies but the data survives? That would be very good. You may think that disks do not die frequently. True, but it is not impossible, whether on the enterprise side or in personal use at home. RAID is a technology that protects data by using extra disks.

Say we have 3 disks used for separate projects, and we want to keep them backed up and highly available during working hours. RAID can be implemented with dedicated hardware, with firmware, or with software. Hardware RAID is the fastest and most reliable way, but it may be expensive or overkill for the project. Firmware RAID is generally provided by motherboard manufacturers as a cheap alternative to hardware RAID. Software RAID is the cheapest and least reliable way to build an array, but it can be suitable for home or non-professional use. I will not delve into the details of RAID levels, but you can find more about them on Wikipedia. In Linux, software RAID consists of a kernel module and user-space programs.
List Disk Drives
3 disks are used actively to store data, while the 4th (parity) disk protects the other 3. It does not store a full copy of the data on the 3 disks; that would be impossible once the 3 disks are nearly full. Instead it stores parity information that can be used to rebuild any single failed disk.
First, list the disks. We will create an array with the vdb, vdc, vdd, and vde disks. I am doing this in a VM, so your disk names may differ, like sda, sdb, etc. Each disk is 1 GB.
$ ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vdb  /dev/vdc  /dev/vdd  /dev/vde
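If you also want to confirm the disk sizes (the 1 GB mentioned above), the lsblk command is handy; the device names below are the ones from this VM and may differ on your system.
$ lsblk /dev/vdb /dev/vdc /dev/vdd /dev/vde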
Install mdadm
Install mdadm on Ubuntu, Debian, Mint, or Kali with the following apt command.
$ sudo apt install mdadm

Load mdadm Kernel Module
mdadm provides its services via low-level drivers. The RAID 4/5/6 functionality lives in a kernel module named raid456, which can be loaded with the following command. Loading a Linux kernel module requires root privileges, which we get with the sudo command like below.
$ sudo modprobe raid456
We can check whether the raid456 kernel module is loaded properly with the lsmod command, which lists all currently loaded kernel modules.
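For example, filtering the lsmod output with grep is a quick way to confirm the module is present; the exact list of dependent modules may differ on your system.
$ lsmod | grep raid456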

Create RAID
We give the number of disks with --raid-devices, the RAID level with --level, and then the disks to be used in the array. The --metadata option is optional; it sets the RAID metadata (superblock) version. After the command is executed we get a message saying that our new device, which we named md0, has been started. If you want to create RAID 0 use --level=stripe, and for RAID 1 use --level=mirror.
$ mdadm --create /dev/md0 --metadata 1.2 --level=4 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde
mdadm: array /dev/md0 started.
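As a sketch of the mirror case mentioned above, a two-disk RAID 1 array could be created like this; /dev/md1 and the two member disks are hypothetical names here, and the disks must not already belong to another array.
$ mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/vdg /dev/vdh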
Print mdadm Information
To see the RAID device status we print the /proc/mdstat file like below. It shows the total disk count and the total usable size, which is 3/4 of the raw capacity because one disk's worth of space holds parity. We can also query the array in detail with mdadm -D. As you can see, the metadata version is 1.2, which we set while creating the array.
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid4 vde[4] vdd[2] vdc[1] vdb[0]
      3142656 blocks super 1.2 level 4, 512k chunk, algorithm 0 [4/4] [UUUU]

unused devices: <none>

$ mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jul 21 13:48:15 2014
     Raid Level : raid4
     Array Size : 3142656 (3.00 GiB 3.22 GB)
  Used Dev Size : 1047552 (1023.17 MiB 1072.69 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Jul 21 13:48:28 2014
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : ubuntu:0  (local to host ubuntu)
           UUID : 515af24c:86bb12f7:015dfebf:6bd9b35b
         Events : 18

    Number   Major   Minor   RaidDevice State
       0     253       16        0      active sync   /dev/vdb
       1     253       32        1      active sync   /dev/vdc
       2     253       48        2      active sync   /dev/vdd
       4     253       64        3      active sync   /dev/vde
Save/Backup mdadm Configuration
To save or back up the mdadm configuration, use the --detail and --scan options like below and append the output to the mdadm configuration file.
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 21 Jul 2014 13:43:42 +0300
# by mkconf $Id$
ARRAY /dev/md/ubuntu:0 metadata=1.2 name=ubuntu:0 UUID=515af24c:86bb12f7:015dfebf:6bd9b35b
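On Debian-based systems it is also common to regenerate the initramfs after updating mdadm.conf so the array can be assembled during early boot; treat this as an optional extra step, depending on your setup.
$ sudo update-initramfs -u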
List Disk Status and Array
We can list the status of a member disk and its array with the -E (examine) option by providing the disk name.
$ mdadm -E /dev/vdd
/dev/vdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 515af24c:86bb12f7:015dfebf:6bd9b35b
           Name : ubuntu:0  (local to host ubuntu)
  Creation Time : Mon Jul 21 13:48:15 2014
     Raid Level : raid4
   Raid Devices : 4

 Avail Dev Size : 2096128 (1023.67 MiB 1073.22 MB)
     Array Size : 3142656 (3.00 GiB 3.22 GB)
  Used Dev Size : 2095104 (1023.17 MiB 1072.69 MB)
    Data Offset : 1024 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e5cc5bd9:decd247a:16c39662:abeb38bb

    Update Time : Mon Jul 21 14:07:48 2014
       Checksum : c07c9d94 - correct
         Events : 18

     Chunk Size : 512K

    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing)
Create File System For New Disk
We will use the mkfs.ext4 command in order to create a file system on the /dev/md0 device.
$ mkfs.ext4 /dev/md0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
196608 inodes, 785664 blocks
39283 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=805306368
24 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Mount Disk
Mount the disk and create a directory in it. Also, show the status of the mounted file systems.
$ mount /dev/md0 /mnt
$ mkdir /mnt/ismail
$ ls /mnt/
ismail  lost+found
$ df -lh
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        18G  1.7G   16G  10% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            235M  4.0K  235M   1% /dev
tmpfs            49M  268K   49M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            245M     0  245M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/md0        2.9G  4.6M  2.8G   1% /mnt
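If you want the array to be mounted automatically at boot, an /etc/fstab entry like the following would do it; the /mnt mount point and the options here are just an illustration for this example setup.
/dev/md0   /mnt   ext4   defaults   0   2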
Remove Disk From Array
Say disk vde is corrupted and we need to remove it from the array.
$ mdadm --remove /dev/md0 /dev/vde
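Note that mdadm normally refuses to remove a device that is still marked as active; in that case the disk is marked as failed first and then removed. A minimal sketch using the explicit manage mode:
$ mdadm --manage /dev/md0 --fail /dev/vde
$ mdadm --manage /dev/md0 --remove /dev/vde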
Add Disk To Disk Array
We will use the --add option to add disk vdf to the md device md0.
$ mdadm --add /dev/md0 /dev/vdf
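After adding the disk, the array rebuilds the missing data onto it. The recovery progress can be watched in /proc/mdstat, for example:
$ watch cat /proc/mdstat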
Restart or Initialize New Disk
After a restart, the RAID device must be reassembled before it can be used again.
$ mdadm --assemble /dev/md0 /dev/vdb /dev/vdc /dev/vdd /dev/vde
An easier way is to let mdadm scan for all known arrays:
$ mdadm --assemble --scan