[GTALUG] RAID and IX-2-dl

Lennart Sorensen lsorense at csclub.uwaterloo.ca
Fri Feb 20 19:28:33 UTC 2015


On Thu, Feb 19, 2015 at 04:56:12PM -0500, D. Hugh Redelmeier wrote:
> I just bought a LenovoEMC / IOmega IX2-dl NAS box (cheap, of 
> course).  It has a pair of WD Green 2T disk drives.
> 
> The supplied firmware is Linux, of course.  With a pretty face.
> 
> /proc/cpuinfo says "Feroceon 88FR131 rev 1 (v5l)", which I think is an 
> unfortunately old ARM family.  It has 256M of RAM.  The kernel is 
> "2.6.31.8".
> 
> It offers me RAID 1, RAID 0, and "none".  I don't think that I want 
> either RAID and I'm not sure what "none" is.
> 
> I don't want RAID 1 because it doesn't give me much reliability 
> improvement for the price (halving the space), and it doesn't give me 
> any speed improvement.
> 
> I don't want RAID 0 since it gives me less reliability than no RAID.
> 
> I want two different filesystems so that when one goes south, the other 
> isn't lost.

I suspect "none" in fact means making each disk separate.  But it might not.

It could be that they mean:

RAID1: Mirror two identical disks
RAID0: Stripe data across two identical disks for speed
none: Concat the disks into one large disk, without added safety or
speed gains.
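If the box is using plain Linux md underneath (the lsblk output below
suggests it is), the three modes would correspond roughly to these mdadm
invocations.  This is only a sketch of my guess; the device names and the
exact commands the firmware actually runs are assumptions on my part:

```shell
# Hypothetical equivalents of the three modes, assuming ordinary Linux md
# on data partitions /dev/sda2 and /dev/sdb2 (would destroy existing data):

# RAID1: mirror the two partitions
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# RAID0: stripe across both partitions for speed
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

# "none": concatenate them end to end (md calls this "linear")
mdadm --create /dev/md1 --level=linear --raid-devices=2 /dev/sda2 /dev/sdb2
```

The appliance then puts LVM on top of whichever md device it created, as
the lsblk output shows.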

> Any advice on how to lightly prod this system to do what I want?
> 
> Here's some configuration information.
> 
> /etc/fstab doesn't mention the hard drives
> 
> First, here's how it looks with the RAID1 setup.
> 
> lsblk(8) says.  (This command is new to me.  I think that I like it):
> 
> 
> NAME                                MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> loop0                                 7:0    0 680.4M  1 loop  /mnt/apps
> loop1                                 7:1    0     8M  0 loop  /mnt/etc
> loop2                                 7:2    0   100K  0 loop  /oem
> sda                                   8:0    0   1.8T  0 disk  
> ├─sda1                                8:1    0    20G  0 part  
> │ └─md0                               9:0    0    20G  0 raid1 
> │   ├─md0_vg-BFDlv (dm-0)           253:0    0     4G  0 lvm   /boot
> │   └─md0_vg-vol1 (dm-1)            253:1    0    16G  0 lvm   /mnt/system
> └─sda2                                8:2    0   1.8T  0 part  
>   └─md1                               9:1    0   1.8T  0 raid1 
>     └─6366c931_vg-lv163c50af (dm-2) 253:2    0   1.8T  0 lvm   /mnt/pools/A/A0
> sdb                                   8:16   0   1.8T  0 disk  
> ├─sdb1                                8:17   0    20G  0 part  
> │ └─md0                               9:0    0    20G  0 raid1 
> │   ├─md0_vg-BFDlv (dm-0)           253:0    0     4G  0 lvm   /boot
> │   └─md0_vg-vol1 (dm-1)            253:1    0    16G  0 lvm   /mnt/system
> └─sdb2                                8:18   0   1.8T  0 part  
>   └─md1                               9:1    0   1.8T  0 raid1 
>     └─6366c931_vg-lv163c50af (dm-2) 253:2    0   1.8T  0 lvm   /mnt/pools/A/A0
> mtdblock0                            31:0    0   504K  0 disk  
> mtdblock1                            31:1    0     4K  0 disk  
> mtdblock2                            31:2    0     4K  0 disk  
> 
> df(1) says:
> 
> Filesystem                          1K-blocks   Used  Available Use% Mounted on
> rootfs                                  51200   4216      46984   9% /
> /dev/root.old                           11339   3185       8154  29% /initrd
> none                                    51200   4216      46984   9% /
> /dev/md0_vg/BFDlv                     4128448 714776    3203960  19% /boot
> /dev/loop0                             691776 619873      71903  90% /mnt/apps
> /dev/loop1                               7657   1010       6238  14% /mnt/etc
> none                                     7657   1010       6238  14% /etc
> /dev/loop2                                128    128          0 100% /oem
> tmpfs                                   24776     80      24696   1% /run
> tmpfs                                    5120      4       5116   1% /run/lock
> tmpfs                                   49540      0      49540   0% /run/shm
> /dev/mapper/md0_vg-vol1              16493480 975280   15350636   6% /mnt/system
> /dev/mapper/6366c931_vg-lv163c50af 1902053516 202188 1901851328   1% /mnt/pools/A/A0
> /dev/mapper/6366c931_vg-lv163c50af 1902053516 202188 1901851328   1% /nfs/Backups
> /dev/mapper/6366c931_vg-lv163c50af 1902053516 202188 1901851328   1% /nfs/Documents
> /dev/mapper/6366c931_vg-lv163c50af 1902053516 202188 1901851328   1% /nfs/Movies
> /dev/mapper/6366c931_vg-lv163c50af 1902053516 202188 1901851328   1% /nfs/Music
> /dev/mapper/6366c931_vg-lv163c50af 1902053516 202188 1901851328   1% /nfs/SharedMedia
> /dev/mapper/6366c931_vg-lv163c50af 1902053516 202188 1901851328   1% /nfs/Pictures
> 
> After I switched from RAID1 to "none":
> 
> NAME                                MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
> loop0                                 7:0    0 680.4M  1 loop   /mnt/apps
> loop1                                 7:1    0     8M  0 loop   /mnt/etc
> loop2                                 7:2    0   100K  0 loop   /oem
> sda                                   8:0    0   1.8T  0 disk   
> ├─sda1                                8:1    0    20G  0 part   
> │ └─md0                               9:0    0    20G  0 raid1  
> │   ├─md0_vg-BFDlv (dm-0)           253:0    0     4G  0 lvm    /boot
> │   └─md0_vg-vol1 (dm-1)            253:1    0    16G  0 lvm    /mnt/system
> └─sda2                                8:2    0   1.8T  0 part   
>   └─md1                               9:1    0   3.6T  0 linear 
>     └─529a853a_vg-lv1ac0e3be (dm-2) 253:2    0   3.6T  0 lvm    /mnt/pools/A/A0
> sdb                                   8:16   0   1.8T  0 disk   
> ├─sdb2                                8:18   0   1.8T  0 part   
> │ └─md1                               9:1    0   3.6T  0 linear 
> │   └─529a853a_vg-lv1ac0e3be (dm-2) 253:2    0   3.6T  0 lvm    /mnt/pools/A/A0
> └─sdb1                                8:17   0    20G  0 part   
>   └─md0                               9:0    0    20G  0 raid1  
>     ├─md0_vg-BFDlv (dm-0)           253:0    0     4G  0 lvm    /boot
>     └─md0_vg-vol1 (dm-1)            253:1    0    16G  0 lvm    /mnt/system
> mtdblock0                            31:0    0   504K  0 disk   
> mtdblock1                            31:1    0     4K  0 disk   
> mtdblock2                            31:2    0     4K  0 disk   

That does in fact look like, with "none" as the setting, it is just adding
each disk to the volume group for the pool.

It would seem that with this design, the only option that tolerates
disk failures is RAID1.  The others suffer complete data loss if either
disk fails.  RAID0 gives extra speed but requires identical disks,
while "none" gives no speed benefit but allows disks of different sizes
to be used.
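To make the trade-off concrete, here is a small Python sketch (my own
illustration, not anything the box runs) of usable capacity and failure
tolerance for the three modes on a two-disk set:

```python
# Toy model of the three layout choices for a two-disk NAS.
# Sizes in terabytes; the 2.0 TB figures match the WD Green drives above.

def layout(mode, disk_a, disk_b):
    """Return (usable capacity, disk failures survived) for two disks."""
    if mode == "raid1":     # mirror: capacity of the smaller disk
        return min(disk_a, disk_b), 1
    if mode == "raid0":     # stripe: twice the smaller disk, no redundancy
        return 2 * min(disk_a, disk_b), 0
    if mode == "none":      # linear concat: sum of both, no redundancy
        return disk_a + disk_b, 0
    raise ValueError(mode)

for mode in ("raid1", "raid0", "none"):
    capacity, survives = layout(mode, 2.0, 2.0)
    print(f"{mode:>5}: {capacity:.1f} TB usable, "
          f"survives {survives} disk failure(s)")
```

With two 2 TB drives, RAID1 gives 2 TB and survives one failure; RAID0
and "none" both give 4 TB and survive none.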

-- 
Len Sorensen


More information about the talk mailing list