RAID 1+0 or RAID 1 with 3 disks

Discuss installation of Scalix software

DagonSphere

RAID 1+0 or RAID 1 with 3 disks

Postby DagonSphere » Fri Feb 22, 2008 4:13 pm

I've been evaluating Scalix in a VMware VM for a while and I like what I see. Here's my current configuration for the test box:
    1.7 GHz P4 Xeon
    1 GB Ram
    Debian Etch system
    VMware server 1.0.4
    VMware image is sitting on a RAID 1 array made up of 2 SATA disks (EXT3)
    Guest OS is CentOS5, and Scalix is installed there on an EXT3 partition.
The performance for Outlook with the connector wasn't bad at all. I'm disappointed with the webmail app, though; Zimbra's webmail is very responsive when installed on a Debian guest on the same test hardware. But I digress.

Here's the production system:
    3.0 GHz P4 HT
    2 GB Ram
    OS - CentOS or Debian Etch (need to test Scalix on Debian, but may just opt for EL5)
    2 SATA disks (up to 3 is possible)

I have 6 email users. We make use of public folders to store shared contacts and job-related emails. The public folders are anywhere from 1-6 GB in total, and each user's mailbox is up to about 1 GB, so we'll need around 12-15 GB accessible.

I've read elsewhere on the forum that virtualizing Scalix is just a bad idea, so I'm going to install it on a real server. Because of physical space limitations, I'd like to install a 3-disk software RAID 1 array. How would that compare in terms of speed to a RAID 1+0? I'd expect no speed increase on writes (obviously), but a noticeable increase on reads.

Thanks in advance!

Valerion
Scalix Star
Posts: 2730
Joined: Thu Feb 26, 2004 7:40 am
Location: Johannesburg, South Africa

Postby Valerion » Mon Feb 25, 2008 4:12 am

You need an even number of disks for RAID-1. It works by mirroring data, so Disk 1 and Disk 2 are identical. If either gets damaged, you can still use the array while the replacement disk is being rebuilt. If you use an uneven number, you will end up with two parts of an array on the same disk. You can use the third disk as an OS and application drive, however.

DagonSphere

Postby DagonSphere » Mon Feb 25, 2008 10:23 am

I currently use RAID 1 on my Debian systems. I've also used a 3 disk setup (where disk 3 was not a spare) and the data I added to it definitely existed on all 3 disks. And I haven't seen Debian put two parts of an array on a single disk. /proc/mdstat never showed me anything like that.

I've seen several posts where someone states that you have to have 2 (or some even number of) disks for RAID 1 to work. I haven't tried it on any of the Red Hat flavors, but Debian did it just fine (/dev/md0 consisted of sda1, sdb1, and sdc1 - again, all live, no spares). Just out of curiosity, have you tried it? Or is it just a bad idea in terms of performance (writing to 3 disks instead of 2) and disk usage (wearing out a third disk at the same rate)? A 2-disk RAID 1 plus a spare 3rd disk makes more sense anyway. I was thinking of a 3-disk RAID 1 for read performance.

I'll just rephrase the question.

If I use a 2-disk RAID 1 instead of a 4-disk RAID 1+0 with 6 users and 15 GB worth of data, how much performance will I be losing? If I had 50 or 100 users, I wouldn't even question it - I'd just go straight for the 1+0.

I'm thinking that it wouldn't be that noticeable, as it's rare we have more than 3 people on email at one time. RAID 0 provides improvements on both read and write. RAID 1 provides fault tolerance and improvements on reads.

Just looking for thoughts on this.

Thanks!

Valerion
Scalix Star
Posts: 2730
Joined: Thu Feb 26, 2004 7:40 am
Location: Johannesburg, South Africa

Postby Valerion » Tue Feb 26, 2008 3:06 am

On a small scale, you'll lose almost nothing. Your disks aren't that busy.

As to a 3-disk RAID-1, how exactly was the mirror constructed? I ask out of curiosity; I'd love to see how it was done for my own purposes. Offhand, I can't picture how the RAID spreads the mirror over the 3 disks.

DagonSphere

Postby DagonSphere » Tue Feb 26, 2008 9:56 am

I guess I failed to mention I'm using software RAID. On Etch, it's as easy as:

Code:

mdadm --create /dev/md0 --level=1 --raid-disks=3 /dev/sda1 /dev/sdb1 /dev/sdc1

/proc/mdstat would show something like (after the mirrors are constructed):

Code:

md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
      237247808 blocks [3/3] [UUU]

What I can't remember at the moment is how to turn a 2-disk array into a 3-disk array. Creating it as a 3-disk array from the start works without a problem; I just don't remember whether I ever figured out how to add a third disk to an existing array without it being treated as a spare. I haven't tried this on any of the Red Hat flavors yet, but it looks like I don't have to :D
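
If memory serves, the usual way to do it with mdadm is a two-step add-then-grow, so the new disk doesn't stay a spare. A sketch only (assuming the array is /dev/md0 and the new partition is /dev/sdc1):

Code:

# add the new partition; on an already-complete array it shows up as a spare at first
mdadm /dev/md0 --add /dev/sdc1
# grow the mirror from 2 to 3 active members; the spare gets promoted and re-synced
mdadm --grow /dev/md0 --raid-devices=3
# watch the rebuild
cat /proc/mdstat

Once the re-sync finishes, /proc/mdstat should show [3/3] [UUU] like the output above.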

I quit using the 3 disk RAID 1, as the system is used as a file server. Disk 3 is currently being used as a spare. The thought of using a 3 disk RAID 1 was strictly for read performance. But if the impact is minimal/imperceptible, then I'll just use disk 3 as a spare.
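
For reference, parking the third disk as a hot spare is just an --add on the already-complete array (again a sketch, assuming /dev/md0 and /dev/sdc1):

Code:

mdadm /dev/md0 --add /dev/sdc1

/proc/mdstat then flags it with (S), something like:

Code:

md0 : active raid1 sdc1[2](S) sdb1[1] sda1[0]
      237247808 blocks [2/2] [UU]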

Thanks for the input. I appreciate it!! And I hope this helps

mikevl
Scalix Star
Posts: 596
Joined: Mon Feb 02, 2004 8:32 pm
Location: New Zealand

Postby mikevl » Wed Mar 05, 2008 2:12 am

Hi

Using software RAID is a bit more risky than doing the same thing with hardware.

But you don't have many users

Mike

DagonSphere

Postby DagonSphere » Fri Mar 07, 2008 5:19 pm

I'm kind of torn on that theory.

If a hardware RAID controller dies and your disks are OK, then (my understanding is) you need an EXACT replacement RAID controller (at least the same chipset and possibly revision level), as each controller has its own subtle differences.

Software RAID is a little slower, but a little more, shall we say, "universal". If a controller dies, you replace the controller and everything works fine. Yeah, there's a little CLI work to take care of when a disk dies, but it's pretty easy (roughly the steps sketched below).
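
The CLI part I mean is roughly this (a sketch, assuming the array is /dev/md0 and /dev/sdb is the disk that died):

Code:

# mark the dead member as failed and pull it out of the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# physically swap the disk, then copy the partition layout from the healthy disk
sfdisk -d /dev/sda | sfdisk /dev/sdb
# add the new partition back; the mirror rebuilds on its own
mdadm /dev/md0 --add /dev/sdb1
# keep an eye on the rebuild
cat /proc/mdstat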

What are your thoughts on this?

mikevl
Scalix Star
Posts: 596
Joined: Mon Feb 02, 2004 8:32 pm
Location: New Zealand

Postby mikevl » Fri Mar 07, 2008 6:13 pm

Hi

But by the same token, you would also have to ask what would happen if your processor, motherboard, or something else died.

Scalix keeps disks busy. The busier the disks, the less time is left to present the user with a wonderful experience. Also, in my own experience, even under Linux I have seen more OS failures than hardware failures. That is, on average we have to reboot the OS every 9-12 months, but I don't seem to need to replace motherboards and the like each time I have to cycle the OS. If you buy quality equipment from a reputable supplier, then if the equipment fails you should be able to replace it within the economic life of the server.

Email is a mission-critical element of most businesses, so it needs to be timely and reliable. Having the OS do hardware tasks adds risk and performance issues. Others on this forum have tried it in the past.

This is only MHO

Mike

