You have a really powerful machine. Just one comment. It's a much longer comment than I had initially expected, but I'll go with it. :)
Scalix is almost never constrained by processor speed, and rarely by RAM when people are reasonable with it (8G is good even for large installs).
Scalix is almost ALWAYS constrained by Disk Speed.
If you're seeing good performance now, that's awesome, ignore my post. But for others seeing performance concerns: get the fastest disks you can find, preferably 15K SAS, and buy lots of them. Build them into a RAID 10 array. With Scalix, RAID 10 gives the best bang for the buck. If you build RAID 5, you will have performance problems on larger installs, period. RAID 5 sucks. It's cheaper than RAID 10, but it's slower, less reliable, and brutal during a disk-failure recovery.
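For anyone doing this with Linux software RAID, here's a minimal sketch using mdadm. The device names, filesystem, and mount point are assumptions, not anything from a real Scalix install; substitute your own disks:

```shell
# Sketch only: /dev/sd[b-e] and the mount point are assumptions.
# Create a 4-disk RAID 10 array from four whole disks.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and mount it where Scalix keeps its data.
mkfs.ext3 /dev/md0
mount /dev/md0 /var/opt/scalix

# Save the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

Hardware RAID controllers do the same job in their BIOS setup; the point is the RAID level, not the tool.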
What that means is that many people can just add a new disk subsystem using something like OpenFiler, which provides iSCSI at no cost other than hardware. In our testing, OpenFiler was consistently and noticeably faster than even locally installed disks, and after over a year now, it's been rock solid. It's a great project, and I'd give it two thumbs up.
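If you want to try that, the Linux client side uses open-iscsi. A minimal sketch, assuming the OpenFiler box is at 192.168.1.50 (a made-up address) and already exports a target:

```shell
# Sketch only: the target IP is an assumption.
# Discover the targets exported by the OpenFiler box.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target; it then shows up as an ordinary
# block device (e.g. /dev/sdX) that you can RAID/format/mount as usual.
iscsiadm -m node -p 192.168.1.50 --login
```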
SATA disks look great because they're cheap, but they are a disaster with Scalix, and projects built on them will always have performance problems.
For a BIG install, you can improve performance by splitting the data across multiple RAID 10 arrays: one for /var/opt/scalix/??/postgres, another for /var/opt/scalix/??/s/data, etc. Put them on separate controllers, with different disks. There is no benefit to partitioning the same disks in an array differently. This is overkill for most installs though.
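As a sketch, the split might look like this in /etc/fstab. The md device names and the "sx1" instance directory are hypothetical; use whatever your install actually created:

```shell
# Sketch only: /dev/md1, /dev/md2, and the "sx1" instance name are
# assumptions -- one dedicated RAID 10 array per hot data set.
/dev/md1  /var/opt/scalix/sx1/postgres  ext3  defaults,noatime  0 2
/dev/md2  /var/opt/scalix/sx1/s/data    ext3  defaults,noatime  0 2
```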
Also note that with RAID 10, the more drives you add, the faster it is. So you're better off with twelve 75G drives than four 300G drives: the speed is faster, the failure recovery is faster, and the usable storage is comparable. Generally, costs are not too far off either, since the smaller drives are much less expensive.
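To put numbers on that, here's a worked example (not tied to any real hardware): RAID 10 mirrors pairs of drives, so usable space is half the raw total, and I/O stripes across (drives / 2) mirrored pairs. Eight 150G drives and four 300G drives give identical usable capacity, but the eight-drive array has twice the stripe width:

```shell
# RAID 10: usable space is half the raw total; reads and writes
# stripe across (drives / 2) mirrored pairs.
drives=8; size_gb=150
echo "usable: $(( drives * size_gb / 2 ))G, stripe width: $(( drives / 2 ))"
# usable: 600G, stripe width: 4

drives=4; size_gb=300
echo "usable: $(( drives * size_gb / 2 ))G, stripe width: $(( drives / 2 ))"
# usable: 600G, stripe width: 2
```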
If you need SATA drives, look at the Raptors, since they perform closer to SCSI/SAS. You'll notice that in their cost and capacity too. Do note that they are NOT equal to SCSI/SAS, however, as the SATA interface just isn't designed for high performance under load.
I would be interested to know how SSD SATA devices compare to SAS, but I don't really have any experience there. My guess is that even though throughput is higher with SSD, once some randomization of read vs write is introduced, SAS will still be faster when both are worked hard. Mostly just due to the SATA interface itself. I'm open to being wrong here though, as I don't really know.
If you can afford it, Fibre Channel (optical) will smoke everything else until InfiniBand drives are released, but as far as I know, those aren't available yet.
There's a reasonable speed comparison chart here: http://en.wikipedia.org/wiki/Serial_ATA ... rnal_buses
Note that as you move from a straight sequential read to random reads and writes, the transfer speed drops off very fast. A simple example of this is the performance hit Windows takes when drives are heavily fragmented. SCSI/SAS handles this better than SATA/IDE, and that is why there is a big performance gap even though many of the specs suggest there shouldn't be one.
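You can measure that drop-off yourself with a benchmarking tool like fio (my suggestion, not something from the discussion above; the file path and sizes are just a reasonable sketch). Compare a sequential read against a mixed random read/write run on the same disk:

```shell
# Sketch only: the test file path, sizes, and runtimes are assumptions.
# Sequential read -- this is what spec-sheet throughput numbers resemble.
fio --name=seq --filename=/tmp/fio.test --size=1G --bs=1M \
    --rw=read --direct=1 --runtime=30 --time_based

# Mixed random reads/writes -- much closer to a mail server's workload;
# expect throughput to fall off a cliff compared to the run above.
fio --name=rand --filename=/tmp/fio.test --size=1G --bs=4k \
    --rw=randrw --rwmixread=70 --direct=1 --runtime=30 --time_based
```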