Backup/Restore questions.

nader
Posts: 27
Joined: Thu Dec 22, 2005 6:17 pm

Backup/Restore questions.

Postby nader » Wed Feb 08, 2006 4:16 pm

Hi,

I have a few questions about backing up and restoring data in scalix.

1 - I know I can back up a user's message store with the omcpoutu command, and the omcpinu command allows one to restore the data. Is it possible to restore user A's data to a "Restore" user on the SAME mailnode? I think the omcpinu command only allows a restore to a different mailnode.

2 - If it is only possible to restore to a different mailnode, how do I create that mailnode? Is it even possible to have more than one mailnode using the community edition?

3 - Can I be certain that the omcpoutu command exports ALL of a user's data, including contacts, calendars, inbox, subfolders, etc.?

4 - Is it possible to back up only the public folder area, using a command similar to omcpoutu? I'm thinking that in case of a catastrophic failure I can always restore the most recent Scalix backup I made, but if one of my users inadvertently deletes a file in the public folder area (I just KNOW it will happen), I'd hate to restore the entire system just to get her one file back....

Kind Regards,

Nader

ScalixSupport
Scalix
Posts: 5503
Joined: Thu Mar 25, 2004 8:15 pm

Postby ScalixSupport » Wed Feb 08, 2006 8:39 pm

Here are some answers... but in the end I want to suggest there's an easier way to do everything you need, by simply involving a warm spare server.

1 - Yes, omcpinu must target a different mailnode, but you can create another mailnode on the same server - just make sure you understand the ramifications.

[root@dsx1 /]# omshowmn
** dsx1

[root@dsx1 /]# omaddmn -m dsx2
omaddmn : Mailnode correctly added

[root@dsx1 /]# omshowmn
** dsx1
dsx2

[root@dsx1 /]# omshowu -n harris
Authentication ID: Cliff.Harris@scalix.field
User Name : Cliff Harris /CN=Cliff Harris
MailNode : dsx1
Internet Address : "Cliff Harris" <Cliff.Harris@scalix.field>
System Login : 60538
Password : set
Admin Capabilities : NO
Mailbox Admin Capabilities : NO
Language : C
Virtual Vault : Enabled (default)
Mail Account: Unlocked
Last Signon : 02.06.06 15:21:16
Receipt of mail : ENABLED
Service level : 0
Excluded from Tidying : NO
User Class : Full

[root@dsx1 /]# omaddu -n "Cliff Harris/dsx2/cn=Cliff Harris/IA=cliff@scalix.field" cliff@scalix.field -p pass

Note that the IA and the auth ID are different - they must be; otherwise it won't let you add the user.

[root@dsx1 /]# omcpinu -f harris.mail -m dsx2

Done - but keep in mind one important ramification: you now have two Cliff Harris entries in your directory, so if others address mail to him during this time, they may be confused. Cliff, however, should be able to review the mail in this other mailbox by logging in with the authentication ID.
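
Once Cliff has pulled what he needs out of that restore mailbox, you'd presumably want to remove the shadow user again (and the extra mailnode, if you're finished with it). Roughly like this - but double-check the omdelu/omdelmn syntax on your system before running anything:

[root@dsx1 /]# omdelu -n "Cliff Harris/dsx2"
[root@dsx1 /]# omdelmn -m dsx2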

Your 2nd bullet should be answered by the above as well. The answer to your third bullet is yes, but obviously only server-side folders (no .pst files, no public folder data).

Answer to #4 - no, there is no omcpoutu-like utility for backing up public folder data, but you could use the Outlook client to make .pst files periodically. A general backup of /var/opt/scalix is your best bet.
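
For the public folder case specifically, even a simple cold backup will do the job. A minimal sketch - assuming omshut/omrc are what stop and start the services on your install, and with made-up paths, so adjust to taste:

Code: Select all

#!/bin/sh
# rough sketch of a cold backup of the whole message store - an
# illustration only; adjust paths and service commands to your setup
STAMP=`date +%Y-%m-%d`

omshut                                   # stop Scalix so the store is quiescent
tar czf /backups/scalix-$STAMP.tar.gz /var/opt/scalix
omrc                                     # bring the services back up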

Here's where I alluded to a better practice using a warm spare/message recovery server. If you have a spare PC lying around that has enough space to store a few gzipped, tarred copies of /var/opt/scalix, you can get an effective, expedient message recovery system going. It doesn't have to be a server-class machine - just 512MB of memory, a single CPU, no RAID needed, just SPID (single phat inexpensive disk).

Basically you would install Scalix (same version) on the spare, then just shut down the services and delete the contents of /var/opt/scalix. On a nightly basis (or even more often if you wish), simply rsync the data over to the spare. Make sure you also follow the best practice for backing up your Scalix data (snapshots). Be careful with the rsync command line... it should be something like...

rsync -avz --delete /var/opt/ 10.17.112.33:backup

You then run a nightly script (cron) on the spare that builds a tar.gz of the data you've sent over. Depending on that SPID (or whatever you use), you could keep multiple days' worth of tar.gz files of your latest Scalix data. In fact this is where some of our larger enterprise customers have invested a little more, buying hundreds of GB of inexpensive disk for one message recovery spare that serves multiple Scalix servers. You really can build this box pretty cheaply, and it doesn't need all the bells and whistles of a production server, because it really isn't going to be doing much.
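
A minimal version of that nightly cron job might look like the sketch below. The paths and retention are just examples (with the rsync line above, the data lands in ~/backup on the spare), not the exact script from the technote:

Code: Select all

#!/bin/sh
# nightly archive job on the spare - builds a dated tar.gz of the data
# rsync'd over from production and prunes old ones
BACKUPSRC=/root/backup                   # wherever the rsync from production lands
ARCHIVES=/archives                       # local disk on the spare
STAMP=`date +%Y-%m-%d`
KEEP=7                                   # days of tarballs to keep

tar czf $ARCHIVES/scalix-$STAMP.tar.gz -C $BACKUPSRC scalix

# prune anything older than $KEEP days
find $ARCHIVES -name 'scalix-*.tar.gz' -mtime +$KEEP -exec rm -f {} \;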

Now if a person claims they deleted an email or public folder post and must have it back, you simply unpack the data into /var/opt/scalix on the spare (hopefully you've still got it local), bring up the Scalix services, and tell them to access their mailbox via SWA or Outlook using the IP address of the spare.
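
In rough outline, the recovery on the spare would be something like the following - again assuming omrc/omshut start and stop the services, and that the archives were built as sketched above; pick whichever day covers the deletion:

Code: Select all

omshut                                   # make sure Scalix isn't running on the spare
rm -rf /var/opt/scalix
tar xzf /archives/scalix-2006-02-07.tar.gz -C /var/opt   # example date
omrc                                     # bring up the services, then point the user at the spare's IP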

Note you should give the spare the same hostname, obviously with a different IP address, but don't register it in DNS anywhere.

Make sense? Once you get it set up, it would probably solve what you were looking for in your original post. As we all know, the difficult part of the backup/restore process is typically not the backup - it's the restore and then providing access to the restored data that is time-consuming - and this makes it really easy.

And as I ramble on, I realize I should just send you the technote that explains this. If you want it - please post your email address.

karl

axsom1
Posts: 69
Joined: Tue Aug 17, 2004 12:31 pm

Postby axsom1 » Wed Feb 08, 2006 8:53 pm

I'd be interested in this Tech Note.

Email address is axsomj at lompochospital.org

Thanks Karl.

John

btisdall
Scalix Star
Posts: 373
Joined: Tue Nov 22, 2005 12:13 pm

Postby btisdall » Thu Feb 09, 2006 7:34 am

The single user restore process is something that keeps me awake nights worrying about the "I deleted a very very important mail" call. It seems to me this is one area where, at least from an SME perspective, Exchange is some way ahead, because the office admin, with fairly minimal training, can load up an Exchange-aware backup/restore app, select the mailbox they want to restore, and press go. On the other hand (and I'll admit that I'm a long way from being an uber *nix admin, but I do have a fair amount of experience) I find the SUR procedure pretty scary, partly because I don't think it's very well documented - I asked for some clarification here some time ago but didn't get a response to my last post:

http://www.scalix.com/community/viewtop ... hlight=iss

I was really hoping that the difficulty of the SUR process was something that would be addressed in 10, but based on the advice already given in this thread I wonder...

None of this is to take away from what I think is a great product from a dedicated team, with a very generous community license - just my 2c; I'd be interested in others' views.

Regards,

Ben Tisdall

btisdall
Scalix Star
Posts: 373
Joined: Tue Nov 22, 2005 12:13 pm

Postby btisdall » Thu Feb 09, 2006 7:45 am

Sorry, would you be kind enough to edit my last post and remove my address.

Thanks.

florian
Scalix
Posts: 3852
Joined: Fri Dec 24, 2004 8:16 am
Location: Frankfurt, Germany

Postby florian » Thu Feb 09, 2006 1:26 pm

Few things...

First... for the omcpinu import and creation of the shadow user, I'd suggest using the -x option to omaddu when creating the user - this will exclude the user from the system directory/address book, i.e. you avoid one of the problems listed above.
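
e.g. something like this, just adding -x to the omaddu line from earlier in the thread (check the omaddu man page for the exact option order):

omaddu -x -n "Cliff Harris/dsx2/cn=Cliff Harris/IA=cliff@scalix.field" cliff@scalix.field -p pass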

Then, while Scalix 10 does not really provide many enhancements to the backup/restore process, I believe that with some best practices (a combination of rsync, snapshots, omcpoutu exports, etc.) we have a very credible backup/recovery story.

Having said that, I might add that we're putting a lot of thought into the subject and want to make admins' lives easier going forward; one key item is certainly what we're discussing here... stay tuned! :-)

-- f.
Florian von Kurnatowski, Die Harder!

mephisto

Postby mephisto » Thu Feb 09, 2006 2:01 pm

This is an adapted version of a very reliable rsync-based backup script I found on the net. I use this with my fileservers every night (and a modified version actually every 30 minutes), but it should work with Scalix, too (with a slight change as mentioned below). You need to adapt it to your needs. I use rsh instead of the encrypted ssh as a remote shell (parameter -e rsh), because the NAS this is running on (a Buffalo LinkStation hacked to run custom Linux) does not have much CPU power. If you choose to use it with ssh, you need to create an ssh key without a passphrase to make this run unattended.

The interesting thing about this script is that it makes use of rsync's --link-dest parameter. This creates hard links and enables us to keep days of backups while only consuming a bit more space than one backup would. This seems a better idea to me than creating tar archives, because I
a) don't have to buy huge disks
b) don't need to compress and extract gigabytes of snapshots to recover files from a certain date.

To fully understand this concept read this site:
http://www.mikerubel.org/computers/rsync_snapshots/

Someone definitely should add LVM snapshot functions to this script! Single files are not corrupted if changed while rsyncing, but the data store as a whole might end up inconsistent. Any volunteers?
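
A rough, untested starting point for the snapshot part might look like the fragment below. It assumes /var/opt/scalix sits on an LVM volume on the Scalix box (the vg0/scalix names, snapshot size and mount point are made up), and it runs the commands on the server via rsh to match the pull model of the main script, which follows:

Code: Select all

#!/bin/sh
# untested sketch: freeze a point-in-time view of the store before rsyncing
SERVER=scalix.mydomain.com

rsh $SERVER "lvcreate -L 2G -s -n scalix_snap /dev/vg0/scalix && \
             mkdir -p /mnt/scalix_snap && \
             mount -o ro /dev/vg0/scalix_snap /mnt/scalix_snap"

# ...the main script below would then pull from /mnt/scalix_snap/ instead
# of /var/opt/scalix/ (i.e. change BACKUPDIR accordingly)...

# throw the snapshot away afterwards
rsh $SERVER "umount /mnt/scalix_snap && lvremove -f /dev/vg0/scalix_snap"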

Code: Select all

#!/bin/sh

#########################################################
# Script to do incremental rsync backups
# Adapted from script found on the rsync.samba.org
# Brian Hone 3/24/2002
# This script is freely distributed under the GPL
#########################################################

##################################
# Configure These Options
##################################

###################################
# mail address for status updates
#  - This is used to email you a status report
###################################
MAILADDR=admin@mydomain.com

###################################
# HOSTNAME
#  - This is also used for reporting
###################################
HOSTNAME=scalix.mydomain.com

###################################
# directory to backup
# - This is the path to the directory you want to archive
###################################
BACKUPDIR=root@scalix.mydomain.com:/var/opt/scalix/

###################################
# excludes file - contains one wildcard pattern per line of files to exclude
#  - This is a rsync exclude file.  See the rsync man page and/or the
#    example_exclude_file
###################################
EXCLUDES=/usr/local/exclude.conf

###################################
# root directory to for backup stuff
###################################
ARCHIVEROOT=/scalix-backups/

rm -rf $ARCHIVEROOT/backup.6

if [ -d $ARCHIVEROOT/backup.5 ]
 then
  mv $ARCHIVEROOT/backup.5 $ARCHIVEROOT/backup.6
fi

if [ -d $ARCHIVEROOT/backup.4 ]
 then
  mv $ARCHIVEROOT/backup.4 $ARCHIVEROOT/backup.5
fi

if [ -d $ARCHIVEROOT/backup.3 ]
 then
  mv $ARCHIVEROOT/backup.3 $ARCHIVEROOT/backup.4
fi

if [ -d $ARCHIVEROOT/backup.2 ]
 then
  mv $ARCHIVEROOT/backup.2 $ARCHIVEROOT/backup.3
fi

if [ -d $ARCHIVEROOT/backup.1 ]
 then
  mv $ARCHIVEROOT/backup.1 $ARCHIVEROOT/backup.2
fi

if [ -d $ARCHIVEROOT/main ]
 then
  mv $ARCHIVEROOT/main $ARCHIVEROOT/backup.1
fi

#########################################
# From here on out, you probably don't  #
#   want to change anything unless you  #
#   know what you're doing.             #
#########################################

# directory which holds our current datastore
CURRENT=main

# date stamp used in the status report
DATE=`date +%Y-%m-%d`
# options to pass to rsync
OPTIONS="-e rsh --force --ignore-errors --delete --delete-excluded \
 --exclude-from=$EXCLUDES -a --link-dest=$ARCHIVEROOT/backup.1"

export PATH=$PATH:/bin:/usr/bin:/usr/local/bin

# make sure our backup tree exists
install -d $ARCHIVEROOT/$CURRENT

# our actual rsyncing function
do_rsync()
{
   rsync $OPTIONS $BACKUPDIR $ARCHIVEROOT/$CURRENT
}

# our post rsync accounting function
do_accounting()
{
   echo "To: $MAILADDR" > /tmp/rsync_script_tmpfile
   echo "From: Scalix Backup Box <backup@mydomain.com>" >> /tmp/rsync_script_tmpfile
   echo "Subject: Scalix Backup Report" >> /tmp/rsync_script_tmpfile
   echo >> /tmp/rsync_script_tmpfile
   echo "Free disk space:" >> /tmp/rsync_script_tmpfile
   echo >> /tmp/rsync_script_tmpfile
   df -h >> /tmp/rsync_script_tmpfile
   echo >> /tmp/rsync_script_tmpfile
   echo "Backup Accounting for Day $DATE on $HOSTNAME:">>/tmp/rsync_script_tmpfile
   echo >> /tmp/rsync_script_tmpfile
   echo "################################################">>/tmp/rsync_script_tmpfile
   cd $ARCHIVEROOT/$CURRENT/
   du -sh * >> /tmp/rsync_script_tmpfile
   /usr/sbin/sendmail -t < /tmp/rsync_script_tmpfile
   rm /tmp/rsync_script_tmpfile
}

# some error handling and/or run our backup and accounting
if [ -f $EXCLUDES ]; then
      do_rsync && do_accounting
   else
      echo "cant find $EXCLUDES"; exit
fi

touch $ARCHIVEROOT/$CURRENT/

nader
Posts: 27
Joined: Thu Dec 22, 2005 6:17 pm

Postby nader » Fri Feb 10, 2006 6:38 pm

Hi,

Thank you for all the replies.

I think what we will do is back up user data nightly to our file server using cron and omcpoutu. That server gets a changed-files backup every night. I'm leaning towards the spare server solution, but I'm not sure if that's something that management wants to invest in.

So most likely I'll have to set up a second mailnode for user restores.

I don't need to back up any local .pst files or local mail because everyone will be using SWA, so omcpoutu should take care of everyone's data. We will end up with yesterday's user backup online and every previous day's on tape going back four weeks, which sort of works around not having multiple backups online - less elegant, I admit, but it should work for us.
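
For the record, the nightly export I have in mind is roughly the loop below. The paths are placeholders and I still need to check the exact omcpoutu/omshowu options against the man pages, so treat it as a sketch rather than a tested script:

Code: Select all

#!/bin/sh
# nightly per-user export - a sketch, not a tested script
EXPORTDIR=/backups/users                 # NFS mount from the file server
STAMP=`date +%Y-%m-%d`
mkdir -p $EXPORTDIR/$STAMP

# omshowu -m <mailnode> is assumed to list the users on the node;
# adjust the mailnode name and the parsing to match your system
omshowu -m mymailnode | while read USER
do
   omcpoutu -n "$USER" -f "$EXPORTDIR/$STAMP/`echo $USER | tr ' /' '__'`.mail"
done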

As far as backing up the entire server goes, we will do a weekly full backup and a nightly changed-files backup of the whole system.

And most importantly, we will also keep our fingers crossed....

Again, thanks for all your help.

Kind Regards,

Nader.

