gianormous weekly maintenance reports

Discuss the Scalix Server software

Moderators: ScalixSupport, admin

fredness

gianormous weekly maintenance reports

Postby fredness » Wed Aug 16, 2006 5:07 pm

I'm getting gianormous weekly maintenance reports. I keep purging root's email. How do I find out what is making the weekly emails so big, and possibly remove or fix whatever is causing it?

Code: Select all

# mail root
 ...
 U879 root@badasshost  Sun Aug 13 02:32 17423/4101895 "Scalix Weekly Maintenance Report"
 ...
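To see what's actually eating the space before deleting anything, here's a low-tech sketch: save the message out of mail(1) (e.g. `save 879 /tmp/report.txt` -- the message number and path are just examples) and count its most repeated lines, since a report that big is usually one section repeated thousands of times.

```shell
#!/bin/sh
# After saving the monster message from mail(1), see which lines dominate it.
REPORT=${REPORT:-/tmp/report.txt}   # hypothetical path; use wherever you saved it

# Count duplicate lines, most frequent first; the top few entries tell
# you which section of the report is ballooning.
sort "$REPORT" | uniq -c | sort -rn | head -20
```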


teeth well brushed, socks clean

- fredness

johnwhee

ommaint might be your culprit

Postby johnwhee » Tue Aug 22, 2006 4:27 pm

Hi Fredness,

You might be getting it from the ommaint script out of the admin resource kit. I set it up when we were running 9.4, when we had serious problems with a POP logging bug and the POP delete bug.

I'm getting some whopping huge emails from it (30M) to the ENU account, but it's just all shell script, so I'm thinking of modifying the -weekly output to post to a webpage or something instead of mailing it out.
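For what it's worth, a rough sketch of that idea: capture the report into a dated HTML file under the web root instead of mailing it. The web-root path and the assumption that the -weekly report text can be piped in on stdin are mine, not anything from the resource kit.

```shell
#!/bin/sh
# Capture the weekly report to a dated file under the web root
# instead of letting it hit the mailbox.
WEB_DIR=${WEB_DIR:-/var/www/html/scalix-reports}   # hypothetical location
STAMP=$(date +%Y-%m-%d)

mkdir -p "$WEB_DIR"

# Wrap the plain-text report in <pre> so browsers keep the layout.
{
  echo "<html><body><pre>"
  cat        # report text arrives on stdin, e.g. piped from the -weekly run
  echo "</pre></body></html>"
} > "$WEB_DIR/weekly-$STAMP.html"
```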

Cheers!

--John

fredness

Postby fredness » Tue Aug 22, 2006 4:58 pm

Here's an attempt at an excerpt from the monster email that arrives weekly.

Looks like we have an orphan swarm! So what do I do now? I sense an omcontain answer is in my destiny.

Yes, I thought John Carpenter's Dark Star was a good movie to see as an EE undergrad in the main engineering classroom in West Lafayette, IN. Cheese evil.

Code: Select all

...
Subject: Scalix Weekly Maintenance Report
...

 omscan running on 07.30.06 at 02:15:01.
 Host computer : ...
 Report mode requested.

 Last omscan tool run on 07.23.06 at 02:15:01; duration 19 minute(s).
 Previous server cycle run on 07.29.06 at 01:02:17; duration 249 minute(s).
 Current server cycle not started; service reset or delayed.

 Passive scan option requested.

 Scanning file/dir links .... done.

 CAUTION: Scanning of message store has started.
          Mounted file/dir links must be maintained during the scan.
          VxFS file system must not be reorganized - see omscan(1M).

 Checking/Scanning data domain ....
 ~/data/0000001
 ~/data/0000002
 ~/data/0000003
 ...
 ~/data/00000hv
 ~/data/00000i0
  done.
 Checking data orphans .... done.
 Checking data orphan files .... done.

 Files checked - 25961 : orphans found - 25959
 Checking/Scanning bulletin board area .... done.
 Checking/Scanning user trays ....
 sxadmin / us, dogfood/CN=sxadmin
 Alpo Grrravy / us, dogfood/CN=Alpo Grrravy
 mbadmin / us, dogfood/CN=mbadmin
 Zippy Pinhead / us, dogfood/CN=Zippy Pinhead
 Duke Anomolous / us, dogfood/CN=Duke Anomolous
 ...
 wireless user / hq/CN=wireless user
 nlsgw / hq/CN=nlsgw
  done.
 Checking/Scanning message lists .... done.
 Scanning name directories .... done.
 Scanning temp domain .... done.
 Checking/Scanning message queues .... done.

 Orphans found ....

 Orphan name : ~/data/0000001/00cahd3
 Child Type : Serialised File.
 Child Creator : ********
 Child Subject : ********

 ...

 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
 !!! - - THIS GOES ON FOR 140,000 LINES - - !!!
 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

 ...

 Owner Info : Updated Contact Lists!!!
 Parent Container : ~/data/000001r/00fqb7k:1, RecNum : 6
 Child Affected : ~/data/00000as/00vb8r5:1
 Child Subject : Updated Contact Lists!!!

 Owner Info : Updated Contact Lists!!!
 Parent Container : ~/data/000001r/00fqb7k:1, RecNum : 37
 Child Affected : ~/data/00000as/00vb8r5:1
 Child Subject : Updated Contact Lists!!!

 Disk usage ....

 USER NAME                          IN    OUT    PDG   FCAB   DLST     WB   TOTAL (KB)

 Bulletin Board area                 -      -      -      -      -      - 1814002

 sxadmin /us,dogfood/CN=sxadmin  16028      1    238 151082      1      1 167351
 Alpo Grrravy /us,dogfood/CN=Al    107      1     13     14      1      1    137
 mbadmin /us,dogfood/CN=mbadmin      1      1      1      2      1      1      7
 Zippy Pinhead /us,dogfood/CN=Z      1      1      1      4      1      1      9
 Duke Anomolous /us,dogfood/CN=     47      1     14     14      1      1     78
 ...
 wireless user /hq/CN=wireless       1      1      4     15      1      1     23
 nlsgw /hq/CN=nlsgw                  1      1      1     12      1      1     17

johnwhee

orphans...

Postby johnwhee » Tue Aug 22, 2006 7:32 pm

Yeah, I get a ton of orphan listings as well...

Apparently the output of omscan includes a one-liner for each orphan, but it ought to be moving orphans to /var/opt/scalix/orphans (though I don't see that directory on my systems) and deleting anything in there that's older than 30 days.
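A cron-friendly sketch of that cleanup, assuming the orphans really do land in /var/opt/scalix/orphans (the directory and the 30-day retention are my reading of the resource kit, so treat both as assumptions):

```shell
#!/bin/sh
# Sweep aged orphan files out of the Scalix orphan directory.
# ORPHAN_DIR and the 30-day retention are assumptions; adjust to taste.
ORPHAN_DIR=${ORPHAN_DIR:-/var/opt/scalix/orphans}

# Bail out quietly if the directory was never created.
[ -d "$ORPHAN_DIR" ] || exit 0

# Delete regular files last modified more than 30 full days ago.
find "$ORPHAN_DIR" -type f -mtime +30 -exec rm -f {} \;
```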

--John

fredness

Postby fredness » Tue Aug 22, 2006 8:20 pm

Ok, RTFM'd and tried the following.

Note: beware that after the omscan -K, the omon -w omscan will promptly start thrashing the disk as it cleans away the orphans. See the top excerpt below :-)

I expect the next ommaint run to produce a nice, concise weekly report - no more monster root emails.

fingers crossed, eyes crossed


Code: Select all

# omoff -d 0 omscan

  Disabling 1 subsystem(s).

# omscan -Z

  Resetting server cycle .... done.

# omon -w omscan

  Enabling 1 subsystem(s).
  Omscan Server               Started

# nice -n 20 top

  179 processes: 178 sleeping, 1 running, 0 zombie, 0 stopped
  CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
             total    0.1%    0.5%    1.7%   0.1%     0.1%   97.0%    0.0%
  Mem:  1027476k av, 1018952k used,    8524k free,       0k shrd,  109616k buff
                      766196k actv,  144264k in_d,   15708k in_c
  Swap: 2096440k av,  222880k used, 1873560k free                  517960k cached

    PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
   9543 root      17   2  5240 5192  3304 D N   1.9  0.5   0:19   0 omscan.server
    399 scalix    15   0  6176 3560  2236 S     0.3  0.3  10:04   0 omctmon
  10089 root      34  19  1224 1224   896 R N   0.1  0.1   0:00   0 top
      1 root      15   0   508  484   448 S     0.0  0.0   2:34   0 init
      2 root      15   0     0    0     0 SW    0.0  0.0   1:16   0 keventd
      3 root      34  19     0    0     0 SWN   0.0  0.0   0:00   0 ksoftirqd/0
      6 root      15   0     0    0     0 SW    0.0  0.0   0:26   0 bdflush
      4 root      15   0     0    0     0 SW    0.0  0.0  53:27   0 kswapd
      5 root      15   0     0    0     0 SW    0.0  0.0  35:27   0 kscand
       7 root      15   0     0    0     0 SW    0.0  0.0   0:25   0 kupdate
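If the sweep's disk thrashing gets painful, dropping the scanner's priority is one option. A sketch: the omscan.server process name comes from the top listing above, the rest is my assumption. Note that renice only helps on the CPU side; on Linux, util-linux's ionice -c3 (idle I/O class) would address the disk thrash more directly, if it's installed.

```shell
#!/bin/sh
# Push the orphan sweep to the lowest CPU priority so the box stays
# responsive while it grinds through the message store.
PID=$(pgrep -f omscan.server) || exit 0   # nothing to do if it is not running

renice -n 19 -p $PID
```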

