OT: I/O bound?

Denis Beauchemin Denis.Beauchemin at USHERBROOKE.CA
Tue Apr 27 14:38:21 IST 2004


Hi,

One of my servers (Xeon 2.4 GHz, 1.5 GB RAM, 73 GB 10K rpm U160 disk) 
running MailScanner 4.29.7-1 + SpamAssassin 2.63 + sendmail 8.12.10-1 
on RHEL 3 AS (all patches applied) seems to be suffering from heavy 
disk access:
# iostat -dk 5|awk '/^Device/&&p!=1{print;p=1}/^dev/{print}'
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
dev8-0          160.89       132.67       685.66   10411181   53804828
dev8-0          224.34        14.91       941.23         68       4292
dev8-0          246.06        16.28      1056.49         64       4152
dev8-0          190.12        14.46       827.95         60       3436
dev8-0          133.94        11.76       602.71         52       2664
dev8-0          160.34         7.68       720.68         36       3380
dev8-0          156.07        25.24       672.82        104       2772
dev8-0          138.39         6.25       777.68         28       3484
dev8-0          295.57        13.99      1234.50         60       5296
dev8-0          132.68         4.36       574.29         20       2636
dev8-0           74.00         8.39       344.65         40       1644
dev8-0           94.46         8.87       414.19         40       1868
dev8-0          231.74       143.07       824.18        568       3272
dev8-0          176.82        10.91       761.82         48       3352
dev8-0          136.15         3.76       609.39         16       2596
dev8-0          207.82         9.78       880.20         40       3600
dev8-0          171.86       143.28       614.07        672       2880
dev8-0          122.09         2.79       534.88         12       2300
dev8-0          200.27       109.49       766.40        404       2828
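
For what it's worth, the sustained write rate can be summarized from the 
iostat sample above with a small awk filter (skipping the first "dev" 
line, which iostat reports as the since-boot average, not a per-interval 
sample):

```shell
# avg_wrtn: average the kB_wrtn/s column (field 4) of `iostat -dk` output,
# skipping the first device line, which is the since-boot average rather
# than a per-interval sample.
avg_wrtn() {
    awk '/^dev/ { n++; if (n > 1) { sum += $4; cnt++ } }
         END    { printf "%.0f\n", sum / cnt }'
}

# Usage: iostat -dk 5 | avg_wrtn
# Over the sample above this works out to roughly 730 kB/s written.
```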

It reads very little but writes a lot.  I run a caching name server 
(bind, as supplied by Red Hat) and use tmpfs for 
/var/spool/MailScanner/incoming.  Is there anything I can do to make it 
go faster?  Is there a program I can run to figure out which process is 
doing all this I/O?
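
(The 2.4 kernel RHEL 3 ships has no per-process I/O counters, so on this 
box the question has no direct answer; but on kernels that expose 
/proc/<pid>/io — 2.6.20 and later with task I/O accounting enabled — a 
sketch like the following could rank processes by bytes written.  The 
proc root is a parameter only so the logic can be exercised against a 
fake tree:

```shell
# top_writers: rank processes by write_bytes using the per-process I/O
# counters in /proc/<pid>/io.  NOTE: these counters do not exist on the
# 2.4 kernel in RHEL 3; this is a sketch for 2.6.20+ kernels with task
# I/O accounting.  $1 overrides the proc root purely for testability.
top_writers() {
    proc_root="${1:-/proc}"
    for d in "$proc_root"/[0-9]*; do
        [ -r "$d/io" ] || continue
        wb=$(awk '/^write_bytes:/ { print $2 }' "$d/io")
        name=$(awk '/^Name:/ { print $2 }' "$d/status" 2>/dev/null)
        printf '%s %s %s\n' "$wb" "${d##*/}" "$name"
    done | sort -rn | head -n 10
}

# Usage: top_writers          # largest writers first: bytes, pid, name
```
)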

Here is the output of some commands:
# free
             total       used       free     shared    buffers     cached
Mem:       1545056    1456104      88952          0     232744     881236
-/+ buffers/cache:     342124    1202932
Swap:      2048276     309720    1738556
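
(The "-/+ buffers/cache" row is just arithmetic on the "Mem:" row: 
memory actually used by applications = used - buffers - cached.  A 
one-liner to reproduce it from free's output:

```shell
# app_used: memory actually used by applications, i.e. the "used" figure
# from free's Mem: row minus buffers and page cache -- exactly what the
# "-/+ buffers/cache" row reports.
app_used() {
    awk '/^Mem:/ { print $3 - $6 - $7 }'
}

# Usage: free | app_used
# On the sample above: 1456104 - 232744 - 881236 = 342124 kB (~334 MB),
# so applications themselves are not short of memory.
```
)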

# vmstat 5
procs                      memory      swap          io     system         cpu
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 5  0 308612  35000 232736 885600    8    7    57    61  194    24 41 15 31 13
 2  0 308608  74856 232736 885560    0    0     0   786  411  1587 67 24  0  9
 0  0 308608  79408 232736 886624    0    0    13  1028  654  1886 34 17 16 34
 2  0 308608  62488 232740 885744    0    0     6   441  454  3994 62 21  8  8
 3  4 308608  53280 232740 885772    0    0     7  1161  456  1564 55 16 17 12
 0  0 308608  66268 232752 885820    0    0     6   554  399  1643 44 24 14 18
 3  2 308608  75588 232756 886092    0    0     3   550  419  1656 37 19 33 10
 0  2 308556  50620 232784 886068    0    0     1   675  499  2065 40 15 21 25
 1  0 308548  57600 232800 886264    0    0     1   269  269  1199 30 14 51  4
 5  1 308548  49644 232836 886432    0    0     0   399  349  1864 42 15 34  9
 2  0 308548  33224 232884 886832    0    0     3   562  378  1763 53 24 10 13
 6  0 308548  59540 232896 886652    0    0     9   418  271  1260 59 18 13 10
 1  0 308548  22212 232908 886852    1    0     4   606  384  2234 58 20 13  9
 4  0 308544  56104 232660 887224    0    0    12   850  402  3748 35 20 25 21
 2  0 308544  62052 232660 887496    0    0     2   710  372  1457 57 17 15 11
 4  9 308500  71140 232676 887996    2    0    11   876  449  3826 65 33  0  2
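
(The "wa" column can be averaged the same way as the iostat figures — 
again skipping the first row, which vmstat reports as averages since 
boot:

```shell
# avg_wa: mean of the "wa" (iowait) column -- the last field of each
# numeric vmstat row -- skipping the first row, which holds since-boot
# averages rather than a per-interval sample.
avg_wa() {
    awk '/^[[:space:]]*[0-9]/ { n++; if (n > 1) { sum += $NF; cnt++ } }
         END { printf "%.0f\n", sum / cnt }'
}

# Usage: vmstat 5 | avg_wa
# Over the sample above this averages about 13% iowait, though the top
# snapshot below catches the box at 65%.
```
)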

# top
 09:35:19  up 1 day,  1:22,  1 user,  load average: 5.12, 4.80, 5.28
145 processes: 144 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total   23.7%    0.0%   10.6%   0.0%     0.0%   65.0%    0.4%
           cpu00   17.4%    0.0%   12.6%   0.0%     0.0%   68.9%    0.9%
           cpu01   30.0%    0.0%    8.7%   0.0%     0.0%   61.1%    0.0%
Mem:  1545056k av, 1490064k used,   54992k free,       0k shrd,  232876k buff
                   1161456k actv,  265116k in_d,    7252k in_c
Swap: 2048276k av,  308184k used, 1740092k free                  897560k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
23985 root      23   0 44736  31M  2348 S    17.0  2.0   0:00   1 MailScanner
23987 root      25   0  3348 3348  1920 S     3.4  0.2   0:00   1 pyzor
23814 root      23   0  4588 4588  1664 D     2.9  0.2   0:01   1 mailscanner-mrt
    7 root      15   0     0    0     0 SW    2.4  0.0  12:22   1 kswapd
 1726 named     25   0 26668  25M  1568 S     1.4  1.7  19:41   1 named
 4093 root      20   0 44472  22M  2112 S     0.4  1.4   3:26   1 MailScanner
23980 root      21   0 42852  28M  2260 S     0.4  1.8   0:00   1 MailScanner

Thanks!

Denis

-- 
   _
  °v°   Denis Beauchemin, analyste
 /(_)\  Université de Sherbrooke, S.T.I.
  ^ ^   T: 819.821.8000x2252 F: 819.821.8045

-------------------------- MailScanner list ----------------------
To leave, send    leave mailscanner    to jiscmail at jiscmail.ac.uk
For further info about MailScanner, please see the Most Asked
Questions at    http://www.mailscanner.biz/maq/     and the archives
at    http://www.jiscmail.ac.uk/lists/mailscanner.html