<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
On 09/08/2010 23:32, Jules Field wrote:
<blockquote
cite="mid:EMEW3|3ca058362bcde6672ccb012c8e979e54m78NWj0bMailScanner|ecs.soton.ac.uk|4C608207.6080700@ecs.soton.ac.uk"
type="cite"><br>
<br>
On 09/08/2010 18:31, Desai, Jason wrote:
<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">Especially since this guy quarantines
more than 32,000 mail per day.
<br>
Deleting "old" stuff then gives the users less than a day to
<br>
retrieve their blocked mail. That's poor service.
<br>
<br>
His own suggestion of creating hourly catalogs sounded good to me.
<br>
<br>
</blockquote>
Also keep in mind that a directory holding 32000 files will be less
efficient
<br>
to work through.
<br>
<br>
Hugo.
<br>
<br>
<br>
<br>
</blockquote>
</blockquote>
Are you sure there is a limit of 32,000 files in a directory? I think
there is a limit of 32,000 subdirectories. I'm not sure about 32,000
files though.
<br>
</blockquote>
The limit is actually 32,000 links to any inode. And I have hit it too.
It just hasn't caused me a major problem yet.
<br>
Unfortunately, counting the number of entries in a directory is a slow
operation (effectively equivalent to an unsorted "ls"), so I don't want
to do it every time I write a file, which is why I haven't addressed it
before.
<br>
<br>
Can you think of any *fast* ways of overcoming this limit? Other
filesystems such as XFS handle a million or so files in a dir without
breaking a sweat; it's just a severe limitation of ext3 :-(
<br>
<br>
Jules
<br>
<br>
</blockquote>
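<br>
One cheap check, since the limit is on link counts rather than on
directory entries as such: a directory's link count is 2 plus its
number of subdirectories, and stat() returns it in O(1), no directory
walk needed. That only tracks subdirectories, but on ext3 that is where
the hard 32,000 ceiling actually bites (plain files don't add links to
the directory inode, they're merely slow in bulk). Untested sketch, and
the path is only an example of a per-day quarantine layout:<br>
<br>
<code># prints the link count of the day's quarantine dir<br>
stat -c %h /var/spool/MailScanner/quarantine/20100809<br>
</code>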
<br>
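The hourly-catalog idea from earlier in the thread would also do it:
one subdirectory per hour keeps each directory at roughly 1/24th of
the daily volume, comfortably clear of the ceiling. Something along
these lines (the layout is just an illustration, not actual
MailScanner option names):<br>
<br>
<code># hypothetical: file each message under a per-hour directory<br>
hourdir="/var/spool/MailScanner/quarantine/$(date +%Y%m%d/%H)"<br>
mkdir -p "$hourdir"<br>
</code><br>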
Or, if you're brave, convert it to ext4 on the fly with a command like
this:<br>
<br>
<code>tune2fs -O extents,uninit_bg,dir_index /dev/&lt;device&gt;<br>
</code>
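<br>
That alone isn't quite enough, though: the feature flip leaves the
filesystem needing a fsck before its next mount, and you still have to
switch the fstab entry's type to ext4. From what I remember of the
ext4 wiki procedure (do check it first, and run it unmounted):<br>
<br>
<code>umount /dev/&lt;device&gt;<br>
e2fsck -fDC0 /dev/&lt;device&gt;&nbsp;&nbsp;# -D also rebuilds and optimises directories<br>
</code><br>
Bear in mind that files written before the conversion stay in the old
non-extent format, and IIRC the subdirectory ceiling only really goes
away via ext4's dir_nlink feature once you're actually mounting it as
ext4.<br>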
<div class="moz-signature"><br>
</div>
</body>
</html>