quarantine release might lose mail?
Glenn Steen
glenn.steen at gmail.com
Thu Dec 17 12:05:42 GMT 2009
2009/12/17 Frank Cusack <fcusack at fcusack.com>:
> On December 16, 2009 10:49:44 AM +0100 Glenn Steen <glenn.steen at gmail.com>
> wrote:
>>
>> If one has
>> a ... "lazy"... scheme (like only a big / and a very smallish /boot,
>> which is what I'd recommend for most systems these days, due to modern
>> HW and filesystem anatomy), the inode reuse problem is a non-issue.
>
> Do you say that because you expect that with lots of free inodes, that
> the "next" free one is always used (as opposed to, e.g., how open()
> works where the lowest fd is always assigned?)?
No, of course not. The number of free inodes is irrelevant to this
reasoning. One can probably assume that an inode number "returned" to
the pool will be the next one used (as you show below). What matters
is the overall rate of inode consumption... If you have one big
partition for all consumers (of inodes), the chance that a particular
inode will be reused for a postfix queue file lessens. Factor in time,
and the risk of reuse resulting in the scenario your script tries to
protect against... dwindles even further. So even though it is not
easily quantifiable, one can be fairly certain that the released queue
file (which is the one at risk of being overwritten) will pass through
unscathed;-). I haven't checked the release code, but it just might
have a safeguard that entirely deflates the discussion (I'm too busy
to go look! This year-end is more "silly season" than ever!!!).
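A safeguard of the kind speculated about above could be as simple as
fingerprinting the queue file at quarantine time and re-checking it
before release. A minimal sketch (the function names and the
(inode, size, mtime) triple are my own illustration, not actual
MailScanner release code):

```python
import os
import tempfile

def fingerprint(path):
    """Record an identity triple for a queue file: inode, size, mtime."""
    st = os.stat(path)
    return (st.st_ino, st.st_size, st.st_mtime_ns)

def safe_to_release(path, recorded):
    """Only trust the queue file if its fingerprint still matches the
    one recorded at quarantine time; a recycled inode holding a
    different message would fail this check."""
    try:
        return fingerprint(path) == recorded
    except FileNotFoundError:
        return False
```

Even on a filesystem that eagerly recycles inode numbers (as the ext3
session below shows), a reused inode holding different contents would
differ in size or mtime and be caught.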
Notably, when I started using MailScanner/postfix/MailWatch, the
inode/queue file ID reuse rate was very high on my systems, due to a
"bad" partitioning scheme (basically, postfix had its own
partition/filesystem), so ... I got a lot of duplicates in my maillog
table ... since back then, the added "entropy" wasn't there. But even
so, the rate of reuse was never such that it resulted in a problem
*outside* of MailWatch. Jules fixed the logging issue with the entropy
added to the queue ID (very clever solution, that:-), even though I
seemed to be the only one seeing the problem. Well, once I shifted to
a "better"/"lazier" partitioning scheme, I saw the rate drop rather
dramatically. So ... empirical "proof" is on my side here;-);-)
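For what it's worth, the disambiguation idea can be sketched in a few
lines. The function name and the inode-suffix scheme here are my own
hypothetical illustration, not the actual fix Jules made:

```python
import os
import tempfile

def disambiguated_id(queue_id, queue_path):
    """Fold the queue file's inode number into the logged ID, so two
    messages that happen to recycle the same postfix queue ID still
    get distinct keys in a maillog-style table."""
    inode = os.stat(queue_path).st_ino
    return "%s.%X" % (queue_id, inode)
```

Two files that exist at the same time on one filesystem can never
share an inode, so the suffix is enough to keep concurrent duplicates
apart in the log.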
> On zfs and HFS+ it does appear that a new file always gets the "next" inum.
> No idea if that's a guarantee.
>
It probably isn't. Inode numbers will always be a limited resource.
> On ext3, which I imagine is what "most" folks use, that's not the case:
>
> [root at linux ~]# uname -a
> Linux linux 2.6.18-128.el5 #1 SMP Wed Jan 21 10:44:23 EST 2009 i686 i686
> i386 GNU/Linux
> [root at linux ~]# for i in 1 2 3 4 5 6 7 8 9 0 ; do touch m; ls -i m; rm -f m;
> done
> 8184004 m
> 8184004 m
> 8184004 m
> 8184004 m
> 8184004 m
> 8184004 m
> 8184004 m
> 8184004 m
> 8184004 m
> 8184004 m
> [root at linux ~]# df . | grep /dev
> /dev/mapper/VolGroup00-LogVol00
> [root at linux ~]# mount | grep `df . | grep /dev`
> /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
> [root at linux ~]#
>
> So for ext3 I would expect that recycling an inode has a high likelihood.
True. So this is why you'd want there to be "other" consumers...
Look, all I'm saying is that the risk is limited, and depends on a few
factors like the FS/partitioning scheme, time ... and the rate of
messages received, of course... and ... well, look at it this way: No
one has shown that this has ever happened, anywhere. Else someone
would've complained about released messages not coming through
properly ... and someone would've typed up something like your script
to cure that. That it hasn't happened says something;-).
Don't get me wrong, anything that makes the MS/PF combo a better one
is nice, and well received... It's brilliant that you took the time
and elected to contribute!
> -frank
Cheers
--
-- Glenn
email: glenn < dot > steen < at > gmail < dot > com
work: glenn < dot > steen < at > ap1 < dot > se
More information about the MailScanner mailing list