Found nn messages in the processing-messages database
maillists at conactive.com
Mon Apr 20 12:31:19 IST 2009
Glenn Steen wrote on Mon, 20 Apr 2009 12:44:14 +0200:
> Well, then you haven't got a particularly large/lengthy SQL log (like
> for MailWatch), have you?
I regularly purge. The log I'm looking at has about 60,000 messages. How many
messages per day would you need before you start getting duplicates?
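For illustration, a quick birthday-bound estimate. The assumption here is that the "five-letter-word" key is five lowercase letters drawn at random, i.e. a keyspace of 26^5 (about 11.9 million values); the function name and the exact keyspace are mine, not MailScanner's:

```python
import math

# Assumed keyspace: 5 random lowercase letters (26**5 ~ 11.9 million keys).
KEYSPACE = 26 ** 5

def collision_probability(n, keyspace=KEYSPACE):
    """Approximate birthday-bound probability of at least one
    duplicate key among n randomly drawn keys."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * keyspace))

# With ~60,000 messages in the log, a duplicate is all but certain
# under this assumption; even ~4,000 messages already give roughly
# even odds of a collision.
p = collision_probability(60_000)
```

If the real key really is only five random letters, duplicates at this log size would not be surprising at all.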
> If you had, you'd see the problem. Also
> depends on fs, of course.
Is there some documentation at postfix.org on this?
> > And I don't see how changing the algorithm for the five-letter-word would
> > solve the problem with the processing database.
> If you go from totally random to deterministic/file, you'd still get
> the needed uniqueness as well as the same key for the same queue file
> (in the processing DB)... Which seems to be the problem needing to be
Oh, if you use the file itself, yes ... I was thinking of something, uhm,
well, I don't know.
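To sketch what "deterministic/file" could mean here (my own illustration, not MailScanner's actual code): derive the five-letter token from the queue file name instead of drawing it at random, so the same queue file always maps to the same key in the processing DB.

```python
import hashlib

def key_from_queue_file(queue_file_name, length=5):
    """Hypothetical deterministic key: hash the queue file name and
    map the digest bytes onto lowercase letters, keeping the familiar
    five-letter shape."""
    digest = hashlib.sha1(queue_file_name.encode()).digest()
    return "".join(chr(ord("a") + b % 26) for b in digest[:length])
```

The point is only that the same input always yields the same key, which is what the processing-DB lookup would need; whether hashing per message is worth the extra cycles is exactly the performance question below.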
Anyway, the point is that many of us simply won't need this. It just adds
more milliseconds to processing. In the recent past people have already
complained several times about performance. Why hamper performance where it
isn't necessary?
Btw, Julian, you wanted me to remind you that the processing db option should
be set to off by default! :-)
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com