new Botnet plugin version soon

John Rudd jrudd at ucsc.edu
Thu Nov 30 12:06:55 GMT 2006


Things I'm putting into the new Botnet version (which will be 0.5):

1) Someone noticed that some MTAs (specifically CommuniGate Pro) don't 
put the relay's RDNS into the Received headers, so Botnet 0.4 always 
triggered "NORDNS" when run on that MTA.  In the new version, if 
Botnet finds that the relay it's examining has no RDNS in the 
pseudo-header, then the _first_ time it looks it will try to look up 
the relay itself (and store the answer in the pseudo-header if it 
finds one, or -1 if not).  From then on, the other Botnet rules get 
the right answer from that cached value.  This avoids the performance 
problem of "every Botnet rule does 1 or 2 DNS checks" that I tried to 
solve a version or two ago, but it does mean that at least one DNS 
check will be done (by whichever Botnet rule happens to get called 
first) whenever the relay has no RDNS.  This can happen even if you 
have network checks turned off.  If you're concerned about the small 
performance hit, it might be a good idea to run a caching name server 
on the host where Botnet runs.

(I had also considered only doing this if the user set a new config 
option, "botnet_lame_mta_rdns", to 1 ... but I thought I'd try this 
approach first.)
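
Roughly, the lookup-once logic works like this (a simplified sketch; 
the field and helper names here are illustrative, not the plugin's 
actual internals):

    use Net::DNS;

    # Illustrative only: $relay->{ip} and $relay->{rdns} stand in for
    # the fields Botnet keeps in its pseudo-header.
    sub get_relay_rdns {
        my ($relay) = @_;
        $relay->{rdns} = '' unless defined $relay->{rdns};

        # Results cached by whichever Botnet rule ran first:
        return undef          if $relay->{rdns} eq '-1';  # known failure
        return $relay->{rdns} if $relay->{rdns} ne '';    # known answer

        # First rule to ask pays for the single PTR lookup.
        my $res   = Net::DNS::Resolver->new;
        my $query = $res->query($relay->{ip}, 'PTR');
        if ($query) {
            for my $rr ($query->answer) {
                next unless $rr->type eq 'PTR';
                return $relay->{rdns} = $rr->ptrdname;    # cache success
            }
        }
        $relay->{rdns} = '-1';                            # cache failure
        return undef;
    }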


2) As suggested, I've added "botnet_pass_domains": regular 
expressions, anchored to the end of the hostname string, that name 
domains to exempt from Botnet checks.
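
For example, with entries like apple\.com and ucsc\.edu, the matching 
would look roughly like this (a sketch; the actual storage and 
matching in Botnet::parse_config may differ):

    # Sketch: each botnet_pass_domains value is a regex matched
    # against the end of the relay's hostname.
    my @pass_domains = ( 'apple\.com', 'ucsc\.edu' );   # example entries

    sub hostname_passes {
        my ($hostname) = @_;
        for my $dom (@pass_domains) {
            return 1 if $hostname =~ /${dom}$/i;   # anchored at the end
        }
        return 0;
    }

    hostname_passes('badger07006.apple.com');   # 1 -> exempt from checks

One consequence of end-anchoring alone is that apple\.com would also 
match, say, pineapple.com; writing the entry as \.apple\.com avoids 
that, assuming the matching works as sketched here.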


3) I modified the "IP in hostname" check slightly.  It used to allow 
mixed decimal and hexadecimal octets in the hostname, which caused a 
small problem with the following Received header:

Received: from badger07006.apple.com (badger07006.apple.com [17.254.6.173])

("ad" is hexadecimal for "173", and you can see "006" right in there, 
therefore 2 octets are present in the hostname)

To avoid this special case, I no longer put the hexadecimal and 
decimal checks into the same regular expression.  This could, however, 
slightly reduce Botnet's effectiveness, so I'm going to re-evaluate it 
over time.

(note: I have ALSO addressed this by putting apple\.com into the 
botnet_pass_domains example; using botnet_pass_domains or 
botnet_pass_ip might be the better way to handle these special cases 
in the future, but I'm not sure yet)
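
To make the change concrete, here's a hypothetical before-and-after 
with made-up values (not the plugin's real patterns), where the 
hostname contains two octets of the IP but in different bases:

    # Relay IP 10.20.30.173; the hostname contains decimal "173" and
    # hex "1e" (= 30), i.e. two octets, but in two different bases.
    my $host   = 'host173x1e.example.net';
    my @octets = (10, 20, 30, 173);

    my $dec_hits = grep { $host =~ /\Q$_\E/ } @octets;
    my $hex_hits = grep { my $h = sprintf '%02x', $_;
                          $host =~ /\Q$h\E/i } @octets;
    my $mix_hits = grep { my $h = sprintf '%02x', $_;
                          $host =~ /(?:\Q$_\E|\Q$h\E)/i } @octets;

    # Old behavior (one mixed check):  $mix_hits >= 2   -> flagged
    # New behavior (separate checks):  $dec_hits >= 2 || $hex_hits >= 2
    #                                  -> one hit per base, not flagged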


4) I've added "mx" to the included botnet_serverwords.  Technically 
this alone would exempt the eBay hosts that use "mxpool", so eBay 
wouldn't need a botnet_skip_domains entry ... but I made such an entry 
for eBay anyway.  I'm not sure yet whether "mx" is a good idea to have 
in botnet_serverwords, though.
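
For a sense of what that does in practice (an illustrative sketch; the 
real list and matching live in the Botnet config and code):

    # Sketch: a relay whose hostname contains a "server-ish" word gets
    # treated as a legitimate mail server rather than a botnet client.
    my @serverwords = qw(smtp mail mx);   # example list, now with "mx"

    my $host = 'mxpool01.ebay.com';
    my $looks_like_server = grep { $host =~ /\Q$_\E/i } @serverwords;
    # "mx" matches "mxpool01", so this relay would be exempted.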


5) In the past, the example botnet_skip_ip config only had the 127.* 
localhost IP address block and the 10.* private IP address block. 
From a suggestion I received, I've added the other two private IP 
blocks as well ( 192.168.* and 172.(16-31).* ).
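
Since 172.16-31 is the awkward range to express, here's a sketch of 
the four blocks as anchored regexes (the exact form in the shipped 
example config may differ slightly):

    # Sketch: dotted-quad prefixes for loopback plus the three
    # RFC 1918 private ranges, as anchored regexes.
    my @skip_ip = (
        qr/^127\./,                           # loopback
        qr/^10\./,                            # 10.0.0.0/8
        qr/^192\.168\./,                      # 192.168.0.0/16
        qr/^172\.(?:1[6-9]|2[0-9]|3[01])\./,  # 172.16.0.0/12
    );

    my $ip = '172.20.1.5';
    my $skipped = grep { $ip =~ $_ } @skip_ip;   # true: 172.20.* is private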


I have two questions:


Question 1: Someone suggested that, for botnet_pass_domains, I not 
re-invent the wheel.  SA already has several whitelist options 
(whitelist* and sare_whitelist* were specifically mentioned), and they 
suggested that I leverage those.  My first (two-part) question is:

a) do any of them have a score too small to counter Botnet's default 
score of 5?  Meaning, if I "do nothing" with respect to those other 
whitelist mechanisms, they'll still "do the right thing" and let the 
whitelisted hosts through, right?

b) clearly I've gone ahead and done botnet_pass_domains ... but part of 
me wants to "do both".  So what is the right way to have Botnet 
recognize those other host/domain whitelisting mechanisms?

I have no idea what the sare_whitelist entries look like, but I was 
thinking maybe I could take the whitelist_from argument, the 2nd 
argument to whitelist_from_rcvd, and maybe the whitelist_from_spf 
argument, and munge them into a domain name to exempt (a rough sketch 
of that munging is below).  The catch is: if I do that, shouldn't I 
_also_ recognize the unwhitelist_* configs?  That starts to get a bit 
hairy, IMO.
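
To make the munging idea concrete, something like this could turn 
those arguments into end-anchored pass-domain regexes (purely 
hypothetical; it isn't wired into Botnet::parse_config, and it ignores 
the unwhitelist_* problem entirely):

    # Hypothetical helper: derive a domain regex from a whitelist_from
    # argument, the 2nd whitelist_from_rcvd argument, or a
    # whitelist_from_spf argument.
    sub whitelist_arg_to_domain {
        my ($arg) = @_;
        $arg = lc $arg;
        $arg =~ s/^.*\@//;       # user@domain        -> domain
        $arg =~ s/^\*\.?//;      # *.domain / *domain -> domain
        return quotemeta $arg;   # safe to use end-anchored: /${re}$/
    }

    whitelist_arg_to_domain('*@example.com');      # 'example\.com'
    whitelist_arg_to_domain('rcvd.example.net');   # 'rcvd\.example\.net'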

For now, I'm not going to go down this path ... but I'm interested in 
people's opinions about whether or not I should recognize the 
whitelist*, sare_whitelist*, and unwhitelist* config options and 
somehow incorporate them into botnet_pass_domains.  I'd also welcome 
code snippets compatible with the code I already have in 
Botnet::parse_config.  My main hope, though, is that the scores for 
those mechanisms are already negative enough that they override Botnet 
anyway.  Given that the ones in the base SA are scored at -6, -15, or 
-100, I think that's a comfortable assumption on my part.  I don't 
know whether sare_whitelist fits into that range or not, though.

(for similar reasons, I'm currently not going to make the BOTNET meta 
rule's expression more complicated with references to DK and DKIM; the 
DK rules in the base SA are scored at -100 and -7.5, which seems 
useful enough to me.  But I might look at putting in alternate meta 
rule expressions that are commented out, if people really want me to; 
that way people could just comment and uncomment whichever expression 
seems most appropriate for their situation)



Question 2: Someone asked why my module is "Botnet" instead of 
"Mail::SpamAssassin::Plugin::Botnet".  The answer is: when I first 
started this (and this is/was my first attempt at writing an SA 
plugin), I tried that and it didn't work.  If someone wants to look at 
it and figure out how to make that work (while still keeping the files 
in /etc/mail/spamassassin), I would happily incorporate it.
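
If it helps anyone who wants to try: my understanding is that the 
usual recipe looks roughly like this (a sketch, not verified against 
Botnet itself):

    # In Botnet.pm (which can stay in /etc/mail/spamassassin):
    package Mail::SpamAssassin::Plugin::Botnet;

    use strict;
    use Mail::SpamAssassin::Plugin;
    our @ISA = qw(Mail::SpamAssassin::Plugin);

    # ... plugin body ...

    1;

    # And in the .cf file, give loadplugin an explicit path so the
    # module doesn't need to be on @INC:
    #
    #   loadplugin Mail::SpamAssassin::Plugin::Botnet /etc/mail/spamassassin/Botnet.pm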



Last, someone offered to host this if I needed it.  I appreciate the 
offer.  I may decide to bite the bullet and host this on SourceForge 
at some point (assuming, say, the SA team doesn't like it enough to 
include it in their standard examples) ... but for now my existing 
location is working fine.


I expect to release the new version over the coming weekend.

