[rbldnsd] Problems configuring BIND 9 with rbldnsd
Chris Gabe
chris at borderware.com
Tue Sep 5 20:45:20 MSD 2006
We would like to use rbldnsd to help scrape URLs for our zone file
providers. Our rbldnsd setup serves several DNSBLs and URL block lists
at once, so using a log of *all* queries to extract the ones of
interest is problematic. It would be better if rbldnsd provided the
following capabilities:
- log each query directed at one of a specified set of block lists,
where the queried name is not present in any of them. We don't want
names that are already in any of the lists, and only queries to the
specific lists of interest should be logged; queries to lists where
this isn't appropriate, like ip4sets, should be excluded.
- support very frequent updates, which are needed to fight the quick
DNS games spammers play. E.g. log to a separate file each minute,
wrapping back to the first file after, say, 60 minutes.
Or some equivalent that lets us efficiently process the output every
minute, i.e. sort | uniq it and submit the result to the URL block list
server for examination and potential inclusion. A rough sketch of what
we have in mind follows.
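To make that concrete, here is a minimal Python sketch of the
per-minute consumer we imagine. Everything in it is an assumption on
our part: the per-minute file names (query.MM.log under
/var/log/rbldnsd), the position of the queried name in each log line,
and the submit_to_blocklist_server() placeholder describe what a
future logging feature might produce, not anything rbldnsd does today.

#!/usr/bin/env python
# Hypothetical consumer of per-minute rbldnsd query logs.
# Assumptions (this is the feature being requested, not current behaviour):
#   - rbldnsd writes one file per minute, query.00.log .. query.59.log,
#     wrapping back to query.00.log after 60 minutes;
#   - the queried name is the second whitespace-separated field of each line.

import time

LOG_DIR = "/var/log/rbldnsd"   # assumed location

def minute_file(minutes_ago=1):
    """Path of the per-minute log file completed `minutes_ago` minutes ago."""
    minute = time.localtime(time.time() - 60 * minutes_ago).tm_min
    return "%s/query.%02d.log" % (LOG_DIR, minute)

def extract_names(path):
    """The sort | uniq step: unique queried names from one per-minute file."""
    names = set()
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2:
                names.add(fields[1])
    return sorted(names)

def submit_to_blocklist_server(names):
    """Placeholder for handing candidates to the URL block list server."""
    for name in names:
        print(name)

if __name__ == "__main__":
    submit_to_blocklist_server(extract_names(minute_file()))

Run from cron once a minute, something like that would give the URL
block list server a fresh batch of candidates with only a minute or
two of lag.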
We could work around this by keeping track of our position in the
syslog file, I suppose. Or run newsyslog every minute (blech).
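The position-tracking workaround would look roughly like the sketch
below; the file paths are assumptions about the local setup, and the
rotation handling is deliberately naive.

# Sketch of the "remember where we were in the syslog" workaround.

import os

LOG = "/var/log/rbldnsd.log"               # assumed syslog destination
STATE = "/var/run/rbldnsd-scrape.offset"   # where we remember our position

def read_new_lines():
    """Return log lines appended since the previous run."""
    try:
        offset = int(open(STATE).read())
    except (IOError, ValueError):
        offset = 0
    with open(LOG, "rb") as fh:
        fh.seek(0, os.SEEK_END)
        if fh.tell() < offset:             # file shrank: rotated, start over
            offset = 0
        fh.seek(offset)
        lines = [line.decode("ascii", "replace") for line in fh]
        new_offset = fh.tell()
    with open(STATE, "w") as fh:
        fh.write(str(new_offset))
    return lines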
It is both inefficient and problematic to do this using the existing -l
feature. It would require re-querying the server for every logged name
to see whether it is already in any of the URL lists, which creates a
loop.
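For illustration, the membership check we would have to run for each
logged name would be along these lines (the zone name is invented for
the example, and the system resolver is used purely for simplicity):

# Illustration of the re-query step we would rather avoid: for every name
# pulled out of the -l log, ask DNS whether it is already listed in one of
# the URL block list zones.

import socket

URL_ZONES = ["url.bl.example.com"]   # assumed zone names

def already_listed(name):
    """True if `name` is present in any of the URL block list zones."""
    for zone in URL_ZONES:
        try:
            socket.gethostbyname("%s.%s" % (name, zone))
            return True              # got an answer back: it is listed
        except socket.gaierror:
            continue                 # NXDOMAIN / no data: not in this zone
    return False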
Is this something others are interested in?
I know we could run multiple rbldnsd instances to get what we need, but
that would get pretty ugly.
Michael Tokarev - does this have merit or do you see it as unacceptably
beyond scope?