[rbldnsd] Problems configuring BIND 9 with rbldnsd

Michael Tokarev mjt at tls.msk.ru
Sun Jun 11 15:08:39 MSD 2006


Benny Pedersen wrote:
[]
>> With "plain" gzip, rsync will be almost as good as http/ftp/whatever
>> "plain" download mechanism you use (rsync being an "advanced" mechanism).
>>
>> Because even very small change in original data changes compressed stream
>> *alot*, so rsync is just unable to find any unchanged pieces and hence
>> acts just like plain (but very CPU-hungry) http.
> 
> -z, --compress              compress file data during the transfer
>      --compress-level=NUM    explicitly set compression level
> 
> it can be turned off
> 
> it makes sense to me, if one rsyncs an already compressed file, to
> disable this compression in rsync

Well, the two (rsync's -z option and compressing the original data)
are orthogonal to each other.  It makes good sense to rsync
*uncompressed* data, probably with -z.  But it makes very little
sense to rsync already-compressed data, with or without -z, for the
reason above.
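
For example (host and module names here are made-up, just for
illustration):

    # sensible: transfer the uncompressed file and let -z
    # compress the wire traffic:
    rsync -z rsync.example.org::dnsbl/zones.txt ./zones.txt

    # pointless: the file is already gzipped, so -z gains
    # nothing, and the delta algorithm finds no common blocks:
    rsync -z rsync.example.org::dnsbl/zones.txt.gz ./zones.txt.gz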

Rsync is good if you have one (previous) version of the file locally
and want to bring it up to date, i.e. make it identical to what is
currently on the remote site, and the two versions differ only
slightly.  This is the most effective use of rsync: it can find the
pieces of the file which weren't changed and transfer only the
changed pieces, thus saving bandwidth etc.  But if the two versions
are very different, with no common parts, rsync will transfer the
whole file anyway (because "everything is changed"), while at the
same time using quite a lot of CPU (and some extra bandwidth) trying
to find unchanged/common parts which don't exist.
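
You can watch this at work with --stats (same made-up host as above):

    rsync -z --stats rsync.example.org::dnsbl/zones.txt ./zones.txt

In the summary it prints, "Matched data" is what rsync found
unchanged in the local copy and did *not* transfer, while "Literal
data" is what actually went over the wire.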

Again, if you change even a single *bit* in the new version of the
file and compress it, the two versions of the *compressed* data will
be very different (unless special measures are taken, like using that
--rsyncable patch for gzip).
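
This is easy to see for yourself with a couple of throwaway files:

    seq 1 100000 > a.txt
    sed 's/^50000$/50001/' a.txt > b.txt    # change one line
    gzip -c a.txt > a.txt.gz
    gzip -c b.txt > b.txt.gz
    cmp -l a.txt.gz b.txt.gz | wc -l        # count differing byte positions

Nearly every byte after the point of the change differs, so rsync's
block matching has (almost) nothing left to reuse.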

Using rsync to download a file from scratch (as opposed to bringing a
local copy up to date) *increases* both bandwidth usage (slightly)
and CPU usage (significantly) compared to HTTP or FTP.  And if you're
trying to bring a local copy up to date but the two versions are very
different (so there are no common parts), rsync becomes even more
CPU-hungry.

So there are several choices which make some sense:

 o using rsync to bring up to date an already existing local
   copy of the *uncompressed* file, either with or without -z;
   a variant of this case is to use gzip --rsyncable (which does
   not always work well, btw) -- see the sketch after this list

 o using http/ftp to download the compressed data, ignoring any
   local version

 o and as an alternative, using rsync to download *uncompressed*
   data much like http/ftp, but with the -z option so the data
   gets compressed in transit.  The same can be achieved with an
   "advanced HTTP" method -- some HTTP servers and clients can
   apply the deflate compression method (the same one used by
   gzip, -z, zip and a lot of other software) on the fly; again,
   see below.
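
A sketch of the first variant, assuming your gzip carries the
--rsyncable patch (Debian's does; stock gzip may not):

    # on the server: compress in an rsync-friendly way
    gzip --rsyncable -c zones.txt > zones.txt.gz

    # on the client: the compressed file now rsyncs tolerably
    # well (no -z -- the data is compressed already):
    rsync rsync.example.org::dnsbl/zones.txt.gz ./zones.txt.gz

And for the "advanced HTTP" variant, curl for one can request
on-the-fly gzip/deflate with a single option (made-up URL, and the
server has to support it):

    curl --compressed -o zones.txt http://dnsbl.example.org/zones.txt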

Everything else makes little sense.

[]
> so wget can sometimes be a better bandwidth saver than rsync?

Definitely.
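
A plain one-shot fetch of the pre-compressed file spends no CPU on
delta searching at all (made-up URL):

    wget http://dnsbl.example.org/zones.txt.gz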

/mjt

