hammer: System has insufficient buffers to rebalance the tree.

If you try to use lower-specced machines for running HAMMER file systems, chances are good that you have already seen this error message:

  hammer: System has insufficient buffers to rebalance the tree.  nbuf < 3969

The precise number at the end varies, but what the error message is trying to tell you is always the same, and it is best explained in the manpage:


      hammer: System has insufficient buffers to rebalance the tree.  nbuf < %d
      Rebalancing a HAMMER PFS uses quite a bit of memory and can't be done on
      low memory systems.  It has been reported to fail on 512MB systems.
      Rebalancing isn't critical for HAMMER file system operation; it is done
      by hammer rebalance, often as part of hammer cleanup.

So basically, this tells me that my server doesn’t have enough RAM to perform this specific part of the HAMMER cleanup task. I have seen this message on machines with 2 GB of memory, and of course on machines with less.

The error hinges on the kernel’s nbuf value, a rather ill-explained tunable that has to be set from the boot loader via /boot/loader.conf. There is an awful lack of explanation of what exactly this variable is supposed to represent; /boot/defaults/loader.conf tells us that setting kern.nbuf would set the number of buffer headers, whatever that means.

In theory, it’s as easy as following the error message to the letter: it complains about having fewer than 3969 buffer headers, hence you’d set kern.nbuf=3969 and you’d be set.
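As a sketch, the literal-minded fix would be a one-liner in /boot/loader.conf; the exact number is whatever the error message on your machine complains about, and as argued below, hard-wiring it has its own problems:

```shell
# /boot/loader.conf -- naive fix: pin the buffer-header count to the
# minimum that the error message asks for (3969 is the value quoted
# above; substitute the number from your own dmesg).
kern.nbuf="3969"
```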

The defaults are set for a reason

I’m not alone with this question, though there haven’t been that many answers; a few similar questions have come up on the mailing list.

But in the BSD world, every knob that is exposed is usually there for a purpose. I can’t find the precise message right now, but on the OpenBSD mailing list there once was a discussion about a particularly bad HOWTO collection which routinely had users and rookie administrators fiddle with various sysctls, such as the TCP receive and send buffers, without any concern for why they were set to their default values in the first place.

This is why I have a very uneasy time fiddling with this precise value. It is tuned to the amount of physical RAM in your machine: on Nur-Ab-Sal, which has 4 GB, it defaults to something in the neighbourhood of 6700. On the first specification of Rhaal, at ¾ GB, it was set somewhere south of 1300. If I hard-wire this setting, it will no longer ‘grow’ with my machine; and since presumably a whole lot of other parameters scale with nbuf, I have the chance to inadvertently diminish my whole system.

Trying to set it to some ridiculous value, say 60000, apparently is safe. The value is capped to a maximum very early in the boot sequence:

  real memory  = 2146020352 (2046 MB)
  Warning: nbufs capped at 15908 due to physmem

Is this still safe? The (much lower) default value presumably was chosen for a reason, most likely because it was the safest choice that still offers a minimum level of performance. Setting kern.nbuf that high seems to wire too much memory for HAMMER operation, which makes sense, since the knob is turned up to eleven. This in turn makes the system swap quite soon, further increasing the load and, I presume, causing some mysterious crashes under heavy I/O load. This is what you get for voodoo administration: poking at something with pointed objects when you don’t know where you will end up has never been the brightest of ideas, and that applies here as well.

Also, where does that number 3969 come from? Does the number of required buffer headers scale with the size or number of HAMMER file systems in use? It seems weird that rebalancing a 20 GB file system would need 4 GB of RAM.

I’m still investigating. Of course, I could simply stuff the machine with as much RAM as I could spare, but the stated goal was to spec it as low as possible. If I don’t need 16 GB of RAM, then I don’t want to pay for it unless that need manifests; I can still turn up the dial then. Also, HAMMER is said to be much more resource-conservative than ZFS, and I can vouch that this is indeed the case.

The next time I reboot the machine, I’ll set the value just as high as necessary for hammer rebalance to still work. Perhaps I’ll find out more in the meantime. As a last resort, there’s always RTFS.

Read the fucking source

FIXME I should have done this right from the start. /usr/src/sys/vfs/hammer/hammer.h contains the answer, but I still have to investigate a little further. The relevant lines are around 969–976:

 /*
  * Minimum buffer cache bufs required to rebalance the B-Tree.
  * This is because we must hold the children and the children's children
  * locked.  Even this might not be enough if things are horribly out
  * of balance.
  */


uganádarom-luun/hammer-rebalance.85.txt · Last modified: 2016-05-11 23:33 (4 years ago) by Stefan Unterweger