3 Replies to “32 terabytes of swap on DragonFly”

  1. I love it. I’d love even more to see DragonFly support 256 TB of swap like Windows does. That’d be incredible, but it’d probably take a lot of work too.

  2. My guess is that 32 terabytes of swap per machine is probably sufficient for now.

    The caveat with this kind of cache is that if a single machine suffers a power-loss event, its cache is blown and has to be replenished. When the power comes back on in a busy server environment, there will be tremendous stress on the server’s (typically slower) spinning-disk backend as users access files that are no longer in the cache.

    In a clustered environment, you could probably replicate that swap cache across the other machines in the cluster. If one machine goes down, the others can continue to serve files from hot cache until the machine that went down is brought back into service.

    This, IMO, is where HAMMER2 might really show its muscle over other filesystems: it would make it trivial to keep terabytes of cache clustered across a bunch of inexpensive machines.

  3. At any rate, DragonFly’s swapcache seems to be a more accessible cache upgrade than ZFS’s L2ARC. ZFS needs additional RAM to manage the data cached out to L2ARC (even with compression). If the math is correct, a 500 GB SSD used as L2ARC would completely saturate the RAM of a ZFS storage system that has 8 GB of RAM.

    With DragonFly’s swapcache, the system would use roughly half of that (about 4 GB) to manage the same amount of swap.
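The L2ARC side of that claim can be sanity-checked with some back-of-envelope arithmetic. This is only a sketch: it assumes roughly 180 bytes of in-RAM ARC header per L2ARC record and an 8 KB record size, figures that vary considerably across ZFS releases and workloads.

```python
GB = 1024 ** 3

def l2arc_header_ram(cache_bytes, record_size=8 * 1024, header_bytes=180):
    """RAM consumed by in-core L2ARC headers for a cache of the given size.

    Assumes one fixed-size header per cached record; both the header size
    (~180 bytes here) and the record size are assumptions, not fixed
    constants of ZFS.
    """
    records = cache_bytes // record_size
    return records * header_bytes

# Header RAM needed to track a 500 GB L2ARC device, in GB.
overhead = l2arc_header_ram(500 * GB)
print(f"{overhead / GB:.1f} GB of RAM for a 500 GB L2ARC")
```

Under these assumptions the headers alone come to roughly 11 GB, which would indeed exceed the 8 GB of RAM in the example system; with larger record sizes or smaller headers the overhead drops proportionally.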

    Some data points from Oracle:
