7 Replies to “5.4.1 release within a week”

  1. Well, it was based on the Illumos version before and now they’re switching to a different source. I can see why, for maintenance reasons.

    A meta-argument I have with that is it sets a bar – ZFS on FreeBSD can’t be better than ZFS on Linux, because it’s the same code base. ZFS was (reputedly) better on FreeBSD than most anywhere else for quite a while, and was a good selling point. Now it won’t be. Of course, I can complain like that all I want but if this is what it takes for up to date maintenance, that’s what it takes.

  2. I am a prolific user of FreeBSD’s ZFS and ZFS on Linux (ZFS on Linux at work, and FreeBSD ZFS both personally and for work), and I’d argue that in general, as a practical matter, ZFS on FreeBSD still has operational advantages over ZFS on Linux unless you are manually building the latest ZoL release candidates or using niche distros.

    The ZFS you get in FreeBSD 12 is newer than what is in Ubuntu’s ZFS on Linux packages for 16.04 and 18.04. Debian’s official ZFS on Linux packages move extremely slowly, and it takes a long time for bug-fix releases to become available. Also, FreeBSD ZFS is part of the base system, while on Linux it generally isn’t well supported in installers and other tools, if at all.

    I’m glad FreeBSD and ZFS on Linux are synchronizing; I see it being good for everyone. But ZFS on Linux is still pretty messy for the average user or for enterprise use, and behind FreeBSD in a lot of ways, even though the ZoL source repo has been pulling ahead.

  3. I agree. I have high hopes for HAMMER2. I’ve been using DragonFly more and more for personal testing and use.

    Ultimately I think the reasons someone would want to use FreeBSD, DragonFly, OpenBSD, etc. go beyond a single feature; it is the system as a whole (and of course licensing plays a part as well).

  4. I have made progress on HAMMER2. Still no clustering yet, but I’m getting closer to being able to do it. The 5.4.1 release has an important update to the filesystem sync code and is able to make filesystem consistency guarantees on snapshot or crash (nlinks vs number of directory entries) that I could not make before. Before this update, HAMMER2 could only guarantee that the meta-data topology would not be corrupt; directory updates vs inode creation or deletion could end up in different sync groups. With the update, the directory–inode dependency is properly handled.

    Also, this update significantly improves filesystem sync performance. In particular, front-end operations (running programs that modify the filesystem) can now continue to run and the kernel can even continue to flush portions of the buffer cache for those new operations to disk while a prior filesystem sync is still in progress. That’s a pretty big deal for performance.

    So those are the two big improvements. Clustering and redundancy are still a work in progress and won’t happen for a long time. I might be able to get redundancy working first, but it will still be a while. I can’t even begin to promise a time frame.

    -Matt
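
The dependency handling Matt describes — a directory-entry update and the inode it references must flush in the same sync group — can be illustrated with a toy union-find sketch. This is not HAMMER2’s actual implementation, and all names here (`SyncGroups`, the operation strings) are hypothetical; it only shows the general idea of merging dependent operations into one group so a sync can never split them:

```python
# Conceptual sketch only (NOT HAMMER2 code): group dependent filesystem
# operations so that a directory-entry update and the inode it references
# always land in the same sync group.

class SyncGroups:
    """Union-find over pending operations; dependent ops share one group."""

    def __init__(self):
        self.parent = {}

    def add(self, op):
        # A new operation starts in its own singleton group.
        self.parent.setdefault(op, op)

    def find(self, op):
        # Follow parent links to the group representative (with path halving).
        while self.parent[op] != op:
            self.parent[op] = self.parent[self.parent[op]]
            op = self.parent[op]
        return op

    def depend(self, a, b):
        """Record that a and b must be flushed in the same sync group."""
        self.add(a)
        self.add(b)
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb  # merge the two groups


groups = SyncGroups()
# Creating a file touches both the new inode and its directory entry;
# recording the dependency keeps them in one sync group.
groups.depend("create-inode:1042", "dirent-add:/home/user/newfile")
assert groups.find("create-inode:1042") == groups.find("dirent-add:/home/user/newfile")
```

An unrelated operation added separately would stay in its own group, which is what lets a prior sync complete while new front-end work keeps accumulating in later groups.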

Comments are closed.