18 Replies to “DragonFly 3.2.1 is released!”

  1. I was just reading this http://www.dragonflybsd.org/performance/ article. Even though I appreciate DragonFly’s performance gains, I found it rather shameful that they benchmarked bleeding-edge BSDs against an almost three (!) year old Linux kernel. Since 2.6.32 was released, many scalability bottlenecks inside the kernel have been removed (see, for example, the lseek scalability work).

    I also disliked this comment: “DragonFly performs better than linux at lower concurrencies and the small performance hit at high client counts was given up willingly to ensure that we maintain acceptable interactivity even at extremely high throughput levels.”

    This suggests to the reader that Linux cannot maintain acceptable interactivity while DragonflyBSD can. But this was not tested!

    This is typical of BSD people. You keep boasting about how superior you are to Linux, when in reality you are years behind, both performance-wise and (more importantly) implementation-wise. But keep telling yourself how great you are. I think you are at a stage where you actually believe it ;-)

  2. I can’t agree with “this is typical of BSD people”. My experience is the exact opposite.
    I find BSD has more “torque” at high loads than Linux. When ripping a DVD, compiling code and downloading stuff with both BitTorrent and wget, all at the same time, Debian becomes slow and unresponsive on my computer. FreeBSD and NetBSD just keep running, sharing their resources evenly.

    I’m very happy about this release! It works like a charm on my primary workstation.
    Thanks to all the developers for a very nice OS and a lot of cutting-edge features (HAMMER, swapcache, the LWKT scheduler). I think DragonFlyBSD is the most progressive open-source project out there today!


  3. This does not address my comment about benchmarking bleeding-edge BSDs against an almost three year old Linux kernel.

  4. Max – Scientific Linux 6.2 was the most recent release of Scientific Linux until about two months ago. What Linux distribution ships a kernel (in a release, not a testing version) that you would consider recent enough?

  5. @Max Power:
    I find extreme exaggeration typical of Linux users. As you might imagine, the benchmarks have been going on for longer than two months, so Scientific Linux was the current release at the time, as Justin said.

    Also, somebody might interpret the phrase you pointed to as meaning DragonFly’s numbers could have been higher, but that it used different code to improve interactivity. Nowhere does it comment on Linux’s interactivity, but you read what you want into it.

    And finally, I don’t see any claim to “superiority”. SciLinux was used as the baseline, so clearly it was considered superior for this test environment, and while the pre-24-client curves are better, the post-24 curves are lower. There is still room to improve.

    The next set of benchmarks will almost certainly use Scientific Linux 6.3.
    I always find it remarkable how Linux fanbois bash *BSD on each release. If they really hate them so much, why does it blip on their radar at all? Is it that hard not to comment?

  6. Too bad you can’t publish benchmarks of Solaris (the license states you can’t »disclose results of any benchmark test results related to the Programs without our prior consent.«).

    @Max P. »Do not trust any statistics you did not fake yourself.« Feel free to run your own benchmarks. Maybe then you’d have a little more to say.

    Other than that, great release, keep it up!

  7. @Justin
    Why do you care so much about a release version? The author of the benchmarks used test versions of all the BSDs. What would have been so difficult about installing, say, Debian testing?

    The comparison just isn’t fair and I cannot believe that you’re just unwilling to accept that.

    What would you say if I did a test between a recent version of Firefox and a three year old version of Chrome and then came to the conclusion that Firefox outperforms Chrome in JavaScript benchmarks and implemented more standards? That might tell me something about the improvements Firefox made but in its context, this test would be absolutely meaningless.

    I do not doubt the test results. I think highly of Matt Dillon and his fellow developers. I just find the comparison unfair and therefore meaningless.

  8. Max – If you want whatever revision of the Linux kernel is most recent, that’s fine. You were complaining it was too old; I didn’t want to seesaw the other way and have someone complain it was too new. The DragonFly 3.2 kernel had several months of testing done at the point these benchmarks were taken.

    I say all this glibly, but I don’t have access to the system used for benchmarking, so I’m volunteering someone else’s time and planning here, of course.

  9. The fact that a stable server-grade release of Linux was used might actually have played to its advantage. Newer, less-tested kernels might have regressions for this test (although that’s unlikely).

    I think the main point is that the scaling curves for both DF and Linux are now near-linear, while the other BSDs are now on the defensive.

  10. @Justin

    But why are there different standards for the BSDs and Linux? Why did the author use test releases of FreeBSD and NetBSD but not of Linux? That’s my main argument. And to be honest, your argument that a recent kernel might be too new is rather weak. Apparently that wasn’t a problem for FreeBSD and NetBSD, whose test versions couldn’t have been vetted the way the DragonFlyBSD kernel was.

    @Petr Janda

    A recent kernel might very well have a performance regression. It might also have a performance gain. The fact is that we won’t know until we have tested it.

    “I think the main point is that the scaling curve for both DF and Linux is now near linear while other BSDs are now on the defense.”

    DragonFly’s scaling curve is now linear. Linux’s has been for years.

  11. Max – Francois Tigeot performed the tests, and used the current release of Scientific Linux, which is a reasonable thing to do. The test could be performed on a newer Linux kernel, which is also a reasonable thing to do. There’s not much else to argue about past that, at this point.

  12. Re: old Linux kernel

    To put this to rest, can someone rerun the test with the latest development release of Ubuntu?

  13. I think that SL 6.2/CentOS 6.2/RHEL 6.2 (or whichever is the current release) is a good baseline for benchmarking, because that is what most enterprises will use in a production environment. I’m not sure why you are so upset, Max, that the DF project is using a production release of Linux to improve its code base. Benchmarking against Fedora or Ubuntu (except the LTS releases) would be pointless. You need a solid baseline for benchmarking, and I think SL 6.2 is a good one. It doesn’t mean that Linux sucks or that DF/FreeBSD/NetBSD are better. It’s just a benchmark for the DF project to improve its code. You are reading far too much into this.

  14. @ fbar

    I’m no Linux fanboy, but the reason people want to see a newer Linux kernel tested is that it just fixed a huge lock-contention problem with lseek that directly affects Postgres.

  15. @Rick – I think the lseek fix improves Linux/PostgreSQL performance mainly with > 24 cores.

    Unfortunately I don’t think we have a 64 core server to benchmark either.

  16. As stated before, Scientific Linux was not chosen as competition, to show “see, we are better than SL”, but as a pointer to what is considered “good” on the market as of the time of the benchmarks. The same is true for all other OSes in the benchmark, whether release or master/head versions – that’s just not the point or the intention. Our goal is to improve DragonFly in our own way. If anyone wants to know how current versions of the Linux kernel compare to DragonFly 3.2 in pgbench on concurrent read-only all-in-RAM workloads, set up a benchmark system and invest the dozens of hours of your own time to do it. It is not in our interest, as the benchmarks were meant as a way to track our improvements over time – during ongoing development.

Comments are closed.