9 Replies to “nginx and CPU affinity on DragonFly”

  1. This might be a really stupid question, but what exactly is CPU affinity?

  2. nginx’s implementation of reuseport + ‘worker_cpu_affinity auto’ makes listen socket processing completely CPU-localized, i.e.

    listen socketX (assigned to cpuX by the kernel) is assigned to workerX, and workerX binds to cpuX (where X is the CPU number).

    With the upcoming ncpus netisrs, this should help a lot, probably latency-wise.
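
    For reference, a minimal nginx.conf sketch of that setup (the two directives are the ones named above; the port and the worker_connections value are just placeholders):

        # main context: one worker per CPU, each pinned to its own CPU
        worker_processes     auto;
        worker_cpu_affinity  auto;

        events {
            worker_connections  1024;
        }

        http {
            server {
                # 'reuseport' gives every worker its own listen socket
                # (SO_REUSEPORT); on DragonFly the kernel assigns each of
                # those sockets to a CPU, so workerX ends up handling the
                # connections that land on cpuX
                listen 80 reuseport;
            }
        }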

  3. Why do so many top sites that are highly network-constrained use FreeBSD instead of DFly?

    E.g. WhatsApp, Netflix, etc.

  4. Anonymous, I could list a few reasons.

    1) FreeBSD has a developer community that is several orders of magnitude larger. This critical mass means there is much more available talent to draw on when something needs to be fixed or worked on. DragonFly’s developers are extremely talented, but there aren’t enough of them. If one of DragonFly’s star developers decides to do something else at some point, that could be a huge blow to the ecosystem. Heaven help us if Matt Dillon decides to go and bake cakes full time instead of working on DragonFly.

    2) FreeBSD publishes regular updates to its forward development plans, i.e. we already know what to expect from FreeBSD 12, and FreeBSD 11 was released just a short while ago. When choosing an OS for a production server in a commercial environment, you kind of want to know what the future plans for that OS are.

    3) License: FreeBSD is generally GPL-free (Clang is used instead of GCC as the base compiler), and this matters to a *lot* of commercial outfits who don’t want to deal with viral license issues. DragonFly is working on getting Clang up and running, but it’s not quite there yet from what I’ve read.

    4) FreeBSD’s documentation is excellent, massive, and current. This is an area that is lacking under DragonFly, but again, if there were a larger community this could be sorted out. To be fair, DragonFly’s man pages are quite good.

    Basically, if you are looking for an ultra-high-traffic server, there are only two BSDs that are really viable at this point: FreeBSD and DragonFly BSD. I remember reading that Netflix was able to push 100 Gbps of traffic from a single FreeBSD machine. Sephe’s networking additions have also been very impressive.

    However, I would imagine that if a big player like Netflix decided to plug in some DragonFly servers and invest resources and time, the critical mass would develop very quickly. It would make sense for them to have other OSes in their toolchain for diversification’s sake, and DragonFly would be an excellent choice if the above items can be sorted out.

  5. It would be great if someone would put together benchmarks.

  6. I think the general consensus, Anonymous, is that everyone has access to the operating systems in question and can run their own benchmarks. Why not share your findings?

  7. FreeBSD also has an extra flag to sendfile that makes it non-blocking at the disk, and nginx has an optimization that uses async I/O to pre-load the data asynchronously, coordinated with that sendfile API. I imagine that Netflix would probably want that ported to DragonFly before using it in their OpenConnect appliances, since I can only assume that 99% of their traffic goes through that interface.
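
    If memory serves, the nginx side of that is just enabling sendfile and aio together; on FreeBSD, nginx then calls sendfile() with the SF_NODISKIO flag and pre-loads uncached data with async reads before retrying. A rough sketch (the port and document root are placeholders):

        worker_processes  auto;

        events {
            worker_connections  1024;
        }

        http {
            # on FreeBSD (nginx built with --with-file-aio), enabling both
            # lets sendfile() run with SF_NODISKIO so it never blocks on
            # disk; uncached data is pre-loaded with async reads instead
            sendfile    on;
            tcp_nopush  on;
            aio         on;    # older nginx releases spelled this "aio sendfile;"

            server {
                listen 80;
                root   /var/www;   # placeholder document root
            }
        }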
