It’s exactly what the title says: ipfw3 now does NAT in-kernel, without locking. I have no benchmarks to point at, unfortunately. The commit has usage examples.
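For flavor only, here’s a sketch of what a setup might look like, assuming ipfw3’s NAT syntax mirrors the classic ipfw nat form – the commit’s own usage examples are the authoritative reference, and the addresses here are made up:

    # Sketch only; consult the commit for the real syntax.
    ipfw3 nat 1 config ip 203.0.113.1                    # define NAT instance 1 on a public address
    ipfw3 add 100 nat 1 ip from 192.168.1.0/24 to any    # push LAN traffic through instance 1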
This is a specialized use case, but Mono 4.x has some issues on DragonFly. Some minor testing has been done, but if you are already using it, please contribute.
Position Independent Executables are now supported on DragonFly, thanks to submitter ‘shamaz’.
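If you want to try it out, building a PIE binary by hand is just the standard compiler flags (the source file here is hypothetical):

    cc -fPIE -pie -o hello hello.c    # compile and link as a position-independent executable
    file hello                        # should report a shared-object-style executable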
The system I had for running a Go builder died, and I am running out of extra hardware. Is there someone using Go on DragonFly who is willing to commit to running a semi-dedicated builder?
There’s a new digital library in Kisumu, Kenya – and it’s running DragonFly for file storage.
Hammer2 now has inode indexing, which Matthew Dillon was avoiding while trying to create more efficient hardlink support. With that problem now solved, more updates can come in: NFS support, mtime updates, output changes, code removal, and lots of other changes, not all of which I’m even linking.
If you have an NVMe chipset under DragonFly, you can now use a special utility to retrieve status information: nvmectl. Right now, only ‘info’ is implemented.
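Usage is about as simple as it gets while ‘info’ is the only directive:

    nvmectl info    # dump status information for attached NVMe controllers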
If you are running DragonFly 4.5 (i.e. bleeding edge), Sepherosa Ziehau made an ifnet change that will require a full buildkernel/world if you want things like netstat to keep working.
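If you need a refresher, a full rebuild goes roughly like this – see build(7) for the canonical procedure:

    cd /usr/src
    make buildworld      # userland first, so tools match the new kernel
    make buildkernel
    make installkernel
    make installworld
    # then reboot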
This is limited to some users of specific Intel video chipsets, but: if you get odd screen artifacts in X, the ‘vesa’ driver may work just fine for you. Or turn acceleration off. Or set ‘drm.i915.enable_execlists=0’, as suggested by zrj on #dragonflybsd.
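That last item is a boot-time tunable; assuming it behaves like other drm tunables, you would set it in /boot/loader.conf:

    # /boot/loader.conf
    drm.i915.enable_execlists=0    # disable execlist submission on affected Intel chips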
(Updated to reflect all the answers in the thread and elsewhere.)
If you didn’t already know about it, you will find this useful: DragonFly has a tuning(7) man page, about getting the best performance from your system. Matthew Dillon recently updated the man page with some tips about SSD setup.
Tomohiro Kasumi wrote a lengthy explanation of what “@@” means, in the context of the Hammer file system. It acts as a sort of signifier for each actual Hammer pseudo-file-system, since it’s possible to null-mount these anywhere in DragonFly, under all sorts of names. Don’t trust my summary, though – read his.
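As a rough sketch of what he’s describing (paths here are invented): create a PFS, note the @@ link it resolves to, and null-mount it under whatever name you like.

    hammer pfs-master /storage/pfs/web            # new PFS; the path becomes a softlink to an @@ id
    ls -l /storage/pfs/web                        # shows something like @@-1:00004
    mount_null /storage/pfs/web /usr/local/www    # expose the PFS anywhere, under any name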
Sepherosa Ziehau needs to run DragonFly under Hyper-V at work, so he’s making improvements.
There are USB devices out there that work as pointing devices but don’t show up as a mouse device – for example, the PowerMate USB Multimedia Controller. It’s possible to pipe the events from this or similar ‘weird’ devices to sysmouse, and use it the way you’d expect, with this fix from user tautology.
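The general shape of it is just pointing moused(8) at the device so its events land on /dev/sysmouse (the device node here is assumed):

    moused -p /dev/ums1 -t auto    # feed the device's events to sysmouse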
As part of his NVMe work, Matthew Dillon found I/O speeds so fast that CRC checking actually got in the way of disk activity. He’s brought in a new, faster checksum algorithm called xxHash. He also brought in Mark Adler’s hardware iscsi_crc32 implementation, but did not add it to Hammer2. There’s some work on read-ahead operations too, to deal with the NVMe throughput.
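If you want to see xxHash’s speed for yourself, the xxhsum utility from the xxHash distribution makes a quick test – the flag here is from its usage text, so treat it as an assumption:

    xxhsum -H1 /some/large/file    # -H1 selects the 64-bit variant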
Remember how DragonFly now has autofs? That obsoletes amd, amq, and so on, in the am-utils suite. Now, am-utils has been removed. This may affect nobody, as am-utils wasn’t working well.
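If you were an am-utils user, the autofs replacement is configured roughly like this – a sketch, assuming DragonFly’s autofs matches the FreeBSD-derived setup it was imported from:

    # /etc/rc.conf
    autofs_enable="YES"
    # /etc/auto_master: map /net to NFS exports advertised by hosts
    /net    -hosts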
Did you know there’s a rescue image, created with crunchgen, in DragonFly? If your system can boot to single-user mode, you can use it to at least manipulate data on disk – it includes mined as a simple small editor. (vi assumes /usr is mounted.) This rescue image now includes undo, so you can back out changes on a Hammer volume.
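From single-user mode, that might look like the following – flags per undo(1) as I remember them, and the file is hypothetical:

    /rescue/mined /etc/rc.conf                        # edit with the small built-in editor
    /rescue/undo -i /etc/rc.conf                      # list the file's HAMMER history
    /rescue/undo -o /tmp/rc.conf.prev /etc/rc.conf    # extract an earlier version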
Matthew Dillon has been testing on more NVMe hardware, or at least what is supposed to be NVMe hardware, and he has a writeup of the results that may be useful for anyone planning a shopping trip soon.
Remember Sepherosa Ziehau’s nginx tests on DragonFly? He’s using the same configuration to test the performance of the accept(2) and close(2) calls. The result? Over 8,000 concurrent connections, at 580,000 connections per second. That’s on one DragonFly machine.
Matthew Dillon has written a new, from-scratch driver for NVMe in DragonFly. If you haven’t encountered it yet, that’s SSD access over PCIe, which gives better throughput than ATA. He’s posted a summary of his work, and it’s possible to load it now as a module. It supports MSI-X, and there are test results from using dd on supported NVMe hardware.
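Trying it out is a module load away; a sketch, with the device name assumed:

    kldload nvme                                        # attach the new driver as a module
    dd if=/dev/nvme0 of=/dev/null bs=1m count=10000     # crude sequential read test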