OpenBSD talk at Michigan User Group

This appears to be all audiovisual media week, because author Michael W. Lucas gave a talk at the Michigan Users Group about OpenBSD (he’s qualified), and it’s up now in two parts. He describes it as:

“Among other things, I compare OpenBSD to Richard Stallman and physically assault an audience member.”

5 Replies to “OpenBSD talk at Michigan User Group”

  1. Lars Schotte says:

    This long long time_t problem has been around for some time now, and the approach reminds me of the one taken with IPv6.
    It’s always better to implement a solution sooner rather than later, because the longer you wait, the more code accumulates that will then need to be rewritten. We see the same with IPv6: had we adopted it much sooner, we would not need NAT64 now, and all that additional work would have been unnecessary.
    The same is true when people complain about a 64-bit time_t and what it will do to the filesystems. A lot of work has been done there since, too. Had they adopted long long time_t sooner, many of those problems would already be fixed, along with other filesystem work such as TRIM support or journaling.
    Passing problems on to the future is never a good idea.
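    The overflow Lars is referring to is the Y2038 problem, which can be sketched in a few lines of C. This is a minimal illustration only, not OpenBSD’s actual implementation; the variable names are hypothetical:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* A signed 32-bit time_t runs out on 2038-01-19 03:14:07 UTC,
           when the seconds-since-1970 counter reaches INT32_MAX. */
        int64_t next = (int64_t)INT32_MAX + 1;

        /* Converting that value back into 32 bits wraps it negative
           (to 1901 on typical two's-complement systems)... */
        int32_t t32 = (int32_t)next;

        /* ...while a 64-bit ("long long") time_t just keeps counting. */
        int64_t t64 = next;

        printf("32-bit time_t after the last second: %d\n", (int)t32);
        printf("64-bit time_t after the last second: %lld\n", (long long)t64);
        return 0;
    }
    ```

    The narrow-then-wrap behavior is implementation-defined in C, but on the two’s-complement machines these systems run on, the 32-bit counter lands on INT32_MIN, which is why the deadline is hard rather than gradual.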

  2. yggdrasil says:

    I found the comments on DragonFly BSD’s far-future goal of being able to run one OS instance over a whole cluster very interesting. I hope to actually see that happen some day.

  3. yggdrasil – it’s unlikely to happen that way now, because with so many cores being packed into processors, there’s no benefit to shipping processing power back and forth between machines when it’s all right there.

    Shipping the data back and forth, though – that’s worthwhile, and that’s why there’s Hammer.

  4. yggdrasil says:

    Justin: certainly not. There will always be use cases that need more computation cores than a single computer typically contains. But you bring up an interesting point: how will HAMMER fit into this?

  5. There are always use cases – but not a lot of them compared to how it used to be, and perhaps not enough that the amount of work needed to let a process jump between machines will be worth it.

    In any case, systems like Hammer let you have multiple machines working with the same data, and give you native snapshots/history/deduplication/compression/mirroring without special hardware or having to pay a vendor for a closed-source application.
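    To give a flavor of that native history support, here is a rough sketch of HAMMER’s tooling on DragonFly BSD (the paths are hypothetical; see hammer(8) and undo(1) for the exact syntax and options):

    ```sh
    # Take a named snapshot of a HAMMER filesystem
    hammer snapshot /home /home/snaps/today

    # List the versions of a file retained in the filesystem's history
    hammer history /home/notes.txt

    # Recover an earlier version of that file from the built-in history
    undo -a /home/notes.txt
    ```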

Comments are closed.