
Choosing between portability and innovation


March 2, 2011

This article was contributed by Koen Vervloesem

Portability is a key concept in the open source ecosystem. Thanks to this killer feature, your author has migrated his desktop operating system during the last ten years from Mac OS X to Linux (various distributions) and eventually to FreeBSD, but throughout that process he could keep using most of the same applications. When you present a recent openSUSE or PC-BSD desktop system to a computer newbie, they won't notice much difference, apart from a different desktop theme, perhaps. The same applications (OpenOffice.org, Firefox, K3b, Dolphin, and so on) will be available. In many circumstances, it just doesn't matter whether your operating system is using a Linux or FreeBSD kernel, as long as it has drivers for your hardware (and that's the catch).

This portability, however, is not always easy to achieve. Now that Linux is the most popular free Unix-like operating system, it shouldn't be a surprise that some projects have begun treating non-Linux operating systems as second-class citizens. This isn't out of contempt for the BSDs or OpenSolaris; it's just a matter of limited manpower: if almost all the users of an application run Linux and all the core developers are using Linux themselves, it's difficult to keep supporting other operating systems. But sometimes the choice to leave out support for other operating systems is made explicitly, e.g. when the developers want to implement some innovative features that require functionality that is (at least for now) only available in the Linux kernel.

Xfce 4.8 and udev

In January, version 4.8 of the Xfce desktop environment was released. At the beginning of the announcement, the developers expressed their disappointment that they couldn't offer all the new features of the release on the BSDs:

We hope that everyone will enjoy this release as much as we do. Sadly, this will not be the case as the folks using any of the BSD systems will notice a sudden loss of features. We think that this announcement is a good opportunity to express our disagreement with the recent "Linux-only" developments in the open source ecosystem, especially with regards to the utilities we need in desktop environments.

This somewhat cryptic remark was followed by a summary of the new features, but it was clearly aimed at the new desktop frameworks introduced in the last few years, such as udev, ConsoleKit, and PolicyKit. udev is only available on Linux, but both ConsoleKit and PolicyKit are already supported on FreeBSD, so, as LWN.net commenter "JohnLenz" correctly surmised in a comment on the announcement, the problem is in large part on the testing side: how many FreeBSD users are using these frameworks? And how many of them test these frameworks regularly and take the time to report bugs?

The remark in the release announcement probably puzzled a lot of BSD enthusiasts as well, because Xfce developer Jannis Pohlmann followed up a few days later with an explanation on his personal blog. There he named udev as the culprit for the non-portability of some Xfce features:

At least udev is strongly linked to Linux and as far as I know is not available on any of the BSD flavors. Unfortunately it is now the only good way to detect storage devices, cameras, printers, scanners and other devices using a single framework. That's why we use it in Xfce now in situations where HAL provided us with device capabilities and information to distinguish between the different device types before. The consequence is that thunar-volman no longer works without udev and thus only compiles on Linux. In Thunar itself udev remains optional.
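As a rough illustration of the kind of device query being discussed (a generic libudev sketch, not Xfce's actual code), enumerating block devices and their device nodes looks roughly like this:

/* Minimal libudev enumeration sketch; error handling omitted. */
#include <stdio.h>
#include <libudev.h>

int main(void)
{
    struct udev *udev = udev_new();
    struct udev_enumerate *en = udev_enumerate_new(udev);

    udev_enumerate_add_match_subsystem(en, "block");
    udev_enumerate_scan_devices(en);

    struct udev_list_entry *entry;
    udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(en)) {
        const char *syspath = udev_list_entry_get_name(entry);
        struct udev_device *dev = udev_device_new_from_syspath(udev, syspath);
        const char *node = udev_device_get_devnode(dev);
        printf("%s -> %s\n", syspath, node ? node : "(no device node)");
        udev_device_unref(dev);
    }

    udev_enumerate_unref(en);
    udev_unref(udev);
    return 0;
}

It is precisely this kind of interface that has no equivalent on the BSDs, which is why thunar-volman now only compiles on Linux.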

But then Pohlmann points to the broader context:

I don't know what the porting status of the other frameworks is. But I am pretty sure not all of them have been ported to other platforms yet which is why I felt the need to express our disappointment in the announcement. For 2-3 years now all this has been a big mess. New frameworks were invented, dropped again, renamed from *Kit to u* and somewhere on the way it became impossible to keep Xfce as portable as it was before. I know that this is nothing new and that BSD folks faced the same situation as they do now back when HAL was invented but I don't think it has to be this way.

Some Linux users, who may be used to six-month release cycles, might not see a problem here, as they now have all those new features. But the BSD operating systems generally have a much slower development cycle and haven't yet caught up with the whole redesign of the desktop stack. The comments on Pohlmann's blog post are instructive in this regard (although they degenerate somewhat into a flame war toward the end). For example, one commenter points out that HAL did acquire BSD (and Solaris) support, but only years after it had become mainstream in the Linux world, and the BSD developers only contributed the necessary patches to make it work when Gnome and KDE started making HAL mandatory.

The problem seems to be that udev is not as easy to port to non-Linux systems as HAL was. FreeBSD has the devd daemon to handle volume mounting, but devd's author Warner Losh commented that udev is not well documented, which hampers efforts to port it. However, this didn't stop Alex Hornung from porting udev to DragonFly BSD, although it's not yet a full drop-in replacement. The FreeBSD developers could take a look at his work, as DragonFly BSD is a FreeBSD derivative.

OpenBSD developer Marc Espie also points to license issues: udev and other software close to the Linux kernel is using GPLv2, which the BSDs don't like to use. For example, OpenBSD developers don't add a component to the base system if it's less free than the BSD license, and the GPL is such a license in their eyes. However, the current problems are also clearly a consequence of different development styles. Components like udisks are part of the Freedesktop specifications (which are supposed to keep X Window System desktops interoperable), but the BSD developers didn't seem to participate in that effort. Maybe the PC-BSD developers can play a role in this, as they want to deliver a modern desktop operating system based on FreeBSD.

All in all, there are two possible solutions to a situation like the one the Xfce 4.8 release is facing. One solution is that the Xfce developers create an abstraction layer supported by as many operating systems as possible. The problem is that currently there is no such abstraction layer for detecting devices, which makes it perfectly understandable that the Xfce developers chose udev. It is used by their major development platform, Linux, and one can't expect them to support frameworks on operating systems they don't use. So the other solution is that some BSD developers port udev to their operating system, which is non-trivial but (as the incomplete DragonFly BSD port shows) doable, or that they propose an abstraction layer that could be supported on non-Linux platforms. As many desktop applications have already been rewritten in the last few years from using HAL to using udev, the latter won't be a popular choice and isn't likely to happen.

X.Org and KMS

Another important desktop component that has become more and more Linux-centric in recent years is X.Org. Recent open source video drivers (such as the Nouveau driver) require kernel mode setting (KMS), which is a problem for the BSDs and OpenSolaris, as these operating systems lack kernel support for KMS and the Graphics Execution Manager (GEM). So a FreeBSD user who wants to get decent performance out of an Nvidia graphics card currently has to use the proprietary driver. Fortunately, the FreeBSD Foundation recognized the gravity of this situation and announced last year that it was willing to sponsor a developer to work on KMS and GEM support in the FreeBSD kernel. Last month, the foundation announced that it had awarded a grant, co-sponsored by iXsystems (the developers of PC-BSD), to Russian developer Konstantin Belousov to implement support for GEM, KMS, and the Direct Rendering Infrastructure (DRI) for Intel hardware. Matt Olander, Chief Technology Officer at iXsystems, says in the announcement:

Adding support for GEM/KMS will allow both FreeBSD and PC-BSD to run with enhanced native graphic support on forthcoming advanced architectures with integrated, 3d accelerated graphical capabilities. FreeBSD has long been dominant in the server market and this is one more step towards making FreeBSD a complete platform for netbooks, laptops, desktops, and servers. We are very pleased to be a part of this project.

More specifically, Belousov will implement GEM, port KMS, and write new DRI drivers for Intel graphics cards, including the latest Sandy Bridge generation of integrated graphics units. After this work, users should be able to run the latest X.Org open source drivers for Intel hardware on their FreeBSD desktops. While the project is limited to Intel graphics, porting other drivers like Nouveau to FreeBSD will become a lot easier once Belousov's work is completed. And when KMS support is in place, FreeBSD users could run the X server without root privileges, run the Wayland display server, and get access to a lot of other features that until now have only been available on Linux.

This case also shows that cutting edge development often happens with Linux primarily in mind. During the last few years, X.Org's drivers were in a constant state of flux, with new technologies like KMS, GEM, translation-table maps (TTM), DRI, Gallium3D and so on being introduced one after another. As these are low-level technologies tightly coupled to the Linux kernel, porting them to FreeBSD is no small task, but fortunately the FreeBSD Foundation and iXsystems have seen that it is very important to follow the lead of Linux here.

systemd

An entirely different case is systemd: Lennart Poettering has no problem with the fact that systemd is tightly glued to Linux features. In a recent interview for FOSDEM, Poettering sums up the Linux-specific functionality systemd relies on: cgroups, udev, the fanotify(), timerfd() and signalfd() system calls, filesystem namespaces, capability sets, /proc/self/mountinfo, and so on. And then comes this quote, explaining why he designed systemd with Linux in mind:

Not having to care about portability has two big advantages: we can make maximum use of what the modern Linux kernel offers these days without headaches -- Linux is one of the most powerful kernels in existence, but many of its features have not been used by the previous solutions. And secondly, it greatly simplifies our code and makes it shorter: since we never need to abstract OS interfaces the amount of glue code is minimal, and hence what we gain is a smaller chance to create bugs, a smaller chance of confusing the reader of the code (hence better maintainability) and a smaller footprint.
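To make the list above more concrete, here is a minimal sketch (not systemd code) of two of the Linux-only interfaces Poettering names, signalfd and timerfd, which turn signals and timers into ordinary file descriptors:

/* Sketch of signalfd/timerfd usage; error handling omitted. */
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/signalfd.h>
#include <sys/timerfd.h>

int main(void)
{
    /* Deliver SIGTERM through a file descriptor instead of a handler. */
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGTERM);
    sigprocmask(SIG_BLOCK, &mask, NULL);
    int sfd = signalfd(-1, &mask, 0);

    /* A five-second one-shot timer, also delivered as a readable fd. */
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = { .it_value.tv_sec = 5 };
    timerfd_settime(tfd, 0, &its, NULL);

    /* Both descriptors can now be handed to a single poll/epoll loop. */
    printf("signalfd=%d timerfd=%d\n", sfd, tfd);
    close(sfd);
    close(tfd);
    return 0;
}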

Poettering took this decision because of his experience in writing some other low-level components in the desktop stack:

Many of my previous projects (including PulseAudio and Avahi) have been written to be portable. Being relieved from the chains that the requirement for portability puts on you is quite liberating. While ensuring portability when working on high-level applications is not necessarily a difficult job it becomes increasingly more difficult if the stuff you work on is a system component (which systemd, PulseAudio and Avahi are).

He even goes further with this provocative invitation to other developers to do the same:

In fact, the way I see things the Linux API has been taking the role of the POSIX API and Linux is the focal point of all Free Software development. Due to that I can only recommend developers to try to hack with only Linux in mind and experience the freedom and the opportunities this offers you. So, get yourself a copy of The Linux Programming Interface, ignore everything it says about POSIX compatibility and hack away your amazing Linux software. It's quite relieving!

Poettering touches on some interesting points here. We have a family of standards known as POSIX (Portable Operating System Interface), defining the API of a Unix operating system. However, the POSIX specifications are not carved in stone, and few operating systems are fully compliant (Mac OS X has been one of them since the Leopard release). POSIX is really an encapsulation of choices that various Unix systems made along the way, rather than a body of text that got standardized and then implemented. According to Poettering, Linux should use its position as "market leader" (in the market of free Unix-like operating systems) and try out some new things. If developers don't force themselves into the constraints of the POSIX API, they can develop some really innovative software, as systemd shows. When these new developments turn out to be really interesting, other operating systems could eventually adopt them as well.

The tension between portability and innovation

These three cases clearly show that there's a constant tension between portability and innovation, which are two important qualities of open source software. In a lot of domains, Linux is taking the lead with respect to innovation, and the BSDs are forced to follow this lead if they don't want to be left behind. While the BSDs will probably not be interested in adopting systemd, implementing KMS is a must-have because one can no longer imagine a modern X.Org desktop without it. But the biggest portability problems will be in the layers right above the kernel that don't have suitable abstraction layers, as the Xfce case shows. Will FreeBSD implement udev or will the problem be solved another way? These kinds of questions are important, and choosing when to use the POSIX or the Linux API is a delicate balancing act: choosing a Linux-centric approach for a low-level component like systemd is understandable because of the performance and maintenance gains, but most applications won't necessarily benefit from that approach.

But maybe the biggest problem these cases hint at is that Linux development is being done at such a fast pace that other operating systems just can't keep up. Linux distributions and Linux-centric developers are used to the "release early, release often" mantra, including swapping out key components and breaking APIs each release. The BSD world doesn't work that way, and this makes working together on a modern cross-platform open source desktop increasingly difficult. The innovation of Linux inevitably comes at a price: Linux is the de facto Unix platform now, and hence more and more programs will not be portable to other operating systems.



Choosing between portability and innovation

Posted Mar 2, 2011 18:01 UTC (Wed) by djc (subscriber, #56880) [Link]

An example going the other way: recent versions of the OpenNTPD package (4.x) are not available in a portable flavor, because they use a system call that's not available on Linux (I forget what it's called and can't quickly find it right now, it's something like adjtime()).

Choosing between portability and innovation

Posted Mar 2, 2011 19:25 UTC (Wed) by jstultz (subscriber, #212) [Link]

On Linux the syscall is adjtimex(), which should provide functionality equivalent to ntp_adjtime(). If not, I'd like to hear more about it.
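For reference, a minimal sketch of querying the clock through adjtimex() on Linux (passing modes = 0 performs a read-only query) might look like this:

/* Read-only adjtimex() query; error handling kept minimal. */
#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { .modes = 0 };   /* 0: read, don't adjust anything */
    int state = adjtimex(&tx);
    if (state == -1) {
        perror("adjtimex");
        return 1;
    }
    printf("clock state %d, offset %ld, freq %ld\n", state, tx.offset, tx.freq);
    return 0;
}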

Choosing between portability and innovation

Posted Mar 2, 2011 19:41 UTC (Wed) by jnh (subscriber, #69758) [Link]

...and if it is adjtimex, pretty sure that's been taken care of too...
(for example, see debian bug 593429)

Xfce, udev, and HAL

Posted Mar 2, 2011 18:06 UTC (Wed) by rfunk (subscriber, #4054) [Link]

Left out of the Xfce/udev example is an explanation of why software can't stick with HAL on systems that don't support udev. Is it just a matter of code divergence going too high up the abstraction layers, and not enough HAL-using testers?

Xfce, udev, and HAL

Posted Mar 2, 2011 18:40 UTC (Wed) by drag (subscriber, #31333) [Link]

Sounds like a combination of a lack of manpower and a lack of involved application developers who use Unix systems that require HAL.

Xfce, udev, and HAL

Posted Mar 2, 2011 19:37 UTC (Wed) by nix (subscriber, #2304) [Link]

The whole reason for the existence of things like udisks is that they provide a consistent API that another package can implement to provide (e.g. disk) hotplugging info on systems that don't support udev. It's probably easier to implement that than to implement all of udev.

(If the problem is that FreeBSD doesn't have a useful hotplugging infrastructure yet, that's a different matter, but I thought it had one.)

Xfce, udev, and HAL

Posted Mar 2, 2011 19:43 UTC (Wed) by mezcalero (subscriber, #45103) [Link]

I am pretty sure that's not what davidz had in mind when he wrote udisks...

Xfce, udev, and HAL

Posted Mar 2, 2011 22:26 UTC (Wed) by nix (subscriber, #2304) [Link]

I was thinking of the original 'HAL is deprecated, use udisks/u*' email, but I didn't actually look it up so I could very well be wrong.

Xfce, udev, and HAL

Posted Mar 4, 2011 8:38 UTC (Fri) by michaeljt (subscriber, #39183) [Link]

> I am pretty sure that's not what davidz had in mind when he wrote udisks...

From a quick look at the API though it does look reasonably suitable for implementations on other platforms, even if that wasn't the idea. With a big exception for the LVM stuff.

Choosing between portability and innovation

Posted Mar 2, 2011 18:35 UTC (Wed) by paracyde (guest, #72492) [Link]

The BSD guys really need to become innovators instead of consumers. I ask myself: How many BSD guys are X.Org committers? How many BSD guys work on the Freedesktop specifications? What about the GNU project, KDE, or Gnome? BSD guys like to bash GNU and Linux but never acknowledge that their operating systems would never have gotten where they are now if GNU didn't exist (gcc, binutils, etc...).

In my opinion the solution is really simple: Take part in the design and development or be left behind.

Choosing between portability and innovation

Posted Mar 2, 2011 18:44 UTC (Wed) by vonbrand (guest, #4458) [Link]

The *BSD problem is (in large part) license mismatch... just like the fiasco with Solaris' ZFS and other goodies in Linux.

Choosing between portability and innovation

Posted Mar 2, 2011 19:33 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

X.org is MIT/BSD, so no license problems. GNOME is LGPL - also fairly acceptable.

Choosing between portability and innovation

Posted Mar 2, 2011 20:05 UTC (Wed) by vonbrand (guest, #4458) [Link]

MIT/BSD is acceptable for GPL types (like Linux), but not the other way around.

Choosing between portability and innovation

Posted Mar 2, 2011 20:16 UTC (Wed) by michaeljt (subscriber, #39183) [Link]

> The *BSD problem is (in large part) license mismatch... just like the fiasco with Solaris' ZFS and other goodies in Linux.

It is worth pointing out that all the KMS and DRM infrastructure is MIT licensed, and I believe that is at least in part for the benefit of the BSDs and Solaris.

Choosing between portability and innovation

Posted Mar 3, 2011 1:43 UTC (Thu) by wahern (subscriber, #37304) [Link]

The problem with BSD cooperation is that Linux (and Linus) don't like to cooperate. They like to experiment and do their own thing. Look at epoll+timerfd+signalfd+(dnotify/fanotify), etc versus kqueue. BSDs had that functionality for a decade while Linux experimented and went their own way.

Linux is about features, while BSD emphasizes design. This makes the BSDs conservative adopters; not because they don't want to adopt the feature, but because they're concerned about the API, and the littlest doubt about an API will prevent adoption of the feature.

This might be a result of the fact that Linux has a mob of people jockeying to rewrite huge subsystems at the drop of a hat, so that the quality of APIs matters less. (As for backwards compatibility, Linux has supported several competing interfaces simultaneously.)

All of this is a difference of degree, though. Of course Linux developers are concerned about design; but bad design isn't as costly, so there are more risks taken. Or like with the case of epoll, what matters is getting feature X out before even worrying about feature Y; let the APIs accrete just like the features.

Choosing between portability and innovation

Posted Mar 3, 2011 3:25 UTC (Thu) by foom (subscriber, #14868) [Link]

And epoll certainly has a *HUGE* misdesign in it, that anyone who actually understood what a file descriptor is should've seen coming. But if you look back in the history of epoll, you'll see that it looks like the implementors apparently didn't understand the difference between file descriptors and file descriptions. :(

epoll_ctl lets you register a file descriptor for monitoring. However, it *actually* registers the underlying file description for monitoring. But then it remembers in kernel-land the file descriptor number you passed in and reports all events for the file descriptor with that file descriptor number you passed in originally. No matter what you do to the file descriptor afterwards.

Furthermore, it has an automatic deregistration feature: if you close an fd, it'll stop monitoring it automatically...at least, that would make sense. So, no, it doesn't *really* do that. What it really has is automatic deregistration when the file description is closed (that is: when *all* file descriptors referencing the file description are closed). That's just a pain in the ass!

So, if you close an fd, but have in the meantime fork()'d, dup()'d the fd to another fd, or something of that nature, epoll will continue reporting events with the number of a closed fd. And sometimes you can't even remove it from the set, since the fd number it's reporting back isn't the right file anymore!

So if you're designing a library where user code might close an fd out from under you (not an unreasonable thing to support), you need special-case code to go back and recreate the epoll set from scratch in case you start getting bogus responses, to work around this misdesign. Quite obviously, epoll ought to work solely on fds: if you register fd 3, start watching fd 3, dup fd 3 to fd 4, and then close fd 3, epoll should not report events anymore. That would be sane. But nope, it doesn't work that way. Mutter mutter.
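A short sketch (illustrative only, error handling omitted) of the behaviour described above: the watch follows the open file description, so events keep arriving under the old fd number even after that fd has been closed, as long as a dup() of it remains open:

/* epoll keeps reporting events for a closed fd while a dup() lives on. */
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void)
{
    int p[2];
    pipe(p);                                    /* p[0] is the read end */

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = p[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, p[0], &ev);    /* register p[0] */

    int dupfd = dup(p[0]);                      /* same file description */
    close(p[0]);                                /* close the registered fd... */

    write(p[1], "x", 1);

    struct epoll_event out;
    int n = epoll_wait(ep, &out, 1, 0);         /* ...yet the event still fires */
    printf("got %d event(s), reported for old fd %d\n", n, out.data.fd);

    close(dupfd);
    close(p[1]);
    close(ep);
    return 0;
}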

Choosing between portability and innovation

Posted Mar 3, 2011 16:53 UTC (Thu) by nix (subscriber, #2304) [Link]

Good grief. That's appalling. And you can't fix it without breaking anyone who *expects* this stuff to work across fork() (though I'd be inclined to call such code broken).

Choosing between portability and innovation

Posted Mar 6, 2011 0:27 UTC (Sun) by Tet (subscriber, #5433) [Link]

This is one case where the pain of breaking compatibility with existing code is probably less than the pain of living with a broken design.

Choosing between portability and innovation

Posted Mar 6, 2011 4:30 UTC (Sun) by nix (subscriber, #2304) [Link]

Quite (though it can be used properly, it's definitely in the negative half of that API correctness thing gregkh(?) put up a while back: an API for which the obvious use is wrong).

Thankfully epoll() is not widely used yet, and being Linux-specific all the users can fall back to other mechanisms. (However, the old syscall would have to stay, anyway: we'd need an epoll2() syscall that did things right.)

Choosing between portability and innovation

Posted Mar 6, 2011 6:41 UTC (Sun) by jthill (subscriber, #56558) [Link]

I wouldn't knock epoll, myself. I recognize the design. It seems odd to me that over the last few days I've run across a spate of gripes about epoll in various places, all of them based on misconceptions like the ones in this thread. It isn't just the original complaint (since retracted), it's the remaining gripes too.

There's an unavoidable race between event reporting and any close processing that's most efficiently handled by keeping your own books. It's not so hard to EPOLL_CTL_DEL before you close(), and defer event-tracking cleanup rcu-style until after a nonblocking epoll_wait returns 0, to catch events reported between your decision to close and the actual close.
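The first half of that pattern, deregistering explicitly before closing, can be as simple as the following sketch (watched_close() is a hypothetical helper, not from any particular library):

/* Deregister-before-close idiom; error handling omitted. */
#include <unistd.h>
#include <sys/epoll.h>

static void watched_close(int epfd, int fd)
{
    /* Drop the watch first so no further events are queued for fd... */
    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
    /* ...then close it.  Events already returned by an earlier
     * epoll_wait() may still name this fd, so any per-fd bookkeeping
     * should only be freed once those have been drained, as described
     * above. */
    close(fd);
}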

Note that nothing you can do avoids that race. Having close do the deletion processing will not save you: the event may have already been posted.

IBM don't want their manuals accessed by non-customers so I won't link the description, but epoll is down there in big-O range of mainframe-style event reporting. The main difference I can see is that on the mainframe, the table is in your address space and sized by how many events can get backed up (posted but as-yet unhandled) before something is badly wrong -- for anything but large-scale work you just size it to handle everything anyway.

I notice that nothing in the reporting api constrains epoll to file-based events. Tying any asynchronous event at all to epoll should be possible. Timers come to mind, of course. epoll_pwait handles signals. Me, I think it'd be good to have an epolled_fork, so child termination comes in as just another thing on your events list.

Choosing between portability and innovation

Posted Mar 4, 2011 21:04 UTC (Fri) by nevyn (subscriber, #33129) [Link]

> And epoll certainly has a *HUGE* misdesign in it [...]
> epoll_ctl lets you register a file descriptor for monitoring. However,
> it *actually* registers the underlying file description for monitoring.
> But then it remembers in kernel-land the file descriptor number you passed
> in and reports all events for the file descriptor with that file
> descriptor number you passed in originally.
> No matter what you do to the file descriptor afterwards.

That's a pretty misleading description. You register for events and give the kernel a pointer or a number as your callback data ... you then get that piece of data and a set of flags, when something happens.

If you pass in your "original fd number" as the data you want back, that's what you get. Personally, when I've used it, I used a pointer to a struct as the callback data.
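As a sketch of that style (struct conn and its fields are hypothetical), registering a pointer to per-connection state and getting it back from epoll_wait() looks like this:

/* Using epoll_data.ptr as the callback data; error handling omitted. */
#include <sys/epoll.h>

struct conn {
    int fd;
    /* ...whatever per-connection state you keep... */
};

static void add_watch(int epfd, struct conn *c)
{
    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.ptr = c;                     /* handed back verbatim on events */
    epoll_ctl(epfd, EPOLL_CTL_ADD, c->fd, &ev);
}

static void handle_events(int epfd)
{
    struct epoll_event out[16];
    int n = epoll_wait(epfd, out, 16, -1);
    for (int i = 0; i < n; i++) {
        struct conn *c = out[i].data.ptr;   /* straight back to our state */
        (void)c;                            /* ...read/write on c->fd... */
    }
}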

Choosing between portability and innovation

Posted Mar 4, 2011 21:45 UTC (Fri) by foom (subscriber, #14868) [Link]

No, it really wasn't misleading. The userspace-visible key *is* the fd. You're required to pass that in, that's what epoll uses to look up the file object, it will always pass that back, and that's what you have to pass to epoll_ctl for further modifications/deletions of the watch. You can of course *in addition* register arbitrary user data that it'll pass back to you. But that's optional. The fd is required.

Segue into another rant of mine....that's yet another reason why the auto-removal-upon-file-close functionality is broken. There's no way to get a notification that epoll has autoremoved an entry from the epoll set, so, it's not actually possible for you to ever *free* the data (and other resources) you allocated and passed into epoll_ctl. Sigh. (same bug exists with kqueue API btw).

Choosing between portability and innovation

Posted Mar 4, 2011 22:11 UTC (Fri) by nevyn (subscriber, #33129) [Link]

> No, it really wasn't misleading. The userspace-visible key *is* the fd.
> You're required to pass that in, that's what epoll uses to look up the
> file object, it will always pass that back

Yes, you need an fd to register/change the epoll events. But it isn't ever passed back:

epoll_wait() returns a list of "struct epoll_event", which is defined as:

struct epoll_event
{
    uint32_t events;      /* Epoll events */
    epoll_data_t data;    /* User data variable */
} __attribute__ ((__packed__));

...epoll_data_t is the union I talked about before, so you can use a pointer or a number for your callback data. You don't get the fd back from the API, unless you use that as your callback data but if you do that it's just "a number" ... so I wouldn't expect the kernel to do anything special with it.

Choosing between portability and innovation

Posted Mar 4, 2011 23:08 UTC (Fri) by foom (subscriber, #14868) [Link]

Okay, I must apologize, you are completely right.

I was getting myself mixed up with kqueue's API (which actually does have you specify a struct containing both the fd and userdata), and then mis-reread the epoll docs. (As you might be able to tell, I always passed the fd as the user data. :))

However, that doesn't change the main point I wanted to make, which is that it ought to be tracking fds internally instead of files -- I don't want it to do anything special with the value it returns, I want it to stop watching (and tell me that it did stop watching), if the *fd* I originally gave is closed, not if the underlying file is closed. Returning events on a file that I no longer have a handle for in the current process is annoying.

Choosing between portability and innovation

Posted Mar 6, 2011 19:04 UTC (Sun) by intgr (subscriber, #39733) [Link]

> However, that doesn't change the main point I wanted to make, which is
> that it ought to be tracking fds internally instead of files

> I want it to stop watching (and tell me that it did stop watching), if
> the *fd* I originally gave is closed, not if the underlying file is closed

Seriously, if you want it to stop watching, you use EPOLL_CTL_DEL before closing a file descriptor. The implicit deregister-on-close() is just a safety net -- because it's the only sane thing that the kernel can do when you have watches on a file that's being closed. It's not intended to be used that way.

You tried to spin it as a "*HUGE* misdesign", but in practice it's just a trivial edge case that shouldn't affect real applications. Wouldn't be the first time *BSD people criticize parts of Linux they don't actually understand.

Choosing between portability and innovation

Posted Mar 3, 2011 20:24 UTC (Thu) by clump (subscriber, #27801) [Link]

> The problem with BSD cooperation is that Linux (and Linus) don't like to cooperate.
The article does a good job of articulating issues with resource constraints and rapid pace. You've mentioned functionality, but nothing to support the accusation that "Linux" doesn't like to cooperate.

Choosing between portability and innovation

Posted Mar 4, 2011 14:37 UTC (Fri) by trasz (guest, #45786) [Link]

Kqueue vs. epoll is a pretty good example of a lack of willingness to cooperate - kqueue was earlier, yet the Linux folks chose to implement something incompatible and functionally inferior.

Choosing between portability and innovation

Posted Mar 4, 2011 20:59 UTC (Fri) by nevyn (subscriber, #33129) [Link]

kqueue is a _huge_ interface, combining a number of ideas; you basically need to design your kernel around it if you want users to use it. And much like SYSV streams, expecting other people to reimplement a giant redesign like that is often expecting _way_ too much.

On the other side, timerfd(), socketfd() and epoll() are all there to do a single thing. FreeBSD did the retarded thing when they reimplemented sendfile() roughly a week after Linux added it. TCP_CORK is still not implemented in FreeBSD. mremap() was NAKed, there's the whole MAP_ANON vs. MAP_ANONYMOUS, or the weird bits of mmap in general.
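For what it's worth, TCP_CORK is a one-line socket option; a sketch (sock, fd, and the header buffer are assumed to be set up elsewhere) of the usual header-plus-sendfile() pattern on Linux:

/* Cork the socket, queue header and file body, then flush; no error handling. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <unistd.h>

static void send_response(int sock, int fd, const char *hdr, size_t hdrlen, size_t filelen)
{
    int on = 1, off = 0;
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof on);   /* hold partial frames */
    write(sock, hdr, hdrlen);                                  /* the header... */
    sendfile(sock, fd, NULL, filelen);                         /* ...then the file body */
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof off); /* uncork: flush */
}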

Cooperation is much more likely to happen in the FreeBSD => Linux direction, IMO ... but that could also just be the much bigger developer pool.

Choosing between portability and innovation

Posted Mar 9, 2011 14:48 UTC (Wed) by mheily (guest, #27123) [Link]

Actually, you don't need to redesign your kernel to implement the kqueue API on Linux. The libkqueue project provides a userspace wrapper library that translates each kevent() call into the equivalent epoll/inotify/timerfd/signalfd/etc call for Linux. On Solaris, it uses the event port framework. On Windows, it will use the WaitForMultipleObjects() function.

(Disclaimer: I am the main author of libkqueue)
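As a flavour of what that wrapper emulates, the kqueue side of the API looks like this (a one-shot timer; on Linux, libkqueue would presumably map this onto a timerfd behind the scenes):

/* Minimal kqueue/kevent sketch; error handling omitted. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>

int main(void)
{
    int kq = kqueue();

    /* Register a one-shot 1000 ms timer with identifier 1. */
    struct kevent change;
    EV_SET(&change, 1, EVFILT_TIMER, EV_ADD | EV_ONESHOT, 0, 1000, NULL);

    /* Submit the change and wait for one event in a single call. */
    struct kevent result;
    int n = kevent(kq, &change, 1, &result, 1, NULL);
    if (n > 0 && result.filter == EVFILT_TIMER)
        printf("timer %lu fired\n", (unsigned long)result.ident);
    return 0;
}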

Choosing between portability and innovation

Posted Mar 9, 2011 20:52 UTC (Wed) by nevyn (subscriber, #33129) [Link]

My biggest concern with doing something like that would be how efficient it is compared to using the native interface (and why I said you'd need to redesign the kernel, so that it could be implemented efficiently); the only benchmark you have is vs. poll() (and uses ab) ... which is pretty sad.

epoll <=> kqueue is probably the best-case test too; to be convincing you'd want something that benchmarked EVFILT_VNODE/SIGNAL/TIMER at least. To be really convincing you'd want PROC/USER/AIO and play with the EV_* flags.

Choosing between portability and innovation

Posted Mar 20, 2012 20:07 UTC (Tue) by scientes (guest, #83068) [Link]

> Linux is about features, while BSD emphasizes design.

While I can't exactly agree with this flame, I will note that this very thing you are describing with kqueue/epoll also happened with /dev/crypto, which was on OpenBSD and for which there even existed a Linux patch, and then Linux did their own thing with AF_ALG. (now, they did do some benchmarking...)

Choosing between portability and innovation

Posted Mar 3, 2011 5:22 UTC (Thu) by AnthonyJBentley (guest, #71227) [Link]

> I ask myself: How many BSD guys are X.Org committers? How many BSD guys work on the Freedesktop specifications? What about the GNU project, KDE, or Gnome?

BSD is a smaller project. That’s a simple fact. There are BSD people in all of those projects, but proportionally so few that they simply don’t have as much influence.

Example: GCC. The dislike of GCC is not just because of the license. GCC is notoriously averse to committing patches from BSD, so downstream has to maintain their own forks of the compiler. GCC also drops support for architectures that BSD still uses; OpenBSD has various copies of GCC 2.x, 3.x, and 4.x in the tree to compile for the various architectures. Having to maintain these kinds of things redirects valuable manpower from other projects like Xorg.

Choosing between portability and innovation

Posted Mar 3, 2011 5:26 UTC (Thu) by JoeBuck (subscriber, #2330) [Link]

"GCC is notoriously averse to committing patches from BSD" ....

Where do you get that idea?

Choosing between portability and innovation

Posted Mar 3, 2011 11:01 UTC (Thu) by rleigh (guest, #14622) [Link]

" GCC also drops support for architectures that BSD still uses; OpenBSD has various copies of GCC 2.x, 3.x, and 4.x in the tree to compile for the various architectures. Having to maintain these kinds of things redirects valuable manpower from other projects like Xorg."

This, at least superficially, appears to be a huge waste of effort. How many actual users are doing new installs using the obsolete architectures supported by those ancient compilers? Is the cost/benefit actually worth it? Would time not be better spent just using the current GCC release and making sure the architectures you really care about are supported with it? Architectures get dropped from GCC when they aren't maintained; does BSD actively maintain their supported architectures in current GCC, or does it rely on others to keep them updated? Dropping old GCCs would allow direction of your efforts to where they would make a real difference, rather than spending it where it benefits only a few.

This brings significant additional costs too. If you're using GCC 2.x, you're missing out on stuff like ISO C99, ISO C++ and its standard library, which is another case of portability blocking progress. If that's your baseline, *no one* can use C99 features in the BSD tree, or C++. And this is 12 years after C99 was adopted. *That* is blocking progress--it's directly preventing the use of standard features in our core languages.

Regards,
Roger

Choosing between portability and innovation

Posted Mar 10, 2011 23:01 UTC (Thu) by phoenix (guest, #73532) [Link]

Hrm, no, if you actually look back at the development history, you'll note that the Linux devs need to rid themselves of NIH Syndrome.

Working, stable, performant wireless networking stacks were developed on BSD first. How many wireless stacks has Linux gone through so far?

Working, stable, performant device detection and /dev filesystem was developed on FreeBSD first. How many /dev filesystem setups has Linux gone through so far, and still re-writing them to this day?

Working, stable, performant RC systems were developed on BSD first. How many RC init systems has Linux gone through so far, and is still re-writing them to this day?

Working, stable, non-root X setups were developed on OpenBSD first. Canonical got to brag about being the first Linux distro maker to accomplish that this past year. But it's still not working right.

Working, stable, performant and portable packet filtering was developed on BSD first. How many packet filters has Linux gone through now?

Working, stable, performant USB stacks were developed on BSD first. How many different USB stacks has Linux gone through now?

Working, stable, in-kernel multi-channel, no-locking sound mixing was available on FreeBSD before ALSA was ever even thought of. How many "sound daemons", "sound servers", and sound stacks has Linux gone through now?

And the list goes on. Re-writing sub-systems with every release of the kernel is not "innovative".

Choosing between portability and innovation

Posted Mar 11, 2011 12:36 UTC (Fri) by nix (subscriber, #2304) [Link]

Virtually all of the things you whine about in this email haven't had any inkling of change in years, and to be honest it doesn't matter a damn who 'did it first'; it matters which works better. And in many of these cases, BSD loses massively precisely because they don't reinvent.

- /dev filesystem: two implementations (you often have to write one and throw it away to learn how *not* to do it). One upstream, architecture pretty much unchanged since day 1 (though this all depends on what you mean by a '/dev filesystem setup': one of the lessons of devfs was to try to stick to the traditional /dev layout to avoid breaking things unnecessarily). But perhaps you're referring to the shift to a single upstreamed udev database? That's *optional* and several major distros (Debian for one) don't use it.

- RC init systems: again, ambiguous. sysvinit was *the* init system for many decades, but is terribly limited, so much so that most of its functionality is almost unused. Init scripts, well, we started with BSD's 'giant shell script' approach. The difference is, BSD stuck with it even though it sucks massively in systems with package managers. Linux moved on, and even now the init script format defined right after that is still accepted by the latest bleeding-edge init replacements.

- non-root X: BSD has revoke(). Linux doesn't. Not a problem of reinvention nor NIH, just 'this is a really hard problem'. Most of the BSDs still don't have non-root X because they are NIHing with respect to each other.

- Packet filtering: the last packet filter redesign was something like eight years ago. I know of iptables packet filters that've been working unchanged for all that time. Huge code churn this is not.

- USB stacks: how many? well, er, one (which didn't work very well until hotplugging worked properly: the same hotplugging you damn elsewhere). Yeah, we have support for things like USB 3.0. That's not a 'new stack', that's changes to an existing stack to support new hardware.

Sheesh.

Re: second class citizens

Posted Mar 19, 2011 18:50 UTC (Sat) by gvy (guest, #11981) [Link]

NB: writing articles on "treating non-Linux operating systems as second-class citizens" doesn't count! :)

It would actually be interesting to know why one would move from Linux to BSD (I suppose the author isn't prone to the "linux sux freebsd is tru unix" meme), *and* then subsequently moan about it lagging behind.

When we prepared our yearly FLOSS conference six years ago, some of the local *bsd folks moaned that there were too few *bsd reports, and that the Tux-branded packets for papers, which were printed and generally available then, were unacceptable (reminds me of "political correctness"). They were advised to, well, bring in reports, and take care of funding and printing custom packets. They organized a "registration" flashmob tagged "OSDN is not Linux" instead...

So my suggestion stays the same: either help or don't pop up.

Choosing between portability and innovation

Posted Mar 2, 2011 18:39 UTC (Wed) by cmorgan (guest, #71980) [Link]

Very interesting article!

As an app developer I've certainly made decisions based upon the most popular platform; why spend a lot of time to support <10% of the market? As an app user it's nice when you can easily port an application to another OS or system.

*BSD developers might try cooperating with Linux developers in areas where Linux is getting far ahead. Why reinvent the wheel? Putting some egos aside might help.

Chris

It's beyond even OS assumptions.

Posted Mar 2, 2011 19:35 UTC (Wed) by ejr (subscriber, #51652) [Link]

We're in the "all the world's a PC" phase, to echo ye olde "all the world's a VAX" age. Many pieces of software require system-wide installation. Home directories cannot be shared between OS versions and *definitely* not between architectures. And people assume it's the right and true way to operate.

It's beyond even OS assumptions.

Posted Mar 3, 2011 10:49 UTC (Thu) by Seegras (guest, #20463) [Link]

> Home directories cannot be shared between OS versions and *definitely* not
> between architectures.

I don't know which OS you use, but on Linux I can; between Slackware, SuSE, and Debian 1.3 to 6.0 (and in fact, I did, I'm still using essentially my home from 10 years ago), and also from i386 to ARM to Sparc to UltraSparc to x64.

There are some problems with *BSD and Solaris, because there you need to install a slew of ports in order for my "basic" configs of my home to work (most obviously, you need a decent shell, bash to be precise).

It's beyond even OS assumptions.

Posted Mar 3, 2011 15:28 UTC (Thu) by ejr (subscriber, #51652) [Link]

Ah, so you don't use any non-system-wide browser (or gimp, etc.) plugins, or you have customized all the required environment variables. Many different arch plugins are dropped into the same directory. Some systems will just skip different-arch plugins, but you have to take care to rename them *before* copying on top of existing ones.

It's beyond even OS assumptions.

Posted Mar 6, 2011 0:34 UTC (Sun) by Tet (subscriber, #5433) [Link]

Precisely. I share my home directory, and mostly it works fine. But the morons at Mozilla don't know how to write software properly, so my Firefox plugins break every time I log in from a different machine. Gimp is slightly better (in that it at least supports the concept of running different versions and multiple instances concurrently), but still problematic.

Choosing between portability and innovation

Posted Mar 2, 2011 19:40 UTC (Wed) by mezcalero (subscriber, #45103) [Link]

What I actually suggested in that interview was not so much that the BSDs should adopt the Linux APIs, but instead that people should just forget about the BSDs. Full stop.

In the first sentence the article declares "portability" a key concept in the open source ecosystem. That's a claim I don't agree with at all. "Portability" might be something rooted in the darker parts of Unix tradition but I don't see how this matters to open source at all. The fact that Unix was forked so often is a weak point of it, not a strong point. And only that forking put portability between OSes on the agenda at all. So it has something to do with the history of Unix, not with the idea of Free Software. And caring for portability slows down development.

I have a hard time understanding why "portability" in itself should be a good thing. It doesn't help us create better software or a better user experience. It just slows us down, and makes things complicated. If you decide to care about portability you should have good reasons for it, but "just because" and "it's a key concept of open source" are bad and wrong reasons. (good reasons sound more like: "i need to reach customers using windows")

There are a couple of other statements in this text I cannot agree with. Claiming that the fast pace of development on Linux was "a problem" is just bogus. It's neither a bad thing for Linux nor for BSD, since even the latter gets more features more quickly this way: in many areas the work on Linux does trickle down to the BSDs after all. What *really* might be a problem for BSD is that BSD development is otherwise really slow.

And on Linux, if we ever want to catch up with MacOS or Windows we probably even need a much faster pace.

And the last sentence of the article is actually the wrongest statement of them all: "The innovation of Linux inevitably comes at a price: Linux is the de facto Unix platform now, ..." -- wow, this is just so wrong. I'd claim right the contrary! The Unix landscape was splintered and balkanized during most of its history. We finally are overcoming that, and that is a good thing -- that is a fantastic thing. That's not a price you pay, that's a reason to party!

Lennart

Choosing between portability and innovation

Posted Mar 2, 2011 20:51 UTC (Wed) by rfunk (subscriber, #4054) [Link]

While I agree with the idea of making use of what Linux provides, especially in low-level software, what you said in this particular comment is wrong in an incredible number of ways.

For one thing, your historical interpretation dismisses the fact that Unix was expressly written in a portable language, which was revolutionary at the time, and without that it would never have gone beyond DEC minicomputers. It also dismisses the benefits that occurred specifically because of forks/reimplementations such as BSD and Linux. You seem to think that Linux is the pinnacle of Unix evolution, and beyond it is nothing but better Linux, which seems rather short-sighted to me.

Alternative implementations of parts of a system improve the longevity of the system, since different implementations may end up adapting better to new circumstances down the road.

Portability has mattered to the free software ecosystem since day one. Just like no one company has all the best programmers, no one operating system has all the best programmers either. We can gain from people on the BSD platforms, and they can gain from us. Remember that much Free Software started on Solaris or other non-Linux platforms, and even now MacBooks are becoming increasingly popular as programming platforms, and there's no reason to shun those people. The Free Software Foundation releases software portable to Windows (with great derision from the OpenBSD folks), treating that software as a gateway drug to help lead users away from Windows.

Yes, we should take advantage of advancements in Linux, and push for more, but we also shouldn't completely throw away portability.

Choosing between portability and innovation

Posted Mar 2, 2011 21:04 UTC (Wed) by boudewijn (subscriber, #14185) [Link]

Erm... Unix was written in C to be portable between various kinds of hardware, right? Not between various kinds of Unix. Linux is fantastically portable between various kinds of hardware, and so is Linux software. Even if I, as a stupid application developer, am thankful to receive the odd ARM patch for my code...

Choosing between portability and innovation

Posted Mar 3, 2011 9:18 UTC (Thu) by roblucid (subscriber, #48964) [Link]

Look at Linux history and you'll see the feature sets applications depend on changing. Programs written to POSIX and keeping out of system details haven't needed much maintenance.
Those sacrificing portability and delving too deep are the pesky things that keep breaking.

What's needed is kernel/user space cooperation to develop good, useful APIs that are easy for applications to use and that can later be emulated on the BSDs with a different underlying implementation.

If it proves very difficult to write another implementation, that probably suggests that the API is a crock of **** and won't stand the test of time and the ever onward march of progress, but will instead be an ossified handicap to future progress. Good abstractions are things that can be built upon.

Choosing between portability and innovation

Posted Mar 2, 2011 21:05 UTC (Wed) by mezcalero (subscriber, #45103) [Link]

Don't mix up portability between CPU architectures and between OSes. This article is about the latter. Portability between CPU architectures is relatively easy and unproblematic in userspace, and much less of a headache (unless you hack assembly). I have no issue with CPU portability.

Choosing between portability and innovation

Posted Mar 2, 2011 21:19 UTC (Wed) by rfunk (subscriber, #4054) [Link]

In the Free Software world, I actually think the two types of portability are similar, have similar benefits, and are both important; they're both platforms on which our code runs, and ideally can be swapped out underneath. But only one sentence of what I wrote was about CPU portability.

Choosing between portability and innovation

Posted Mar 2, 2011 22:36 UTC (Wed) by airlied (subscriber, #9104) [Link]

The thing is, application portability support more likely came about because of fragmentation than as a means to allow fragmentation to happen.

Choosing between portability and innovation

Posted Mar 3, 2011 9:31 UTC (Thu) by roblucid (subscriber, #48964) [Link]

The AT&T and BSD strands of UNIX diverged on features that were innovated, like:

1) Reliable Signals
2) Virtual Memory
3) Shared Memory / Semaphores
4) Streams
5) FIFO's
6) TERMCAP & curses(3) (re-implemented by ATT as termlib with enhancements)

Given the widely perceived fragmentation of Linux distros, with tweaked feature sets (for instance deb vs. rpm), it's naive to think you can write without regard to portability. Even with the FHS, applications end up having to account for cosmetic differences between distros.

Portability is a requirement; otherwise you prevent change and innovation, and trust me, you would not get much done stuck on Version 6 UNIX.

Look at the problems discussed with the move to IPv6 last month!!! Portability allows the whole FOSS ecosystem to evolve and adapt.

Choosing between portability and innovation

Posted Mar 3, 2011 23:49 UTC (Thu) by jmorris42 (guest, #2203) [Link]

> Portability allows the whole FOSS ecosystem to evolve and adapt.

Amen. Without portability we wouldn't be where we are now. More importantly, we will eventually be boned without it. Sooner or later Linux comes to the end of the road. The landscape will eventually change such that someone will realize it is time for a total redesign of what an OS is, and if nothing is portable that new effort will fail, leaving us in a dead end to wither away.

Choosing between portability and innovation

Posted Mar 2, 2011 20:54 UTC (Wed) by hamjudo (guest, #363) [Link]

While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems, so I really want to be sitting at the machine with my only good monitors.

I don't see a huge loss in losing access to modern desktop features in a particular OS. I've been remotely accessing systems for more than 3 decades. I don't see any reason to stop doing that now.

Standards are a wonderful thing. They let us innovate, while continuing to work with the rest of the infrastructure. I can "work on" all of my headless systems all I want, without having to make them try to keep up with my desktop. Likewise, I get to use the latest and greatest desktop, without worrying about what it will break on my development systems.

Choosing between portability and innovation

Posted Mar 2, 2011 21:35 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

"While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems, so I really want to be sitting at the machine with my only good monitors."

So write a web-interface for these systems. Or use ssh/telnet. What's the problem?

Getting both portability and innovation

Posted Mar 2, 2011 21:59 UTC (Wed) by hamjudo (guest, #363) [Link]

I should have written: "While I don't use BSD, I do have many embedded Linux systems that can't run a modern desktop, or in some cases, any desktop. Good monitors cost a lot more than most of those systems anyways, so I sit at the system with the best monitors, and access the development systems using X Windows."

Standards like X11, ssh, NFS, tar, and many more that I don't even realize I'm using, make remote operation easy and painless.

BSD developers who want the latest desktop software can still use it, if they want it. It just means that they'll need a copy of Linux running on a real or virtual host.

Getting both portability and innovation

Posted Mar 2, 2011 22:29 UTC (Wed) by nix (subscriber, #2304) [Link]

You mean the X11 which Wayland is trying to shoot in the head? I'm sure when half your apps are Wayland apps tied to your physical machine that remote work using them will be *such* a lot of fun.

But nobody needs remote access, anyway. That's obsolete, like portability.

Bah.

Getting both portability and innovation

Posted Mar 2, 2011 23:05 UTC (Wed) by einstein (subscriber, #2052) [Link]

> You mean the X11 which Wayland is trying to shoot in the head? I'm sure when half your apps are Wayland apps tied to your physical machine that remote work using them will be *such* a lot of fun.

On the contrary, Wayland will include X11 support - but it won't be loaded by default. That makes sense, since 98% of linux users never use the remote desktop features of X11 - but for those who need it, it will be there. Think of OSX X11 support, done right.

Getting both portability and innovation

Posted Mar 3, 2011 0:06 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link]

The death isn't going to come from lack of X11 support in the system, though. It's going to come when apps are written for Wayland rather than X11. Yes, legacy apps will still work with network transparency, but all the development effort will be done on apps that depend on Wayland-only features. The stuff that works over the network will die a slow death from bitrot.

Getting both portability and innovation

Posted Mar 3, 2011 9:34 UTC (Thu) by roblucid (subscriber, #48964) [Link]

Exactly! And pixel scraping won't work too well, when there's no desktop to scrape pixels off.

Getting both portability and innovation

Posted Mar 3, 2011 0:57 UTC (Thu) by drag (subscriber, #31333) [Link]

Every single day I use remote desktop applications at work. So do most people at my organization.

The clincher is that none of it depends on X... at all.

Getting rid of X as your display server does not eliminate the possibility of using remote applications. Nor does it remove the possibility of using X11 apps either.

It's mostly a red herring when discussing Wayland vs Xfree Server.

Besides all that...

X11 is obsolete, slow, and a poor match for today's technology. It could be something nice, but that would require X12 and, if you ever noticed, nobody is working on that.

In fact I think that people are now using more remote applications than they ever did in the past. It's just that relatively few people actually use X11 networking to do it. It's a poor choice for a variety of reasons. I am not describing what I would like it to be... I'm just telling it the way it is. The remote access boat has sailed and its captain is named 'Citrix'.

I would like to change this fact, but X11 networking is not going to cut it.

wayland does not preclude application remoting

Posted Mar 3, 2011 1:28 UTC (Thu) by nwnk (guest, #52271) [Link]

I grow increasingly tired of this strawman.

No one has defined a remoting protocol specifically for a wayland compositing environment yet. This is good, not bad, because it means we have the opportunity to define a good one. The tight binding between window system and presentation in X11 means that while purely graphical interaction is relatively efficient, any interaction between apps is disastrously bad, because X doesn't define IPC; it defines a generic communication channel in which you can build the IPC you want, which results in a cacophony of round trips.

You want a compositing window system. You probably want your remoting protocol in the compositor. The compositing protocol and the remoting protocol should probably mesh nicely but they're not the same problem and conflating them is a fallacy.

You'll note that wayland also does not define a rendering protocol. In fact it goes out of its way not to. Yet for some reason, wayland is not accused of killing OpenGL, or killing cairo, or killing any other rendering API (besides core X11 I suppose). If anything, that people so tightly mentally bind remoting, rendering, and composition is a tribute to the worse-is-better success of X11.

Unlearn what you have learned.

wayland does not preclude application remoting

Posted Mar 3, 2011 16:38 UTC (Thu) by nix (subscriber, #2304) [Link]

I hadn't thought of putting remoting in the compositor. It seems... odd, but not fundamentally any odder than doing it in the thing which draws the graphics, and I suppose it should work, since the compositor sees everything flowing to and from the user. I suppose you could put all of X11 support in there as well and then not need to support remoting anywhere else. (Which makes me wonder why the X11 compatibility stuff isn't already being done that way... or is it?)

Choosing between portability and innovation

Posted Mar 2, 2011 22:42 UTC (Wed) by brugolsky (subscriber, #28) [Link]

Yes, and the biggest crying shame is that the networking setup in most distributions makes so little use of features (like traffic shaping) that have been available since Linux 2.2 in 1999! [ALT Linux, with its /etc/net framework, is an exception here.]

Choosing between portability and innovation

Posted Mar 3, 2011 16:39 UTC (Thu) by nix (subscriber, #2304) [Link]

Available... but documented? In English? Anywhere? (lartc.org, I suppose...)

Choosing between portability and innovation

Posted Mar 3, 2011 21:51 UTC (Thu) by xyzzy (guest, #1984) [Link]

I'd have to agree; the last time (quite a few years ago) I tried to use the traffic shaping, the documentation wasn't much help. The lartc howtos were great as cookbook "do it this way" examples, but starting from scratch I was unable to write a tc ruleset that worked. That's "not worked" as in it wouldn't pass any traffic at all.

If there's better documentation now I'd love to know about it.

Choosing between portability and innovation

Posted Mar 4, 2011 16:45 UTC (Fri) by jeremiah (guest, #1221) [Link]

LOL.. amen. Gotta love the high black arts of Linux network administration. Maybe it's time for O'Reilly and Olaf to come out with a revised version of the Linux Network Administrator's Guide. My version is from 1995.

On Tradition of Portability in UNIX

Posted Mar 3, 2011 9:07 UTC (Thu) by roblucid (subscriber, #48964) [Link]

Portability was a key feature of UNIX itself from its early days. It was often simpler to port the OS to new architectures than to port applications to different OSes. That included moving from 16-bit minis to 32-bit mainframe-style machines, and application programs benefited from a sane and simple model for file handling (compared to most '60s and early '70s OSes).

There's nothing wrong with components implementing a layer that takes advantage of system-specific features. The API provided by that layer should, however, be designed to be cleanly re-implementable and "cut" where much detail can be hidden, so that applications see reduced complexity and focus on the essence rather than implementation details.

Exposing everything to applications just leads to a morass of complexity and poorly done, buggy reimplementations of the same old thing. The whole Linux audio story with OSS/ALSA, and your difficulties with PulseAudio, ought to show why it's VERY BAD to have implementation specifics leak into widely distributed applications.

Design of APIs is a KEY selling point, and if the BSDs truly have problems emulating a hot-plug disk layer for the desktop, then it suggests that the implementation that replaced HAL is overly coupled and badly abstracted.

On Tradition of Portability in UNIX

Posted Mar 3, 2011 17:52 UTC (Thu) by dlang (guest, #313) [Link]

If I am not mistaken, the person calling for ignoring BSD and portability is the author of PulseAudio.

Choosing between portability and innovation

Posted Mar 3, 2011 9:08 UTC (Thu) by dgm (subscriber, #49227) [Link]

> it doesn't help us create better software or better user experience.

Wrong. It *does* help create better software. Every time I try to build my software with another compiler I discover things that can be done better. Every time I try to port my code to a new platform I find better abstractions for what I was trying to achieve.

Maybe you don't care about that. Maybe you don't care about good, solid code that works because it is properly written. Maybe you just care about getting the thing out and declaring you're done?

I guess I will stay away from systemd for a while, then.

Choosing between portability and innovation

Posted Mar 3, 2011 11:00 UTC (Thu) by Seegras (guest, #20463) [Link]

> > it doesn't help us create better software or better user experience.
>
> Wrong. It *does* help create better software. Every time I try to build my
> software with another compiler I discover things that can be done better.
> Every time I try to port my code to a new platform I find better
> abstractions for what I was trying to achieve.

Absolutely.

I've got to hammer this home some more. Porting increases code quality.

("All the world is windows" is, by the way, why games written for windows are sometimes so shabby when released. And only after they've been ported to some other platform they get patched to be useable on windows as well).

Choosing between portability and innovation

Posted Mar 3, 2011 12:32 UTC (Thu) by lmb (subscriber, #39048) [Link]

That is true: porting increases quality.

However, only to a point, because one ends up with ifdefs strewn through the code, or internal abstraction layers to cover the differences between supported platforms of varying ages. This does not improve the quality, readability, or maintainability of the code indefinitely.

Surely one should employ caution when using new abstractions, and clearly define one's target audience. And be open to clean patches that make code portable (if they come with a maintenance commitment).

But one can't place the entire burden of this on the main author or core team, which also has limited time and resources. Someone else can contribute compatibility libraries or port the new APIs to other platforms - it's open source, after all. ;-)

And quite frankly, a number of POSIX APIs suck. Signals are one such example, IPC is another, and let us not even get started about the steaming pile that is threads. Insisting that one sticks with these forever is not really in the interest of software quality. They are hard to get right, easy to get wrong, and have weird interactions; not exactly an environment in which quality flourishes.

If signalfd() et al for example really are so cool (and they look quite clean to me), maybe it is time they get adopted by other platforms and/or POSIX.
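
For readers who haven't met it, here is a minimal sketch of what the signalfd() approach looks like (Linux-only; error handling abbreviated, and the choice of SIGTERM is arbitrary). The point is that signal delivery becomes an ordinary read on a file descriptor, which can sit in the same poll()/epoll() loop as everything else:

    /* Handle SIGTERM via signalfd() instead of an asynchronous handler. */
    #include <sys/signalfd.h>
    #include <signal.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGTERM);

        /* Block normal delivery so the signal is only reported via the fd. */
        sigprocmask(SIG_BLOCK, &mask, NULL);

        int sfd = signalfd(-1, &mask, 0);
        if (sfd < 0) {
            perror("signalfd");
            return 1;
        }

        /* The fd could now be handed to poll()/epoll() alongside sockets;
         * here we just read one event synchronously. */
        struct signalfd_siginfo info;
        if (read(sfd, &info, sizeof(info)) == sizeof(info))
            printf("got signal %u from pid %u\n", info.ssi_signo, info.ssi_pid);

        close(sfd);
        return 0;
    }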

Choosing between portability and innovation

Posted Mar 3, 2011 17:53 UTC (Thu) by nix (subscriber, #2304) [Link]

Linux innovations like this routinely get adopted by POSIX (obviously not things like sysfs, but I would not be remotely surprised to find things like signalfd getting adopted, since it cleans up the horror of signal handling enormously), but new POSIX revisions are not frequent and even after that it takes a long time for new POSIX revisions to get implemented in some of the BSDs (and, for that matter, in Linux, in the rare case that it didn't originate the change).

Choosing between portability and innovation

Posted Mar 3, 2011 13:02 UTC (Thu) by epa (subscriber, #39769) [Link]

> Porting increases code quality.

By more than if you spent the same number of hours on some other development task without worrying about portability? Working on portability does have positive side effects even for the main platform, but it takes away programmer time from other things. So you have to decide if it's the best use of effort.

Choosing between portability and innovation

Posted Mar 3, 2011 17:54 UTC (Thu) by dlang (guest, #313) [Link]

yes, because dealing with portability forces you to think more about the big picture, and exposes you to different ways that things can be done.

Choosing between portability and innovation

Posted Mar 3, 2011 14:51 UTC (Thu) by mezcalero (subscriber, #45103) [Link]

If you want to improve the quality of your software, the best thing you can do is to use a tool whose purpose is exactly that. So sit down and reread your code, or sit down and run a static analyzer on it. Porting it to other systems is not the right tool for the job. It might find you a bug or two as a side effect, but if you want to find bugs then your time is much better invested in a tool that checks your code comprehensively and actually looks for problems, rather than just a small set of incompatibilities.

Porting is in fact quite a bad tool for this job, since it primarily shows you issues that are of little interest on the platform you actually care about. Also, the need for abstraction complicates reading the code, and the glue code increases the chance of bugs, since it is more code to maintain.

So, yup. If you want to review your code, then go and review your code. If you want to statically analyze your code, then do so. By porting, however, you will probably find fewer new issues than you introduce.

Choosing between portability and innovation

Posted Mar 3, 2011 17:43 UTC (Thu) by jensend (guest, #1385) [Link]

But static analyzers etc. won't help you realize how your API is badly designed and where it needs improving, while porting will. I know it's hard for you to conceive, since you think that your gift of PulseAudio to mortals was the biggest deal since Prometheus gave them fire, but not everybody thinks that it's terribly well designed. In its earlier versions, it was miserable to deal with.

The pattern keeps repeating itself: those who develop new frameworks where portability is an afterthought tend to have tunnel vision, and the resulting design is awful. Sure, the software gets written, but only to be replaced by another harebrained API a couple of years down the road. This is what gets us the stupid churn that is one of the prime reasons the Linux desktop hasn't really gotten very far in the past decade. I'll give two examples:

If people had sat down and asked "what should a modern Unix audio subsystem look like? What are the proper abstractions and interfaces, regardless of what kernel is under the hood?" we wouldn't have had the awful trainwreck that is ALSA, and half of the complications we have today would have been averted.

The only people doing 3D work on Linux who don't treat BSD as a third-class citizen are the nVidia developers. Not coincidentally, they're the only ones with an architecture that works well enough to be consistently competitive with Windows. The DRM/Mesa stack has seen a dozen new acronyms come and go in the past few years, without much real improvement for end users. Frameworks have often been designed for Intel integrated graphics on Linux and only bludgeoned into working for other companies' hardware and, only recently, for other kernels. Even for Intel on Linux the result is crap.

Choosing between portability and innovation

Posted Mar 3, 2011 18:08 UTC (Thu) by nix (subscriber, #2304) [Link]

> The DRM/Mesa stack has seen a dozen new acronyms come and go in the past few years, without much real improvement for end users

Shaders and a shader compiler and acceleration and 3D support for lots of new cards isn't enough for you?

Choosing between portability and innovation

Posted Mar 4, 2011 3:30 UTC (Fri) by jensend (guest, #1385) [Link]

It's true that there have been a lot of changes and advances in hardware and in OpenGL; keeping up with these takes effort and new ideas.

But if you go back a decade to when Linux graphics performance and hardware support (Utah-GLX and DRI) were closer to parity with Windows, things weren't simple then either. There were dozens of hardware vendors rather than just three (and a half, if you want to count VIA), each chip was more radically different from its competitors and even from other chips by the same company, etc.

While there's been progress in an absolute sense, relative to Mac and Windows, Linux has lagged significantly in graphics over the past decade. Graphics support is a treadmill; Linux has often been perilously close to falling off the back of it.

I don't mean to say the efforts of those working on the Linux X/Mesa stack alphabet soup have all been pointless; nor do I claim that all of the blame rests with them. The ARB deserves a lot of the blame for letting OpenGL stagnate so long. It's a real shame that other graphics vendors and developers from other Unices haven't taken a more active role in helping design and implement the graphics stack, and while I think more could have been done to solicit their input and design things with other hardware and kernels in mind, they're responsible for their own non-participation.

Choosing between portability and innovation

Posted Mar 4, 2011 8:53 UTC (Fri) by drag (subscriber, #31333) [Link]

It's because the graphics in Linux was not so much designed as puked up by accident. It's just something that has been cobbled together and extended to meet new needs instead of undergoing an entire rework like every other relevant OS (i.e. Windows and OS X). You have no fewer than 4 separate drivers running 3 separate graphics stacks. They all have overlapping jobs, use the same hardware, and drastically need to work together in a way that is nearly impossible.

It's one thing to treasure portability, but it's quite another when the OS you care about does not even offer the basic functionality that you need to run your applications and improve your graphics stack.

Forcing Linux developers not only to improve and fix Linux's problems, but also to drag the BSDs kicking and screaming into the 21st century, is completely unreasonable.

Ultimately if you really care about portability the BSD OSes are the least of your worries. It's OS X and Windows that actually matter.

Choosing between portability and innovation

Posted Mar 4, 2011 9:15 UTC (Fri) by airlied (subscriber, #9104) [Link]

The problem was that while Windows and Mac OS X got major investment in graphics due to being desktop operating systems, Linux got no investment for years.

So while Linux was making major inroads into server technologies, there was no money behind desktop features such as graphics. I would guess that, compared to the manpower a single vendor puts into a single cross-platform or Windows driver, open source development across drivers for all the hardware is about 10% of the size.

Choosing between portability and innovation

Posted Mar 3, 2011 19:33 UTC (Thu) by mezcalero (subscriber, #45103) [Link]

Uh, you've got it all backwards. PA has been portable from the very beginning. I am pretty sure that this fact didn't improve the API in any way; in fact it's not really visible in the API at all. This completely destroys your FUD-filled example, doesn't it?

I think the major problem with the PA API is mostly its fully asynchronous nature, which makes it very hard to use. I am humble enough to admit that.

If you want to figure out if your API is good, then porting won't help you. Using it yourself however will.

From your comments I figure you have never bothered with hacking on graphics or audio stacks yourself, have you?

Choosing between portability and innovation

Posted Mar 3, 2011 22:59 UTC (Thu) by nix (subscriber, #2304) [Link]

Still, I think the asynchronous API was the right approach, if just because (as the PA simple API shows) it is possible to implement a synchronous API in terms of it, but not vice versa.

Mandatorily blocking I/O is a curse, even if nonblocking I/O is always trickier to use. Kudos for making the right choice here.
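
As a concrete illustration of that layering, here is a hedged sketch using the blocking "simple" API, which libpulse builds on top of the asynchronous one. The client name, stream name, and sample format below are arbitrary, and error handling is abbreviated:

    /* Play one second of silence through PulseAudio's blocking simple API. */
    #include <pulse/simple.h>
    #include <pulse/error.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        pa_sample_spec spec = {
            .format   = PA_SAMPLE_S16LE,
            .rate     = 44100,
            .channels = 2,
        };
        int err;

        pa_simple *s = pa_simple_new(NULL, "demo", PA_STREAM_PLAYBACK, NULL,
                                     "silence", &spec, NULL, NULL, &err);
        if (!s) {
            fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(err));
            return 1;
        }

        int16_t buf[44100 * 2] = { 0 };          /* one second, two channels */
        if (pa_simple_write(s, buf, sizeof(buf), &err) < 0)
            fprintf(stderr, "pa_simple_write: %s\n", pa_strerror(err));

        pa_simple_drain(s, &err);                /* block until played out */
        pa_simple_free(s);
        return 0;
    }

On a typical system this builds with something like cc demo.c $(pkg-config --cflags --libs libpulse-simple).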

Choosing between portability and innovation

Posted Mar 4, 2011 4:19 UTC (Fri) by jensend (guest, #1385) [Link]

I'm no expert in this area, but I thought I remembered people grumbling that Pulse was cross-platform in name only, and that the problem wasn't just a lack of people putting in time to make it work but also a number of design issues. I could be wrong. My main example here is ALSA, not Pulse.

Choosing between portability and innovation

Posted Mar 3, 2011 22:23 UTC (Thu) by airlied (subscriber, #9104) [Link]

nvidia are being paid money by a customer, AFAIK, to make stuff work on FreeBSD. If you pay me money I'll make any graphics card work on FreeBSD, but it would cost you a lot of money.

otherwise you have no clue what you are talking about. Please leave lwn and go back to posting comments on slashdot.

Choosing between portability and innovation

Posted Mar 8, 2011 0:49 UTC (Tue) by lacos (guest, #70616) [Link]

> people should just forget about the BSDs

You are promoting vendor lock-in. In this respect, it does not matter if a given application is free software: unless the end user has the resources to make the port happen, he's forced to use the kernel/libc you have chosen for him.

Portability of application source code is about adhering to standards that were distilled with meticulous work, considering as many implementations as possible. The answer to divergent implementations is not killing all of them except one, and taking away the choice from the user. The answer is standardization, and a clearly defined set of extensions per implementation, and allowing the user to choose the platform (for whatever reasons) he'll run the application on.

> The Unix landscape was splintered and balkanized during most its history. We finally are overcoming that, and that is a good thing

You seem to intend to overcome diversity by becoming a monopoly. The freeness of software (free as in freedom) doesn't matter here, see above.

Choosing between portability and innovation

Posted Mar 11, 2011 14:04 UTC (Fri) by viro (subscriber, #7872) [Link]

Dear duhveloper. Do not use the worst misdesigns shoved down our throats by Scamantec and its ilk. Yes, I mean fanotify(). No, the fact that it's got in does *NOT* mean it's good to use or should be enabled. Sigh... Lusers - can't live with them, can't dispose of the bodies without violating hazmat regs...

Choosing between portability and innovation

Posted Mar 2, 2011 23:44 UTC (Wed) by aggelos (subscriber, #41752) [Link]

As regards udev in DragonFlyBSD, it is not actually a port of any code. Alex tried providing a Linux-udev-compatible API in the library (libdevattr) he created, but the implementation is completely different to Linux's udev. Unfortunately, a quick load of http://www.kernel.org/pub/linux/utils/kernel/hotplug/libu... reveals that the udev api is intimately familiar with sysfs, so part of the API is pointless unless one reimplements sysfs. That said, it is encouraging that the portable part of the udev API was so close to our requirements that we decided to just adopt it where possible.
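
To make the sysfs coupling concrete, here is a hedged sketch of a typical libudev enumeration (Linux-only; error handling abbreviated). Note that the list entries handed back are sysfs paths, which is the part of the API that assumes a sysfs-like filesystem underneath:

    /* Enumerate block devices through libudev and print the sysfs path
     * and device node of each one. */
    #include <libudev.h>
    #include <stdio.h>

    int main(void)
    {
        struct udev *udev = udev_new();
        struct udev_enumerate *en = udev_enumerate_new(udev);
        struct udev_list_entry *entry;

        udev_enumerate_add_match_subsystem(en, "block");
        udev_enumerate_scan_devices(en);

        udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(en)) {
            /* The entry name *is* a sysfs path, e.g. /sys/devices/... */
            const char *syspath = udev_list_entry_get_name(entry);
            struct udev_device *dev = udev_device_new_from_syspath(udev, syspath);
            const char *node = udev_device_get_devnode(dev);

            printf("%s -> %s\n", syspath, node ? node : "(no device node)");
            udev_device_unref(dev);
        }

        udev_enumerate_unref(en);
        udev_unref(udev);
        return 0;
    }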

Choosing between portability and innovation

Posted Mar 3, 2011 9:39 UTC (Thu) by roblucid (subscriber, #48964) [Link]

Sounds like a bad API leaking internals.

There seems to be a fashion against good abstractions and layering. The FOSS ecosystem has provided choice because of modular components, allowing re-implementation and improved features without breaking the application base.

Choosing between portability and innovation

Posted Mar 3, 2011 0:58 UTC (Thu) by jengelh (guest, #33263) [Link]

>Linux-centric developers are used to [...] breaking APIs each release. The BSD world doesn't work that way,

Right, in the BSD world they even break the ABI. They don't have much of a problem with removing a system call or a libc call.

Choosing between portability and innovation

Posted Mar 3, 2011 9:53 UTC (Thu) by roblucid (subscriber, #48964) [Link]

How badly have end users been affected by those ABI changes? They either have the core binary release that comes with the OS, or the ports system to compile non-core packages. Most Linux end users seem to think they need freshly compiled source for their latest distro, and repos make this convenient; third-party generic software releases often break due to differences between distros.

The lack of a kernel driver ABI for evil proprietary modules has probably pained Linux end users more than small clean-ups to the system call and libc interfaces would; after all, the complaint is that only POSIX features get used.

Choosing between portability and innovation

Posted Mar 3, 2011 1:31 UTC (Thu) by i3839 (guest, #31386) [Link]

I just hope that Linux systems keep working without any of the ConsoleKit, PolicyKit, systemd, HAL, PulseAudio or dbus crap running or installed. Sadly, dbus has to be installed now, and is started automatically by programs using dbus. It's not just BSD and other systems under pressure, it's plain Linux as well.

Choosing between portability and innovation

Posted Mar 3, 2011 10:33 UTC (Thu) by AndreE (guest, #60148) [Link]

boilerplate trolling? At least try to be original

Choosing between portability and innovation

Posted Mar 4, 2011 4:42 UTC (Fri) by Frej (subscriber, #4165) [Link]

He did keep the buzzword count low, unlike a few others :-).

Choosing between portability and innovation

Posted Mar 4, 2011 7:01 UTC (Fri) by i3839 (guest, #31386) [Link]

Could you elaborate? I'm not familiar with the term.

I thought I brought up a real problem. Maybe it doesn't concern you, but I'm hopefully not the only one worried about the software ecosystem becoming less heterogeneous and flexible, with more and more "mandatory" dependencies, even intrusive low-level system ones. It's like graphics software not only depending on a specific toolkit, but also on having a whole desktop system installed.

Choosing between portability and innovation

Posted Mar 8, 2011 1:10 UTC (Tue) by lacos (guest, #70616) [Link]

> boilerplate trolling? At least try to be original

Okay, I'll try to be "original" for him.

All this desktop integration goo is getting increasingly difficult to get rid of. There are users who don't need their features, and certainly don't want their bugs, possible inter-app privacy holes, and very probable security vulnerabilities. So while these components are helpful for most people, there's a small group of "power users" who consciously want to remove them, and it is more and more difficult.

The only thing I run on my "interactive systems" (both home desktop and work laptop, different distributions), out of "ConsoleKit, PolicyKit, systemd, HAL, PulseAudio or dbus" is D-Bus; and even that only because I can't remove it. I like to know why my UID runs a process; for PulseAudio, I was unable to find any reason.

I have four unencrypted pendrives, one encrypted pendrive, one flash card reader, one dumb digital camera; obviously an optical drive, and an encrypted hard disk in a USB disk enclosure. Two non-root users can mount different sets of these, with reasonable file permissions and good IO performance afterwards (for a change). I don't buy gadgets each second day, so it's not hard to update my static config. The inability to mount unseen pieces of hardware is a feature.

Choosing between portability and innovation

Posted Mar 9, 2011 13:49 UTC (Wed) by nix (subscriber, #2304) [Link]

There are good reasons why PA runs as you and not as a systemwide user (though it can). PA can be asked to load modules providing new features at runtime: this is obviously forbidden for systemwide daemons. PA can operate in a zero-copy mode, transferring audio data directly over shared memory: for obvious security reasons this must be avoided if the daemon may serve more than one user, forcing everything to be serialized and deserialized again.
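
For illustration, a hedged sketch of that runtime module loading through the asynchronous client API follows; the module name and argument are only examples, and error handling is abbreviated:

    /* Ask the (per-user) PulseAudio daemon to load a module at runtime. */
    #include <pulse/pulseaudio.h>
    #include <stdio.h>

    static void load_cb(pa_context *c, uint32_t idx, void *userdata)
    {
        if (idx == PA_INVALID_INDEX)
            fprintf(stderr, "module load failed\n");
        else
            printf("module loaded with index %u\n", idx);
        pa_mainloop_quit((pa_mainloop *)userdata, 0);
    }

    static void state_cb(pa_context *c, void *userdata)
    {
        switch (pa_context_get_state(c)) {
        case PA_CONTEXT_READY:
            /* Connected: request the load and wait for the answer. */
            pa_operation_unref(pa_context_load_module(c, "module-null-sink",
                                                      "sink_name=demo",
                                                      load_cb, userdata));
            break;
        case PA_CONTEXT_FAILED:
        case PA_CONTEXT_TERMINATED:
            pa_mainloop_quit((pa_mainloop *)userdata, 1);
            break;
        default:
            break;
        }
    }

    int main(void)
    {
        pa_mainloop *m = pa_mainloop_new();
        pa_context *ctx = pa_context_new(pa_mainloop_get_api(m), "demo-loader");
        int ret = 0;

        pa_context_set_state_callback(ctx, state_cb, m);
        pa_context_connect(ctx, NULL, PA_CONTEXT_NOFLAGS, NULL);
        pa_mainloop_run(m, &ret);

        pa_context_unref(ctx);
        pa_mainloop_free(m);
        return ret;
    }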

PA itself is plainly necessary: most modern sound hardware can't mix, so the first open()er blocks all the rest. This is more than slightly aggravating to users (the difference between a very long block and a crash is not very large from a user perspective).

PA is really not that bad. Yes, it was buggy once upon a time (mostly because it used features that had never been used by anyone else), but these days it largely Just Works.

Choosing between portability and innovation

Posted Mar 11, 2011 3:46 UTC (Fri) by phoenix (guest, #73532) [Link]

As has been shown by the BSDs, you do not need PulseAudio, or any other sound daemon or sound server, in order to have a non-blocking, multi-source/single-output sound setup.

Just because Linux can't do it, doesn't mean it's not possible, nor that it should be avoided.

Choosing between portability and innovation

Posted Mar 11, 2011 12:40 UTC (Fri) by nix (subscriber, #2304) [Link]

Yeah, great, you can do all the flaming horror of sound mixing in kernel space. Thanks but no thanks.

Choosing between portability and innovation

Posted Mar 3, 2011 3:01 UTC (Thu) by josephrjustice (subscriber, #26250) [Link]

I can think of one reason why portability can be desirable, and why focusing development efforts solely on the Linux platform might be undesirable, that I haven't seen mentioned yet: avoiding an operating system software monoculture.

My understanding is that software monocultures are often thought of as undesirable, because when a sufficiently severe problem occurs within the members of the monoculture it can wipe them all out (similarly to how a disease or pest can destroy an ecological monoculture vulnerable to that disease/pest) and, if there are no other alternatives available for use, then the functionality provided by that software will be lost until (and if!) the software can be altered to resist or become immune to the problem. When that software is the operating system, which provides the foundation required by all other uses of the computer (such as application software), it seems as if it would be even more important than usual to avoid the risks caused by a software monoculture.

We can easily see a real-life example of exactly this sort of situation in the problems regularly encountered by users of the Microsoft Windows family of operating systems. Historically, every so often we see a great furor erupt in the mainstream media, as another vulnerability is discovered and exploited within Windows which results in many people world-wide losing the use of their computer for a time, or even losing their data, until the vulnerability is patched or otherwise remedied. Of course, Windows is known to be especially vulnerable to this sort of thing due to its historical underpinnings and ancestry, as well as due to the fact that many Windows users are non-technical and unwilling (or unable!) to be proactive in keeping their systems (relatively) secure.

However, the fact that Windows is an especially vulnerable target with relatively unskilled users, while Unix-like operating systems (including Linux) are relatively resistant targets with (presumably) relatively knowledgeable users, does not excuse the latter group from having to beware the risks inherent in being a software monoculture! Unix-like and Linux-based operating systems have vulnerabilities too; if we doubt this, we can simply look at the ongoing series of vulnerabilities announced by CERT and listed in LWN every week. And we've had our moments of furor and panic in the mainstream media spotlight too -- perhaps the most well known of these was Robert Morris's Internet worm. (See, for instance, http://en.wikipedia.org/wiki/Morris_worm .)

A second reason for considering software monoculture to be undesirable is that it reduces the opportunity for instances of hybrid vigor (see http://en.wikipedia.org/wiki/Hybrid_vigor ) to occur. In the context of software, this would be the porting or reimplementation of desirable features or capabilities originally provided by some other independent implementation of that type of software (such as operating systems). Of course, with software, there are unique impediments to the act of hybridization that can occur, such as incompatible software licenses. Even so, we see instances of software hybridization occur all the time at all levels of software.

I agree that it is desirable and usually a good thing to try to fully use the capabilities provided by an operating system, including those capabilities which might not be portable to or currently unavailable on by other sibling operating systems. (If nothing else, we want to try to use those capabilities to see if they're worth the effort to reimplement elsewhere!) I agree that worrying about portability issues can make software harder to implement and slow down the pace of development. I agree that, in at least some instances, it may not be worth worrying about portability issues, or at least considering them only at very most a distant second (if not even lower) in terms of importance. In other instances, perhaps worrying about portability should be left to "somebody else", while the original developer concentrates on advancing the state of the art on the primary development platform.

However, I disagree that we should blithely establish, or limit ourselves to, a software monoculture, especially an operating system monoculture, without duly recognizing and considering the risks of that decision and the costs accompanying those risks (costs which might reduce or even eliminate the benefits of living within a software monoculture), and without recognizing the benefits and value potentially provided if we do not limit ourselves to a monoculture environment.

Joseph

Choosing between portability and innovation

Posted Mar 3, 2011 5:43 UTC (Thu) by drag (subscriber, #31333) [Link]

> I can think of one reason why portability can be desirable, and why focusing development efforts solely on the Linux platform might be undesirable, that I haven't seen mentioned yet. Namely, on the grounds of avoiding a operating system software monoculture.

The thing is that an OS and desktop environment exist for the sole purpose of running applications. They're an abstraction layer of sorts, designed to facilitate the use and development of software.

Therefore anything that makes the system more stable, makes developers' lives easier, makes users' lives easier, or makes things run smoother and faster is welcome. Anything the OS does to improve the attributes of the programs running on it is a fantastic thing.

Avoiding monoculture for the sake of avoiding monoculture is an extremely dubious approach and just ends up making the systems worse, not better.

> We know that Unix-like and Linux-based operating systems are known to have vulnerabilities too, and if we don't know this we can simply look at the ongoing series of vulnerabilities announced by CERT and listed in LWN every week.

The biggest problem for Linux on the desktop is that it's entirely unnecessary for somebody to actually root your system to do very significant damage to you. The /home/username/ directory and user account are a soft target. That is where all your passwords are stored, where you carry out online commerce, where you communicate, etc. Your only line of defense is not your OS, but your applications. Firefox or Chrome is the software that keeps you safe, not the Linux kernel.

At least not yet. I'm hoping that Ubuntu's use of AppArmor (or Smack or SELinux) will eventually provide some layered security.

Some divergence and trying different approaches is valuable, but security does not happen by accident. It's not like 'Oh, we are different, therefore we are secure'. Diversity has limited utility against 'dumb' automated attacks from very basic viruses and worms, but it provides almost no real benefit against real attacks other than by complete accident. Against an intelligent attacker with a directed focus it's just a paper tiger.

Windows security sucked not because everybody was using Windows 2000, but because Windows security was a pile of shit. Microsoft has made huge strides, and security is no longer a significant selling point for Linux desktop usage, if it ever was.

> http://en.wikipedia.org/wiki/Hybrid_vigor

Software is not a biological thing. Be careful of drawing conclusions from false analogies. If there is a problem with the software it's fixable. Not so much with DNA. At least not yet.

Especially if we use a layered design with (a minimal number of) formal APIs/ABIs. That way you can fix problems in a layer without perturbing the software above or below it, unless it's very necessary.

Remember the 'layered design' concept of Unix and TCP/IP?

That's exactly what the kernel does. The formal API/ABI layer between it and userspace creates a significant amount of freedom for the kernel to change and develop. Just don't break userspace, and developers can do most anything they want if they are smart enough to pull it off. It's not perfect (sysfs), but it may not be that big of a deal as long as breakage is kept to a minimum.

Compare and contrast this with a non-layered approach like your X server from a couple of years ago: the same application that had access to your PCI bus to configure hardware also provided network services, provided your terminal services, and touched almost every single application you used. Even the stuff that ran in your xterms had to get its input from you filtered through X, which also, of course, runs setuid root. It's not only extremely questionable design security-wise, it makes for a very fragile system.

Thank goodness for DRI2/KMS/GEM/TTM/etc...

Choosing between portability and innovation

Posted Mar 3, 2011 5:57 UTC (Thu) by dlang (guest, #313) [Link]

Linux was able to grow and prosper because the existing code was portable and therefore Linux was able to run it.

If Sun had managed to get people programming just for it (the way that people are advocating programming only for Linux) back in the days when it was the premier OS, it would have been much harder for Linux to get started.

Linux developers today owe it to everyone (including themselves) not to raise the bar unnecessarily for the eventual Linux replacement.

that being said, having software take advantage of the latest features is a good thing, but the software should degrade gracefully in the absence of those latest features. This may mean falling back to something not as good, or it may mean disabling some features where there is no fallback.

Choosing between portability and innovation

Posted Mar 3, 2011 7:38 UTC (Thu) by airlied (subscriber, #9104) [Link]

So you should have a whole lot of fallbacks that nobody is testing? They will most likely bitrot to hell, since nobody runs them except maybe some hero once every 2-3 years.

Choosing between portability and innovation

Posted Mar 3, 2011 10:13 UTC (Thu) by roblucid (subscriber, #48964) [Link]

It's called error handling, a major pain yes but robust programs have it.

If applications are no longer designed that way, then when there's a call for Linux 3.0 for some currently unforeseeable reason, there'll be a terrible chicken-and-egg problem, which will make the KDE 4.0 saga look minor.

The X redevelopments are actually a good example of the need for this, as they have NOT provided uninterrupted functionality to end users; very many complain about breakage and missing features over the last couple of years.

Choosing between portability and innovation

Posted Mar 3, 2011 15:03 UTC (Thu) by mezcalero (subscriber, #45103) [Link]

It's really not so simple. You cannot claim: "by having N competing but mostly identical projects a bug can only be used to take down 1/Nth of the computers". It is more like: "by having N competing but mostly identical projects you increase the number of bugs in your system N times".

And if we had 10 competing implementations then we'd have 10x more bugs. That sounds pretty bad to me.

I think nourishing this kind of competition is a bad tool to combat computer insecurity. If you have a single well-reviewed implementation I think you are much better off than having 10 badly reviewed ones.

Choosing between portability and innovation

Posted Mar 3, 2011 17:05 UTC (Thu) by nix (subscriber, #2304) [Link]

If you have ten implementations you have ten different sets of bugs: nobody will be hit by all of them at once.

Your implicit claim that a single well-reviewed implementation can somehow be free of security holes, or indeed any kind of bug, is laughable on its face. I don't know of any software product of any kind that this has ever been true of (even TeX).

Choosing between portability and innovation

Posted Mar 3, 2011 17:45 UTC (Thu) by martinfick (subscriber, #4455) [Link]

10 implementations may have 10 sets of bugs. But nothing prevents a bug from being in all 10 sets. Remember the ping of death?

Choosing between portability and innovation

Posted Mar 3, 2011 18:06 UTC (Thu) by nix (subscriber, #2304) [Link]

Well, yes, but that was in all descendants of a single implementation, wasn't it? (More relevant perhaps is cases where buggy algorithms have been implemented out of books into lots of unrelated programs.)

Choosing between portability and innovation

Posted Mar 3, 2011 18:45 UTC (Thu) by martinfick (subscriber, #4455) [Link]

If Windows inherited this bug from Unix, I would say that there is just as good a chance that free Unix implementations will inherit bugs from each other, if not a much greater one.

Choosing between portability and innovation

Posted Mar 3, 2011 22:58 UTC (Thu) by nix (subscriber, #2304) [Link]

Linux, almost uniquely, didn't use the BSD TCP stack. Windows did (for a long time, if not anymore).

So, no, unless it was an algorithmic error Linux would not have inherited the ping of death (at least not *that* ping of death).

Choosing between portability and innovation

Posted Mar 3, 2011 20:11 UTC (Thu) by jg (subscriber, #17537) [Link]

I think N being a small integer is useful. Stifling of innovation generally occurs when N=1.

N going to infinity (e.g. 10 and greater) is insanity...

We can argue about things in the middle...

Choosing between portability and innovation

Posted Mar 6, 2011 12:53 UTC (Sun) by pjm (subscriber, #2080) [Link]

> If you have a single well reviewed implementation I think you are much better off than having 10 badly reviewed ones.

That sounds good in itself. However, your advocated position is to standardize on a kernel that's featureful and consequently full of bugs (despite having so many developers contributing to it), to the complete exclusion of any kernel with fewer bugs. That's not to say that your advocated position is a bad one; but I do believe it runs counter to a goal of combatting computer insecurity.

(Incidentally, if you really did think that 10× the number of bugs is such a bad thing, then I think you'd probably use a different kernel. But most people do use Linux even if they know it has lots of bugs, and would even if they knew it had 10× the number of bugs of some other unixy kernel, even for just one or two extra features important to them -- and the same goes for other featureful-but-buggy software. Similarly, each of the N competing implementations presumably has one or two features or attributes not present in (and not feasible to add to) the others.)

Choosing between portability and innovation

Posted Mar 5, 2011 2:00 UTC (Sat) by skissane (subscriber, #38675) [Link]

Personally -- forget about the Linux monoculture -- what about the Unix monoculture, or the LUW (Linux/Unix/Windows) monoculture? People forget that Unix and Windows are relatives -- Windows shows significant influence of Unix. Some examples:
  • Base OS only supports unstructured bytestream files, rather than base OS support for more complex file structures (such as record structure, indexed files, forked files, etc.) -- this is Unix influence on Windows via CP/M then DOS
  • Filesystem does not have explicit recording of file types, only convention of file extensions -- again, Unix influence on Windows via CP/M then DOS
  • Hierarchical directories -- DOS 2.0 explicitly modelled these on Unix
  • C programming language -- direct Unix influence on DOS/Windows
  • Berkeley sockets influence on WinSock
If you want a non-monocultural OS, don't look for yet another Un*x or Windows. They are parts of the monoculture. Examples of non-monocultural OSes would be any of the IBM mainframe/midrange OSes (like z/OS, z/VM, z/VSE, z/TPF, OS/400), HP MPE, Siemens BS2000/OSD, Unisys ClearPath, language-specific OSes like Lisp Machines or PICK (back when PICK was a whole OS, not just a database), etc. Of course, much of this stuff is rather ancient and decaying, but it just goes to show how much less monocultural the IT industry was in previous decades than it is today.

I don't like hierarchical filesystems. We end up with these massive random directory trees and cannot find anything. Better would be a filesystem where we give files tags, some unique and some non-unique, and we can find files by their tags -- in library science terms, hierarchical filesystems are like Dewey Decimal or Library of Congress classification; I think we should adopt faceted classification instead.

I really like the idea of getting rid of operating systems and making applications run directly on bare hardware. Especially with virtualization, who needs a general-purpose OS to run a web server or database server? Why not just run the application directly on the hypervisor, with as thin a layer as possible in between? This is like MIT's exokernel research.

I think Oracle's WebLogic VE is a good implementation of this idea -- run the JVM directly on the hypervisor, with just a very thin custom OS that exists solely to meet the JVM's needs, no general purpose OS needed in between. I'd like to see the same idea extended to other areas (e.g. databases). (Full disclosure: I work for Oracle but this is just my personal opinions not those of my employer)

Choosing between portability and innovation

Posted Mar 4, 2011 9:27 UTC (Fri) by liljencrantz (guest, #28458) [Link]

Rarely tested code is a breeding ground for bugs. Abstraction layers and compatibility layers tend to consist mostly of edge cases and rarely tested code. They cause significant amounts of bugs.

The best approach that I've found is to decide on an API to code against, preferably the API of your main target platform or at least something very similar to it. Any porting effort should then concentrate on providing that same API, even on platforms that are missing critical libraries. The upside of this is that you write your regular code just as if you were only targeting a single platform. Any bugs caused by the porting effort are unlikely to be noticed on the main platform, and most of your users don't have to pay for the overhead of routing every system call through endless layers of abstraction.
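
A hedged sketch of that approach follows, using Linux's pipe2() as the target API. HAVE_PIPE2 stands in for whatever configure-time check a real project would use, and modern BSDs actually provide pipe2() themselves, so treat this purely as an illustration of the pattern:

    /* Application code calls pipe2() unconditionally; on platforms lacking
     * it, the porting layer supplies an emulation with the same signature.
     * The fallback is not atomic with respect to fork(), which is exactly
     * the kind of platform difference such a layer has to paper over. */
    #define _GNU_SOURCE              /* for pipe2() on glibc */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    #if !defined(__linux__) && !defined(HAVE_PIPE2)
    static int pipe2(int fds[2], int flags)
    {
        if (pipe(fds) < 0)
            return -1;
        if (flags & O_CLOEXEC) {
            fcntl(fds[0], F_SETFD, FD_CLOEXEC);
            fcntl(fds[1], F_SETFD, FD_CLOEXEC);
        }
        if (flags & O_NONBLOCK) {
            fcntl(fds[0], F_SETFL, O_NONBLOCK);
            fcntl(fds[1], F_SETFL, O_NONBLOCK);
        }
        return 0;
    }
    #endif

    int main(void)
    {
        int fds[2];

        if (pipe2(fds, O_CLOEXEC | O_NONBLOCK) < 0) {
            perror("pipe2");
            return 1;
        }
        printf("pipe created: read fd %d, write fd %d\n", fds[0], fds[1]);
        close(fds[0]);
        close(fds[1]);
        return 0;
    }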

As for whether it is actually worth bothering to port applications to less-used platforms, I think the answer has a lot to do with what type of application you're writing. systemd is pretty tightly coupled to the Linux kernel; it relies on many features that only have very rough equivalents on other platforms. While it would be possible to provide a unification layer that made systemd run on BSD, doing so would likely be more work than rewriting systemd from scratch for BSD. This is not the case for higher-level applications like emacs or Firefox: while they occasionally make use of low-level kernel interfaces, that is the exception rather than the rule. I think we should accept that some subsystems, even though they run in userland, are tightly coupled to a specific kernel. It makes sense that init should be such a subsystem.

Choosing between portability and innovation

Posted Mar 4, 2011 12:26 UTC (Fri) by nix (subscriber, #2304) [Link]

Agreed. It's not like sysvinit ran on non-Linux systems either: they all have their own inits.

What *would* be nice is, when systemd calms down a bit and we start to see which of its features are used by other programs, if some other inits could start to gain those features. (I've never heard of another init-like program which could even be described as having features that other programs could use. systemd is a big step forward in that way.)

Choosing between portability and innovation

Posted Mar 4, 2011 17:44 UTC (Fri) by dlang (guest, #313) [Link]

I almost agree with you.

the thing is that the API you code to should be flexible and evolve over time, with some of the abstraction layers that others have mentioned sitting at the API-to-platform level

In the end your API may not directly match any of your destination platforms.

the Linux kernel does this. Linus has said in interviews that the kernel is not written directly for any real type of system, but instead for an idealized system that doesn't actually exist, with 'shim code' on every single platform (including x86) to simulate the portions of the 'system' the kernel is written for. He says that the kernel is much better as a result of this work.

Choosing between portability and innovation

Posted Mar 9, 2011 17:50 UTC (Wed) by schabi (guest, #14079) [Link]

I agree here.

But note that there is a huge difference between writing "non-portable" code targeted at an idealized platform (with a platform abstraction layer in place to emulate that platform in the real environment), and writing with portability in mind (hundreds of #ifdefs, missing functionality, etc.).

Choosing between portability and innovation

Posted Mar 9, 2011 23:44 UTC (Wed) by nix (subscriber, #2304) [Link]

Writing with portability in mind does not imply 'missing functionality' (on platforms capable of it), and *certainly* does not imply 'hundreds of #ifdefs'. Indeed, the latter is generally a symptom of software that was not written with portability in mind, but which had limited portability jammed crudely into it at a later date.

Centralized vs Decentralized

Posted Mar 10, 2011 1:42 UTC (Thu) by ldo (guest, #40946) [Link]

To me, this BSD-versus-Linux thing revolves around the difference between a centralized model of development versus a decentralized one.

Linux is just a kernel, whereas each BSD is a whole distribution. Linux distros take their “upstreams” from thousands of different sources, whereas the BSD folks seem to want to maintain their own copies of everything. And they still seem to think CVS is a good idea for version control, though they do try to offer Subversion as an alternative for the radical young kids. On the Linux side, meanwhile, look at the number of projects that have moved onto distributed VCSes like Git and Mercurial; the Linux kernel itself never used any centralized VCS.

This decentralized approach, not controlled from any single point, is why the Linux ecosystem has been able to move so much faster than the BSD ones. BSD had a thriving community back when Linux was still a snot-nosed little upstart that could very well have died in childhood; yet look at all the great things Linux has gone on to achieve, while the BSDs haven't really moved much at all.

Centralized vs Decentralized

Posted Apr 2, 2011 2:45 UTC (Sat) by JasperWallace (guest, #74025) [Link]

> And they still seem to think CVS is a good idea for version control, though they do try to offer Subversion as an alternative for the radical young kids.

The problem the BSDs have is that they keep the entire history for kernel+userland+X in one repo. If they switched to a DVCS then every developer would have to have a copy of that history, which would take massive amounts of disk space.

Also, converting 15+ years of history is not straightforward.

NetBSD does have a git repo that shadows the main cvs modules:

http://mail-index.netbsd.org/current-users/2009/10/13/msg...

Choosing between portability and innovation

Posted Mar 11, 2011 1:52 UTC (Fri) by gcooper (guest, #73533) [Link]

Some history review
  • kqueues came before epoll and some of the equivalent functionality in inotify (see the comparison sketch below).
  • BSD sockets came before the Linux implementation (obviously). XSI streams are basically dead, hence FreeBSD doesn't make an effort to implement them, along with some other deprecated XSI functionality.
  • OSS came before ALSA and was technologically superior to it. It was in Linux for a while, and has been adopted in *BSD for a long time, but was abandoned over licensing disagreements (BSD licensing), bad blood between the Linux devs and the OSS devs, and because OSS was done largely within the kernel. FreeBSD (at least) has also had software mixing for well over a decade now (vchans), while Linux is still struggling with this functionality (why outsource it to PulseAudio?). What about aRts, ESD, etc.?
  • systemd is another take on something that upstart was supposed to answer by itself, which is in turn something that SysV init did back in the day (and which a simple init clone has done to a large degree on FreeBSD, without as much complexity, for well over a decade); granted, it has to use devd and devfs for some minor help with event notification, but have you tried creating job files in upstart? Oh yeah, and OS X has launchd, which is probably superior to the rest of the init-like daemons, but isn't really portable after Leopard (surprise, surprise), so it hasn't moved outside of OS X.

So with that in mind, how is tying Freedesktop to Linux better than trying to be somewhat portable (if I screwed up the history lesson, feel free to correct me)?
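
For the curious, a minimal hedged sketch of "wait until a descriptor is readable" expressed once with Linux's epoll and once with the BSDs' kqueue; both are reduced to their simplest form and error handling is omitted:

    /* The two interfaces cover much of the same ground but are not
     * source-compatible. */
    #include <stdio.h>
    #include <unistd.h>

    #ifdef __linux__
    #include <sys/epoll.h>

    static int wait_readable(int fd)
    {
        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        struct epoll_event out;

        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
        int n = epoll_wait(ep, &out, 1, -1);      /* block until ready */
        close(ep);
        return n == 1 ? 0 : -1;
    }
    #else  /* kqueue: FreeBSD, NetBSD, OpenBSD, OS X */
    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>

    static int wait_readable(int fd)
    {
        int kq = kqueue();
        struct kevent change, result;

        EV_SET(&change, fd, EVFILT_READ, EV_ADD | EV_ONESHOT, 0, 0, 0);
        int n = kevent(kq, &change, 1, &result, 1, NULL);  /* register + wait */
        close(kq);
        return n == 1 ? 0 : -1;
    }
    #endif

    int main(void)
    {
        if (wait_readable(0) == 0)                /* stdin */
            puts("stdin is readable");
        return 0;
    }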

Code churn

Brute forcing software design through several revisions of code or several forked projects detracts from the collaboration effort and I would question whether or not true innovation is being made, or if it's just code churn.

How often do OS X, Windows, etc. dramatically change system interfaces or how components interact with one another, and how do end users receive the changes? I know from my experience that people absolutely loathed the Windows 2000 -> XP and XP -> Vista/7 transitions, and those only happened every 3-5 or so years. Freedesktop and other associated projects (GNOME, KDE, LXDE, Xfce) pull this stuff every release, which is every six months or every year!

Impact of churn and becoming less portable
  1. Do the devs have performance figures to show that the new software truly functions faster?
  2. Can the devs quantify all of the wasted hours...
    • Doing the initial development and unit testing.
    • Effort needed by package maintainers to track versions and qualify the software across multiple releases?
    • Effort needed to track the changes required for end users who depend on Linux or other platforms like BSD, OpenSolaris (now illumos), etc.?
An example of good collaboration / design

I find the work that Intel is doing with GEM (even though it's a bit misguided in tying itself to Linux, oh well) to be really good work, because it's a relatively abstract framework for their display drivers. I will applaud them for this effort and others, because several folks at Intel have also worked on making their NIC drivers and other things sane for non-Linux Unix users.

Conclusion:

I was a devout Gentoo Linux user for 5 years and gave up running it on my primary system because some developers of core Linux components lose sight of the fact that an OS distribution is so much more than a kernel, and stuff broke all too frequently. If Freedesktop devs aren't willing to slow down and think more -- and thinking about portability is one of those things -- then Unix won't be my primary desktop / portable platform, and I won't be the only user (BSD, Linux, etc) dropping FreeDesktop software like a hot rock. There's always screen/tmux with SSH on my Mac if I'm pushed to do things that way, so even if folks go the way of Wayland I'll have a way to get my work done.

Choosing between portability and innovation

Posted Mar 11, 2011 9:31 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

>>> OSS came before ALSA and was technologically superior to ALSA. It was in Linux for a while, and has been adopted in *BSD for a long period of time, but was abandoned for licensing disagreements (BSD licensing)

Rubbish! The Linux community abandoned OSS because the main dev (Hannu Savolainen) made his support for newer sound devices and improvements proprietary.

Choosing between portability and innovation

Posted Mar 11, 2011 12:04 UTC (Fri) by gcooper (guest, #73533) [Link]

I stand corrected on that point; thanks for the clarification.

Choosing between portability and innovation

Posted Jul 24, 2011 17:29 UTC (Sun) by gvy (guest, #11981) [Link]

> OSS came before ALSA and was technologically superior to ALSA.
I'd ask for some evidence.

And regarding
> FreeBSD (at least) also has had software mixing for well over a decade
> now (vchans) and Linux is still struggling with this functionality
-- I've actually forgotten how long ALSA dmix has just worked; maybe 5 years or so? (esound would do it back in 1998, IIRC)

> pulls this stuff every release, which is either biannually or annually!
ORLY

Choosing between portability and innovation

Posted Mar 24, 2011 12:24 UTC (Thu) by daenzer (subscriber, #7050) [Link]

Not sure GEM is a really good example, as the people behind it seem to be firmly in the 'who cares about portability, only Linux matters' camp. In fact, it was with GEM that the DRM code stopped being developed shared between Linux and the BSDs.

This is all the more interesting as one of those people was previously a major FreeBSD graphics driver developer. Something rather dramatic seems to have happened there at some point, still not sure what that was.

Choosing between portability and innovation

Posted Mar 20, 2011 17:52 UTC (Sun) by jcm (subscriber, #18262) [Link]

Awesome story. Well said. What is needed, in my personal opinion, is a giant push-back against some of the trends in the Linux area, and back toward standardization across platforms. In the process, a lot of "progress" is going to have to be undone, but that's a good thing.

Choosing between portability and innovation

Posted Mar 20, 2011 18:01 UTC (Sun) by jcm (subscriber, #18262) [Link]

The solution (on the POSIX front) is to get newer Linux APIs standardized before forcing everyone to switch to them. It is totally unacceptable to say "we're number one, you do it like this". I'm not singling anyone out, but it's just plain wrong to have that attitude. Not only does it hurt portability, but it also makes it increasingly difficult to document how Operating Systems like Linux are supposed to work. Ever wonder why (even given the demise of the publishing industry) there are so few books coming out these days for developers to read? It's because this train is moving so fast that only those riding it have a hope of making the next stop.

Choosing between portability and innovation

Posted Mar 24, 2011 12:13 UTC (Thu) by daenzer (subscriber, #7050) [Link]

Nice article, just some minor corrections for the part about X.org and KMS.

Konstantin Belousov shouldn't need to implement any new DRI drivers for BSDs. Once the KMS and GEM APIs are available in the kernel, the only userspace changes required (if any) should be in libdrm(_intel).

AFAIK TTM stands for Translation Table Manager.

Gallium3D isn't tightly coupled to the Linux kernel at all. In fact, it's an explicit goal of it to cleanly separate parts which are specific to an OS (and windowing system etc.) environment from parts which are specific to hardware (and each in turn from parts which are specific to APIs).


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds