Can Debian offer a Constantly Usable Testing distribution?

Debian’s “testing” distribution is where Debian developers prepare the next stable distribution. While this is still its main purpose, many users have adopted this version of Debian because it offers them a good trade-off between stability and freshness. But there are downsides to using this distribution and the “Constantly Usable Testing” (CUT) project aims to resolve those. This article will present the project and the challenges involved to make it happen.

About Debian unstable & testing

Debian unstable is the distribution where developers upload new versions of their packages. It happens frequently that some packages are not installable due to changes in other packages or due to transitions not yet completed.

Debian testing, on the contrary, is managed by a tool that ensures the consistency of the whole distribution: it picks updates from unstable only if the package has been tested enough (usually 10 days), if it’s free of new release-critical bugs, if it’s available on all supported architectures, and if it doesn’t break any other package already present in testing. The Release Team (RT) controls this tool and provides “hints” to help it find a set of packages that can flow from unstable to testing.

Those rules also ensure that the packages that flow into testing are reasonably free of show-stopper bugs (like a system that doesn’t boot, or X that doesn’t work at all). This makes it very attractive to users who like to regularly get new upstream versions of their software without dealing with the worst problems associated with them. Yet several Debian developers advise people not to use testing. Why is that?

Known problems with testing

Disappearing software

The release team uses this distribution to prepare the next stable release, and from time to time they remove packages from it: either because that is needed to let other packages migrate from unstable to testing, or because the packages have long-standing release-critical bugs with no progress towards a resolution. They also remove packages at the request of the maintainers, when the maintainers believe that the current version of the software cannot be supported (security-wise) for 2 years or more. The security team also regularly issues such requests.

Long delays for security and important fixes

Despite the 10-day delay in unstable, there are always some annoying bugs (and security bugs are no exception) that are only discovered once the package has already migrated to testing. The maintainer might be quick to upload a fixed package to unstable, and might even raise the urgency to allow the package to migrate sooner, but if the package gets entangled in a large ongoing transition, it will not migrate before the transition is completed. Sometimes that can take weeks.

This delay can be avoided by doing direct uploads to testing (through testing-proposed-updates) but this is almost never used, except during a freeze, where targeted bugfixes are the norm.
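In the meantime, users of testing can work around a delayed fix themselves with apt pinning: keep the system on testing by default, but allow cherry-picking an individual fixed package from unstable. This is a minimal sketch, assuming both suites are listed in sources.list; the mirror shown is just the standard Debian one:

```
# /etc/apt/sources.list — make both suites available
deb http://ftp.debian.org/debian testing main
deb http://ftp.debian.org/debian unstable main

# /etc/apt/preferences — stay on testing unless explicitly overridden
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 300
```

With priorities set up this way, `apt-get install -t unstable somepackage` (a placeholder name) pulls only that one fixed package and the dependencies it strictly needs from unstable, while routine upgrades keep following testing.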

Not always installable

With testing evolving daily, updates sometimes break the last installation images available (in particular netboot images that get everything from the network). The debian-installer (d-i) packages are usually quickly fixed but they don’t move to testing automatically because the new combination of d-i packages has not necessarily been validated yet. Colin Watson sums up the problem:

Getting new installer code into testing takes too long, and problems remain unfixed in testing for too long. […] The problem with d-i development at the moment is more that we’re very slow at producing new d-i *releases*. […] Your choices right now are to work with stable (too old), testing (would be nice except for the way sometimes it breaks and then it tends to take a week to fix anything), unstable (breaks all the time).

CUT’s history

CUT finds its roots in an old proposal by Joey Hess: it introduced the idea that the stable release is not Debian’s sole product and that testing could become — with some work — a suitable choice for end users. Nobody took on that work and there was no visible progress over the last 3 years.

But recently Joey brought up CUT again on the debian-devel mailing list, and Stefano Zacchiroli (the Debian project leader) challenged him to set up a BoF on CUT for DebConf10. It turned out to be one of the most heavily attended BoFs (video recording is here); there is clearly a lot of interest in the topic.

There’s now a dedicated wiki and an Alioth project with a mailing list. The rest of this article tries to summarize the various options discussed and how they’re supposed to address the problems identified.

The ideas behind CUT

Among all the ideas, two main approaches have been discussed. The first is to regularly snapshot testing at points where it is known to work reasonably well (those snapshots would be named “cuts”). The second is to build an improved testing distribution tailored to the needs of users who want a working distribution with daily updates; its name would be “rolling”.

Regular snapshots of testing

There’s general agreement that regular snapshots of testing are required: it’s the only way to ensure that the generated installation media will continue to work until the next snapshot. If tests of the snapshot do not reveal any major problem, then it becomes the latest “cut”. For clarity, the official codename would be date based: e.g. “cut-2010-09” would be the cut taken during September 2010.
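To make this concrete, a machine installed from such a snapshot would keep its package sources pointed at the frozen suite. The sketch below is purely illustrative: the mirror URL is an assumption (no cut archive existed at the time of writing), and only the date-based suite name follows the convention just described:

```
# /etc/apt/sources.list on a machine tracking the September 2010 cut
# (hypothetical mirror URL; the suite name uses the cut-YYYY-MM scheme)
deb http://cut.debian.net/debian cut-2010-09 main
deb-src http://cut.debian.net/debian cut-2010-09 main
```

Because the suite is frozen, such a system only changes when the administrator deliberately points it at a newer cut.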

While the frequency has not been fixed yet, the goal is clearly to be on the aggressive side: at the very least every 6 months, but every month has been suggested as well. In order to reach a decision, many aspects have to be balanced.

One of them (and possibly the most important) is security support. Given that the security team is already overworked, it’s difficult to put more work on their shoulders by declaring that cuts will be supported like any stable release. No official security support sounds bad, but it’s not necessarily as problematic as one might imagine. Testing’s security record is generally better than stable’s (see the security tracker) because fixes flow in naturally with new upstream versions. Stable still gets fixes for very important security issues sooner than testing, but on the whole there are fewer known security problems in testing than in stable.

Since it’s only a question of time until the fixed version comes naturally from upstream, more frequent cut releases mean that users get security fixes sooner. But Stefan Fritsch, who used to be involved in the Debian testing security team, has also experienced the downside for anyone who tries to contribute security updates:

The updates to testing-security usually stay useful only for a few weeks, until a fixed version migrates from unstable. In stable, the updates stay around for a few years, which gives a higher motivation to spend time on preparing them.

So if it’s difficult to form a dedicated security team, the work of providing security updates falls back on the package maintainers. They are usually quite quick to upload fixed packages to unstable, but tend not to monitor whether the packages migrate to testing. They can’t be blamed: testing was created to prepare the next stable release, so there is no urgency to get a fix in as long as it makes it before the release.

CUT can help in this regard precisely because it changes this assumption: there will be users of the testing packages and they deserve to get security fixes much like the stable users.

Another aspect to consider when picking a release frequency is the amount of work associated with any official release: testing upgrades from the previous version, writing release notes and preparing installation images. It seems difficult to do all this every month. At that frequency it’s also impossible to ship a new major kernel release with each cut (they tend to come out only every 2 to 3 months), and the new hardware support that a new kernel brings is worthwhile to many users.

In summary, regular snapshots address the “not always installable” problem and change the perception of maintainers towards testing, so that hopefully they care more about security updates in that distribution (and in cuts). But they do not solve the problem of disappearing packages; something else is needed to fix that.

A new “rolling” distribution?

Lucas Nussbaum pointed out that regular snapshots of Debian are not really a new concept:

How would this differentiate from other distributions doing 6-month release cycles, and in particular Ubuntu, which can already be seen as Debian snapshots (+ added value)?

In Lucas’s eyes, CUT becomes interesting if it can provide a rolling distribution (like testing) with a “constant flux of new upstream releases”. For him, that would be “something quite unique in the Free Software world”. The snapshots would be used as a starting point for the initial installation, but the installed system would point to the rolling distribution and users would then upgrade as often as they want. In this scenario, security support for the snapshots is not so important; what matters is the state of the rolling distribution.
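In apt terms the switch would be a one-line change: after installing from a cut image, the user retargets the package sources at the rolling suite, and every subsequent upgrade then tracks rolling. Both the mirror URL and the suite name below are assumptions, since rolling was only a proposal at this point:

```
# /etc/apt/sources.list after the initial installation from a cut
# (hypothetical: the "rolling" suite did not exist yet)
deb http://ftp.debian.org/debian rolling main

# From then on, a routine upgrade follows the rolling distribution:
#   apt-get update && apt-get dist-upgrade
```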

If testing were used as the rolling distribution, the problem of “disappearing packages” would not be fixed. That’s why there have been discussions of introducing a new distribution named “rolling” that would work like testing but with adapted rules, and the cuts would then be snapshots of rolling instead of testing.

The basic proposal is to make a copy of testing and to re-add the packages which have been removed because they are not suited for a long term release while they are perfectly acceptable for a constantly updated release (the most recent example being Chromium).

Then it’s possible to go one step further: during a freeze, testing is no longer automatically updated, which makes it inappropriate to feed the rolling distribution. That’s why rolling would be reconfigured to grab updates from unstable (but using the same rules as testing).

Given the frequent releases, it’s likely that only a subset of architectures would be officially supported. This is not a real problem, because the users who want bleeding-edge software tend to be desktop users, mainly on i386/amd64 (and maybe armel for tablets and similar mobile products). This choice — if made — opens the door to even more possibilities: if rolling is configured exactly like testing but with only a subset of the architectures, some packages are likely to migrate to rolling before testing whenever non-mainstream architectures are lagging in auto-building (or have toolchain problems).

While being ahead of testing can be positive for users, it’s also problematic on several levels. First, managing rolling becomes much more complicated, because the transition-management work done by the release team can’t be reused as-is. Second, it introduces competition between the two distributions, which can make it more difficult to get a stable release out, for example if maintainers stop caring about the migration to testing once the migration to rolling has completed.

The rolling distribution is certainly a good idea but the rules governing it must be designed to avoid any conflict with the process of releasing a stable distribution. Lastly, the mere existence of rolling would finally fix the marketing problem plaguing testing: the name “rolling” does not suggest that the software is not yet ready for prime time.


Whether CUT will be implemented remains to be seen, but it’s off to a good start: ftpmaster Joerg Jaspert said that the new archive server can cope with a new distribution, and there’s now a proposal shaping up. The project might start quickly: there is already an implementation plan for the snapshot side of the project. The rolling distribution can always be introduced later, once it is ready. Both approaches complement each other and provide something useful to different kinds of users.

The global proposal is certainly appealing: it would address the concerns about the obsolescence of Debian’s stable release by making intermediary releases. Anyone needing something more recent for hardware support can start by installing a cut and follow the subsequent releases until the next stable version. And users who always want the latest version of every piece of software can use rolling after having installed a cut.

From a user point of view, there are similarities with Ubuntu’s mix of regular and long-term releases. But on the development side, the process followed would be quite different, and the constraints imposed by having a constantly usable distribution are stronger: any wide-scale change must be designed so that it can happen progressively, in a manner transparent to the user.

This article was first published in Linux Weekly News. If you want to see more articles like this, join Flattr and click on the flattr button below every article that you like.


  1. Riociq says

A rolling-release Debian would be great, and I think it’s more than needed in the Linux community; there is actually a big userbase that is not comfortable with Ubuntu and doesn’t use Debian only because of the age of its packages.

  2. Tim Richardson says

    The distribution aptosid (used to be sidux) is a community-supported debian sid. In other words, aptosid is a CUS (Continuously useful sid).
    I’ve used it for nearly two years on a few machines (laptops) and it works better than I could ever get Testing to work (the delays to get packages into testing are too often very long).
    Note: it’s a KDE-focused community. The community support works well; the tricks needed to survive with sid have been worked out. Good manual, live CDROM, small technically proficient community, and 99.9% native Debian. Some Debian Developers are active in the aptosid project.

    • says

      This project has been mentioned several times already. I wonder what it takes to get those people to join and work on CUT inside Debian instead of doing it as a separate project.

Is that realistic at all? What would be the limitations to overcome on the Debian side and on the aptosid side? It seems to me that the times when testing is not good enough are precisely when some targeted help could make a difference and get a package migrated sooner (even by recompiling within testing if needed). But we need people who care about this, and it seems to me that the people doing aptosid could be interested.

  3. Mithat says

    This is great news. Of the two options, I’m much more excited about the prospect of a rolling release. I use Debian Testing on two “it’s ok if they break every once in a while” desktop machines and Ubuntu on my “it needs to work every day” desktop. Having a Debian rolling release would make picking a desktop distro a no-brainer: Debian warm-and-fuzzies and community support, fresh packages (compared to something released on a ~six month cycle), and no “push the button every N months and hope for the best” anxieties (arguably replaced by numerous distributed, smaller anxieties).

I really like what aptosid is trying to do, but it seems the nature of the beast results in some not insignificant usability issues for desktop use (e.g., needing to do updates without X running). Until/unless those usability issues get solved, aptosid can’t really be considered a general-use distro. I’m assuming a rolling-release Debian would be free of those issues and thus be poised to become the leading desktop distro.

  4. tim says

Another thing that struck me about this initiative is that it seems to be striving for the same end result as the backports effort: deliver a secure, usable system with up-to-date software. Should Debian split its resources across two efforts? Which is more likely to work? Which builds better on what Debian does well: very stable, multi-architecture releases?

    Re aptosid: don’t tell anyone, but updating while still in KDE has never caused me any problems, although it is definitely not supported.

    • says

      Tim, backports and CUT offer something similar but very different at the same time. Backports are made to support users of Debian stable to cherry-pick some specific new upstream version of a specific package in which they have a special interest.

Using the backports helps you, but it doesn’t really help Debian build the next stable version, because you only test a few updated packages in an old environment. By contrast, with CUT all the packages are always very fresh, because you’re only using packages that are targeted for a future stable release.

And since backports are based on the testing packages, they always come after CUT. So it’s not really duplication, because you can’t have a backport without an updated testing package.

  5. Sam says

sid/aptosid is also frozen during the Debian testing freeze. Will cut suffer from the same symptom?

    Re tim: never had the courage to do so.

    • says

      Sam, clearly the goal of the rolling distribution is to never freeze. But it might not happen immediately because it will have impacts on the stable release process that have to be carefully considered, discussed and agreed upon.

  6. Sanders says

    I think a rolling release is the way forward for desktop usage, in fact I won’t name it Debian Rolling, but “Debian Desktop”.

If correctly made (no doubts about it btw), it would have all the benefits of Ubuntu “without Ubuntu”: a stable combination of Linux kernel/base system/Xorg plus always-updated apps (Wine, anyone?) would be an absolute delight (Linux nirvana), and with the strong influence of Debian, an absolute winner!

    And stable for the server of course! One could not ask for better IMHO!

    (PD: Ubuntu’s only good is that you can (mostly) install it (most of the time) and have a reasonable working desktop out of the box in no time)

    • Gabe says

      You hit it on the nose! I just made a thread about that on kubuntuforums (and it’s mirrored somewhere on ubuntuforums). You’re absolutely right, this would be essential.

      Spend some serious time getting the hardware supported, then freeze it with constantly updating user software above it. Windows and Mac have done this. (How often do they push major kernel upgrades? Once every 3 years or so? XP is still supported… Office 2010, latest Firefox, apps, etc.)

      • Micheas says

        The issue about not including new kernels is that new kernels include new hardware support.

        Which means that if you are targeting the desktop, the latest kernel is sort of important, not for the latest scheduler, but for the latest device drivers.

        • Gabe says

          Except you can easily backport many (if not most!) device drivers as modules for older kernels. Ironically, Ubuntu, one of the foremost culprits in kernel upheaval, actually does this quite well.

  7. says

    I’ve run Sid and sidux on more than a few desktop machines, and Sid wins hands down. sidux gives me broken kernels and their very helpful community keeps me running in circles to work around whatever issue the latest update brought. Soon enough I’ll be moving those machines to Debian Sid proper.

This article on CUT was in LWN two weeks ago, exactly the same as this article. Did you write it, or lift it?

      • says

        It was a well-written article and I am excited for this possibility, and happy to see that Debian on the desktop is receiving more attention. The *buntus don’t even compare.

    • Sam says

      “I’ve run Sid and sidux on more than a few desktop machines, and Sid wins hands down. sidux gives me broken kernels and their very helpful community keeps me running in circles to work around whatever issue the latest update brought. Soon enough I’ll be moving those machines to Debian Sid proper.”

Interesting. Though I don’t quite understand how that is possible, since sidux, as far as I know, uses the same kernel version as sid, with some “stabilizing” fixes.

I’ve never encountered any problem at all with sidux, except with grub2, exactly once.

      You are the first one, for me, saying sid is more stable than sidux. Admittedly, I hang around sidux community.

      • says

        > sidux … use[s the] same kernel version as sid
        No, they’re not the same.

        Sid currently is stuck on 2.6.32-5 while sidux has at least 2.6.34, maybe newer (I’ve not upgraded that system in a while, waiting for the move to Debian as I said), and anything abut the sidux 2.6.32-8.slh.3-sidux-686 kernel causes my machine to panic at boot.

sidux also changes the default runlevel (Debian uses 2, sidux uses 5) and some other things that don’t need to be changed. If sidux (or aptosid) wants to build on Debian, I don’t know why it would change these simple things, causing various HowTos out there to not work on one system or the other.

        • says

          > and anything abut the sidux 2.6.32-8.slh.3-sidux-686
          > kernel causes my machine to panic
          Sorry, anything *above* that sidux kernel causes panic.

  8. sam says

I would rather not give our precious developers even more work to do. Squeeze is already over a year late. Personally I have had very few issues using only testing on my desktops and laptops. Before I update (once every couple of months), I usually check critical packages like grub and mdadm through reportbug-ng beforehand. Sometimes I might hit up the forums. For the most part, I would be fine with a wiki-type community where us CUT’ers can keep each other informed about broken packages and other problems. That would be much simpler than trying to do a whole other distro.

  9. Anymous says

    Constantly Updating Testing?
Way to go Debian, this is probably the next step (after volatile, backports, sloppy). It really seems to me that Debian is just not able to bring releases out, and because of that weird ideas like this pop up to try to solve that issue.
    Good bye Debian, welcome Ubuntu!
    (don’t tell me that ubuntu is based on debian!)

  10. Snaga says

Aside from the additions Mint brings to the table, how would this differ from the new Linux Mint Debian Edition (LMDE)?

    • says

      By doing it within Debian proper, you’re more likely to have official Debian maintainers caring about the state of their packages in the new rolling distribution. So hopefully the result is better for everybody (including Mint).

Otherwise yes, it’s very similar to what Mint and aptosid are doing.

  11. says

I think this is a great idea! This is definitely something that should have been done years ago. But it bothers me to read some of the comments above which seem to imply this is something new. Not all the comments do this, but I just wanted to point out, for the sake of fairness, that this is not a novel idea. Projects like PCLinuxOS and Arch have been doing this for a long time. PCLinuxOS in particular is very successful and mature at doing this right. Not to take anything from Debian though. If Debian can do this as well as PCLinuxOS already does, it will be an excellent desktop distro. For now, I still think that PCLinuxOS is tops. Bring it on!

    • says

Arch Linux, as you said, also does a great job at this. I’ve used it as my main OS for more than a year; this might sound crazy, but I even use it to power some servers, and I’ve had no or very few problems in that time.

  12. Kasumi_Ninja says

Great idea! I really like Debian, however I find testing/sid too problematic for my desktop and stable way too old. Squeeze hasn’t been released yet and already it’s 75% obsolete.

  13. Brent says

    Personally I think we need something between Testing and Stable.

    There is a demand for newer and updated software and the Debian leaders need to understand that.

    My question is this, as there is already the following:

Debian testing, on the contrary, is managed by a tool that ensures the consistency of the whole distribution: it picks updates from unstable only if the package has been tested enough (usually 10 days), if it’s free of new release-critical bugs, if it’s available on all supported architectures, and if it doesn’t break any other package already present in testing. The Release Team (RT) controls this tool and provides “hints” to help it find a set of packages that can flow from unstable to testing.

Why can’t the package that’s in testing, after those 10 days, move on to a repository between stable and testing?

What’s obvious from every angle of view on using testing: there are and always will be problems and challenges to overcome. There is no escaping this, but the bigger picture, that people want newer releases of software, needs to be seen.

Whenever I read these posts, I’m left asking myself: what is it, and how is it, that FreeBSD is able to get it right? What is it that they are doing? And there is no question that FreeBSD is stable too.


    • says

      Personally I think we need something between Testing and Stable. […] Why can’t after 10 days the package thats in Testing move to the repository between Stable and Testing?

      I’m not sure what distribution you’re referring to… and why do you think that testing (or rolling) can’t respond to the need to have newer and updated software?

Picking packages that are 10 days old in testing is unlikely to further increase the quality of the packages… you would need other checks, and the inability to fix bugs directly in testing does not make it appealing as a solution. And you add lots of work to ensure the consistency of dependencies across the whole new distribution that you introduce.

  14. Marie says

    I really like the simplicity of Debian and the package system, but yeah Stable packages are always too old and Unstable or Testing are just that. This is why I go with Ubuntu or Mint. They’ve just made it easier. Why can’t Debian just release every six months like Ubuntu?

    • Gabe says

      Ubuntu’s release cycle is a very, very bad idea for normal longterm use.

      A stable base with constantly updated software is key. Periodic “refresh” images can be issued a la Debian 5.0.x with this philosophy, creating a consistently usable, stable, but constantly up-to-date distribution.

    • says

Marie, do you know that unstable and testing are not as bad as the names suggest? For instance, Ubuntu is based on Debian unstable. On top of this they add their own enhancements and updates.

      We can’t release a new stable version every 6 months because we want a level of polish that such a short timeframe doesn’t give us. Also there are users (think large deployments) who prefer a release every 18 months, in Ubuntu-land they would use the LTS release.

      This article explains a proposal to make intermediary releases based on testing, this would surely fit your needs, don’t you think so?

  15. Calvin says

    As attractive as the rolling idea sounds, it would be a mistake. By far the best direction for a “Desktop” version of Debian would be a large increase in resources devoted to Backports.

    The main shortcoming of the stable+backports approach is the low prioritization of Backports. This is strange since the model is effectively the one that all commercial operating systems use, and for very good reasons. You want the core of the operating system to be as stable and highly polished as possible, with new versions of applications built on and tested against it.

    Microsoft has *vastly* more resources at its disposal, yet it still chooses not to implement a “rolling” release because it would not only take a great deal more testing resources on its part (to maintain quality with more frequent “releases”), it would also require the same additional testing input from the application vendors, who currently only have to test against the fixed targets of releases every 2-5 years. The FOSS community can’t afford to squander its limited resources more than we already do.

At the same time, the Ubuntu 6-month release cycle is a disaster. It drives people away from FOSS with its frequent upgrade breakages and major bugs in every release. Ordinary users looking to leave the Windows universe will never be impressed with the amateurism that is the predictable byproduct of a chronically under-tested distribution.

While it’s true that the rolling release idea is very popular right now, anyone who has looked seriously at existing rolling-release distributions realizes there is a considerable loss in quality and stability vs. the freeze-and-release Debian Stable approach. This is fine for the tech-savvy hobbyist who doesn’t mind periodic broken packages, but for the FOSS community (or Debian in this case) it takes time and resources away from what should be the real goal — the highest possible quality open source distribution with reasonably up-to-date software. If Backports could commit to having major applications available soon after they are released, then it could be *more* up-to-date for most users than Ubuntu (with its 6-month cycles). There is so much redundancy in the FOSS world that the targeted “major applications” could be a tiny fraction of all packages, and yet still please the vast majority of users. If those backported packages were also well tested, then we could have a new version of Debian: “Desktop”.

    This will require a democratic process of prioritization to decide which applications will receive this status, and thus get regular updates. It of course would not freeze other applications at old versions, but they would be taken on a more case-by-case basis.

    Even though such a version of Debian would not have every new package, given Debian Stable’s incomparably high quality, it would still be far more useful for the vast majority of users (particularly if paired with the polish that Linux Mint has provided) than *any* existing Linux distribution.

What I’m arguing for is essentially what Mepis does (using Debian Stable as its base), but I believe Debian could do it far better, and on a non-commercial basis.

    • Gabe says

      Beautiful. Fully agreed.

This is how it should be, for all the reasons you mention and more. As I and a few other commenters above have argued, base + backports (although the very word “backport” is ludicrous because it carries a negative connotation) is the most attractive solution by far.

      • Mithat says

        While the base+”newports” is attractive for the reasons mentioned, my wee brain tells me there are at least two issues:

        (1) Dependency spirals. This is a problem with the current backports system. Say you want the latest version of Foo (from “newports”). That requires the latest version of Bar (from “newports”). Which means you essentially need to install everything from “newports” if you want to install anything. The chances for some kind of breakage are pretty good. Guaranteeing that there won’t be breakages leads to …

(2) Extra work. With the CUT proposal, most of the work of preparing packages will already have been done for testing packages, and the packagers are assured reasonably modern libs to link against, etc. A backports-based system means an entire new set of packages will have to be built, sometimes linking against and/or using older stuff that will lead to extra debugging, etc.

One of the reasons the “newports” system works in other OSes is that when the OS’s version of a library doesn’t provide the needed support, linking to a local copy (one shipped with the app) is very easy. This path ultimately leads to added security issues.

        I think an issue that has to be decided is whether CUT or stable+newports or whatever is meant to address the needs of enterprise or of desktop users. If it’s the former, even stable+newports might be too squirrelly; if it’s the latter, then I’m not entirely sure what benefit stable+newports will bring over CUT.

        • Gabe says

          1. Dependency spirals. In the subset of most popular software (browser, office, viewers, user software), dependency spirals are a lot more uncommon than you’d think, especially if the distinction is made early on between “user software” and “system software.” Every “newport” should be built upon the base version of its dependencies, if possible (this is a lot more likely than you’d think). Cases where newer core dependencies are required (core libs, for instance) can be reviewed on a case-by-case basis and built statically like you mention. Any system software should NOT be “rolling,” of course. The benefits are ridiculously obvious and many concrete examples are given in my thread. Rolling kernel/drivers/xorg = rolling problems/instability.

          The “added security issues” you mention are also largely imaginary: introducing any new piece of software carries the same risks, but only for that piece of software. Similarly, static libraries built into an application are only used by that application (so they can be considered part of it anyway). Stable + newport (especially if “newport” is an optional repository) is best for both desktop and enterprise users, because it gives desktop users all they care about (up-to-date, competitive applications) along with the system stability needed for enterprise use.

          It is a win-win-win. It will, however, require more work on the part of Debian. Ubuntu could implement such a system much more easily by diverting the resources it currently squanders on half-baked monthly releases to this more rational LTS + newport setup.

          • Tim Richardson says

            I’m pretty sympathetic to the idea of doing more with backports. But there is a huge difference between the way Microsoft works and the way Linux works: hardware support. Microsoft relies on third parties for hardware support, with vendors shipping drivers alongside the hardware. With Linux, hardware support comes in the kernel. So a stable system + backports will not appeal to people who need recent hardware supported, which describes a lot of desktop users. Porting drivers back to old Linux kernels is not easy. So for new hardware support, you need new kernels, and the stable+backport approach is not going to be the answer.

    • says

      Calvin, it’s not an “either … or …” situation. Backports are always possible whether CUT exists or not. And Tim is very right: one of the major reasons people want a newer release of Debian is for the hardware support that it brings. In the 18 months between two stable releases, lots of new hardware hits the market.

      Also, if you assume that doing one takes resources away from the other, then it also means that doing more backports means fewer people fixing bugs in the rolling distribution that serves as the basis for the next stable version. I don’t think this reasoning is valid, in particular because the backport maintainer is often not the usual package maintainer.
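
      For readers unfamiliar with the mechanics being debated: enabling backports on a stable Debian system is a small configuration change (release and package names below are illustrative for the period, not prescriptive):

      ```
      # /etc/apt/sources.list: add the official backports repository
      deb http://backports.debian.org/debian-backports lenny-backports main

      # Backported packages are not installed by default; each one
      # must be requested explicitly from the backports release:
      #   apt-get update
      #   apt-get -t lenny-backports install <package>
      ```

      This opt-in behavior is the point of contention above: only the packages a user explicitly pulls in come from backports, while everything else stays at the stable version.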

      • Gabe says

        I needed support for my wireless card in Karmic. So I installed a kernel from a PPA along with wireless-modules-backports. Hardware support is not such a huge deal when you can just drop in a brand-new kernel whenever you like.

        It’s ironic. They have a repo of kernels they backport without a second thought and stick in a ppa. Yet the user software doesn’t get backported or even touched, and versions of software lag ridiculously, often losing essential compatibility and feature parity with their up-to-date counterparts.

        This situation is exactly the reverse of what you and Tim are implying, is it not? Can such a system as I described previously function, with the user optionally enabling specific drivers (or even a drop-in kernel a la ubuntu) when necessary?

          • Gabe says

            Nope, double-clicking two .debs is easier than the average Joe’s long hunt for drivers (and their subsequent installation) under Windows. Even the most trivial CD installer requires Joe to understand what he’s doing, or at the very least keeps him pressing “next” many times… and perhaps even setting some preferences!

            In any case, the effort spent on backports for a solid base pays off however you divvy it up. For “Needs something to work out of the box,” you can’t possibly do better than a well-tested base replete with modern “newport” software. THAT’s what average Joe needs practically — not OpenOffice 2.2, for instance, which has fallen way behind in compatibility with Office 2007 formats, Excel functions, and general feature parity.

            Solid, stable base that works — check. Modern software atop it — check. How would “Two WHOLE NEW operating systems [and attendant breakage] every year, guaranteed!” sound to someone who just wants to use his computer with the latest software? People are still on XP, sadly, because it runs well on their computers and supports all the latest software they need (though MS is forcing a change, along with OEMs).

            It’s not exactly rocket science to see where the popularity lies for the “average Joe,” who clings to this model like a safety net. Heck, so do corporations. So do I. I hate spending hours (days!) troubleshooting every few months when the next Ubuntu breaks everything… JUST to have relevant, up-to-date user software. It’s pathetic. My machine worked just fine with the old one — worked just fine with Hardy, as a matter of fact.

            And that’s the Ubuntu example. For Debian, there’s a lot more power here — the base is already stable, and the philosophy is already “release when ready.” The backlash from “average Joe” is having to use a system with ridiculously out-of-date user software — he couldn’t care less how the computer interacts with his hardware internally. That’s why a stable base + rolling user software (CUTs thereof) would be optimal.

            And like I was trying to convey above, new hardware can be supported via a different kind of backport — Ubuntu’s method works great in that regard, ironically. (Just click wireless backports in your package manager, for instance).