Thanks Sam!

I really appreciated your last Bits of the DPL.

I discovered a DPL taking a stand on the hot topics of the moment. I’m glad to have a DPL who is trying to fulfill his duty of leading discussions amongst developers. He gave his opinion on the current vote about “endorsing the concept of Debian Maintainers” (he’s in favor because it dilutes power) and also about Apt’s change to install Recommends by default. I’m also glad to hear the encouraging news concerning volunteers for the ftpmasters.

By the way, if you have voted for Sam, and if Sam’s opinion matters to you, you still have until Saturday midnight (UTC) to change your vote if you wish (like Russ did). Right now, only 289 DDs have voted.

DM and internal politics

If you don’t follow debian-vote, you have missed this.

It’s really worth a read before casting your final vote on this issue. As I explained in my reply to Russ, this vote is not about details but about whether or not we want an intermediate level between DD and nothing.

If you don’t give an initial policy, then people against DM will use that “hole” to block it because “it’s not how DM must be done” (and then you’ll need another GR to define a correct implementation and overrule those who are blocking). Yet people keep mixing issues when discussing DM. For some, DM would be okay if we had a working NM system. For some, DM would be okay if the responsibility of granting upload rights didn’t rely on DDs but on a sort of QA committee. For some, DM would be okay if it were integrated into NM. There are also people who are opposed to this second class of contributors, but I don’t think they are a majority. Still, we might lose a nice opportunity because people want to solve too many things at once instead of taking a first step in a new direction.

Assembling bits of history with git: take two

Following my previous article, I got some interesting comments introducing me to git-filter-branch (a new command derived from cogito’s cg-admin-rewritehist). This command is really designed to rewrite history, and it supports far more extensive changes… it enabled me to fix the dates/authors/committers/logs of all the commits that were created with git_load_dirs. It can also be used to add one or more “parent commits” to any commit.

In parallel, I discovered a problem with the git repository that I had created: the tags no longer pointed to my master branch. This is because git rebase won’t convert them while rewriting history.

This led me to redo everything from scratch. This time I used git-filter-branch instead. The man page even gives an example of how to link two branches together as if one were the predecessor of the other. Here’s how you can do it: let’s bind together “old” and “new”… the resulting branch will be “new-rewritten”.

$ git rev-parse old
0975870bb1631379f2da798fa78736a4fe32960a
$ git checkout new
$ git-filter-branch --tag-name-filter=cat --parent-filter \
"sed -e 's/^$/-p 0975870bb1631379f2da798fa78736a4fe32960a/'" \
new-rewritten
[...]
Rewritten history saved to the new-rewritten branch

Short explanation: the only commit without a parent (and thus matching the empty regex “^$”) is the root commit, and it is rewritten to get a parent (-p): the last commit of the branch “old”.
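To check that the graft worked, you can compare the roots of both branches. This is only a sketch: the --max-parents option comes from newer versions of git than the one I used, and the branch names are the ones from the example above.

```shell
# List parentless (root) commits reachable from each branch; after the
# rewrite, "new-rewritten" should have exactly one root: the root of "old".
git rev-list --max-parents=0 new-rewritten
git rev-list --max-parents=0 old
# Both commands should print the same single commit id.
```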

At the end, you remove all the temporary branches, keep only what’s needed and repack everything to save space:

$ git branch -D old new
$ git prune
$ git repack -a -d

Is forking NM good?

In a discussion with Bdale, he suggested that DM is seen as a fork of NM. And some people do not like forks. They are not opposed to DM in principle, but they do not want it outside of the current NM team.

Obviously DM tries to respond to cases that NM is not prepared to handle. Furthermore, the DM discussion has been active for quite some time, and the various members of the NM team (Frontdesk, DAM) have not participated much in the public discussion. Only now that it comes to a vote do we hear some more (negative) opinions. I don’t see that as a sign of willingness to integrate DM or something similar into the current NM structure.

So, people who are requesting that DM be integrated into NM: please take it up with the Frontdesk/DAM… and don’t oppose the principle just because of organizational matters.

Internal organizations always change and adapt to the situation. Joey is right when he compares this to the introduction of the sponsorship process. I was one of the main actors in that process: I introduced the concept without the consent of the NM team of the time (James Troup, Martin “Joey” Schulze).

It was a fork, a new way to proceed and it became mainstream with the creation of the current NM process. It’s the natural way of doing things in a free software project.

That said, I’m not opposed to improving our NM process. It really needs to be reworked in a “Membership Process” and be open to various kinds of contributors. That’s why I created a dedicated wiki page: http://wiki.debian.org/Projects/ReformedMembershipProcess

Let’s see if we’re ready to really fix that! I hope to get comments from all the people who seem so eager to fix the NM process. :-)

Assembling bits of history with git

The dpkg team has a nice history of changing VCS over time. At the beginning, Ian Jackson simply uploaded new tarballs; then CVS was used for a few years, then Arch, and up to now Subversion. When the Subversion repository was created, the Arch history was not integrated because the conversion tools somehow didn’t work.

Now we’re likely to move over to git for various reasons, and we wanted to get back the various bits of history stored in the different VCS. Unfortunately we lost the Arch repository, so we have disjoint bits of history that we want to put into a single nice git branch… git comes with git-cvsimport, git-archimport and git-svnimport, so converting CVS/SVN/Arch repositories is relatively easy. But you end up with several repositories and several branches.

Git comes with a nice feature called “git rebase” which is able to replay history on top of another branch, but for this to work you need a common ancestor in the branch used for the rebase. That’s not the case here… so let’s create that common ancestor! Extracting the first tree from the newest branch and committing it on top of the oldest branch will give that common ancestor, because two identical trees have the same identifier. With git_load_dirs you can easily load a tree into your git repository, and “git archive” will let you extract the first tree.

In short, let’s see how I attach the “master” branch of my “git-svn” repository to the “master” branch of my “git-cvs” repository:

$ cd git-svn
$ git-rev-list --all | tail -1
0d6ec86c5d05f7e60a484c68d37fb5fc31146c40
$ git-archive --prefix=dpkg-1.13.11/ 0d6ec86c5d05f7e60a484c68d37fb5fc31146c40 | (cd /tmp && tar xf -)
$ cd ../git-cvs
$ git checkout master
$ git_load_dirs -L"Fake commit to link SVN to older CVS history" /tmp/dpkg-1.13.11
[...]
$ git fetch ../git-svn master:svn
$ git checkout svn
$ git rebase master

That’s it: your svn branch now contains the old CVS history. Repeat as many times as necessary…
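A quick sanity check (a sketch, using the branch names from the example above): after the rebase, both branches should share the same root commit, since “svn” has been replayed on top of “master”.

```shell
# The oldest commit reachable from the rebased "svn" branch should now be
# the root of the old CVS history, i.e. the same as the root of "master".
git rev-list svn | tail -1
git rev-list master | tail -1
# Both commands should print the same commit id.
```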

On Debian Maintainers

I won’t re-explain at great length why I think it’s good to endorse the concept of Debian Maintainers. I have been involved enough in the debian-vote discussions (and the previous debian-project discussions).

I would just like to remind everybody that we elected Sam to see changes and progress in many areas. We haven’t seen many results yet, but I know that Sam has been working hard, and I’ve been helping him as much as I can on the problem of our DSA team (one day I might blog about it, or even start a GR if the situation continues not to improve despite the numerous efforts that we’ve put into it).

Here we have a concrete proposal for a change, and I believe a change for the better. But as with every change, people have fears: they fear people who only care about technical excellence and not much about philosophy, they fear that the quality level will drop, they fear that nobody will care about NM afterwards, etc.

It’s legitimate to have concerns, to express them and to discuss them. But we should not let them take over, or we’ll end up abandoning all the initiatives that are required to evolve and adjust to our changing environment.

We need to encourage people who are ready to try out new things. In cases where it’s not entirely clear how the situation will evolve, it’s better to try things out and react accordingly than to do nothing and hope that people will wait for us. When we do things while hoping for the best, the worst won’t come true so easily.

Have faith in our future.

Alioth and OpenID

Stratus is seeking comments from the Alioth admins.

Yes, I’d like to see OpenID integration in Gforge. The upstream situation is a bit difficult, so I don’t think that you’ll get an official opinion from them.

In my case, I want OpenID integration because it would be cool to offer a standard wiki and be able to define ACLs on some pages referring to the Alioth accounts that people are used to using. In the longer term, we have other web services which are going to need authentication (DWTT, the new version of the PTS, …) in order to provide customized content, and it would be great to rely on OpenID for that part.

I’m waiting for your patches! ;-)

And next time you want an opinion from the Alioth admins, please mail us instead of hoping that we won’t miss your blog entry on planet.debian.org.

More fun with Linux and serial ports on slow hardware

This is a never-ending story for me. The first time I had problems with Linux’s handling of serial UARTs dates back to 2005 (see my previous blog post on buffer overruns). At that time I could improve the situation by applying two patches (kernel preemption and low latency).

One year later, I have a situation where the buffer overruns are again easily reproducible at a slower baud rate (54 kbauds). Admittedly, there’s more than just the serial application running this time, and it looks like the load generated by the other processes (mainly watching digital I/O) makes the system less reliable in its handling of serial ports.

This time I followed the advice to try out a 2.6 kernel, because many “real-time improvements” (coming from the -rt branch; check its wiki) and “embedded improvements” (coming from linux-tiny) have been merged.

So I tried the stock 2.6 kernel, with poor results: better than the stock 2.4, but worse than the patched 2.4. I then decided to try the -rt patch on 2.6, but this patch doesn’t work on my CPU card, and my bug report didn’t lead to any fix (nobody responded, even though I tried hard to include the necessary information and was ready to try whatever I would have been asked).

At the same time, I explained my problem on the linux-kernel mailing list. The discussion didn’t answer my questions, but it still brought up two ideas to try out. In the end, with two simple tweaks to the stock 2.6 kernel (mainly configuring the UART to raise an interrupt as soon as the first character arrives, instead of waiting for 8 characters to accumulate in the FIFO), I got something better than the patched 2.4. It also turns out that my first choices for the 2.6 kernel configuration were not very wise, so the comparison between stock 2.4 and stock 2.6 above doesn’t mean much.
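For the record, part of this kind of tuning can be done from userspace. This is only a sketch: /dev/ttyS0 is a placeholder for the actual port, and the FIFO trigger level itself usually has to be changed in the 8250 driver, since setserial cannot set it.

```shell
# Tell the serial driver to push received characters to the line discipline
# immediately instead of batching them; this trades throughput for latency.
setserial /dev/ttyS0 low_latency

# Check the detected UART type and the current settings for the port.
setserial -g /dev/ttyS0
```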

Unfortunately, even though it is better than the patched Linux 2.4, it still doesn’t give good results under some conditions. So my primary question remains: is there a way to patch my kernel so that it handles the serial-related tasks (mainly servicing interrupts from the UART) as its primary job? I don’t mind if such a change negatively impacts the speed of the system, as long as my serial exchanges are reliable.

By reliable I mean, of course, no buffer overruns, but there’s a second, similar problem that I discovered: when using software handshake, the system can send out between 10 and 25 characters after the partner has sent its XOFF. Sending up to 6 characters after the XOFF is okay; more is asking for trouble, because the UART on the other side will probably suffer a buffer overrun…
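For context, the software handshake in question is a termios setting on the Linux side (a sketch; the device name is again a placeholder):

```shell
# ixon: stop transmitting when the remote side sends XOFF.
# ixoff: send XOFF ourselves when our input buffer fills up.
stty -F /dev/ttyS0 ixon ixoff

# Display the full current termios settings to double-check.
stty -F /dev/ttyS0 -a
```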

If you have any idea on how to resolve my problems, by any means, let me know.

Some words about dunc-tank

Madcoder decided to quote some bits of an IRC conversation held on #debian-devel-fr and (of course, given his current frustration) he munged the meaning of my quote.

I was explaining that, for me, dunc-tank was definitely not perfect, but that it was a first step in a global direction that I’d like to explore. I explained my long-term project with dunc-tank in a French blog entry, and I also explained it to Bruce Byfield, who interviewed me as a member of dunc-tank.

My long-term project has always been that the decision-making of what to fund would be shared between the donors and all Debian developers. I sincerely hope to avoid many of the current criticisms with this infrastructure, but I can’t be sure. In the meantime, the current experiment is being run not because it’s perfect and ready for generalization, but because we want to see if it’s possible to get enough funding, and if it’s actually worth investing more time in developing something more acceptable to everybody. (And also because we would really love to see etch released in December.)

This is my own opinion: I’m not speaking for dunc-tank, although I have the feeling that the other members of the board are there for similar reasons.

Update: following Sam’s misinterpretation, I fixed my wording to say “…decision-making of what to fund would be shared…”.

Steering committee or board?

There’s a new idea floating around: creating a steering committee for Debian. I like the principle but I think we should aim for a broader change in the constitution.

I don’t think creating a new separate structure is a good idea, because in my opinion the DPL should be the steering committee. Of course a single individual can’t play that role. And in fact, this is true for almost any task that the DPL currently has to handle.

So we need to get rid of the DPL and replace it with a board. That board would be the steering committee and would also take on the responsibilities that currently come with the DPL hat. Why?

First of all, the role of the DPL is to provide a vision, but most recent leaders have not been able to do that, as they tend to get overwhelmed by simple administrative work. If you remove the hope of effectively leading Debian by giving that power to a committee, then you scare away everybody who wanted to be DPL, because it effectively becomes an administrative position with no appeal.

Then, in recent years, the DPL position has tended to become a multi-person affair, first with the DPL team idea and this year with the 2IC (a sort of “DPL assistant”). So it looks like we’re ready to switch to a fully collective leadership: that of an elected board.

And as a last point: since the DPL tends to be active only on internal organizational issues, for most people he’s only working on “political” stuff, and he’s not valued as a contributor leading the distribution where it needs to be led: to the next release.

That’s why I suggest that the board should be elected for an entire release, with a maximum term of 21 months. After each release we would elect a new board, which would ideally be in place for 18 months.

Tying the term to a release seems like the right approach to me: at the beginning, the board effectively does most of its work as a steering committee (setting/approving release goals), and at the end, during the freeze, it has more time to concentrate on organizational issues.

The question now is: how is that going to work with the release managers? Will that position become mere administrative work of low-level coordination, without any influence on the whole distribution?