Friday, May 20, 2011

Creating my kitchen entertainment center (Part 3)

Building the case

I have finished my case (more or less). I still have to lacquer it; it will be white high gloss, because my kitchen is painted in white high gloss. The cutout for the display will be framed with polished metal. I have to do this after lacquering the case. As you can see in the pictures, I have decided to place the screen on top, with the two speakers at the bottom and a USB connector between them. The speakers are the ones I described in my first blog post, the X-mini II Capsule Speakers. I was really surprised by the good sound quality of these little speakers.


I still need to make a small hole in the front of the case for a switch to turn the Eee on. I'm still thinking about how to do it; any ideas out there?

I have placed my Eee beside the case so you have something for size comparison. That’s all so far. I'm still waiting for my touchscreen panel; that’s the next big thing in this little project.

Software Discussion Etiquette

As I read discussions about software, I find that quite a few people don’t understand the proper way to act, which results in discussions turning into heated, personal arguments. To help lower the damage done in these discussions, I’m making this Rolling Release post about software discussion etiquette the spiritual successor to Allan McRae’s blog post, “How To File A Bug Report”. In Allan’s post, he lists several things to keep in mind that also apply here. While I don’t claim to be an expert on etiquette, I know from first-hand experience that developers and designers want users to be as pleasant and simple to work with as possible, and by following these general guidelines (at least), you can help your favorite software project by not being a jerk ;) .

1. Be clear and concise

Like a bug report, every discussion needs proper context. Saying, “Help, my computer won’t boot” is very vague, but “My desktop fails to boot, here’s a log of the error:” is much more precise. If you’re asking about design decisions or whether something is a regression, list examples of how it could possibly be a regression instead of simply stating that it is and leaving it at that. Nothing annoys developers more than a crowd of people talking about a regression without evidence that it is one, especially if their claims have been answered many times before (covered at number 3, below).

2. Be prepared to be told you’re wrong; stay calm, polite, and humble

When you publish something in the world of science, it is subjected to a lot of criticism and attempts to disprove your findings. This is expected. In the world of software design, there is a lot of intense discussion about the “right” way to make a UI, and many proposals are rejected. Nobody likes a blow to their pride, but if you give a suggestion and the designers disagree with you, an angry response is not constructive and only makes them less likely to accept your proposal. Resorting to baseless insults and accusations that the developers or designers don’t know what they’re doing is hurtful, rude, and, in many cases, very wrong. Software mailing lists, for example, tend to have a very formal tone. You might get a reply saying that your statements are “invalid”, “incorrect”, or “unnecessary”. The developers and designers are usually trying not to be rude, and they usually don’t mean to sound that way. If a developer or designer rejects your proposals, you should either try to understand their way of seeing it or politely drop the issue.

One common example of not following this rule is when users accuse a group of developers of “Not Invented Here” syndrome (rejecting something solely because it was not made internally), when that is usually not the case at all. When developers do not accept a patch, proposal, or any sort of addition, it is not necessarily because it was “not invented there”, but rather because they have at least one reason to believe it should not be included. If you believe your suggestion or addition would benefit the project, talk with the developers about it as early as possible in the planning stages, not after it’s finished. Don’t develop it, submit it, and expect them to approve it no matter what. Sometimes, as you develop an addition to a project, the developers can help contribute to the design; if they aren’t involved, there’s a much higher chance that your addition will be rejected for reasons that are too late to change. For example, GNOME rejected libappindicator for various reasons (a timeline of the drama surrounding it is here), and recently there has been a discussion about LightDM being proposed as a replacement for GDM, which by the looks of it is very close to rejection for not being an adequate GDM replacement.

3. Do research before asking questions

“Why is there no omnipresent window list in GNOME 3?” This question rings in the heads of the people on the GNOME Shell mailing list, as it is asked at least twice per month (or so it seems). It’s very easy to explain why there isn’t a window list, and it has been explained several times before in many, many different threads and pages. All it would take is skimming one or two of those threads, reading the design pages, or doing a Google search to find the answer. The designers and developers would much rather develop and design than answer the same questions over and over, so do them a favor and research your questions thoroughly before asking. This applies to IRC and other mediums of communication as well.

4. Do not “me too”

As with bug reporting, do not post replies containing what amounts to only “+1”, “I agree”, or “me too” without additional information. If you do say these things, be sure to add something after them, like a point of interest that whoever you’re replying to could consider, or a possible concern.

5. Uz Propp4r Gr4mmarz n Sp3ll1ng

Always run a spell checker just to be on the safe side, and re-read your messages before you send them. Even if you don’t need to correct spelling and grammar, you can always re-phrase sentences to make the message sound professional. Try to avoid 1337sp34k (leetspeak/chatspeak, like “LOL” or “noob”) at all costs, though the occasional “btw”, “atm”, “IIRC”, or “AFAIK” could be appropriate depending on where your discussion takes place. If English is not your first language, be sure to mention that if you feel your grammar or sentence structure might be hard for native English speakers to understand.

In summary, whenever you are in a discussion about software, whether it’s on a mailing list, IRC, BBS, or bug tracker, be sure to stay patient, knowledgeable, and open-minded. The more pleasant you are to work with, the more work can be done on the software project in question. Remember these guidelines, and try to stay pleasant to work with :) !

Also, remember that you can submit articles to Rolling Release! We need as many submissions as we can get! Just log in or register an account to participate.

Thursday, May 19, 2011

The Calligra suite

The second alpha release of the Calligra suite was released yesterday, and we have packaged it in the [kde-unstable] repository.

Calligra is the new name of the KOffice suite, so the Calligra packages will replace the KOffice packages.

See the Calligra announcement for more.

Please read the wiki page about [kde-unstable] before adding it. Remember that the KDEPIM 4.6 packages are in [kde-unstable] too.

As usual please report any packaging bug to


Tuesday, May 17, 2011

How not to display floating point: a lesson from MySQL

If you've ever used MySQL, you've probably used either the SHOW STATUS; query or the equivalent mysqladmin status command at least once.

$ mysqladmin status
Uptime: 569459  Threads: 2  Questions: 17666  Slow queries: 3  Opens: 118
  Flush tables: 1  Open tables: 111  Queries per second avg: 0.31

Notice anything funny there? You should. Pulling out the trusty calculator tells me 17666 / 569459 is in fact closer to 0.0310, not 0.31. This is a classic case of floating point string printing gone wrong. Why does this happen? It turns out the internal printf() function used by MySQL doesn't support %f, the normal format specifier for floats. So instead, the float is printed using two integers separated by a period. Of course, this is total failsauce if you don't zero-pad your values.
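The integer-pair trick and its failure mode can be reproduced with shell printf, using the same numbers as the status output above:

```shell
# Queries per second held as thousandths, exactly like MySQL computes it.
questions=17666
uptime=569459
qps1000=$(( questions * 1000 / uptime ))   # 31, i.e. 0.031 queries/sec

# Printing the two halves without zero-padding drops the leading zeros:
printf 'broken: %u.%u\n'   $(( qps1000 / 1000 )) $(( qps1000 % 1000 ))   # broken: 0.31
printf 'fixed:  %u.%03u\n' $(( qps1000 / 1000 )) $(( qps1000 % 1000 ))   # fixed:  0.031
```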

The fix is easy: simply make sure you zero-pad the printed value to the right of the decimal point when doing division and modulo tricks. Here is the patch, with a couple of extra lines of context to see what is going on:

diff -r -U8 mysql-5.5.12-original/sql/ mysql-5.5.12/sql/
--- mysql-5.5.12-original/sql/      2011-04-11 05:44:03.000000000 -0500
+++ mysql-5.5.12/sql/       2011-05-17 09:07:28.000000000 -0500
@@ -1303,17 +1303,17 @@
     if (!(uptime= (ulong) (thd->start_time - server_start_time)))
       queries_per_second1000= 0;
     else
       queries_per_second1000= thd->query_id * LL(1000) / uptime;

     length= my_snprintf(buff, buff_len - 1,
                         "Uptime: %lu  Threads: %d  Questions: %lu  "
                         "Slow queries: %lu  Opens: %lu  Flush tables: %lu  "
-                        "Open tables: %u  Queries per second avg: %u.%u",
+                        "Open tables: %u  Queries per second avg: %u.%03u",
                         (int) thread_count, (ulong) thd->query_id,
                         (uint) (queries_per_second1000 / 1000),
                         (uint) (queries_per_second1000 % 1000));

Now the values print as expected, even when the decimal portion is less than 100:

$ mysqladmin status
Uptime: 3  Threads: 1  Questions: 2  Slow queries: 0  Opens: 33
  Flush tables: 1  Open tables: 26  Queries per second avg: 0.666

$ mysqladmin status
Uptime: 50  Threads: 1  Questions: 3  Slow queries: 0  Opens: 33
  Flush tables: 1  Open tables: 26  Queries per second avg: 0.060

I filed a bug report with the above patch, and got at least one piece of quick feedback, so we'll see if it gets fixed anytime soon. I also looked quickly for any other suspect format specifiers (the regex was '[ud]\.%[ud]') and didn't see any.

Shockingly this bug has been around since some changes in April 2007. I can't believe I was the only one noticing this and expecting it to get fixed at any time. I guess it just finally pissed me off enough today to do something about it.

Archiso testbuilds & feedback system

You can find the latest archiso testbuilds @
When you run them, please report your successes and failures on

The more quality feedback you provide, the quicker and better new releases will come.

And preferably, try to test options which are not yet tested by others, or which were reported to fail, or which were only tested with old images.

For more info, see … s_feedback … mages.html


Where are the new Arch Linux release images?

This is a question I get asked a lot lately. The latest official images are a year old. This is not inherently bad, unless you pick the wrong mirror from the outdated mirrorlist during a netinstall, or are using hardware that is not supported by the year-old kernel and drivers. A core install will yield a system that needs drastic updating, which is a bit cumbersome. There are probably some other problems I'm not aware of. Many of these problems can be worked around ('pacman -Sy mirrorlist' on the install CD, for example), but it's not exactly convenient.

Over the past years (in the spare time between the band, my search for an apartment in Ghent, and a bunch of other things) I've worked towards fully refactoring and overthrowing how releases are done. Most of that is visible in the releng build environment repository. Every 3 days, the following happens automatically:

  • packages to build images (archiso) and some that are included on the images (aif and libui-sh) get rebuilt. They are actually git versions; for the latter two a separate develop branch is used. Normal packages get updated the normal way.
  • the images are rebuilt, and the dual images get generated
  • the images, the packages and their sources are synced to the public on
Actually, things are a bit more involved, but this is the gist of it. All of this now runs on a dedicated VPS donated by airVM.

I never really completed the aif automatic test suite; somewhere along the way I decided to focus on crowdsourcing test results instead. The weight of testing images (and all possible combinations of features) has always been huge, and trying to script the tasks would either get way too complicated or remain insufficient. So the new approach is largely inspired by the core and testing repositories: we automatically build testing images, people report feedback, and if there is sufficient feedback for a certain set of images (or a bunch of similar sets of images) to conclude we have some good material, we can promote the set to official media.
The latest piece of the puzzle is the new releng feedback application, which Tom Willemsen contributed (again: outsourcing FTW). It is still fairly basic, but should already be useful enough. It lists pretty much all the features you can use with archiso/AIF-based images and automatically updates the list of ISOs based on what it sees appearing online, so I think it will be a good indicator of what works and what doesn't, for each known set of ISOs.

So there. Bleeding edge images for everyone, and for those who want some quality assurance: the more you contribute, the more likely you'll see official releases.

While contributing feedback is now certainly very easy, don't think that providing feedback alone is sufficient; it takes time to maintain and improve aif and archiso as well, and contributions in that department are still very welcome. I don't think we'll get to the original plan of official archiso releases for each stable kernel version; that seems like a lot of work, despite all of the above.

As for what is new: again, too much to list. Here is a changelog, but I stopped updating it at some point. I guess the most visible interesting stuff is the friendlier package dialogs (with package descriptions), support for nilfs, btrfs and syslinux (thanks, Matthew Gyurgyik), and an issues reporting tool. Under the hood we refactored quite a bit: mostly block-device-related stuff and config generation, and the "execution plan" in AIF (how functions call each other and how failures are tracked) has been simplified considerably.

Building a Virtual Army

Recently, in testing my latest new toy (geninit), I’ve needed to create a variety of different root device setups to put geninit through the proverbial wringer. Until a few weeks ago, this would have been done through VirtualBox. However, I’ve never really been a huge fan of VirtualBox, increasingly due to my opposition to Oracle. The management is fairly straightforward, but the machines themselves feel fairly limited, and don’t take full advantage of processor extensions like VT-x. A few years back, it was even the case that VirtualBox recommended not enabling VT-x at all, as it would actually hinder performance.

The other obvious option is QEMU, with KVM support. On a few occasions, I’ve poked around with QEMU, but was never really satisfied. Turns out, I really just didn’t give it enough time.

QEMU has a lovely feature that allows it to emulate a serial console — effectively giving you a VM in a terminal. In addition, it supports the virtio family of devices, which allows for much better performance, particularly in the realm of I/O, where the typical bottlenecks lie. Now things start to get more appealing. With a little bit of bash, and a fair bit of time with some documentation, things were starting to come together. I figured I’d share what I came up with, in case anyone else finds themselves in a similar situation.

Initial Setup

You’ll need a few packages to get started: qemu-kvm, vde2, and iptables for now. You’ll also, of course, want a liveCD for your favorite distro. Start by creating a qcow2 image, which will serve as the disk for your VM:

qemu-img create -f qcow2 imagename.qcow2 5G

qcow2 is the QEMU image format of choice, which supports compression, encryption, dynamic sizing, copy on write, and snapshots. There are other formats, but this is by far the winner for versatility. Creation should be instant.

modprobe -a kvm-intel tun virtio

Note that I’m using Intel, but there is also kvm-amd for you other folks. Make sure your processor actually supports this — you can grep for ‘vmx’ in /proc/cpuinfo (‘svm’ on AMD), which will hopefully show up in your processor flags. You’ll also want to make sure that /dev/kvm is created with ‘kvm’ as its group. Arch Linux provides a udev rule to do this by default; your mileage may vary. Make sure that you add yourself to the kvm group and log out and back in for the change to take effect.


We’re going to be using VDE for networking support which will essentially create an internal VLAN for our guests. Start by creating the gateway for the VLAN:

vde_switch -tap tap0 -mod 660 -group kvm -daemon

This launches vde_switch, which creates a new network device: tap0. It doesn’t yet have an IP, so we’ll need to assign it:

ip addr add dev tap0
ip link set dev tap0 up

Note that I could have picked any RFC 1918 private address, just as long as it’s not the same network as my LAN.
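As a concrete sketch, assuming a 10.0.1.0/24 guest network (the range is an illustrative choice, not a value from the original setup; any unused private block works), the two commands above would read:

```shell
# Assign the hypothetical gateway address 10.0.1.1 to the VDE tap device:
ip addr add 10.0.1.1/24 dev tap0
ip link set dev tap0 up
```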

With our gateway created, we need to allow traffic to be forwarded properly through it:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s -o eth0 -j MASQUERADE

A few points of interest here. First, you’ll want to add both of these commands to a file that is read routinely on boot; I’ll leave it as an exercise for the reader to find the distro-specific recommended location. Second, the iptables rule should allow any traffic whose source is on the same network as our gateway device’s IP. The output interface for the rule, specified by -o, doesn’t necessarily need to be defined. In the case of my laptop, which sometimes uses wlan0 and sometimes usb0, I left this undefined and let the routing rules take care of finding the correct path.
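Concretely, with a hypothetical 10.0.1.0/24 guest network (substitute your own range), this step becomes:

```shell
# Enable IPv4 forwarding and masquerade traffic from the guest network.
# 10.0.1.0/24 is an illustrative example range, not from the original post.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o eth0 -j MASQUERADE
```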

Launch It!

With networking in place and disks ready, we should be all set to launch the VM. I’d suggest making a small script out of this, as it’ll be useful later on:


mem='-m 1024'  # RAM allocation
cpus='-smp 2'  # CPU allocation
net='-net nic,model=virtio -net vde' # networking using virtio & VDE
drive='-drive file=/path/to/qcow,if=virtio' # disk image we created

qemu-kvm $mem $cpus $net $drive -cdrom /path/to/livecd.iso -boot d

Note that you don’t currently have the ability to acquire an address for the guest’s network device via DHCP. I’ll cover that later as an optional feature. For now, just assign a static IP from within the guest:

ip addr add dev eth0
ip link set dev eth0 up
ip route add default via
echo 'nameserver' >> /etc/resolv.conf

Note how we’re using the IP of the host’s tap0 device as our default route, and assigning an IP on the same subnet. Install as usual. Before rebooting, make sure that the serial console is set up. It needs to be defined in your bootloader, on the kernel command line, and possibly as a getty. There are quite a few flavors for the moving pieces here; some simple googling should quickly lead to results.
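As a concrete sketch, with a hypothetical 10.0.1.0/24 network where the host’s tap0 holds 10.0.1.1, the guest-side commands would be:

```shell
# All addresses here are illustrative examples, not the original post's values.
ip addr add 10.0.1.2/24 dev eth0                 # guest address, same subnet as tap0
ip link set dev eth0 up
ip route add default via 10.0.1.1                # host's tap0 address as gateway
echo 'nameserver 10.0.1.1' >> /etc/resolv.conf   # works once dnsmasq (below) runs;
                                                 # until then, use any reachable resolver
```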

Once the serial console is set up, you can boot the VM with the -nographic option, which should happily dump output into your terminal.
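The serial console wiring mentioned above usually boils down to a few standard Linux settings; the names below are the common conventions, not values taken from the original post:

```
# On the guest's kernel command line, in the bootloader entry:
console=ttyS0,115200 console=tty0

# And a getty on the serial line, e.g. a sysvinit /etc/inittab entry:
s0:2345:respawn:/sbin/agetty -L 115200 ttyS0 linux
```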

DHCP Server

Because I’m lazy, I decided that my little VLAN needs DHCP. The official ISC DHCP server is one option and requires very little setup, but I was convinced that dnsmasq was a better solution. It provides a lightweight DHCP server as well as DNS caching, which my desktop can benefit from as well. With dnsmasq installed from your trusty repositories, fire up your favorite editor and open /etc/dnsmasq.conf. We only need to make a few small changes: dnsmasq needs to be told which addresses and/or interfaces to listen on, as well as a DHCP range to hand out. My chosen settings were:
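A minimal /etc/dnsmasq.conf in that spirit could look like this (the interface and range are illustrative values for a 10.0.1.0/24 tap0 network, not the author’s actual settings):

```
# Serve DHCP/DNS only on the VLAN interface
interface=tap0
listen-address=10.0.1.1
# Pool handed out to guests, with 12-hour leases
dhcp-range=10.0.1.100,10.0.1.200,12h
```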


Now I can add it as the primary nameserver for my desktop, and guests on my VLAN will have DHCP as well as the cached DNS goodness. You could easily take this one step further and add a DNS server for the VLAN. The addition of dynamic DNS updates would be excellent for keeping track of your soldiers. A more blunt approach would be simply creating DHCP reservations. Note that for this approach, you would need to manually pick out MAC addresses for each of the VMs, as the default is always the same (52:54:00:12:34:56) across all guests.
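If you go the reservation route, each guest needs its own MAC, which can be set on the virtio NIC; the address below is just an example within QEMU’s default 52:54:00 prefix:

```shell
# Give this guest a unique MAC instead of the shared default:
net='-net nic,model=virtio,macaddr=52:54:00:00:00:01 -net vde'
```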

Tying it All Together

Last but not least, I expanded on the bash script I described earlier to start up the VM. It’s posted in my bin-scripts repo on GitHub. It’s rather straightforward: define a function called vm_GUESTNAME for each VM and add the appropriate options. All that’s required is the drives array.

Monday, May 16, 2011

2011.05-2 archboot "2k11-R2" ISO hybrid image released

Hi Arch community,

Arch Linux (archboot creation tool) 2011.05-2, "2k11-R2" has been released.
To avoid confusion: this is not an official Arch Linux ISO release!

Homepage and more information on archboot:

- bump to latest kernels and introduce pacman 3.5.2

A hybrid image file and a torrent are provided, which include the
i686 and x86_64 core repositories. Please check the md5sum before using them.

The hybrid image file is a standard CD-burnable image and also a raw disk image.
    - Can be burned to CD(RW) media using most CD-burning utilities.
    - Can be raw-written to a drive using 'dd' or similar utilities.
      This method is intended for use with USB thumb drives.

Please get it from your favorite arch linux mirror:

/boot for PXE/Rescue files are provided here: … 11.05/boot


- kernel / LTS kernel
- pacman 3.5.2 usage
- RAM recommendations: 320 MB

Kernel changes:
- bump to latest .38 series and bump lts to latest .32 series

Removed features:
- clamav removed

Environment changes:
- merged in latest initscripts and mkinitcpio changes
- updated pacman mirrorlist
- added cifs-utils, eject and file to environment
- switched to krb5 and dropped heimdal

setup changes:
- allow btrfs for grub2

hwdetect changes:
- none

quickinst changes:
- none

- FTP installation mode:
- CD installation mode:
Further documentation can be found on-disk and on the wiki.
Have fun!


Creating my kitchen entertainment center (Part 2)

While I’m waiting for my touchscreen I can write something about it.

The touchscreen
I have found several auctions on eBay which offer touchscreen panels for the EEE 701. My criteria were simple: no soldering, not too thick (so that it fits into the case of the Eee), and Linux support, of course. I have decided on the touchscreen panel 4W-0701 from VisualTouchWorld, which seems to be assembled by OneTouch; maybe there is yet another assembler in the background :-) As long as the device works, I don’t mind. According to their website they support Linux, and it looks like the panel controller is supported by Linux. I have found two different drivers for the device: first the driver from the manufacturer (the eGalaxTouch driver), and second the evtouch module.

Both drivers seem to work; I have to check which one is the better solution and works without any problems. I will write another post about the configuration of the touchscreen panel, with some pictures of mounting it in the Eee. But first I have to wait one more week, because it’s on its way from Hong Kong to Europe. Sadly, you won’t find any reseller in Europe (especially Germany) that sells the panel, so I have to wait, and you have to wait for the next article, too.

Saturday, May 14, 2011

My metal band

Since the audience of this blog is largely technical, I don't post much about other topics, but I feel it's time for a short summary about one of my "real life projects".
In the spring of 2009 I joined a progressive death metal band. I've been drumming since I was 17, but during the last 2 years I've been practising and rehearsing like never before.[1]
When you hear yourself on tape for the first time, it's a bit of disillusionment as you suddenly hear every imperfection, many of which you didn't realise you had (or didn't think were very noticeable).
So 2 years of practicing, rehearsing, test recordings, real recordings, and mixing sessions (where you really grow a good ear for imperfections) later, we are now getting to the point where we can nail our stuff, and we are looking forward to our first gig, which will be June 3rd in JH Sjatoo in Kalken. We've written about 7 songs, of which we currently play 5. I wish we had proper recordings of all of them, but "Total Annihilation" captures several aspects of our style.
In early 2010 I treated myself (found a nice second-hand deal) to a new PDP birch kit with Zildjian Z Custom cymbals. (That was actually at the time I was in the interview process for a Palo Alto position at Facebook, so I might have needed to sell it again soon after, but that didn't happen.)
Here are some pics: 1, 2, 3, 4. More info about the band:

[1] Hence the need to find a new maintainer for Uzbl.

Friday, May 13, 2011

Creating my kitchen entertainment center (Part 1)

This blog post is the start of a series of posts about the creation of my “kitchen entertainment center”.

This week I decided to create a “kitchen entertainment center” to have more than just my little radio in the kitchen. But what exactly do I want? The end result should be just a screen which I can control by touch. Nothing more. I don’t want to see a computer in my kitchen all day long that I must control with mouse and keyboard. It should play internet radio streams, maybe some videos, and my music collection from other PCs. Heavily inspired by this blog post, I decided to do something similar on my own.

Finding the right computer
I looked on eBay and hoped that I would find a cheap old laptop that fits my needs. But no, the cheapest one I found with a touchscreen costs 160€ (about 228$). Way too much for such a project, in my opinion. After googling around for cheap alternatives, I found several posts and videos about modded EEE 701s with touchscreens. After reading around and finding out that it isn’t that hard to add a touchscreen to my Eee, I decided to do so. The last time I used my Eee was in August 2010, so it won’t be the loss of a heavily used computer. A touchscreen panel for the EEE 701 costs only 35€ (about 50$) on eBay, and I have already ordered one. So I have to mod my Eee with the touchscreen and flip the screen around so that it is visible when the lid is closed. This is necessary because it should be mounted to the wall in a slim way (a small case, etc.). I have seen some pictures on the web which show such mods, so it’s possible. :-)

Finding the right software
Finding good media software that can be controlled by touch isn’t that easy. My preferred solution is XBMC; it’s already running on my media PC (EEE Box) in the living room, mounted on the back of my TV. XBMC isn’t coded to be controlled by a touchscreen as the default input, but thanks to skin support, you are able to change the look & feel of XBMC completely. Looking in the XBMC forum, I found a good skin designed with touchscreen control in mind; you can find some screenshots of it in the forum. I took the XBMC live CD, copied it over to a USB stick, and booted from it to have a first look at everything and see whether it runs smoothly on the Eee with its 900MHz Celeron CPU. It’s nearly as smooth as on my EEE Box in the living room. Everything ran out of the box (screen resolution, wifi, and all the other necessary components). After these first tests, I installed the live CD onto my Eee’s hard disk.
With the standard repository of XBMC plugins I have access to YouTube, Vimeo, Grooveshark, SHOUTcast and (this is a website which lists nearly all radio stations in Europe that stream their signal to the internet). Those are more music sources than I’ll ever need… :-) There are even plugins for podcasts, which I will install later.

So, right now I have my Eee completely set up with all the necessary plugins and skins. Now I have to decide how I will build the case and which speakers I will take, because the built-in speakers are not that good. I have already thought about mounting the X-mini II capsule speakers into the case so that you can only see the front of the speakers somewhere around the screen. I will need some kind of access to at least one USB port for USB sticks or other temporary devices, and I need access for the DC plug to charge the Eee from time to time. I already have a bigger battery with nearly double the capacity of the original, so it has a very long uptime in battery mode, perfect for this kind of usage.

That’s it so far. I will continue to post my progress on this little project with pictures and videos. Hopefully I’ll end up with something good and usable at the end of this project.

If you have any tips or tricks which I haven’t thought about, feel free to comment.

Thursday, May 12, 2011

GNOME 3 Has Arrived! Impressions?

GNOME 3, the latest version of one of the most popular Linux desktop environments, was released last month and subsequently put in [testing]. It has been well over a week since it was moved out of [testing] and into [extra], so a good number of GNOME users have probably tried it by now. Users of GNOME on Arch Linux, what are your impressions? Does GNOME 3 fit your workflow, or does it require some tweaking to make it more usable for you? If so, what did you change?

If it would help, here are two articles that explain customizing GNOME 3:

How To Tweak GNOME 3 To Your Needs (which I wrote)

Customizing the GNOME Shell (editing theme files directly and such)

Also, don’t forget to check the Arch Wiki and the GNOME Shell Cheat Sheet before complaining; you might find what you’re looking for there.

How To File A Bug Report

I have been noticing that there are some things people could improve when reporting bugs to the Arch Linux bug tracker. So here are some guidelines for what I personally like to see in a bug report. Following these will make finding and fixing bugs less work for me (and, I assume, other developers).

1. Check that the bug has not already been reported. This includes checking recently closed bug reports, as your issue may already be fixed and in the process of propagating out to the mirrors. This prevents multiple bug reports for the same issue, which just creates more work for everyone involved.

2. Provide all information. In particular, do not assume the bug is obvious. What might seem an obvious bug to you may not occur on the developer’s system, so full information about the packages involved and detailed information on how to replicate the issue are essential. I always like to see, at minimum, the exact package version involved (including pkgver and pkgrel) and the architecture the bug occurs on (i686 or x86_64). Do not just report the “current version” of a package, as that can be meaningless within a few days on a rolling release distro. The more specific you can be about the package update where the issue started occurring, the easier it is to track down the change that caused it.

3. Stick to relevant information. The list of 50 packages you updated before noticing this bug likely contains the problem package, but it also contains a lot of unrelated updates. Reduce the list as much as possible before reporting the issue. More time spent by a packager filtering down to only the relevant information means less time actually fixing bugs.

4. Do not report bugs via links. Providing a link to a forum/mailing list thread that describes the bug is not enough. You still need to provide a detailed summary. This keeps all information in one place and also prevents the bug report from being lost if the link goes dead.

5. One bug report, one issue. Even if you have multiple issues with a package, it is usually better to open a new bug report for each issue. This allows each bug to be closed as it is fixed (which might not be all at the same time…) and prevents issues getting lost among the others. The possible exception to this is when many minor issues with a package are being reported along with a PKGBUILD that fixes them all.

6. Report upstream bugs upstream. If a bug is clearly an upstream bug and not a packaging error, then it needs to be reported upstream. It is fine to also report the issue in the Arch bug tracker (if it is a bug and not a feature request) with a link to the upstream report so the Arch package maintainer can track and apply the fix when available. This not only saves the packager a lot of time (they have many bugs to deal with) but it is also useful for upstream to be hearing the bug information directly from the person experiencing it. The exception to this is glibc who do not accept bug reports from users…

7. Follow up any queries. This is particularly important for bugs that can not be replicated by the relevant Arch packager. The longer a query sits unanswered, the longer it will take to trace and fix your bug. If you can no longer replicate the issue due to (e.g.) changing hardware or distribution, then tell us and we can close the bug report.

8. Do not “me too”. There is a vote link on the bug reports you can use to show you also experience the issue. Posting “me too” as a comment only clutters the tracker.

9. Use “LANG=C” for output. Prefixing a command with “LANG=C” will result in the output using the strings in the original code (English for most software). That way, we do not have to reverse-translate messages to understand the error.
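As a quick illustration of the LANG=C prefix (the failing path below is made up just to trigger an error message):

```shell
# Run a command with untranslated (C locale) messages, regardless of
# the system locale; the path is only an example that produces an error.
LANG=C ls /nonexistent-path 2>&1
# → an English error ending in "No such file or directory",
#   even on a system configured for another language
```

The prefix only affects that single command, so it is safe to use when capturing output for a bug report.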

Finally, remember that a bug does not exist until it is in the bug tracker. Reporting a bug on the mailing lists, forum, IRC, jabber, etc does not count. These are all fine for tracing the source of your bug before reporting it, but remember that bug trackers exist for a reason.

Edit (2011/05/12): Added #9.

Wednesday, May 11, 2011

initscripts update

Tom Gundersen wrote:

Due to a clash with another package we have moved our newly introduced script from /sbin/rc to /sbin/rc.d.

rc.d allows you to start and stop daemons, as well as show the status of the daemons on your system.

Saturday, May 07, 2011

lv2core update conflict

This is a delayed warning, but the last update to lv2core (if you have not updated yet or intervened) has a file conflict which is safe to ignore:

error: failed to commit transaction (conflicting files)
lv2core: /usr/include/lv2/ exists in filesystem

Simply force an install with --force or -f, preferably pacman -Syf lv2core (in case there are any other notable conflicts in an update which you should not ignore). The file was previously created with an upstream tool during post-installation, but it was later proven to be unnecessary and that we could just package the file (symlink) in question. See the relevant feature request [1] for more information.


Monday, May 02, 2011

initscripts update

Tom Gundersen wrote:

New Features:

  • A new script, /sbin/rc, that allows you to start/stop and list daemons.
  • Support FakeRAID (dmraid).
  • Support btrfs.

Functional changes:

  • The adjustment of the hwclock for drift is moved into a daemon that should not be used in most scenarios as it can lead to subtle bugs (especially if using dual-boot or ntp). If you know what you are doing and want to adjust the hardware clock for drift, add "hwclock" to your DAEMONS array.
  • We now let udev deal with mdadm, and no longer call mdadm explicitly. This should make things more robust without losing any functionality.


Warnings:

  • We would like to remind everyone that initscripts expect all other packages (except for the kernel) to be up-to-date. This in particular includes udev, mdadm, dmraid and lvm.
  • We now strongly discourage the use of HARDWARECLOCK="localtime", as this may lead to several known and unfixable bugs. However, there are no plans to drop support for "localtime".

codemirror > editarea

So you’re working on some kind of webapp that needs in-browser code editing functionality. You search Google and discover that most folks are using editarea these days. You figure it must work well to be so popular and decide to try it out.

Then as you start to use it, you realize it’s unmaintained, doesn’t work terribly well in WebKit browsers, doesn’t work at all in Internet Explorer 9, is poorly documented, and is hard to extend, expand, or fix. You’re frustrated.

But repeated web searches do not yield anything as functional, and you resign yourself to a half-working editor, or maybe even decide to write something from scratch.

I’ve been that person, and I was recently lucky enough, while searching for something completely unrelated, to stumble across CodeMirror, a standards-compliant, JavaScript-powered code editor that is well documented. It’s not as featureful as editarea, but if you add CodeMirror UI to the mix, you end up with a comparable featureset that actually works.

I’m not normally one for simply reposting links, but as I had so much trouble finding CodeMirror in the first place, I hope this post will either help others to find it, or at least increase CodeMirror’s page rank in search results.

The Quest For The Forty-Spotted Pardalote

Recently I went to Tasmania on a quest to see the Forty-spotted Pardalote. Some might say it was a work trip given I spent most of my time at a conference and work paid for the travel. And they might be right… but there are twelve species of birds found only in Tasmania, so I might have had other ideas!

Given I only had a two and a half day window to actually have a look around Tasmania (which I had never visited before), and the Forty-spotted Pardalote is restricted to a small area of the island, my chances of success were always low. The odds were made worse given that I was not going to have time to go to either of the islands with the main breeding populations. But the internet had assured me that the Peter Murrell Reserve had a breeding population and it was only a ten minute drive from my hotel, so I could easily go there early morning before heading off elsewhere.

Day 1: Finished with work and had checked into the new hotel by 3pm, so a quick trip out to the reserve before the evening meal. Three hours, no Forty-Spotted Pardalote… but I saw plenty of other endemics including the Green Rosella, Yellow Wattlebird, Yellow-throated and Black-headed Honeyeaters and the Tasmanian Native Hen.

Day 2: This was my day to see Tasmania. A complete impossibility in one day, but I gave it my best shot. An early morning start, and I drove up the east coast from Hobart to St. Helens, cut across to Launceston and then back to Hobart. About 600km in total and a 12-hour journey by the time I had stopped at a bunch of tourist attractions and done a couple of small beach and forest walks. I highly recommend that anyone who visits spend a lot more time exploring the region. I also recommend never renting a Nissan Micra, as that was an underpowered piece of crap.

Day 3: Up early and headed to the reserve. Four hours of searching, no Forty-Spotted Pardalote… Decided to take a break and drove to Mt. Wellington for a view over Hobart and then to the Tahune Airwalk – a 600m walk through the treetops of a wet eucalypt forest (pictured). See the cantilever at the end there? That is about 50m off the ground… and there was no way I was walking out on it! Headed back to the reserve for another couple of hours. Still no Forty-Spotted Pardalote…

Overall, I managed to see eight of the twelve endemic bird species, plus thirteen species I had never seen before. So a good haul overall. However…

Final score: Forty-spotted Pardalote 3: Allan 0

Sunday, May 01, 2011

External monitors

When I first started using Linux over a decade ago, dual screen was a pain to set up. When I got my first laptop four years ago, setting up an external monitor was also painful. Then came xrandr and life was good. Now there are nifty little monitor switching GTK apps that allow you to drag screens around just like in Windows or MacOS.

But that’s a lot of fiddling around. For the longest time, my use case has always been either:
a) I am using only my laptop
b) I am using my laptop with my 1920×1080 external monitor connected via VGA (It’s an old laptop)

To accommodate these two use cases, I had connected my “Switch Display” key (Fn+F7 on my ThinkPad) to the following simple script:

  if ! xrandr | grep VGA1 | grep disconnected >/dev/null ; then
      xrandr --output LVDS1 --mode 1024x768 --output VGA1 --mode 1920x1080 --above LVDS1
  else
      xrandr --auto
  fi

Succinctly, if the external monitor is connected, enable it as “above” my laptop, otherwise, just enable the laptop monitor. All I have to do is plug in or unplug my monitor, hit Fn+F7, and my display would automatically adjust itself.

For the record, I used xbindkeys to connect the button to the script with the following .xbindkeysrc:
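A minimal sketch of such a binding, assuming the script lives at ~/bin/check_external and that the key generates the XF86Display keysym (as ThinkPad Fn+F7 typically does; both names are assumptions, since the original snippet is not reproduced here):

```
# ~/.xbindkeysrc -- bind the display-switch key to the script
# (script path and keysym are illustrative)
"~/bin/check_external"
    XF86Display
```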


This served me well until I bought myself a new television that only operates at 1360×768 on the VGA port. Further, when I’m connecting my laptop to the tv, the television is usually below the laptop monitor rather than above, as my monitor is.

So now, my check_external script looks like this:

  import subprocess

  positions = {
      "1920x1080": "--above",  # Monitor
      "1360x768": "--below",   # TV
  }

  external_connected = False
  resolution = None

  output = subprocess.check_output("xrandr", shell=True).decode("utf-8")
  for line in output.split("\n"):
      if "VGA1 connected" in line:
          external_connected = True
      elif external_connected and "+" in line:
          # "+" marks the preferred resolution for that monitor,
          # and the resolution is in the first column
          resolution = line.split()[0]
          break

  if external_connected and resolution:
      subprocess.call(
          "xrandr --output LVDS1 --mode 1024x768 --output VGA1 --mode {resolution} {position} LVDS1".format(
              resolution=resolution, position=positions.get(resolution, "--above")),
          shell=True)
  else:
      subprocess.call("xrandr --auto", shell=True)

This is Python 3 code, and it works delightfully on my Arch Linux setup running awesome. I still have to do custom xrandr commands if I ever connect to someone else’s projector or monitor (this happens so rarely that I don’t think I’ve done it since Archcon last year), but normally I can get away with a quick "xrandr --auto" in those cases, which usually just clones the display. There are dozens of ways to set up monitors, but this works great for me, and I can normally have my display up and running the way I want it with a couple of keystrokes.

Saturday, April 30, 2011

GNOME3 in [extra]

Ionuț Mircea Bîru wrote:

GNOME 3.0.1 is being moved to [extra]

This is a major update and you should take note of a couple of things:

  • GNOME3 is replacing GNOME2
  • GNOME3 has two modes, "standard" mode (gnome-shell) and "fallback" mode (gnome-panel + metacity)
  • Some packages, like applets using Bonobo, will be dropped in the next few days. A list will be available at
  • pulseaudio is now required to run the GNOME desktop

Update and installation instructions are available at

Only bugs related to packaging should be reported to

Crashes and feature requests should be reported to


What Happens in Early Userspace...

I know what you’re thinking, but it doesn’t stay in early userspace. If you’re stuck there, then you’re doing it wrong.

A few months ago, the Arch developer crew put up a “Developers Wanted” sign on their door in search of devoted folks to take over the reins on some of the in-house projects: initscripts, netcfg, and ABS. I had actually applied for the initscripts job, but when I was approached by one of the developers and told that it had come down to myself and Tom Gundersen, I joyfully passed the job onto Tom. He’s an extremely smart and capable hacker. I had really only applied to make sure that someone reasonable was given the spot.

I had to wonder though. Why wasn’t mkinitcpio on the list? Sorry Thomas, but I think it’s a mess. Even today, look at the open bug reports it still has. I spoke with the current maintainer directly and he stated that he was going to keep control of the project. I have plenty of respect for that if you have the time, but I question his availability. Several bug reports for which I posted patches went untouched for months. Needless to say, I felt something needed to be done.

Before I continue, a little bit about mkinitcpio from an architectural standpoint. I consider it vaguely divisible into two pieces of functionality: what happens in everyday userland (creation of the initramfs image), and what happens in early userland (on the initramfs itself). /sbin/mkinitcpio heads up the creation half of this, and pulls in files from /lib/initcpio/install and /lib/initcpio/hooks. ‘install’ files cover the files, binaries, and modules pulled in during the creation phase. ‘hooks’ are added to the resultant image as-is and run during bootup. The resulting image is championed by a crudely written shell script which doesn’t even take advantage of ash’s features (which it runs under).
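As a rough sketch of that split (every name below is illustrative, not copied from a real mkinitcpio install file): an install file declares what the creation phase should pull into the image, and names the hook that runs on the image at boot:

```shell
# Sketch of an "install" file from the creation phase; the image builder
# would source a file like this and call install() to learn what to add.
install() {
    MODULES="ext4"          # kernel modules to copy into the image
    BINARIES="fsck.ext4"    # binaries to copy in, with their libraries
    FILES=""                # arbitrary extra files
    SCRIPT="example"        # name of the matching hook run during bootup
}

install
echo "will pack modules: $MODULES"
```

The matching hook file would then be dropped onto the image verbatim and executed in early userspace.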

On a few occasions, I’ve sat down and tried to contend with the codebase. I understand the process just fine, but I constantly find myself pounding my desk and throwing things when I’m jumping from file to file, tracing functions to other functions with similar names, all of which perform similar tasks. The state of the global variables in the code is a bit disturbing as well. Add in the typical “Bash” style I see in Arch’s projects, mix in odd commenting choices, and you’ve got yourself a utility that I’m surprised works whenever I roll out a new kernel build.

I can do this. I can make it better. I’ve tried. I really have. But every time I sit down and start to make some small changes, I see myself in store for a full rewrite. So why not just rewrite it? Nuts to that, who has the time…

A few weeks ago, this insane idea floated into my head: the early userspace init can be written in C. There was something of a sparking event, but I’m not going to go into detail. In the course of a few days, I threw together dinit, which serves as a proof of concept for a compiled pid 1. I hacked up my copy of mkinitcpio and glued together an image. Would it boot? Yes! With the help of some other Archers, I ironed out a few bugs. Awesome, we have a starting point.

I took this last week off from work, and decided that rather than take it as a mental break, I was going to write a whole lot of Bash and C. mkinitcpio shall be reborn! No rewriting sections and hoping the maintainer accepts it. Just fucking forge on and write your own. It shall be done. It’s been a long week, and I’m quite pleased to say that geninit not only works in theory, but it’s currently booting my desktop and a small army of testing VMs.

My design goals were/are pretty straightforward:

  • Be mostly compatible with mkinitcpio: keep the options and functionality largely the same. Some naming conventions will change. Your initramfs should not have the word ‘kernel’ in it.
  • More cleanly written: safe, best-practice code, with comments wherever the logic might be unclear.
  • Faster than mkinitcpio: It’s not easy to profile shell code, but mkinitcpio does a lot of unnecessary forking for things that Bash can and should do in house.
  • Maintain modular architecture: This goes in hand with the first goal, and pretty much mimics what most other early userspace image creation tools do.

geninit still features things like presets and a very similar config file. mkinitcpio’s hooks all had to be redone as they’re no longer being run in the context of busybox, but being called as a fork/exec from the compiled pid 1. install files, now known as builders, were reworked slightly to use the new, cleaner API that I’ve put in place.

Clean Bash is what I do. I write lots of it, and I’m damn good at it. I won’t hang onto any modesty on this point. I’m very knowledgeable about what Bash can do in-house and refuse to pass work off to an external command unless it’s proven necessary.

Speed was important, because I hear a lot of complaints about how long it takes to generate new images. My initial implementation was roughly on par with mkinitcpio: autodetect images were a bit faster, and full images a bit slower. No good. The silver bullet was, of course, thinking outside the box. mkinitcpio uses a utility from the kernel source tree called gen_init_cpio. It requires that you build a list of contents in a specific file format which is passed to this utility, which then poops out an image on stdout. The problem is this: inevitably, when tracing dependencies of modules or binaries, you find duplicates. You have to check for these and avoid adding them. Checking an external file for these things requires either iteration or grep. Both are slow. There’s another way. The Gentoo Wiki was a big help here in getting me the speed I was looking for. The tradeoff is that you end up using a few MB of disk space in /tmp during the process, but I think it’s worthwhile. It also means you’re unable to create devices for the initcpio as an unprivileged user. Not an issue for me.
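A sketch of that staging-directory approach (the paths and function names here are my own illustrations, not geninit’s): staging files in a throwaway tree makes duplicate detection a cheap filesystem test instead of a grep through a contents list, and the tree can then be packed directly into the initramfs format:

```shell
# Build the image contents in a temporary staging tree.
workdir=$(mktemp -d)

add_file() {
    # A plain -e test replaces grepping a gen_init_cpio contents list.
    [ -e "$workdir$1" ] && return 0   # duplicate: already staged
    install -D "$1" "$workdir$1"      # copy, creating parent directories
}

printf 'demo\n' > /tmp/geninit-demo.txt
add_file /tmp/geninit-demo.txt
add_file /tmp/geninit-demo.txt   # second call is a cheap no-op

# Pack the staged tree into a compressed newc cpio archive
# (the initramfs format), replacing gen_init_cpio entirely.
( cd "$workdir" && find . | cpio -o -H newc 2>/dev/null ) | gzip > /tmp/demo-initramfs.img
```

The /tmp usage and the inability to create device nodes unprivileged follow directly from staging real files on disk.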

On the modularity front: In addition to keeping the concept of “builders”, I split the code into two pieces — a main Bash script which lives at /usr/sbin/geninit, and /usr/share/geninit/geninit.api. The latter is a public API from which “builders” (known to mkinitcpio as “install” files) can draw to add files to the resultant image. The main file remains private API. Since it’s Bash, I can’t stop someone from using the “private” API calls, but it’s a friendly suggestion.

While I still consider geninit to be a bit of a beta, I’m very excited to see it come to life so quickly. Early adopters are very much appreciated to help iron out bugs. It’s, of course, on the AUR, and available directly from GitHub.

Happy booting!

Tuesday, April 26, 2011


I’m currently available for a new programming contract. I’m not very good at selling myself, so it’s lucky that my results speak for themselves! I’m educated in user interface design, exceptionally skilled in Python programming, and about as good a Django developer as you’re going to find. I tend to be drawn toward “new” development ideas (I’m not your best bet if you just want an e-commerce site set up); interesting tasks that haven’t been tackled before, or haven’t been solved adequately yet. I’m currently interested in developing mobile apps, not stuff for the Android or iPhone market, but rather, cross-platform, possibly offline-enabled, mobile web applications. However, I’ll likely be attracted to any off-the-wall projects that require innovative and unusual development.

Contact me if you’d like to discuss a potential contract.

Saturday, April 23, 2011

Eee Kernel resurrected

A few days ago I finally dusted off my eee.git repository and bumped the kernel version from to (since updated to .4). I don't know how long I'll continue to do this, but my Eee found new life as a music and file server at my office, so I felt it was appropriate to update the package.

I tried to find a new maintainer but that didn't work so well, much of it due to my lack of timely responses to volunteers. Perhaps someone will step up now that I did the "hard" work to bring it up to date.

Thanks For The Book!

A big thank you to the Arch user who bought me a book from my Amazon Wishlist. Hopefully this will get me past the steep learning curve for Autotools and let me actually understand the changes I make, rather than the “adjust and hope” approach I currently use…

Friday, April 15, 2011

nvidia-173xx and nvidia-96xx removed from [extra]

Gaetan Bisson wrote:

The nvidia-173xx and nvidia-96xx driver packages have been removed from our repositories as they are incompatible with newer xorg servers. This can only be fixed by an upstream update, which has not happened yet.

For most video cards, the best alternative should be xf86-video-nouveau; see:

As lower-grade options, you might also consider xf86-video-nv and xf86-video-vesa: simply remove the old nvidia driver(s), install these, and the xorg server will automatically pick the best at startup.

Tuesday, April 12, 2011

Customizing GNOME3

Now that GNOME3 has officially been released, more and more users are using it. The first thing everybody realizes is that there is no real tool to customize GNOME3 to your liking. Sure, there is gnome-tweak-tool, but it isn’t exactly full of features.

I have found an interesting blog by Finnbarr P. Murphy which I want to share with you:
He explains several topics about customizing the GNOME Shell and writing GNOME Shell extensions. Have a look at these great articles.

Thursday, April 07, 2011

GNOME3 in [testing]

Ionuț Mircea Bîru wrote:

GNOME 3.0.0 packages are now available in the [testing] repository. They bring with them an update to gtk2, as well as the new gtk3.

This is a major update and you should take note of a couple of things:

  • GNOME3 will replace GNOME2 once it gets moved to [extra].
  • GNOME3 has two modes, "standard" mode (gnome-shell) and "fallback" mode (gnome-panel + metacity).
  • Panel applets using Bonobo aren't supported anymore and packages depending on it will be dropped.
  • pulseaudio is now required to run the GNOME desktop.
  • Some packages exist in separate versions for gtk2 and gtk3. These typically have a name like "packagename3". Examples are vte3, libwebkit3, gtkhtml4.
  • pygobject is now available for Python 3 in the package "py3gobject".

Have fun testing these packages!

Update and installation instructions are available at

Bugs related to packaging should be reported to .

Crashes and feature requests should be reported to .

Tuesday, April 05, 2011

LILO to syslinux migration

I've been using LILO for a long time, since my first GNU/Linux installation. Some people have the idea that it's a dead project, but it's not; development is still active. It never failed to boot for me, until last night, when it failed to boot an XZ-compressed Linux v2.6.38.2. I would never give up LILO for GRUB, but I had been contemplating a switch to syslinux after watching the FOSDEM talk by Dieter Plaetinck last month.

I had no more excuses to delay it any longer, so I switched. All my machines have /boot as the first partition on the disk, they are flagged bootable, and use the ext2 file-system. Migrating to syslinux was simple:

# pacman -Syu syslinux
# /usr/sbin/syslinux-install_update -i -a -m

# cp -arp /etc/lilo.conf /etc/lilo.conf.bak
# pacman -Rns lilo

The syslinux-install_update options install syslinux (-i), mark the boot partition as active (-a), and install the MBR boot code (-m), where it replaces LILO. Then I wrote a configuration file: /boot/syslinux/syslinux.cfg. Actually, I wrote one in advance, because even though you might find it funny, it was a big deal for me to keep using my boot prompt as is. I've been using the LILO fingerprint menu for years; not being able to port it would have been a deal breaker. But it wasn't complex, actually.

If you want to use the Fingerprint theme, grab it from KDE-Look, convert the BMP to PNG format and save it as splash.png. I'll include my whole syslinux.cfg for reference, but the important bits for the theme are the menu and graphics sections. I didn't bother to tweak the ANSI settings as I don't intend to use them, so I copied them from the Arch Linux menu. The settings below will give you a pretty much identical look and feel (the shadows could use a bit more work), and also provide a nice bonus over LILO when editing the boot options (by pressing Tab, as in LILO):
# /boot/syslinux/syslinux.cfg
# General settings
UI vesamenu.c32

# Menu settings

# Graphical boot menu
# Fingerprint menu
#          element      ansi    f/ground  b/ground  shadow
MENU COLOR sel          7;37;40 #ffffffff #90000000 std
MENU COLOR unsel        37;44   #ff000000 #80000000 std
MENU COLOR timeout_msg  37;40   #00000000 #00000000 none
MENU COLOR timeout      1;37;40 #00000000 #00000000 none
MENU COLOR border       30;44   #00000000 #00000000 none
MENU COLOR title        1;36;44 #00000000 #00000000 none
MENU COLOR help         37;40   #00000000 #00000000 none
MENU COLOR msg07        37;40   #00000000 #00000000 none
MENU COLOR tabmsg       31;40   #00000000 #00000000 none

# Boot menu settings
LABEL linux
        MENU LABEL GNU/Linux
        LINUX ../vmlinuz26
        APPEND root=/dev/sda2 ro rootflags=data=ordered i915.modeset=1 video=VGA-1:1152x864 drm_kms_helper.poll=0
        INITRD ../kernel26.img

LABEL recovery
        MENU LABEL Recovery
        LINUX ../vmlinuz26
        APPEND root=/dev/sda2 ro rootflags=data=ordered
        INITRD ../kernel26-fallback.img

You can find the documentation and the menu options explained on the syslinux wiki. If you achieve an even more faithful copy, send me an e-mail with the new values. Thank you.

Monday, April 04, 2011

Eric is now Eric 5

This is a long-overdue renaming of 'eric5' to 'eric'. Both versions of the Eric IDE will be in [extra] as soon as they move from the staging and testing areas.

* extra/eric -> extra/eric4
* extra/eric-plugins -> extra/eric4-plugins

* community/eric5 -> extra/eric
* community/eric5-plugins -> extra/eric-plugins

Be sure to agree to any replacement or removal of conflicting packages. If you are using 'eric' now, you must remember to install 'eric4' manually once the packages hit the repository. For best results, use this command:

pacman -Syu eric4

Also, the eric* symlinks will point to the version 5 binaries; the symlinks will not exist in 'eric4'. Users running the program with desktop entries, i.e. menus, need not worry about this.

Saturday, April 02, 2011

Introducing play, a fork of cplay

I've been using cplay for close to ten years now. It is a curses front-end to various audio players/decoders, written in Python. Sure, I've been an Amarok fan for half that time, but when I just want to hear some music I find myself opening a terminal and starting cplay. I manage my music collection in Amarok, and I grab and listen to new podcasts in Amarok. Sometimes I even use it to play music, but not nearly as much as I do with cplay. I have 4 workstations at home, and they all do the same. Same thing with the server connected to the best set of speakers. Sure, I have a remote and Oxine there, but when I just want to hear some music I don't want to spend 5 minutes messing with the remote.

Through the years I added various small patches to my copy of cplay. They accumulated over time, and except for my color-support patch I didn't plan on sharing them. But in 2009 I found that the project page of cplay had disappeared. I spent a year thinking it would pop up again, but it didn't. Then I noticed that the Arch Linux package for cplay pulls the source from the Debian repositories, and realized it wasn't coming back.

I decided to publish my copy of cplay, so there exists yet another place where it's preserved. But as I'm not acting in any official role, nor do I consider myself a worthy enough coder to maintain cplay, I decided to fork it and publish it under a new name. That also gives me an excuse to drop anything I don't personally use, like gettext support. My project is called play, just play, and the Git repository is now public, on The first commit is an import of cplay-1.50pre7 from Ulf Betlehem, so if you're looking for that original copy you can grab it there.

Besides various bug fixes, some of the more interesting new features are color support, mplayer support, curses v5.8 support and pyid3lib support. Someone on IRC told me this week that they could never get cplay to work for them on Arch Linux, and they expressed interest in play. I decided to package it on the AUR, and it's now available as play-git.

Friday, April 01, 2011

The Canterbury Project

Pierre Schmitz wrote:

We are pleased to announce the birth of the Canterbury distribution. Canterbury is a merge of the efforts of the community distributions formerly known as Arch Linux, Debian, Gentoo, Grml and openSUSE to produce a really unified effort and be able to stand up in a combined effort against proprietary operating systems, to show off that the Free Software community is actually able to work together for a common goal instead of creating more diversity.

Canterbury will be as technologically simple as Arch, as stable as Debian, malleable as Gentoo, have a solid Live framework as Grml, and be as open minded as openSUSE.

Joining the Canterbury Project Arch Linux developer Pierre Schmitz explained: "Arch Linux has always been about keeping its technology as simple as possible. Combining efforts into one single distribution will dramatically reduce complexity for developers, users and of course upstream projects. Canterbury will be the next evolutionary step of Linux distributions."

Gerfried Fuchs, who gave a talk about Debian at last year's openSUSE conference, said: "While DEX (Debian Derivatives Exchange) might have been a good idea in principle, its point of view is too limited. We need to reach out further for true success."

Robin H. Johnson, lead of the Gentoo Infrastructure team, in a panel of core Gentoo developers at SCALE9x: "I really hate compiling-induced downtime. I've been looking forward to installing packages with just a couple of keystrokes. By building on the efforts of other successful distributions, we can take the drudgery out of system maintenance."

Michael Prokop, founder of the Grml live CD, can be quoted on the effort that "we managed to create a universal live build framework with grml-live. Our vision was always that it will be universally usable to further the spreading of Free Software."

Last year's openSUSE conference had the topic of "Collaboration Across Borders". Klaas Freitag, a respected member of the community, mentioned that "the conference motto was set intentional and actually this is what I had in mind as a positive outcome for the conference."

Stefano Zacchiroli, Debian Project Leader, comments on the Canterbury distribution: "during the last year, Debian has worked a lot on the topic of collaboration with other distributions. Some initiatives have been targeted to Debian Derivatives Distribution (e.g. the Derivatives Front Desk, DEX, etc.), but we have also been happy to participate in conferences and panels with other distributions such as openSUSE and Fedora. We are proud of our recent work on collaboration and we are now ready, with Canterbury, to push these initiatives to the next, natural, step: uniting together in the next generation community distribution. Canterbury shall live long and prosper."

Please be notified that this announce is just the starting point, the necessary changes will happen in the upcoming days. You can use the #cbproject hashtag to give us your feedback on twitter or

You can also use our forums to leave a comment.

The Canterbury Project

We are pleased to announce the birth of the Canterbury distribution. Canterbury is a merge of the efforts of the community distributions formerly known as Arch Linux, Debian, Gentoo, Grml and openSUSE to produce a really unified effort and be able to stand up in a combined effort against proprietary operating systems, to show off that the Free Software community is actually able to work together for a common goal instead of creating more diversity.

Canterbury will be as technologically simple as Arch, as stable as Debian, as malleable as Gentoo, have as solid a live framework as Grml, and be as open-minded as openSUSE.

Joining the Canterbury Project Arch Linux developer Pierre Schmitz explained: "Arch Linux has always been about keeping its technology as simple as possible. Combining efforts into one single distribution will dramatically reduce complexity for developers, users and of course upstream projects. Canterbury will be the next evolutionary step of Linux distributions."

Gerfried Fuchs, who gave a talk about Debian at last year's openSUSE conference, said: "While DEX (Debian Derivatives Exchange) might have been a good idea in principle, its point of view is too limited. We need to reach out further for true success."

Robin H. Johnson, lead of the Gentoo Infrastructure team, in a panel of core Gentoo developers at SCALE9x: "I really hate compiling-induced downtime. I've been looking forward to installing packages with just a couple of keystrokes. By building on the efforts of other successful distributions, we can take the drudgery out of system maintenance."
Michael Prokop, founder of the Grml live CD, can be quoted on the effort that "we managed to create a universal live build framework with grml-live. Our vision was always that it will be universally usable to further the spreading of Free Software."

Last year's openSUSE conference had the topic of "Collaboration Across Borders". Klaas Freitag, a respected member of the community, mentioned that "the conference motto was set intentional and actually this is what I had in mind as a positive outcome for the conference."

Stefano Zacchiroli, Debian Project Leader, comments on the Canterbury distribution: "during the last year, Debian has worked a lot on the topic of collaboration with other distributions. Some initiatives have been targeted to Debian Derivatives Distribution (e.g. the Derivatives Front Desk, DEX, etc.), but we have also been happy to participate in conferences and panels with other distributions such as openSUSE and Fedora. We are proud of our recent work on collaboration and we are now ready, with Canterbury, to push these initiatives to the next, natural, step: uniting together in the next generation community distribution. Canterbury shall live long and prosper."

Please note that this announcement is just the starting point; the necessary changes will happen in the upcoming days. You can use the #cbproject hashtag to give us your feedback on Twitter, or use our forums to leave a comment.

Arch Hurd Netbook Remix

After much discussion, we, the Arch Hurd developers, feel that, due to the success of Arch Hurd, a remix focused on netbook users is in the best interests of our community. We reviewed the typical usage of netbooks today and feel that, despite our lack of wireless, sound and SATA support (amongst other things), the superior design of the Hurd is worth the small hindrance some of our users may experience.

A graphical LiveCD, featuring GNOME 3.14-1, will be released shortly.

Thursday, March 31, 2011

srcpac 0.10(.1)

I'm pleased to announce srcpac 0.10(.1)!

This new release adds support for split packages, which was really needed, and does it in a nice way (I got the inspiration from pacman 3.5 groups). Here are some screenshots:

Installation of a split package:
...and after the build you'll get:

I have to say that in this version srcpac loses 2-5 seconds getting the PKGBUILD of a package passed as a parameter. This is because it sources every PKGBUILD in /var/abs/ to get the pkgbase. I don't know of another way to implement this (suggestions are welcome).
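To illustrate the lookup described above, here is a hypothetical sketch (not srcpac's actual code) of how sourcing each PKGBUILD in an ABS tree yields the pkgbase of a split package:

```shell
# Hypothetical sketch: walk an ABS tree, source every PKGBUILD until one
# whose pkgname array contains the target, and print its pkgbase.
# The function name and layout are illustrative only.
find_pkgbase() {
    local target=$1 root=${2:-/var/abs}
    local p result
    for p in "$root"/*/*/PKGBUILD; do
        [ -e "$p" ] || continue
        # Source in a subshell so one PKGBUILD's variables don't leak
        # into the next iteration.
        result=$(
            source "$p" 2>/dev/null
            for n in "${pkgname[@]}"; do
                if [ "$n" = "$target" ]; then
                    # Split packages set pkgbase; plain ones only pkgname.
                    printf '%s' "${pkgbase:-$pkgname}"
                    break
                fi
            done
        )
        if [ -n "$result" ]; then
            printf '%s\n' "$result"
            return 0
        fi
    done
    return 1
}
```

Sourcing thousands of shell files is exactly where those 2-5 seconds go, which is why a cached pkgbase index would be the obvious optimization.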

Please report any bugs or feature requests to


Wednesday, March 30, 2011

Slitaz Core 20110329 Release

Hello everyone

It's been a long time coming, but I'm finally going to release my SliTaz ISO. There are a lot of things added to this ISO that make it different. This is also the first mirror ISO. This ISO doesn't have sources or packages in it because they would take too long to upload or download. But it does have the local-mirror script and repos for making the SliTaz websites locally.

To turn local-mirror on, type 'local-mirror on' as root. By default it will use the localhost IP for the websites. You can go to to test if it's working, since that domain name doesn't exist on the net. That's also where the Linux From Scratch HTML book is; I figure it will help with building packages.

Running 'local-mirror off' will turn the local mirrors off. Make sure to exit your web browser before doing this, or you may still get the local mirror sites.

I also have tank-only and mirror-only options. You can also use these sites on a local LAN by changing the IP_ADDR variable to your network IP. There is also a ROUTER_IP variable, so you can change it if your router IP is different.

I have added dnsmasq support to the ISO for local-mirror. Running dnsmasq -d will let you see whether your computer is pushing the websites onto the local LAN correctly. You will have to add this to /etc/resolv.conf:

search slitaz.lan

domain slitaz.lan

nameserver $IP_ADDR

The $IP_ADDR values in /etc/resolv.conf and /etc/local-mirror.conf should be the same IP.
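As a quick sanity check for that constraint, a hypothetical helper (the function and its parameters are my own illustration, not part of the ISO) could compare the two files:

```shell
# Hypothetical helper: verify that the nameserver in resolv.conf matches
# IP_ADDR in local-mirror.conf. Paths are parameters so the check can be
# run against any pair of files.
check_mirror_ip() {
    local resolv=${1:-/etc/resolv.conf} conf=${2:-/etc/local-mirror.conf}
    local ns ip
    # First nameserver entry in resolv.conf.
    ns=$(awk '/^nameserver/ { print $2; exit }' "$resolv")
    # IP_ADDR assignment in local-mirror.conf.
    ip=$(sed -n 's/^IP_ADDR=//p' "$conf")
    # Fail if either value is missing or they disagree.
    [ -n "$ns" ] && [ "$ns" = "$ip" ]
}
```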

I hope this explains the new local-mirror script.

root password is root

tux password is tux (I think tux doesn't have a password anymore, but I'm noting it here in case it does.)

Here is the iso, md5 and packages list.

Happy Hacking!

Tuesday, March 29, 2011

Securing Debian repositories

I had to build some Debian packages recently after a long time. The experience really made me appreciate the simple approach taken by Arch Linux. When I finally built something up to Debian standards I had to distribute it on the network. Debian's APT can work over HTTP, FTP... but also SSH, which is pretty good for creating a more secure repository. There are a lot of articles covering this approach, but I didn't find any information on how to make it work in a chroot environment, and got stuck there for a while.

But to go back to the beginning: if you're setting up a repository, the reprepro tool is pretty good for repository management. With SSH in mind for later, some decisions have to be made about where you'll store the repository and who will own it. The "reprepro" user is as good as any, so we can start by adding the user (and group) to the system and making its home /srv/reprepro. Then we can set up the bare repository layout; in this example the repository will reside in the debs directory:

# su reprepro
$ cd ~
$ mkdir -p debs/conf

You'll need two files in the conf sub-directory: "distributions" and "options". They are simple to write, and the other articles explain them. You don't have to worry about the rest of the tree; once you import your first package or .changes file, reprepro will take care of it. If you intend to use GPG and sign your Release file, this is a good time to create your signing key. Then configure the SignWith option in the distributions file, and add ask-passphrase to the options file.
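For reference, a hypothetical minimal pair of files — every value here (origin, codename, architectures, key ID) is an example to adjust to your setup:

```
# conf/distributions -- hypothetical example values
Origin: example.org
Label: Example Repository
Codename: squeeze
Architectures: i386 amd64 source
Components: main
Description: Private package repository
SignWith: ABCDEF12

# conf/options -- hypothetical example values
verbose
ask-passphrase
```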

Now we can add another user, one for APT clients; its home can be the same as our reprepro user's. The user should be allowed to connect in the SSH daemon configuration file, and properly chrooted using the ChrootDirectory option. Here are the (only) binaries you will need in /srv/reprepro/bin: bash, find and dd. I mentioned getting stuck; well, this was it: APT uses the find and dd binaries internally, which strace revealed.
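A sketch of the corresponding sshd_config stanza — the user name and chroot path follow this article's example, so verify the details against your OpenSSH version:

```
# /etc/ssh/sshd_config (excerpt) -- example values
Match User apt
    ChrootDirectory /srv/reprepro
    AllowTcpForwarding no
    X11Forwarding no
```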

You can now publish your repository in /etc/apt/sources.list, for example:
deb ssh://apt@ squeeze main
deb-src ssh://apt@ squeeze main
That's the gist of it. The apt user gets read-only access to the repository, if coming from an approved host. You can control host access with TCP Wrappers, iptables, or even OpenSSH's own whitelist. Each APT client should have its key white-listed for access as well, but give some thought to key management; they don't have to be keys without a passphrase. You can set up the SSH agent on a machine you trust and unlock a key there. Using the agent forwarding provided by the OpenSSH client, you can log in to a machine and install packages without being prompted for the passphrase, and without leaving your keys lying around. This alone would not scale in production, but it is a good start.

Monday, March 28, 2011

Thank you Google!

Google, you get a lot of bad words thrown at you lately: "evil", "big brother", "dangerous", ...
But I just wanted to say: thank you.
You provide us with some nice services: Google Search, Gmail, Analytics, Google Maps, ...
All of these products are/were game changers and made the lives of people all over the world easier.
Many people take them for granted and don't realise what it takes to design, engineer and operate these applications.
That you provide them for free makes it even more amazing. And as far as advertisements are concerned: many business models rely on them, and I don't see that changing any time soon.
I'll take a personalized ad over a generic ad any day. The more you can optimize targeted ads on my screen, the more useful I'll find them.
Just don't overdo it, but you know that already.


Saturday, March 26, 2011

Dvcs-autosync: An open source dropbox clone... well.. almost

I found the Dvcs-autosync project on the vcs-home mailing list, which, by the way, is a great list for folks who are doing things like maintaining their home directory in a VCS.

Allan Vs. Wild

No, I have not been dropped off in the middle of nowhere and made my way back to civilization by jumping into every river and exploring every cave I come across. But I have finished reading the “SAS Survival Handbook, Revised Edition: For Any Climate, in Any Situation”, which was generously bought for me off my Amazon Wishlist by an Arch Linux user.

Am I now prepared to survive in any climate and in any situation? Not really! The lists of safe and poisonous plants have merged together in my head, so that is not particularly helpful to any future survival endeavour. But, if I ever get stuck in the Arctic and manage to kill a polar bear, I now know not to eat its liver due to potentially dangerous levels of vitamin A. That seems an important factoid to store away…

Friday, March 25, 2011

Last Week in Arch, March 15-21, 2011

What went on in the world of your favorite Linux distro last week:

Latest News

wicd split in ‘wicd’ and ‘wicd-gtk’ – announcing a split of the wicd package in the official repository, so you no longer have to use wicd-nogtk from the AUR

Hot Forum Topics

How to find file duplicates only matching size, not md5
Arch Package Signing issue getting big on Reddit
So, FFmpeg has finally blown itself to pieces…

Unanswered Forum Topics

juk music player help needed
Dual Monitor Hook-up, DVI/VGA/HDMI?
Looking for opinions on daemontools vs runit.
Transparent Squid Proxy question
few questions about alsa and vlc
konsole fails to start

Interesting Packages

lshw – A small tool to provide detailed information on the hardware configuration of the machine
lshw-gtk – GTK GUI for lshw
fall-of-imiryn-svn – A classic role-playing style game of three aspiring young warriors.
fwfstab – graphical file system table (/etc/fstab) editor
redditaddictlite – track your karma in real-time.
ryzom_client_open – Free to Play MMORPG, This version is for Testing on open core(dev) server
rmlint – Tool to remove duplicates and other lint, being much faster than fdupes
silicon-empire-git – Set of tools to manage and organize your optical discs like CDs, DVDs and Blu-rays.
golang-hg – A compiler toolchain from Google for the Go programming language
g4l – G4L (Ghost 4 Linux) is a hard disk and partition imaging and cloning tool.
storybook – Open Source Novel Writing Software for Novelists, Authors and Creative Writers

Wiki Changes

Touchatag RFID Reader‎ – new page for this RFID Reader
Firewalls (Svenska) – Translation of the English firewall page to Swedish
Browser Plugins‎ – few updates about troubleshooting
PolicyKit – created page for this privilege controller
Keymap (Português)‎ – Portuguese Keymap page created
Pam mount‎ – pam_mount installation
User Madek has been very busy creating a bunch of new Spanish wiki page translations, including Xcompmgr (Español), Xgl (Español), ATI (Español), NVIDIA (Español) and Improve Pacman Performance (Español).

Thursday, March 24, 2011

The real story behind Arch Linux package signing

This is going to be a long but hopefully informative read. I'd encourage you to sit down and put on your reading glasses, or better yet, go brew yourself a nice hot cup of coffee.

We've taken a lot of unjustified heat in just the last month or so regarding package signing, and I'd like to clear the air as well as debunk some awful "journalism" I encountered that provoked this blog post. Thank you, LWN, for providing the catalyst.

If you don't want the backstory, skip down to "The Forbidden Subject".


For those of you that are reading my blog for the first time, I (Dan McGee, aka toofishes) am the current lead developer of pacman, and have been in that role since roughly May of 2007, when I stepped in to fill the shoes of Aaron Griffin, who is now the Arch Linux "overlord" since Judd stepped down. I have been contributing work to the pacman codebase since late 2006, so this piece of software is not new to me by any means.

Currently I am assisted by another great developer and maintainer, Allan McRae. He stepped into a role of working primarily with makepkg around May 2008, but now commits and reviews code changes all over the codebase. Did I mention he is also in charge of the Arch toolchain as well as several other [core] packages? I'm not sure how he does it.

The Story

FS#5331 : Signed packages

I know it is a shocker, but pacman has a bug tracker. One of the 5 oldest bugs in there is FS#5331 : Signed packages. Opened in September 2006, it had no serious comments until July 2007. Once the comments started, no one produced any actual patches, code, or anything to proceed with any sort of plan. The bug sat relatively silent until March 2010.

The first patch

Step forward to 2008. On June 1, 2008, a day that will live in infamy, the very first patch dealing with package signing showed up on the pacman-dev mailing list.

If you browse down a few messages, after a few revisions, to where I said the patch looked good, you will find this gem of a quote. I hate to quote myself, but I think it proves a point that has been lost in the recent furor.

Other than that the patch looks fine, I've started putting these changes in a local branch that will end up in master soon enough. Looking forward to seeing perfect PGP support in pacman/libalpm!

We'll come back to that in due time.

Another thing to note is the issue that still persists to this day came up back then: why can't we just sign the database? It was answered, everyone accepted the answer (at the time), and we moved forward trying to ensure the entire problem was solved.

Follow-up patches

What happened next was typical of both pacman development and OSS development in general- the original contributor of this work sent a few more patches, stopped responding to requests to fix issues in the work, and left it in our laps. For the maintainer of a project, being dumped on like this is never a great thing, but at least here the work was in good enough shape to fix up and commit to a gpg branch for later use.

As has been the case nearly every single time a fuss is raised about this stuff, we were prepping the 3.2 release in July 2008 right as this was all happening, so the main developers couldn't work on it. But our original developer showed up just long enough to say he was still interested in finishing this work. Guess who we never heard from again?

The catch with these initial patches is they were doing the "easy" stuff. Signing a package as the last step of building is not very hard- it is a simple invocation of the gpg command line client. Adding this signature to a pacman database was not too hard either. But the patches from others stopped here, unfortunately. Looking at the authors of the follow-up patches that have since been committed on top of this original work, it is no surprise to see three names: mine, Allan McRae, and Xavier Chantry (another long time pacman contributor).

Until December 2008, the stage was silent. I had to speak up once a discussion started bikeshedding without producing any working code. There is a nice quotable bit in there from me ("First off- stop talking. Start coding."), but the important bit is that the ground rules were laid on what would be an acceptable end result. The link I continue to show people also came into play at this time: Attacks on Package Managers.

We got a few more patches and contributions in December 2008. I did a good bit of work in this time to integrate reading signatures into libalpm, get it under pactest, and all that (not so) fun stuff.

The dormant period

Once again, no one was working on package signing. It was brought up briefly at the end of June 2009, and some discussion happened, but once again no solid results were produced. People kept informing Allan and me that it wasn't clear where we were heading, so we pointed them at the wiki roadmap and asked them to help edit and clarify it. Apparently that is too much work for most people, as they seemed to fade away as quickly as they spoke up. Once again, the primary developers were wrapping up the 3.3 release, so we didn't have much free time on our hands.

You can imagine that at this point, a year down the road from the first patches, none of the primary pacman developers were very interested in implementing this themselves. Perhaps this is true, with the ironic twist that more than half of the patches on our long-lived gpg branch are from the three main contributors. I think the most truthful statement is that no one wanted to take the lead on this and finish it by themselves. At this point, the work was nearly where it stands today, as most of the additional work I merged in the last few days was simply bitrot cleanup (aside from pacman-key). However, nowhere have you seen any sense of "even if you produce good work and get things finished we won't take it" attitudes from Allan or me.

Xavier undertook a rebase and cleanup of the stale gpg branch in August 2009, merging in a few old patches from the mailing list.

And you guessed it- another silent period until April 2010.

Recent history

The thread that sums up the "all talk, no walk" part of this whole package signing thing started in April 2010. This is just the part that was on pacman-dev, but it started on arch-general, stretched into May, and accounts for 57 emails in one thread. The sad part? In the package signing work I pushed in the last few days, I see no patches that made it from this timeframe.

We finally got a contributor who stuck around: Denis A. Altoé Falqueto, from June 2010 until now. His contributions weren't huge or frequent, but he did write the pacman-key tool, which is now merged into master, and he attempted to keep package signing on the list of features moving forward.

This is the first time period where I would say we failed those that wanted to work on package signing. We weren't quick with responding to patches and giving feedback. Note that this holds true for all patches, not just these ones. I think all of us were quite busy and just didn't have the time or energy we had in the past. When we did work on pacman, we wanted to work on things that were fun, rather than slogging through patch review.

The Forbidden Subject

I couldn't help but steal the dramatic title. On February 18, 2011, Your Signature Please arrived in our mailbox. Keep in mind the following:

  • This is the poster's first email to the pacman-dev mailing list.
  • Mr. IgnorantGuru has decided not to share his real name with us.
  • Standard practice is not to post 1500+ word essays to the mailing list, especially as your first post.

I happened to be skiing in Colorado this day (Friday) and was gone the entire weekend. Do you think I was going to waste time reading a novel? Not a chance. Poor Allan for trying to do so, as he has now been thrown under the bus for being the naysayer, and his words twisted and changed in multiple forms of publication.

Some memorable quotes from this thread that quickly went the wrong way:

Allan: Have you actually looked at the current implementation at all?

IgnorantGuru: I read some discussions of it, but I have not looked at it. Frankly it interests me far less than having signatures available at this point.

An attempt to let cooler heads prevail:

On Sat, Feb 19, 2011 at 12:00 AM, IgnorantGuru wrote:

[.... lots of talk....]

Denis: There's no political problems here. It's just lack of manpower to make it. That's it. This is a standard open source community, there's just momentum when there is personal interest.

And a solo quote from Allan that is oft-repeated as him being anti-signing:

As I said, it really does not affect me. I use the master server for my repo db downloads and know exactly which package updates to expect given I see all commits to our svn repos. So the scope in which I could be attacked is very small and I am prepared to take that risk. So my priorities are clearly different to other peoples. The key difference is, I submit patches to implement what I consider a priority...

Chances of anyone going over to mailman and reading this entire thread are slim- I know this. But realize Allan had not only the first reply, but also the last email in the thread. If you read it, you can see why some of his emails were perhaps filled with anger. But he never dismissed anyone, or told them to fuck off, or said things about their mother, or let Godwin's Law prevail.

Once again, we were left hanging on the promise of patches from those raising the most trouble in this thread. They never showed up. Thankfully, at least Denis, mentioned earlier, proposed a few new patches.

The Media Blitz

From here, shit hit the fan. The Arch Linux forum moderation team got caught up in the scuffle. IgnorantGuru started his crusade with this post on an existing thread. They closed IgnorantGuru's forum rant post that looked like a blog post, which later did show up as a blog post. We were then the target of multiple sensationalist blog posts that he also tried to drum up on reddit.

Mr. IgnorantGuru filed this "flyspray", FS#23103, asking to add sha256sums to our package databases. A reasonable request that quickly turned into a war of words, but I attempted to straighten it out by telling him the standard patch submission rules we use for pacman. I was treated to this, to which I did not respond:

Are you willing to add it if I take the time to submit a patch, or are you just wasting my time? I ask because thus far I have met nothing but unwillingness, so please don't waste my time. I don't really see why a patch is necessary as it is a trivial addition, but if you want one I will be happy to provide it. Thank you.

Surprise- no patch showed up in my inbox or on the bug report. I in fact did exactly what the request asked for a few days later, noting that the changes were not trivial at a +12/-5 diff.

The Deal Breaker

I was willing to let all of this slide and fade into darkness as it normally does, until someone showed me the LWN article Arch Linux and (the lack of) package signing. This forced me to write this post as it is full of lies, lies, and more lies.

First, shame on you Nathan Willis, Jonathan Corbet, and LWN for allowing this to be published. This is not journalism- this is propaganda fueled by a rogue blogger who you've decided to let create a story where there isn't one. I'm going to address points in the article that are just flat out wrong.

The topic had come up before, but no one acted on it, and several of the core Arch developers dismissed the subject as an unimportant one that they were not willing to work on personally.

I challenge you to find any of us that said package signing is or was "unimportant", and that we are not willing to work to get it into the core of the package manager. The only sound byte latched onto here was the one I previously quoted from Allan- it wasn't important to him personally so he didn't feel obliged to devote time to it. This is also a good time to point back to my original quote from the very first patch we received.

We are also not paid for our work on Arch. I do not know a single core developer that gets paid to regularly hack on the distro- we are nothing like a Red Hat or Canonical. I can guarantee you that both Allan and I would go straight to work on package signing if we were getting paid to do so and guaranteed a long term job furthering Arch Linux.

A few, he said, did take the issue seriously and had submitted patches to Pacman, but core developers refused to act on them.

Please show me in my detailed history above where this happened. Since you cannot, I congratulate you on perpetuating rumors further.

McRae … sought out every discussion of the topic and tried to quash others' efforts to work on a solution.

Are you kidding me? Did you even read the mailing list thread he is referring to? You clearly did not or you would have observed what I did above, but doing research on a completely public conversation before publication must be optional these days.

In the second bug report, IgnorantGuru even suggested a lightweight solution that involved only signing the main server's package database … McRae countered …

So if your links in your article are right, you mean FS#23103. The funny thing is, this bug neither deals with signatures (only checksums) nor has a single comment from the aforementioned Allan McRae. Busted.

(Editor's Note: I gave LWN 12 hours to respond to this before I made it public. They have since fixed the article where it said "second" to read "first", so the above doesn't apply directly anymore. For the first bug, FS#23101, the article is correct in saying "IgnorantGuru even suggested a lightweight solution". However, suggestions don't produce working software, code produces working software, and not a single piece of code was provided on this bug report.)

He then describes patches sent by himself and other Arch contributors, and what McRae and other core developers did to prevent merging them. This second post covers similar ground as its predecessor, but the comment thread provides even more detail, as McRae eventually joins in.

Show me one patch he has ever sent us- just one. You won't find anything.

Next, show us exactly what we did to prevent merging them. You won't find anything.

Finally, you further your baseless libel of Allan by referring to this "second post", but providing no link so your readers can independently verify the allegations.

(Editor's Note: LWN corrected me here a bit, saying I may have misread the wording of this paragraph. The linked post and the "second post" are in fact one and the same- IgnorantGuru's blog post. Finally, the "comment thread" refers to the comments on his blog, which I also did not understand. I never would have expected a magazine article to cite comments on a blog post as authoritative.)

That of course is the second level at which the core developers' resistance is troubling: the fact that they would prevent security patches from going into the project.

Please continue the fear mongering and baseless allegations. Allan made one point- that this wasn't very important to him- and it is now interpreted as a blockade against all attempts to introduce signing.

Congratulations, LWN, for dropping to a new low. You won't be seeing my money anytime soon any more than the rags in the grocery store checkout do.

Where are we now

With pacman 3.5 out the door and 3.6 in development, package signing is not falling out of the spotlight. Instead, three different merges plus additional follow-up commits have already taken place of the code that in some cases is 2.75 years old.

Still, no one has stepped up in the last two days to tackle items from the Package Signing TODO list. I foresee Allan and I slogging through this with hopefully a little help from Denis, Xavier, and our newest regular contributor Dave Reisner. It will get done, but it all takes time since we are only volunteers.

The “python2″ PEP

When Arch Linux switched its /usr/bin/python from python-2.x to python-3.x, it caused a little controversy… There were rumours that it had been decided upstream that /usr/bin/python would always point at a python-2.x install (although which version that should be was unclear). Although these rumours were abundant, and so such a discussion more than likely did occur (probably offline at PyCon 2009), the decision was never documented. Also, whether such a decision can formally be made off the main development list is debatable.

Enter PEP 394. Depending on how I am feeling, I call this the “justify Arch’s treatment of python” PEP or the “make Debian include a python2 symlink” PEP. Either way, the basic outcome is:

  • python2 will refer to some version of Python 2.x
  • python3 will refer to some version of Python 3.x
  • python should refer to the same target as python2 but may refer to python3 on some bleeding edge distributions

The PEP is still labeled as a draft, but all discussion is over as far as I can tell, and I think it will probably be accepted without much further modification. The upshot is that using “#!/usr/bin/env python2” and “#!/usr/bin/env python3” in your code will become the gold standard (unless of course your code can run on both python-2.x and python-3.x). There is still no guarantee which version of python-2.x or python-3.x you will get, but it is better than nothing…
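A small, hedged example of the convention: declare the major version in the shebang, and optionally guard at runtime so a mis-pointed symlink fails with a clear message instead of a cryptic SyntaxError. The require_python helper is my own illustration, not part of the PEP:

```python
#!/usr/bin/env python3
# Example of the versioned-shebang convention described above.
# The runtime guard below is an optional belt-and-braces addition.
import sys

def require_python(major):
    """Exit with a clear message if running under the wrong major version."""
    if sys.version_info[0] != major:
        sys.exit("This script requires Python %d.x, got %d.%d"
                 % (major, sys.version_info[0], sys.version_info[1]))

require_python(3)
```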

One recommendation made by the PEP is that all distribution packages use the python2/python3 convention. That means the packages containing python-3.x code in Arch should have their shebangs changed to point at python3 rather than python. Given our experience doing the same thing with python2, this should not be too hard to achieve and is something that we should do once the PEP is out of draft stage. This has a couple of advantages. Firstly, we will likely get more success with upstream developers preparing their software to have a versioned python in their shebangs (or at least change all of them when installing with PYTHON=python2 ...). That would remove many sed lines from our PKGBUILDs. Secondly, if all packages only use python2 or python3, then the only use of the /usr/bin/python symlink would be interactively. That would mean that a system administrator could potentially change that symlink to point at any version of python that they wished.
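To make the "sed lines in our PKGBUILDs" concrete, here is a hypothetical example of the kind of shebang rewrite PEP 394 would make unnecessary (the function is illustrative, not from any actual PKGBUILD):

```shell
# Hypothetical PKGBUILD-style fixup: rewrite an unversioned python
# shebang on line 1 to python2 in each given file.
fix_shebangs() {
    sed -i '1s|^#!/usr/bin/env python$|#!/usr/bin/env python2|' "$@"
}
```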

Wednesday, March 23, 2011

Firefox context menu switcharoo

Awesome, Firefox 4 is out! But please, god, explain how things like this get past quality control. Firefox 3.6 is first, then Firefox 4.0:

[Screenshots: the link context menu in Firefox 3.6 and in Firefox 4.0]

I don't know about you, but I have muscle memory, and I have opened at least 10 links in windows today instead of tabs because of this change. And yes, I would normally just middle-click but this is on a laptop touchpad with no middle button, so I have always gone to the menu.

I should probably file a bug for this (edit: bug filed), but who knows if it is a big enough deal for anyone to care about...

Time flies when working on a mirror iso

As the title suggests, I've been working on a mirror ISO. I have been doing this ISO and also working on the SliTaz wok for the past month and a half. Sorry for not replying lately.

This mirror ISO for SliTaz is going to be a mirror of tank for the local LAN. The idea is an offline resource for developers who may have low bandwidth caps. It will also allow a local SliTaz repo, packages and source mirror for the local LAN, and maybe PXE booting if done right.

Most of my problem is that I can't get the address out to the local LAN at the moment. I may have to use BIND's named daemon to get this done. Info on running your own DNS is here.

I would also like to add lots of documentation packages to the SliTaz repos, so that I can perhaps turn them into local LAN websites. So far I have done this with the Linux From Scratch HTML docs for SliTaz.

I hope this clarifies what I am working on. I plan to release a small SliTaz mirror ISO soon so you can see what it can do.

Tuesday, March 22, 2011

Music metadata visualization in Python

I was looking at my music collection this morning and a seemingly simple question came to mind- what bitrate is most of my MP3 music?

Of course, this simple question descended into a Python script that did much more than the original question, producing the following result, which plots the bitrates vs. ID3-embedded song year in a nice little visualization:

MP3 Stats

4913 songs went into this picture. There are several interesting things to note.

  • The left-most line is not my huge collection of music from 1965- instead, this is all the music that was not tagged with a year, which is why that column stands out from the rest.
  • The sizes of the boxes do not matter, only the colors do.
  • No surprise here: the hottest (i.e., most files) point on the plot is unknown year at 128 kbps. Old music files never seem to die! It is good to see several 192 kbps and 160 kbps files there as well, however.
  • Looking across the 128 kbps row, you can see the rise and fall of the Napster era and its effect on my music collection. I think my usage of it peaked in 2001.
  • As Napster faded, my collection began to move toward 192 kbps bitrate instead.
  • 320 kbps seems to be the new fad, succeeding 192 kbps and 128 kbps as the preferred bitrate. Of course, VBR files are scattered among the different buckets, so it is hard to tell where they play into the picture.
  • A lot of my classic rock collection is ripped from CDs in FLAC format, so it is not on the chart. However, the older songs (1970-1985) that are here seem to show up in the 256 kbps row.

Here is the script:

Required Python modules for this script are mutagen (MP3 processing), numpy (histogram generation), and matplotlib (visualization). Let me know what you think and if you make any improvements!
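The embedded script has not survived here, but the core binning step might have looked something like this minimal sketch. To be clear, the real script used mutagen, numpy, and matplotlib; the function names and bucket choices below are my own illustration of the idea, kept dependency-free:

```python
from collections import Counter

# Common MP3 bitrate buckets (kbps); a VBR file lands in whichever bucket
# its average bitrate is closest to, which is why VBR smears the plot.
BITRATE_BUCKETS = [128, 160, 192, 224, 256, 320]

def nearest_bucket(bitrate_kbps):
    """Snap a measured bitrate to the closest standard bucket."""
    return min(BITRATE_BUCKETS, key=lambda b: abs(b - bitrate_kbps))

def histogram(songs):
    """songs: iterable of (year_or_None, bitrate_kbps) pairs.

    Returns a Counter keyed by (year, bucket); untagged years collapse
    into a single 'unknown' column, like the left-most line in the plot."""
    counts = Counter()
    for year, bitrate in songs:
        counts[(year or "unknown", nearest_bucket(bitrate))] += 1
    return counts
```

Feeding (year, bitrate) pairs extracted with mutagen into histogram() and handing the counts to matplotlib would reproduce a plot like the one above.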

Deshaking videos with Linux

This blog post is mostly a note to myself, so that I don't forget how to deshake a video if I ever need to again. After reading several blog posts and forum threads about deshaking videos, I have found the solution. Most Linux distributions already ship it, without most of their users knowing.

I’m speaking about “transcode”. If you don’t have transcode installed, install it:

pacman -S transcode

transcode includes a video stabilizer; you just have to know how to use it. You can update just the stabilizer by downloading a new version here and overwriting the existing files, but I don't recommend it. If you do want to update, you must download the binary version.

Now we will start to deshake our video. Open a terminal, change to your directory with your videos and let us start:

  1. transcode must analyze our video, so start the command:

    transcode -J stabilize -i yourmovie.avi

    If transcode complains about an unsupported format, then try the following:

    transcode -J stabilize --mplayer_probe -i yourmovie.avi
    (this will use mplayer to decode the file, which should be able to decode everything :) )

  2. The next step will stabilize your video. You have several options here, have a look at the plugins project page to find out what is possible.

    transcode -J transform --mplayer_probe -i yourmovie.avi -y raw -o yourstabilizedmovie.avi

    (Here is an example video from the project page. This will produce a large new video file, because I have chosen "raw" output. If you want another output format, specify it after -y, e.g. "-y xvid4". Alternatively, you can pipe the output to ffmpeg; see the transcode manual for how to do that.)

    The result will be zoomed so you don’t see how transcode moved the image to deshake it. If you want to see it, then try the following command:

    transcode -J transform=crop=1:optzoom=0 --mplayer_probe -i yourmovie.avi -y raw -o yourstabilizedmovie.avi

    This will end up with a video and a black border around it which is moving around. Here is an example video from the project page.


That’s all. Now you have a stabilized video. You can tweak parameters like shakiness, zoom factor and so on, but the default values already produce good results.
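The two-pass procedure above can be wrapped in a small script. This is just a sketch that assembles the same command lines described in the steps, without running anything (file names are placeholders; pass each list to subprocess.run in order to execute the passes):

```python
def stabilize_commands(infile, outfile, mplayer_probe=True, show_motion=False):
    """Build the two transcode invocations: the analyze pass and the
    transform (render) pass, mirroring the steps described above."""
    probe = ["--mplayer_probe"] if mplayer_probe else []
    # Pass 1: analyze the shaking and write motion data.
    analyze = ["transcode", "-J", "stabilize"] + probe + ["-i", infile]
    # crop=1:optzoom=0 leaves the moving black border visible instead of zooming.
    transform = "transform=crop=1:optzoom=0" if show_motion else "transform"
    # Pass 2: apply the transform and write the stabilized output.
    render = (["transcode", "-J", transform] + probe +
              ["-i", infile, "-y", "raw", "-o", outfile])
    return analyze, render
```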

You can find the deshake plugin project page for transcode here:
All parameters and options are described there. If you want to output the result to ffmpeg you have to read the wiki or manual of transcode, you can find the project page here:

This is an example from the project page; it is amazing how well it stabilizes the original source:

Monday, March 21, 2011

I Have Important Things To Say!

And if they are less than 140 characters long, you will find them on my freshly created account. So everyone rush there and subscribe so I can regale you with my witty banter…

Sunday, March 20, 2011

wicd split in 'wicd' and 'wicd-gtk'

As requested in a feature request (FS#22550), and as a solution to one bug (FS#22423), I have split the wicd package into two separate packages. From now on there is

wicd:         everything you need to run the wicd daemon and the wicd-cli and wicd-curses interfaces
wicd-gtk:   everything you need to run the GTK interface of wicd, plus the autostart file for the client to appear in the systray

So if you want the GTK interface back you have to install wicd-gtk manually after you have updated the wicd package.

- Daniel

Friday, March 18, 2011

When you use a Django query keyword as a field name

I need to model a location in the Alberta Township System coordinate space. The model is extremely simple:

class Location(models.Model):
    project = models.ForeignKey(Project)
    lsd = models.PositiveIntegerField(null=True, blank=True)
    section = models.PositiveIntegerField(null=True, blank=True)
    township = models.PositiveIntegerField(null=True, blank=True)
    range = models.PositiveIntegerField(null=True, blank=True)
    meridian = models.PositiveIntegerField(null=True, blank=True)

There’s a rather subtle problem with this model that only came up months after I originally defined it. When querying the foreign key model via a join on location, having a field named range causes Django to choke:

>>> Project.objects.filter(location__range=5)
Traceback (most recent call last):
  File "", line 1, in
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/", line 141, in filter
    return self.get_query_set().filter(*args, **kwargs)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/", line 556, in filter
    return self._filter_or_exclude(False, *args, **kwargs)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/", line 574, in _filter_or_exclude
    clone.query.add_q(Q(*args, **kwargs))
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/", line 1152, in add_q
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/", line 1092, in add_filter
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/", line 67, in add
    value = obj.prepare(lookup_type, value)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/", line 316, in prepare
    return self.field.get_prep_lookup(lookup_type, value)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/fields/", line 136, in get_prep_lookup
    return [self._pk_trace(v, 'get_prep_lookup', lookup_type) for v in value]
TypeError: 'int' object is not iterable

That’s a pretty exotic looking error from Django’s internals, but it didn’t take long to figure out that location__range makes Django think I want to use the built-in range field lookup rather than the field I defined in the model. I expect a similar problem would arise if I had a field named “in”, “gt”, or “exact”, for example.

The solution is simple enough, but didn’t occur to me until searching Google and the Django documentation, and ultimately scouring the Django source code failed to yield any clues. If you ever encounter this problem, simply explicitly specify an exact lookup:

>>> Project.objects.filter(location__range__exact=5)
[<Project: abc>, <Project: def>]
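Another way out, if you would rather not write `__exact` at every call site, is to rename the field in Python while keeping the existing database column via Django’s `db_column` option (a real model-field option; the `range_` name here is my own choice, and this sketch assumes the same `Project` model as above):

```python
from django.db import models

class Location(models.Model):
    project = models.ForeignKey(Project)
    # Renamed in Python so it no longer collides with the 'range' lookup;
    # db_column keeps the database schema untouched, so no migration of
    # existing data is needed.
    range_ = models.PositiveIntegerField(null=True, blank=True,
                                         db_column="range")
```

Queries then use `location__range_=5`, which no longer shadows a lookup keyword.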

Thursday, March 17, 2011

Last Week in Arch Linux – March 7-14, 2011

Latest News

syslinux now includes a default configuration file

Hot Forum Topics

Linux Survey – fill out a survey, get some data
In search of a truly innovative Desktop – interesting discussion on what makes a good desktop
flash isn’t working anymore – long discussion about broken Flash. Stupid Flash.
[Solved] Gnome 2.32 “About me” problems – problem with gnome-about-me solved
Sound not working for USB Headset – the always interesting “solving an audio problem” topic

Interesting Packages

oilrush – a real-time strategy game based on group control
python2-wikipedia-rewrite-svn – A rewrite of the Python Wikipedia Robot Framework
theide-svn – Modern IDE designed for developing large U++/C++ applications
apvlv – A PDF Viewer which behaves like Vim
blobwars – Platform action game featuring a variety of different weaponry and multiple objectives
cherrytree – A hierarchical note taking application featuring rich text and syntax highlighting
fatrat – QT4 based download manager with support for HTTP, FTP, SFTP, BitTorrent, rapidshare and more

Wiki Changes

Bash – PROMPT instructions, esp. vis-a-vis color
Lotus Notes in 32bit Chroot – installing Lotus Notes(!)
Pacman GUI Frontends – Wakka added
Help:Editing (Italiano)‎ – ArchWiki tutorial page for Italian added
Getting Involved (Italiano) – getting involved in Arch, Italian-style, added

Someone else did the pacman 3.5.0 blog post

Thanks Allan! I usually try to write up something on my blog, but he covered nearly all of what I wanted to say. So go read his post, and say thanks to all your friendly neighborhood pacman developers for their work. Especially when you upgrade and have those "wow, this is a LOT faster" thoughts.

Pacman 3.5.0 Released

It is time for another major pacman release. Here is a brief overview of the new features:

The feature that will be immediately noticed on the pacman upgrade is the change of database format. This was a step towards reducing the large number of small files pacman had to read, which was a major cause of performance issues (particularly on systems with slow hard-drives). Two major changes occurred: the sync database became a single file per repo and the local database had some of its files merged. The sync databases are now read directly from the database (a compressed tarball) that is downloaded from the mirrors. No extraction means no fragmentation of the database across the filesystem. The “depends” and “desc” files in the local package database were merged into one file, as there was actually little point in them being separate. This results in approximately 30% fewer files to read for the local database on an average system. A script (pacman-db-upgrade) is provided to perform this database upgrade, and pacman will abort if a database in the old format is found. Any scripts that read directly from the database will need to be updated to deal with these new formats. Or better yet, they could be written to use libalpm, which would make them robust to future changes (the local database format could be improved further). Combine the database changes with other speed enhancements (improved internal storage of package caches, faster pkgname/depends searches) and this pacman release is notably faster.

Until now, a great way to break your system during an update was to run out of disk space. Pacman now attempts to avoid this in two ways. Firstly, it will (optionally) calculate the amount of disk space needed to perform the update/install and check that your partitions have enough room. Doing this calculation is actually fairly involved and I’m sure we will encounter some case of a filesystem and platform combination that we have not tested where this calculation is not correct… I know for certain that it does not work in chroots. The “solution” in these cases is to disable this check in pacman.conf and make a bug report with all the details needed to replicate the issue (except the chroot case). As a second line of defence for disk space issues, pacman will report any extraction error it encounters and attempt to stop installation on the important ones.

A much missed feature in pacman-3.4 was the ability to select which packages you wanted to install from a group. Well, that is back and better than ever! Additionally, the selection dialog is also extended to package provisions, allowing the user to select which provider package they want installed rather than pacman just installing the first one it found.

A feature that will primarily affect packagers is the removal of the “force” option, which resulted in packages being installed from the repo even if the version was not considered newer by pacman. This was useful for packages with weird versioning schemes (is that “a” for alpha or the first patch level?), but it caused strange update behaviour for those who had built themselves newer versions of a package locally. It has been replaced by an optional “epoch” value in the package version – so a “complete” package version looks like epoch:pkgver-pkgrel. If present, the value of the epoch field takes precedence over the rest of the package version.
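To make the precedence rule concrete, here is a toy sketch of how an epoch-aware comparison behaves. This is not pacman’s actual vercmp implementation; in particular, the remainder is compared as a plain string here, whereas real vercmp does a proper alphanumeric comparison:

```python
def split_epoch(version):
    """Split 'epoch:pkgver-pkgrel' into (epoch, rest); a missing epoch is 0."""
    if ":" in version:
        epoch, rest = version.split(":", 1)
        return int(epoch), rest
    return 0, version

def newer(a, b):
    """True when version a should replace version b.

    Epochs dominate: any epoch beats a higher epoch-less version, which is
    exactly how a packager forces an 'older-looking' version through."""
    ea, ra = split_epoch(a)
    eb, rb = split_epoch(b)
    if ea != eb:
        return ea > eb
    return ra > rb  # toy comparison only; real vercmp is smarter
```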

The main addition to makepkg is the ability to run a check() function between build() and package(). This optional function is useful for running package test suites (or even better, not running them in the early builds when bootstrapping a package). Other changes include the removal of STRIP_DIRS (now all files are stripped by default), adding a buildflags option to disable CFLAGS etc, and allowing the use of $pkgname in split package functions.

For a more complete list of changes in pacman-3.5, see the NEWS and README files in the source code.

Sunday, March 13, 2011

syslinux now includes a default configuration file

Thomas Bächler wrote:

The new syslinux package (4.03-4) now includes a default configuration file at /boot/syslinux/syslinux.cfg. If you get a file conflict during update, run:

mv /boot/syslinux/syslinux.cfg /boot/syslinux/syslinux.cfg.sav
pacman -S syslinux
mv /boot/syslinux/syslinux.cfg.sav /boot/syslinux/syslinux.cfg

Friday, March 11, 2011

Last Week in Arch Linux – March 1-8, 2011

Latest News

ArchServer RC3 – Release announcement for ArchServer redgum RC3
New Arch Schwag – New Arch Linux laptop bags, plus reduced price on older models and a sale on Arch Linux pens


Monday, March 07, 2011

Let's make the world a better place. Let's stop the abuse of SI prefixes

This has been on my mind for a while. But now I actually took some time to launch a little project to do something about it.
1024 just ain't 1000. stop abusing SI prefixes!
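In concrete terms: decimal (SI) prefixes step by 1000 (kB, MB, GB), while binary (IEC) prefixes step by 1024 (KiB, MiB, GiB), and the labels should not be mixed. A small sketch of correctly-labelled formatting (the helper name is my own):

```python
def human(nbytes, binary=False):
    """Format a byte count with correctly-labelled prefixes:
    kB/MB/GB step by 1000 (SI), KiB/MiB/GiB step by 1024 (IEC)."""
    step = 1024 if binary else 1000
    units = ["KiB", "MiB", "GiB", "TiB"] if binary else ["kB", "MB", "GB", "TB"]
    value, unit = float(nbytes), "B"
    for u in units:
        if value < step:
            break  # already below the next prefix boundary
        value /= step
        unit = u
    return "%.1f %s" % (value, unit)
```

So 1000 bytes is "1.0 kB", while the same 1024 bytes that a mislabelled tool calls "1 kB" is properly "1.0 KiB".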

Saturday, March 05, 2011

Why rewriting git history? And why should commits be in imperative present tense?

There are tons of articles describing how you can rewrite history with git, but they do not answer "why should I do it?". A similar question is "what are the tradeoffs / how do I apply this in my distributed workflow?".
Also, git developers strongly encourage (command, even) you to write commit messages in the imperative present tense, but do not say why. So, why?
I'll try to answer these to the best of my abilities, largely based on how I see things. I won't get too detailed (there are enough manuals and tutorials for the exact concepts and commands).

::Read more

Friday, March 04, 2011

There Will Be Dragons!

Finally, George R. R. Martin has announced a release date for A Dance With Dragons! Tuesday, July 12, 2011. Yay! What is even more impressive is that Amazon knows how many pages it will be even though the author has not finished the book yet.

Now I have to rack my brain back four or five years and try to remember what happened in the last book of his A Song of Ice and Fire series….

Wednesday, March 02, 2011


Today’s Ctrl+Alt+Del comic has brought it all flooding back to me… I can accept that perhaps “after midnight” ends at sunrise the next day, but it never seemed biologically plausible that such an occurrence was so oddly specific to a given time. Do the Mogwai’s bodies have some internal clock that detects midnight with great accuracy? Can it correct for daylight saving time? What if I feed a Mogwai in one timezone after 11pm and then take it to the adjacent timezone, where that feeding now occurred past midnight?

Planet Arch Linux

Planet Arch Linux is a window into the world, work and lives of Arch Linux hackers and developers.

Last updated on May 22, 2011 07:13 PM. All times are normalized to UTC time.



Arch Planet Worldwide

Other Arch Linux communities around the world.

brain0 maintains a google earth map showing where in the world arch users live. Add yourself!


Brought to you by the Planet aggregator, cron, and Python. Layout inspired by Planet Gnome. CSS tweaking and rewrite thanks to Charles Mauch

Planet Arch Linux is edited by Andrea Scarpino. Please mail him if you have a question or would like your blog added to the feed.