Sunday, January 24, 2010

Paktahn 0.8.3 released

As a precursor to 0.9, which will contain much-desired features like AUR system upgrades, we decided to release 0.8.3, which contains a bunch of bug fixes:

  • Version comparison no longer fails on provider packages (#8)
  • Reinstallation works properly again (#7)
  • Trying to install or get PKGBUILDs for non-existent packages is handled correctly (#5 and #6)
  • The user is asked about malformed PKGBUILD dependencies (#12)
  • Proxy support works correctly now (#15, reported by nitralime)
  • The cache is updated after package removal (proposed by Ralith)
  • Non-Unicode strings are handled more gracefully (#9, reported by zajca)

Syncing up

Use the quickinstall script or sync your Paktahn repo to get it.

Fosdem 2010

I'll be at FOSDEM (the Free and Open Source Software Developers' European Meeting) again this year, for its 10th edition.

I'll be presenting a lightning talk about uzbl.
Also, Arch Linux guys Roman, JGC, Thomas and I will hang out at the distro miniconf. We might join the infrastructure round-table panel, but there is no concrete information yet.

More stuff I'm looking forward to:

I'm surprised myself at how many more topics of interest there are for me than last year, and I'm not sure the program is even finished.

Saturday, January 23, 2010

Bug Day: Saturday 2/6

Paul Mattal wrote:
By popular demand, the next Bug Day will be on Saturday 2/6. People are usually around all day, but you will certainly find us in the #archlinux-bugs IRC channel in the afternoon and evening EST. There's a job for everyone! Come help out however you can.

Friday, January 22, 2010

Death of Arch Bounty

Last September I posted about Arch Bounty, a project I’d written to allow people to post ‘bounties’ to have specific ArchLinux bugs fixed. I didn’t promote it and interest faded quickly. I’ve been thinking of pulling the plug on it for a while now, but it happened unexpectedly yesterday when I accidentally killed half a dozen of the sites on my webhost. I’ve recovered most of them, but I decided that ArchBounty won’t be coming back.

There was one donation to the project; I will be forwarding it directly to the Arch Linux Donations fund.


A Python 3 Powered Blog

Last week, I posted my intent to port the Tornado web framework to Python 3. Not only have I done that (sort of), but I’ve hacked it to pieces; soon it will be unrecognizable to the original developers, and possibly, to me!

It didn’t take long to get the Hello World example included with Tornado running. The blog demo took quite a bit more work, but I now have the template, auth, and httpclient modules working, along with the core modules for a working async server. I was able to log into my example blog with my Google account, compose some entries, and view and edit them, including the feed.

That doesn’t sound like much if you’re coding blog software for an existing framework (10 minutes in django, probably about 12 in web.py). But if you’re coding a web framework for existing blog software, it’s an accomplishment. I’m proud to have a “working” framework (even though it’s full of bugs and working just means “those things I’ve fixed”) for Python 3 in such a short amount of time.

I’ve named the project psyclone, a play on ‘tornado’ -> ‘cyclone’, and the fact that Python projects oughta have a ‘p’ and ‘y’ in them somewhere. The code is on github for all to play with. Patches welcome! :-)

I’m having a lot of fun with this project, so it’s taking more of my time than I ought to be devoting to it… on the positive side, it’s progressing rapidly!

My plans:

  • Go over the existing code and improve some of the rather messy unicode/str -> str/unicode hacks I had to make to get it working.
  • Write some tests. (The Tornado team seems not to value tests.) I’ll use py.test and may need to write a test client.
  • Write a session framework and an auth framework; the current auth framework uses OpenID only, but I’d like a more local solution to be available as well.
  • Consider writing an ORM. Likely, I’ll discard this idea, arguing that Judd was right to design a framework with SQL only. The truth behind the argument will be laziness, of course.

Thursday, January 21, 2010

About libjpeg/libpng rebuilds

Hi,

For those of you who follow the arch-dev-public mailing list, this message should be ignored, as it is meant for those who plan to update from [testing] now.

Don’t file bug reports, since the rebuild process is not finished.

As a side note, most users are using other cairo packages, like cairo-lcd (I do) and other popular patches. Be sure to rebuild yours and reinstall gtk2 after installing the package, and don’t submit bugs to our bugtracker.


[kde-unstable] KDE SC 4.4RC2

They released it, we packaged it!

Yesterday KDE developers tagged the 2nd Release Candidate and today we released the Arch Linux packages into [kde-unstable]…nearly there, 4.4.0 will be out soon!

The x86_64 packages are already on [kde-unstable] whereas I am still uploading i686 packages, they will be in the mirrors tonight. ;)

The packages are built with libpng 1.4.0 and libjpeg 8, so you can update easily without any problems.

As ever, please report any bugs. Thanks!

Hacklab.CL's Arch Linux Orphan's Day

Aaron Griffin wrote:
This weekend on January 23rd, Hacklab.CL is organizing an event called Arch Linux Orphan's Day. This event will consist of talks and presentations about Arch Linux packaging, covering the Packaging Guidelines, the AUR, complementary tools, and much more. The event concludes with a review of the AUR orphan packages and, hopefully, some new maintainers. This will happen at "KernelHouse" (Antonia lopez de bello 157-A), in Santiago de Chile.

Where is my new kernel, Slicehost?

A while back, I posted about a long-needed Slicehost kernel upgrade from a 2.6.24 to a 2.6.31 build. Since I run Arch Linux on my slice, it is a lot easier on the maintenance if I am running a relatively up to date kernel. Given that 2.6.24 was released in January 2008, it was definitely time for something more recent.

Fast forward two and a half months. I've been running this 2.6.31-302-rs build of theirs the entire time, and I was a very early adopter. Given this, I found two non-trivial issues with the build and gave their support team a heads up. The first was an issue I noted in my earlier post regarding iptables and the recent connection tracking module changing names and thus no longer being included.

Ticket 14486, November 1, 2009

I upgraded to the 2.6.31 kernel and noticed my iptables rules didn't get loaded correctly. It looks like it is due to the xt_recent kernel module not being available. In 2.6.24 (through I think 2.6.27), it was known as ipt_recent, which is probably why it got omitted in the new build. Can we get it back?

dmcgee@toofishes /etc
$ locate xt_recent.ko

dmcgee@toofishes /etc
$ locate ipt_recent.ko
/lib/modules/2.6.24-24-xen/kernel/net/ipv4/netfilter/ipt_recent.ko

And the "offending" rules in case you were curious:

$ cat /etc/iptables/iptables.rules | grep recent
#-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 300 --hitcount 10 -j DROP
#-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set

The second ticket was an issue I found soon after. Although my slice now has 321 MB of memory, it also has a ton more overhead than it used to due to a bunch of unnecessary kernel-level processes.

Ticket 14489, November 2, 2009

Another note about the new 2.6.31 kernel- it ends up having a ton more overhead because it looks like JFS and XFS are built-in, and all of their kernel processes get started even if one isn't using either of these filesystems. Because they aren't modules I can't unload them and free up some memory.

Example 1: Process count spike, look at the thin line and the spike at the end of October

Example 2: Memory usage jump, note the application memory and swap in use

dmcgee@toofishes ~
$ ps -eLf | grep -E 'jfs|xfs'
root        42     2    42  0    1 Nov01 ?        00:00:00 [jfsIO]
root        43     2    43  0    1 Nov01 ?        00:00:00 [jfsCommit]
root        44     2    44  0    1 Nov01 ?        00:00:00 [jfsCommit]
root        45     2    45  0    1 Nov01 ?        00:00:00 [jfsCommit]
root        46     2    46  0    1 Nov01 ?        00:00:00 [jfsCommit]
root        47     2    47  0    1 Nov01 ?        00:00:00 [jfsSync]
root        48     2    48  0    1 Nov01 ?        00:00:00 [xfs_mru_cache]
root        49     2    49  0    1 Nov01 ?        00:00:00 [xfslogd/0]
root        50     2    50  0    1 Nov01 ?        00:00:00 [xfslogd/1]
root        51     2    51  0    1 Nov01 ?        00:00:00 [xfslogd/2]
root        52     2    52  0    1 Nov01 ?        00:00:00 [xfslogd/3]
root        53     2    53  0    1 Nov01 ?        00:00:00 [xfsdatad/0]
root        54     2    54  0    1 Nov01 ?        00:00:00 [xfsdatad/1]
root        55     2    55  0    1 Nov01 ?        00:00:00 [xfsdatad/2]
root        56     2    56  0    1 Nov01 ?        00:00:00 [xfsdatad/3]
root        57     2    57  0    1 Nov01 ?        00:00:00 [xfsconvertd/0]
root        58     2    58  0    1 Nov01 ?        00:00:00 [xfsconvertd/1]
root        59     2    59  0    1 Nov01 ?        00:00:00 [xfsconvertd/2]
root        60     2    60  0    1 Nov01 ?        00:00:00 [xfsconvertd/3]

I got very quick responses to both of these tickets. Both were along the lines of "thanks for letting us know, I'll get this over to our kernel guys". I got the impression there would be a new kernel in a week's time or so, especially given this more detailed follow-up response I received a few days later.

Slicehost Support, November 5, 2009

Hello Dan,

Thank you for your messages about the new kernel. We're working on a new kernel now that will take up much less resources and also provide the same amount of iptables functionality as the previous kernels. The new kernel is currently being tested in our testing environment and we hope to have it released very soon.

As soon as it is available, we will post an update on http://slicehost.com/blog/ and it will be available for you in the SliceManager. We apologize for the issues in the current kernel and we are working to fix the problems as quickly as possible.

Please let us know if we can assist you further at any time.

So what has happened since? Nothing. I sent them another follow-up email a month later asking about the status of the new kernel. I got a response telling me they are "working through some compatibility issues" and "we are in the final stages of the last round of testing". Wow, final stages of the last round of testing? That sounds good; they really do some good regression testing.

Only problem? This amazing new kernel is still nowhere to be seen. Slicehost, your goodwill meter is shrinking faster than you know right now. It has been nearly 50 days since that conversation and I still see no new kernel available. Will this blog post help? I have no idea, but know that your customers are trying really hard to stay loyal while Linode is looking awfully good right now.

Wednesday, January 20, 2010

Tuesday, January 19, 2010

Arch Linux Pens now available!

Hey All,

I'd like to announce the arrival of Arch Linux Pens to the Schwag shop. These are nicer than I imagined, very opulent! I'm very happy with them and hope you will be too.

Pens are deep blue and gold with "Arch Linux" and "www.archlinux.org" engraved on them. They have a soft black grip and fine ball-point black ink. They are available for $5 individually, or as low as $3.50 in bulk.

Order them now from http://schwag.archlinux.ca/product/pen/

As always, thank you for your support!

Dusty

-- posted by Dusty


Monday, January 18, 2010

The Utility Of Python Coroutines

Coroutines are a mysterious aspect of the Python programming language that many programmers don’t understand. When they first came out I thought, “Cool, now you can send values into generators to reset the sequence… when would I use that?” The examples in most books and tutorials are academic and unhelpful.

Last year, I attended David Beazley’s course A Curious Course On Coroutines along with a fellow Archer. We agreed that it was an exceptionally interesting course (Beazley built an OS scheduler in Python with just a minimal amount of code: how cool is that?), but that we didn’t see any practical application of it in our regular work.

Yesterday, I started working with the Tornado code to port it to Python 3. Tornado uses an async framework; I hate async because I hate working with code like this:

def somemethod(self):
    # 
    self.stream.read_until("\r\n\r\n", self.callback)
 
def callback(self, content):
    # handle content read from the stream

I understand the utility of this code; while the stream is being read, the app can take care of other stuff, like accepting new connections. You get high-speed concurrency without the overhead of threads or the confusion of the GIL. When the read is complete, it calls the callback function. It makes perfect sense, but when you read code with a lot of such callbacks, you’re constantly trying to figure out where the code went next.

In my mind, the above code is really saying:

def somemethod(self):
    # 
    self.stream.read_until("\r\n\r\n")
    # give up the CPU to let other stuff happen
    # but let me know as soon as the stream has finished reading
    # handle content read from the stream

I find this paradigm much easier to read; everything I want to do surrounding content is in one place. After pondering different ways to write a language in which this was possible, it hit me that this is what coroutines are for, and it’s possible in my preferred language.

Because coroutines use generator syntax, I thought they had something to do with iterators. They don’t, really. The above code can be written like so:

def somemethod(self):
    # 
    self.stream.read_until("\r\n\r\n")
    content = (yield)
    # handle the content

The calling code would create the generator with g = somemethod(), call g.next() to start it, and eventually, when content is available, call g.send(content) to drive it.

A generator compiles to an object with an iterator interface. The coroutine above (sort of, but not really, at all) compiles to a function with a callback interface (you could say it is an iterator over callbacks). You can use yield multiple times in one method to receive more data (or to send it; put the value on the right side of yield, like in a generator).

The mainloop that called this code would still be at least as complicated to read as it is using a callback syntax, but the objects on the async loop are now much easier to read.
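To make the send/next protocol concrete, here is a tiny runnable sketch of the pattern (hypothetical names, not Tornado code): the driver primes the coroutine, pretends the I/O completed, and pushes the result in with send().

def handler():
    # Pretend we've asked the stream for everything up to "\r\n\r\n"...
    print("waiting for headers")
    content = (yield)    # suspend until the driver sends the data in
    print("handling:", content)

gen = handler()
next(gen)                # prime: run up to the first yield
try:
    gen.send("GET / HTTP/1.1")   # the "read" finished; resume after the yield
except StopIteration:
    pass                 # the coroutine ran to completion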

This paradigm has been implemented in the Diesel web framework. I’ve looked at it before and thought it was an extremely bizarre way to design a web framework. I still do, but now I understand what their goals were. If you’ve ever struggled with the, “why would I ever use this?” question when it comes to coroutines, now you understand too.

I have no immediate plans to rewrite my tornado port using coroutines, but maybe someday if I’m bored, I’ll give it a try.


wxHaskell packaged for Arch

wxHaskell, the venerable portable and native GUI library for Haskell, is now packaged for Arch in the following packages: haskell-wxcore and haskell-wx, which you can install with yaourt:

yaourt --aur -S haskell-wxcore haskell-wx

It is already used by a number of graphical Haskell programs and libraries: haskell-wxfruit, a very high level GUI library for Haskell (with examples); wxasteroids, an implementation of asteroids; lostcities, a [...]

Python 3 Web Framework

I got it in my head this weekend that it was about time someone wrote a web framework for Python 3. My head is kind of stubborn about these things, so I asked it some questions:

Does the world need another web framework?
Do I need another web framework?
Do I have time to do this?

The answers were all “no.” Still, I’m planning to go ahead with it until I get bored. Then the project can sit and collect dust with all my others.

A bit of discussion with The Cactus led to a few ideas:

I discovered that QP is apparently the “first Python-3 enabled web framework.” I didn’t try it, so I was perhaps unfair in discarding it, but it doesn’t look… suitable.

I looked around some more, and found that CherryPy is about to release a Python 3 enabled version. I’m sure that will spawn a whole slew of Python 3 frameworks built around CherryPy. I considered such a plan (I’d call it ChokeCherryPy, based on a recipe my mom devised): create some kind of templating system based on str.format, some session support, and some kind of database module wrapped around py-postgresql. Could be fun. But I’d end up with a mess of third-party technologies much like TurboGears, and that would be embarrassing; plus I’m sure the TG team already has people working on this.

Then I came back to my original plan, which was to port either Tornado or web.py to Python 3. Tornado looks like a smaller codebase (easier to port) and I’ve never used it before, so it’s also a chance to learn something new. So today I forked Tornado on github and ran 2to3 on it. I’ve already got the “hello world” demo running; it wasn’t too hard once I figured out the difference between bytes and strings. At least, I think I did that part correctly.
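To illustrate the kind of thing I mean (my own sketch, not Tornado code): in Python 3 the socket layer hands you bytes while the higher layers want str, so every boundary needs an explicit decode or encode.

data = b"GET / HTTP/1.1\r\n"       # what a socket hands you in Python 3
line = data.decode("latin1")       # decode before doing string work
assert line.startswith("GET")
reply = "HTTP/1.1 200 OK\r\n".encode("latin1")   # encode before writing back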

The project is named psyclone, a little play on the ‘destructive weather patterns’ genre. I was close to p3clone, but it’s too hard to convince people it should be pronounced ‘cyclone’.

This isn’t a project I expect to go anywhere; django will be ported to Python 3 soon enough, and other frameworks will be popping up all over. But I’ve been working with Python 3 a lot lately, and I thought it was time to tackle the ‘scary’ job of porting an existing app. It’s tedious, but not difficult.


Sunday, January 17, 2010

Kahel OS – A Review Without Booting

More and more distros are appearing that are based on Arch Linux, with some so heavily “based” that they actually use Arch packages. This is fun for me as it means that I can now break multiple distros in one go, bringing “Allan broke it” to a whole new level.

One such distro is Kahel OS. Breaking through the market-speak on their website, it basically claims to be a newbie distro that has all the features a guru expects. It comes in Server, Desktop and Light Editions. I decided to try the Desktop Edition using the installer released on 2009-12-25. I installed using QEMU as I do not have a spare partition at the moment.

The install CD boots to a horrific orange screen [01]. After selecting the “install” option, you are greeted with a bunch of kernel bootup text [02], followed by an Arch style boot process [03]. No graphical boot for this distro, so newbie friendly is a bit dicey already. Once booted, you are presented with a screen explaining why Kahel OS is good [04]. I suppose that was in case all that boot text was scaring us away.

Then we are actually installing. The installer is what I call ascii-graphical [05], although it reverts you to text-based screens as needed [06] (from that screenshot, you might notice that the answers are not necessarily intuitive…). Partitioning is done in cfdisk [07], followed by reselecting what type of filesystem you really want [08]. I decided on a single partition taking up the whole 4GB image I created and selected Btrfs, both for something new and because support for new filesystems is one of Kahel’s claimed features. I found it a bit strange that there was no warning about this filesystem still being experimental, but after some searching I found one hidden away on another TTY [09].

The “Install Packages” step goes straight to output from pacman [10], so there is no option to customize your install. The default install uses 3GB of space [11]. The package list is certainly interesting… It installs the entire base, base-devel, xorg, xorg-video-drivers, gnome and gnome-extra groups. These are supplemented with a variety of other software including banshee, brasero, gnote, firefox, go-openoffice, xsane, and lots of fonts. I do not understand the use of gnote over Tomboy given that mono is already installed for banshee. The SVN version of gtkpacman is installed for graphical package management. Other software choices are plain strange, such as libgpod, which is not required by anything else and is fairly useless on its own.

Finally, the installer takes you through some basic setup [12]. This distinguishes three types of users; root, administrators and normal. An “administrator” appears to have been given permissions to perform a variety of tasks via policy-kit.

Once you are done, you can reboot into your nice preconfigured desktop… but I could not. Those of you paying attention earlier would have noticed that I chose to have a single partition using Btrfs. Of course, grub can not boot from that, so that is a fail on my part. But a newbie friendly distro should have stopped me from doing that.

So, here is what I found different from Arch Linux without actually booting the system. There are a couple of extra repos enabled in pacman.conf. The listed Kahel OS repo does not exist yet. I did find a link to another Kahel repo, but it was empty. As a non-working repo breaks gtkpacman, package management is broken out of the box. Also, the archlinuxfr repo is present but disabled, probably just so you can easily install yaourt.

Several packages are novel to Kahel OS. These are mainly for automatic configuration of the desktop and fonts as well as providing nice icons. The developers need to learn about makepkg.conf as they have not set their PACKAGER variable. Also, something strange is happening with their kahel-desktop-base-configurations package. It has 22 files, but “pacman -Qk” shows that 11 of them are missing from the system, so some installer magic has occurred. Not a great use of package management…

Overall, I am not sure what this distribution hopes to achieve. It seems that it wants to provide a fully functional desktop after install, and maybe it achieved that (I can not comment). But the installer is far from what is considered user-friendly, to the point that I do not think someone could achieve an install using it and not be able to do so with the Arch installer. Looking at screenshots on their home page, I can not see a major improvement graphically over a standard GNOME install. From all their “release announcements”, I am not sure that they know what they are trying to achieve either.

As an aside, of the 704 packages installed by Kahel OS, I built 80 (11%). So there is a lot of scope for me to cause breakage for unsuspecting Kahel OS users!

Screenshot index:

[01] – Bootscreen with lots of orange.
[02] – Boot text
[03] – Familiar boot-up from Arch
[04] – Market-speak
[05] – Ascii-graphical installer
[06] – Configuring timezone
[07] – Partitioning disk
[08] – Selecting filesystem type
[09] – Hidden Btrfs warning
[10] – Installing packages
[11] – 3GB installed
[12] – Set-up

Saturday, January 16, 2010

Copyright Dichotomy

In the so-called “copyright wars,” we see a spectrum having the MPAA, RIAA, Jack Valenti, and “all rights reserved” on one side, with the Pirate Parties, Pirate Bay, Rick Falkvinge, and “no rights reserved” on the other side. In the middle, we have Creative Commons, Lawrence Lessig, and “some rights reserved”.

I’d like to momentarily expand this line to one that places “no rights reserved” in the middle, in a way that shifts Lessig closer to Valenti, and opens up a whole new area of creative exploration beyond the pirates, who are no longer extremists.

First, a disclaimer: I don’t claim to have any answers. I don’t even believe what I’m suggesting is the right path. I am simply suggesting an idea that frames a long-standing and long-term discussion in a different light.

The spectrum above defines the opposite of a right as “the absence of a right.” This only goes halfway. The opposite of a right is a responsibility.

Imagine, for a moment, a society where there is no such thing as “the right to my creation,” but there is a massive “responsibility to create.” In this society, people would have free access to all the materials of the world, all the patents, blueprints, and software, all the films, songs, and books, all the photos, paintings, and sketches the world has ever seen. In exchange for this free access, individuals would be required (responsible) to create a certain amount of new material every year. Some of this material would be innovative and fresh, some would be a new presentation of old stories and ideas, some of it would be interpretations of those old stories in new media. We’d see new designs for existing products, we’d see new products that merge old technologies. We’d see Android phones with iPhone gestures, and we’d see Mickey Mouse saving Princess Peach from the evil Bowser the Hedgehog.

Such a world may excite some, bore others, and scare many. Would these same people be less excited, bored, or scared by the Pirate Party? by Creative Commons? Maybe those deals aren’t so bad after all (to those demanding rights)… or maybe they aren’t so good (to the promoters of creativity).

This responsibility to create idea seems radical in the context of entertainment media, but it is not new. It’s a long-standing scientific tradition, best encompassed by Newton’s overused quote about giants. Academics have “free” access to the entire compendium of academic knowledge; in exchange for this access, they are expected (responsible) to generate new ideas and innovations. Some are good and some are bad, but if a scientist neglects to publish a few new papers a year, they fade from the academic community.

This idea is also an unofficial motivator in open source communities. Within the Arch Linux community, my home, I’ve made some effort recently to verbalize this norm. The story goes thus: Arch Linux has had contributions from many thousands of users. Each of us that uses the distribution is somehow indebted to all those other users. Further, we can never, as individuals, pay off the debt in its entirety. Even the well-known user with 8000 posts on the forum, thousands of package updates to his name, and dozens of Arch Linux tools under his belt has contributed but a drop in the bucket compared to the efforts of the entire community. And Aaron is aware of this debt. So should we all be.

Yes, in the academic and open source worlds, the implied responsibility to create is known to work. Creativity in both worlds spreads more quickly than anywhere else. Compare this to the communities creating works whose sole purpose is entertainment. Even the liberated Jamendo is mired way over in the (Some) Rights Reserved end of the scale.


Friday, January 15, 2010

SimpleHTTPServer in Python 3

If you’ve been doing any testing of client code that uses urllib or httplib, you probably know about this command:

python -m SimpleHTTPServer

This starts a very simple server in the current working directory; it serves all files from that directory, and is, quite simply, the quickest way to get something set up if you want to test some kind of web parsing or client code. (It’s also handy if you want to fire up a server to easily share files from your hard drive for a few minutes).

SimpleHTTPServer has been merged with BaseHTTPServer into the http.server module in Python 3. I couldn’t easily find documentation for the new command, and ended up writing the following simple code:

from http.server import HTTPServer, SimpleHTTPRequestHandler
 
httpd = HTTPServer(('127.0.0.1', 8000), SimpleHTTPRequestHandler)
httpd.serve_forever()

Then I did a bit more digging around and realized that this command does what the old one did.

python3 -m http.server

My code performs a little differently (it only serves on the localhost interface), but if anyone is looking for the old SimpleHTTPServer command line, there you have it.

The http.server module is normally meant to be used as a base for creating more complicated server environments (see your favourite web framework, for example), but the fact that it can be executed directly has a great deal of utility as well.
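As a minimal illustration (my own sketch, not from the documentation), a custom handler can be plugged into the same server class:

from http.server import HTTPServer, BaseHTTPRequestHandler

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a fixed plain-text response instead of files.
        body = b"hello from http.server"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('127.0.0.1', 8000), HelloHandler).serve_forever()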

By the way, if you didn’t know about SimpleHTTPServer, you might also be interested in the built-in smtpd server as well. I use this command frequently:

python -m smtpd -n -c DebuggingServer localhost:2525

This runs a simple smtp server on the given interface and port, and outputs all mail sent to that port to the console. It is very useful for testing and debugging web-based send-mail forms and such. You can, of course, run a standard smtpd server by not passing the -c DebuggingServer.
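To exercise it, a quick sketch like this (the addresses are made up) will make the message show up on the DebuggingServer’s console:

import smtplib

msg = ("From: test@example.com\r\nTo: dev@example.com\r\n"
       "Subject: hello\r\n\r\nIt works.")
smtp = smtplib.SMTP('localhost', 2525)
smtp.sendmail('test@example.com', ['dev@example.com'], msg)
smtp.quit()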


Thursday, January 14, 2010

Archiso-live 20100114 Release

Changes since last release:

  • Updated the kernel to 2.6.31.11. I’m still using the 2.6.31 series because my wifi is unstable with the newer 2.6.32.3 kernel.
  • Updated the initramfs to use busybox instead of klibc.
  • Removed the boot splash.
  • Fixed the installer to work even when ~/.gvfs is mounted.

Everything is up to date as of 3:00 PM EST [...]

GMail as default KDE mail client

It’s simpler and faster than you think:

  • Open System Settings -> Default Applications -> Email Client
  • Check “Use a different email client”
  • type the following: /usr/bin/rekonq https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=%t&su=%s&%u

Where instead of /usr/bin/rekonq, you must type the path to your favorite browser. %t is the recipient, %s is the subject and %u is the full mailto URL (needed if you want to add the body too).

Kheers!

Wednesday, January 13, 2010

Fixing Git Bash Completion

I didn’t know until yesterday about the __git_ps1 command. You can include it in your bash PS1 like this:

PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '

and whenever you’re in a git directory, it will include the current branch in your prompt, along with a few other goodies.

I did this and it didn’t work. It just displayed __git_ps1 in my prompt all the time, which is ugly and not terribly useful.

I couldn’t find an answer on Google, so I ended up just disabling lines in my .bashrc until I could figure out what was wrong. I ended up having to disable this line:

shopt -u promptvars

I don’t know why it was on; perhaps I had a reason for it once and then copied the bashrc from computer to computer, but it’s gone now and my git bash prompt works.

So if you’ve recently heard about __git_ps1 and it’s not working for you, look for the promptvars shopt.


Arch Linux Magazine, January 2010 - Discussion

Fellow Archers! Come one, come all!

As promised, here is the first release of Arch Linux Magazine for the new year! Unfortunately, issues with a lack of contributions (blame the holidays) delayed the release a little, and issues with permissions on the server post-Kensai have delayed it even more. However, I finally decided that you have waited long enough and I am releasing this issue of ALM from my website (at least until we get the permissions issue straightened out).

It's been entirely too long since an issue of ALM has been published, but I can't thank you all enough for your continued support and inquiries. Without the backing of the entire community we never could have come this far.

My sincerest thanks to those of you who somehow managed to find the time to submit work for this issue despite the holidays, and my thanks to those of you who have contributed in the past and will continue to do so.

Without further ado... The long-awaited link to the first issue of ALM for 2010!

http://ghost1227.com/newsletter/index.html

-- posted by Ghost1227

Tuesday, January 12, 2010

"I say, beware of all enterprises that require new clothes, and not rather a new wearer of clothes."


- Walden, eerily summarizing a certain “book about email” that will be published 156 years hence. (via merlin)

OSNews Arch Linux Team Interview

It's already gone across the Arch Planet feed from a few other people's blogs, but an interview with the Arch Linux Team was published today. I think it is a good read because it wasn't just Aaron speaking for us; instead anyone on the dev team that wanted to contribute answers was more than welcome to. I have a few well-placed answers in there, but I am glad that they left it mostly unedited from what we submitted.

Anyway, enjoy the interview!

Monday, January 11, 2010

OS News Interview

OS News has published an interview with the Arch Linux team. It's full of insightful comments from a fair portion of the developers (including me!).

Arch Linux interview and Uzbl article

Apologies for only informing you about the second article now. I assumed most of you follow LWN (you probably should) or found the article anyway.
Of all the articles written about uzbl, none came close to the quality of Koen's work. So even though it's a bit dated, it's still worth a read.

Arch Hurd?

I have always been interested in the GNU Hurd. This probably stems from the endless discussions on Slashdot about how (in my interpretation) microkernels should be full of awesome, but none have really managed to obtain the greatness that they deserve. I always thought the status of the Hurd was so far from being useful that there was no point in looking into it further. However, I recently read the Hurd status page and there was a picture of a GUI, doing useful spreadsheet-type stuff.

My interest was piqued… Combining that with the joys of building a cross-compiler for an operating system or architecture you do not actually have access to (yes, I am a sad, sad person) and you get a Hurd cross-compiler. I built a few packages and even managed to get (a slightly patched) pacman built. Then, having wasted much time, I moved on.

Several months passed and there was a post on the Arch forums, with someone trying to compile a GNU operating system for themselves. I mentioned my previous endeavours and, somewhat surprisingly, others seemed interested in the possibility of making a Hurd distro. Well, Arch users are a weird bunch…

And so, Arch Hurd was born. There is a website, so there is no stopping now! The current status is a bunch of scripts that create a quite up-to-date cross-compiling toolchain (glibc-2.10.1, binutils-2.19.1 and gcc-4.4.2), which can be used to build the GNU Mach kernel, the Hurd, coreutils and bash (the latter two being more updated than the versions in Arch!). That is not far from a minimally bootable (but completely useless) system. Then we can all bask in the microkernally goodness.

[kde-unstable] KDE SC 4.4 RC1

Ok, here we are. The first official KDE 4.4 packages for Arch Linux are ready. We will move the packages to [extra] only when 4.4 stable is released. So, if you want to try the new KDE you need to enable the [kde-unstable] repository. Run the update with these steps:

  1. Add [kde-unstable] to pacman.conf (above [testing])
  2. Use the following section:
    [kde-unstable]
    Include = /etc/pacman.d/mirrorlist

  3. Exit KDE
  4. Stop KDM
  5. Backup your ~/.kde4 directory
  6. Run pacman -Sy qt
  7. Run pacman -Su
  8. Start KDM
  9. Report any bugs!

Arch Linux changes:

  • Removed the Sesame backend from Soprano; we now use the Virtuoso backend, which avoids the Java dependency
  • Nepomuk enabled by default
  • Qt-Phonon replaced by Phonon from kdesupport
  • New packages:
    • kdeedu-cantor
    • kdeedu-rocs
    • kdegames-granatier
    • kdegames-kigo
    • kdegames-papeli
    • kdepim-akonadiconsole
    • kdepim-blogilo
    • kdeplasma-addons-applets-blackboard
    • kdeplasma-addons-applets-kimpanel
    • kdeplasma-addons-applets-knowledgebase
    • kdeplasma-addons-applets-opendesktop-activities
    • kdeplasma-addons-applets-plasmaboard
    • kdeplasma-addons-applets-qalculate
    • kdeplasma-addons-applets-spellcheck
    • kdeplasma-addons-applets-webslice
    • kdeplasma-addons-runners-audioplayercontrol
    • kdeplasma-addons-runners-kopete
    • kdeplasma-addons-runners-mediawiki
    • phonon-gstreamer
  • Packages removed:
    • kdelibs-experimental replaced by kdelibs
    • kdeaccessibility-kttsd
    • kdepim-kpilot
    • kdeutils-kdessh
    • kdewebdev-kxsldbg
  • Fixed dependencies

Note: [19/01/2010] This repo will be unusable until KDE 4.4 RC2. DON’T USE IT.

Sunday, January 10, 2010

Paktahn 0.8.2 released

Merry Christmas everyone!

It must be Christmas, of course, since Paktahn 0.8.2 is now out as promised! ;D

As often happens in software development, it’s a little later than originally expected, but there’s a lot of good stuff that has made it into this release.

Highlights

  • Fixed the arch=(any) case (reported by magus)
  • Proper error reporting and restarts when AUR results cannot be fetched (Brit)
  • Paktahn now remembers which PKGBUILD files it already presented for review (Brit)
  • Paktahn now has proper customizepkg support for AUR packages and will automatically build packages with customizepkg definitions from source (Brit)
  • Support for just getting a PKGBUILD (i.e. yaourt -G) with pak -G pkgnames (Brit)
  • makepkg’s PKGDEST variable is detected and used correctly (reported by Stefan Husmann)
  • AUR package dependencies are no longer installed explicitly (reported by bram85)
  • Basic proxy support (no authentication) (Brit)
  • Basic versioning support (Wei Hu, Leslie)

Also, we no longer depend on a custom version of SBCL!

Thanks to Jürgen Hötzel, Wei Hu and of course my colleague Brit Butler for their help with this release.

Syncing up

Use the quickinstall script or sync your Paktahn repo to get it.

Hiking

Last Thursday I went hiking with a friend on a two-day trip with a bivouac at a castle ruin. The text is in German, but you can watch the short video and look at the photos.

It’s really great to do something totally different than sitting the whole time in front of my computer…

Saturday, January 09, 2010

Arch Linux Laptop Stickers now Available from Arch Schwag

Due to frequent requests, I finally got some laptop stickers printed to supplement the ever-popular Arch Linux case badges. They were a little late to arrive, but I have them in stock now, and preorders will be shipping soon (unless you also ordered case badges, in which case, I have to wait for the new stock to arrive too). The new stickers claim "Powered by Arch Linux" and remind us to keep it simple. They are the same size and material as standard laptop stickers from commercial vendors and chip manufacturers. The printing quality is superb.

Stickers are selling for $1.50 each, or $1 if you buy ten or more. I had to do a large minimum order, so they're priced to sell! :-)

http://schwag.archlinux.ca/product/laptopsticker/

I apologize for the blurry photographs; perhaps somebody can send me some nicer photographs when they receive them (please).

Enjoy!
Dusty

-- posted by Dusty

Intellectually Dispossessed

Ursula K. Le Guin is, or had been, one of my favourite authors. In 1974, she published an excellent thought experiment, set in a science fiction setting, titled, “The Dispossessed.” The book discusses a group of people who built a culture and society around the idea of non-possession; nothing belonged to anyone. People lived in whichever house was vacant, people worked together to feed and shelter themselves. Their language did not include concepts of “my” or “mine,” and their children were raised by the community at large.

In some ways, “The Dispossessed” picks up where Richard Stallman’s short story, The Right To Read, published 23 years later, left off. The similarity is striking, yet the current stance of the two authors is startlingly different.

“The Dispossessed” was a masterpiece, yet it is only one of several books Le Guin has written that seem to support cultures of freedom and creativity. I always believed this author was one who supported freedom and creativity.

Apparently, her works are fiction after all.

In December 2009, Ursula K. Le Guin resigned from the Authors Guild due to their settlement with Google on their book scanning policies.

I question how a woman who so clearly understood and documented the benefits of “dispossession” could now be in favour of intellectual property and copyrights. How could she write such an innovative novel, one that she apparently believed in, and yet, now that the world she describes is within reach, she fights it?

Yes, the culture described in Le Guin’s 1970s-era book is similar to a culture the open source and creative commons movements are now so effectively living. Her dream, nearly forty years later, is now becoming a reality.

I’m not sure what has changed in the decades since The Dispossessed was originally written and published, but I would like Ms. Le Guin to reconsider her stance, to study these new movements. Please, ask Lawrence Lessig to explain his views. Most importantly, I sincerely encourage her to publish her next work under a creative commons license. I think she’ll find that she will profit, rather than lose, from such a venture.


Thursday, January 07, 2010

A Reluctant Evaluation Of Google Wave

I’ve been using Google Wave quite a bit since I first got my preview account last fall. I was not as caught up in the initial hype as some people, but I was excited to try it, and my first impressions were enthusiastic. I had high hopes that Wave would be an alternate technology that could replace Facebook, and consolidate e-mail and instant messaging.

Wave has one very strong point; it is a very good collaborative editor. It makes for a terrific “private wiki.” If nothing else ever comes from Google Wave, I hope that wikis at least adopt the idea of “reply within the article.” Talkback pages are easy to ignore. Wiki discussion *needs* to be easy to ignore, but it should also be easy to respond to direct points within the wiki. Being able to insert widgets into wiki pages would also be advantageous, but this doesn’t require the concept of Google Wave extensions; people have been plugging external widgets into web pages for years.

I’ve found this “private wiki” collaborative editing functionality extremely useful for trip planning and designing project specs. I believe it could also be used effectively in certain educational or tutoring scenarios, various brainstorming situations, and anything that requires collaborative design. Collaborative design, but not editing. Wave would be great for writing the outline to a new multi-author textbook. It might be useful for discussion of various chapters as the book is written. But it is not the place to write the actual text.

Google Wave is not an effective replacement for e-mail. Although GMail revolutionized e-mail with the concept of conversations, using Wave conversations feels clunky and slow. This may be a fixable flaw in the interface Google has provided, but I suspect it goes deeper. When I receive an ‘updated’ wave, I find myself scrolling through the whole Wave to find changes. Even though they are highlighted and easy to find, it does not feel as intuitive as just reading the new comment in an e-mail. While I often use quote reply in e-mail, it is only effective when the sender is snipping out only relevant portions to reply to. Wave doesn’t support snipping.

Google Wave is not an effective replacement for instant messaging. I find that chatting in a Wave is messy, unless I’ve installed the “RetroChat” extension. One problem is a fixable interface problem: blips are too big, and each message takes up too much room; you need a lot of screen space. The other problem is that people tend to respond to different topics in a wave at the point where the topic came up; when I’m chatting, I find I’m discussing three things with one person in one Wave. Theoretically, you could start a new wave for each topic, but chatting is supposed to be freeform. I find chatting in a Wave just makes me bounce around too much. In IM, if I’m discussing three different things, I interleave them (often enclosing different topics in brackets), and somehow, it makes more sense than in Wave.

I think the basic problem with Wave is that it allows you to do anything, anywhere in the wave. This provides a lot of flexibility, but it also brings responsibility; suddenly you have to *think* about the conversation and how it is formatted, instead of just having a conversation.

In spite of my disappointment, I’m going to continue to use Wave for a while; I think it is a step in the right direction, and that it will either be refined by the Google developers (or the community) to be a more usable tool, or it will be an inspiration to someone designing something better.

Edit: I forgot to mention, I’ve got several wave invites if anyone hasn’t gotten on the bandwagon yet.


Cleanup your Google Chrome/Chromium history

Just wanted to share this with you.

#!/bin/zsh
# Pick the History database: OS X Chrome vs. Linux Chromium.
if [ a$(uname) = aDarwin ]
  then dbfile="$HOME/Library/Application Support/Google/Chrome/Default/History"
  else dbfile="$HOME/.chromium/???"; fi
blacklistfile=$HOME/.config/history-blacklist

if [ ! -f $dbfile ]
  then echo "oops! history db '$dbfile' not found!";
  exit -1; fi
if [ ! -f $blacklistfile ]
  then echo "oops! blacklist not found! please create '$blacklistfile' (see doc).";
  exit -2; fi
# Build one "url like '<pattern>'" clause per blacklist line.
filter=""
while read i; do filter="\n  url like '$i' or ${filter}"; done < $blacklistfile
request="delete from urls where ${filter% or };"

echo -n "Do you want to execute:\n'${request}'\nin your chrome history db? [y/N] "
read answer; case $answer in
  y|Y) echo $request | sqlite3 $dbfile; return $?;;
  *) echo "mission abort, pu\$\$y!";; esac

And in $HOME/.config/history-blacklist, use something like:

%4chan.org%
%4chanarchive.org%
%piratebay.org%
%youtube.com%

You need to close Chromium/Chrome before running this script (else you’ll hit a permanent db lock).
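If you prefer Python, the same cleanup can be sketched with the standard sqlite3 module (an untested sketch; both paths are assumptions you will need to adjust):

import os, sqlite3

dbfile = os.path.expanduser(
    "~/Library/Application Support/Google/Chrome/Default/History")  # adjust per platform
patterns = [l.strip() for l in open(os.path.expanduser("~/.config/history-blacklist"))
            if l.strip()]

conn = sqlite3.connect(dbfile)
# One parameterized LIKE clause per blacklist pattern.
conn.execute("delete from urls where " +
             " or ".join("url like ?" for _ in patterns), patterns)
conn.commit()
conn.close()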

Regretsy – River of Love

Regretsy – River of Love:

Great album cover? Or best album cover?

The album cover lured me in, and the music brought it home….

Wednesday, January 06, 2010

Arch Schwag Shipping Delays

I promised last month that pre-orders on pens, case badges, and laptop stickers would be filled early in the New Year.

All three of these items are coming from different suppliers, and all three of them are late. I am still expecting all of them “any day now,” but I wanted to let anyone waiting for their items know that there’s going to be a bit more of a delay than I expected. I have about 40 orders outstanding, and I will try to fill them all as quickly as possible as the supplier orders arrive.

Items shipping from Zazzle, and other Arch Schwag items including Jewellery, wooden sculptures, and laptop bags should continue shipping on their normal schedules.


What's wrong with SVN

Subversion was the first serious open-source (and free) version control system to be a worthy rival to CVS. Anyone who has used CVS in the past and has moved on to better tools can understand where those who started the Subversion project were coming from. With CVS came no atomic commits, no easy way to rename files, and many other fun things I have since forgotten.

Fast forward 10 or so years. We now have a huge selection of version control systems, many of which have adopted the more distributed model. SVN still fills that niche (especially in the corporate world) of having a centralized repository while not being nearly as encumbered with restrictions as CVS. You'd think in 10 years of steady development, SVN would have done a pretty good job getting the kinks worked out. Compare this to git, which has only been around for five years. However, I've found SVN to have some of the worst performance ever when it comes to doing absolutely nothing, which is a fairly uncomplimentary thing to say.

I decided tonight to gather some basic numbers and performance characteristics of Subversion. As a comparison, I've done some similar tests with git and will show those here as well. I should note that both timing and tracing runs here were done after the respective update operation (svn update, git pull) had actually refreshed the local copy; the timings and traces using strace you see below are of what turn out to be "no-ops".
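The comparison below amounts to something like this hypothetical harness (the directory names are stand-ins for my local checkouts):

import subprocess, time

for cwd, cmd in [("arch-repos", ["svn", "update"]),
                 ("linux-2.6", ["git", "pull"])]:
    start = time.time()
    subprocess.call(cmd, cwd=cwd)          # a no-op update/pull
    print("%s took %.3f seconds" % (cmd[0], time.time() - start))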

Subversion timing and tracing

dmcgee@galway ~/projects/arch-repos
$ time svn up
At revision 62267.
Killed by signal 15.

real    0m13.375s
user    0m1.160s
sys     0m0.600s

dmcgee@galway ~/projects/arch-repos
$ strace -c svn update
At revision 62267.
Killed by signal 15.
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 65.46    0.019177           2     11598           unlink
 21.88    0.006409           0     46516        52 open
  3.74    0.001095           0     46468           close
  3.43    0.001005           0     23198           getdents
  3.24    0.000948           0     69601           fcntl
  1.79    0.000523           0     46478           read
  0.32    0.000094           1       176           brk
  0.08    0.000024          12         2           wait4
  0.07    0.000021           0       152           mmap
  0.00    0.000000           0        37           write
  0.00    0.000000           0        13         9 stat
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00    0.029296                244451        64 total

Git timing and tracing

dmcgee@galway ~/projects/linux-2.6 (master)
$ time git pull
Already up-to-date.

real    0m0.636s
user    0m0.100s
sys     0m0.033s

dmcgee@galway ~/projects/linux-2.6 (master)
$ strace -cf git pull
Already up-to-date.
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.30    0.135659        2886        47        21 wait4
  0.36    0.000488          24        20           execve
  0.15    0.000208           8        26           clone
  0.13    0.000181           2       115           munmap
  0.02    0.000032           0       394         6 close
  0.02    0.000027           0       226        79 stat
  0.02    0.000026           4         7         2 connect
  0.00    0.000000           0      1216           read
  0.00    0.000000           0        21           write
  0.00    0.000000           0       438       162 open
  0.00    0.000000           0       233           fstat
  0.00    0.000000           0        55        27 lstat
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00    0.136621                  4672       371 total

Looking for Answers

Let's sum up the test as the above raw data may not mean much just yet.

VCS   Repository      Files    Directories   Syscalls   Time
SVN   Arch Packages   14,351   12,649        244,451    13.375 secs
git   Linux Kernel    31,504   1,794         4,672      0.636 secs

The two figures in the above table that I find astounding are the syscall counts and the times. Yes, I know these two repositories aren't identical. One has more files, the other more directories, and SVN definitely seems to struggle as you add directories, since it sticks its own .svn metadirectory in each one. But that doesn't excuse its awful performance. Why on earth is it making over 11,000 unlink calls to do absolutely nothing? Then there are the other 230,000 syscalls that I haven't even begun to think about.

I wish I cared more about Subversion to help make it better, but I don't use it in my personal projects anymore because git is so quick and easy. It looks like it is time to move the Arch Linux package repositories to something that sucks less.

Bonus material

I ran this test too but it didn't really have a comparable operation in git, so it didn't fit in above. I'll put it here and let you draw your own conclusions.

dmcgee@galway ~/projects/arch-repos
$ strace -c svn cleanup
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 36.39    0.038920           1     46388           rmdir
 28.92    0.030931           1     46388           mkdir
 17.87    0.019107           2     11597           unlink
  8.17    0.008734           0    104492     11647 open
  3.26    0.003482           0    115970           getdents
  1.70    0.001821           0     92845           close
  1.69    0.001806           0     72307     11597 lstat
  0.95    0.001014           0     92799           fcntl
  0.91    0.000972           0     58076           read
  0.14    0.000155           0     11598           lseek

Tuesday, January 05, 2010

Using POSIX capabilities in Linux, part two

Okay, it has been over half a year since I last got around to writing about this topic. And I don’t want to write about what I originally intended – which was capchroot. Instead, I am going to introduce you to the concept of inheritable file capabilities and inheritable thread capabilities and how to use them with capsudo.

If you read part one and experimented with capabilities, you probably noticed that the set of effective capabilities gets lost whenever you execute a subprocess using one of the exec* system calls. Looking at the capabilities manpage, there is an interesting formula that explains the situation:

P'(permitted) = (P(inheritable) & F(inheritable)) |
                (F(permitted) & cap_bset)

P'(effective) = F(effective) ? P'(permitted) : 0

P'(inheritable) = P(inheritable)    [i.e., unchanged]

 where:
P           denotes the value of a thread capability set before the execve(2)
P'          denotes the value of a capability set after the execve(2)
F           denotes a file capability set
cap_bset    is the value of the capability bounding set (described below).

So, to be able to inherit a capability from a parent, the following must be true:

  • The thread must have the capability in its inheritable set.
  • The executable file must have the capability in its inheritable set.
  • The executable file must have the effective bit set (this can be omitted if the executable is aware of capabilities and raises the permitted capability to an effective capability during execution).

For the first point, we’ll have a look at capsudo. It’s a small tool written by yours truly, which requires libcap and iniparser. Get the source, build it with make, install the config file to /etc/capsudoers and put the binary somewhere (/usr/bin in our example). Then, run setcap cap_setpcap=p /usr/bin/capsudo. The CAP_SETPCAP capability allows it to put arbitrary capabilities into the thread’s inheritable set, but does not allow them to become permitted unless you execute a program with the correct file inheritable capability.

Now we’ll use this to allow certain users to capture traffic with tcpdump and wireshark, without setuid and without root:

  • Run setcap cap_net_raw=ei /usr/sbin/tcpdump
  • Add the following section to /etc/capsudoers:
    [tcpdump]
      caps = cap_net_raw
      command = /usr/sbin/tcpdump
      allow_user_args = 1
      users = user1 user2
      groups = group1 group2

    The users user1 and user2 are now allowed to use tcpdump with the CAP_NET_RAW capability, as well as all members of group1 and group2.

  • Run capsudo tcpdump -ni wlan0 and capture traffic.

To do the same with wireshark, we need to do something slightly different: Instead of running the setcap command on /usr/bin/wireshark, run it on /usr/bin/dumpcap. This is because wireshark does not capture itself, but calls dumpcap. The beauty here is that despite the CAP_NET_RAW inheritable capability being in the thread, wireshark has no privileged rights at all until it calls dumpcap, which then only gets the capability to capture, and nothing more.

  • Run setcap cap_net_raw=ei /usr/bin/dumpcap
  • Add the following section to /etc/capsudoers:
    [wireshark]
      caps = cap_net_raw
      command = /usr/bin/wireshark
      allow_user_args = 1
      users = user1 user2
      groups = group1 group2
  • Run capsudo wireshark and capture traffic.

Another use case would be running an HTTP server on port 80 without root (a minimal sketch of such a server follows these steps):

  • Run setcap cap_net_bind_service=ei /usr/bin/yourhttpserver
  • Add the following section to /etc/capsudoers:
    [yourhttpserver]
      caps = cap_net_bind_service
      command = /usr/bin/yourhttpserver
      allow_user_args = 1
      users = httpd
    
  • Start the service with capsudo yourhttpserver and open a privileged port.
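As promised, here is a minimal sketch of such a server in Python (my illustration, not part of capsudo; note that with this scheme the setcap call must target the binary that actually binds the port, which for a script is the interpreter itself):

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Binding a port below 1024 normally requires root; launched via capsudo,
# the inherited CAP_NET_BIND_SERVICE capability lets a plain user do it.
httpd = HTTPServer(('0.0.0.0', 80), SimpleHTTPRequestHandler)
httpd.serve_forever()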

That’s all for today, I hope you enjoyed it and find ways to use this to your advantage, so that we may at some point minimize the number of places where we have to use setuid or become root.


ZOMBOCOM

ZOMBOCOM:

I…I just don’t know…

Sunday, January 03, 2010

Playing around with kdenlive

Again I have played around with kdenlive and my Canon HF100 camcorder. I edited the video a little bit in kdenlive and added some effects to it. You can watch the final result here:

Moments of a ladybug from Daniel Isenmann on Vimeo.

Friday, January 01, 2010

Managing my TODOs in 2010

Last May I wrote an article on my ideal todo list. I implemented it in an offline-enabled format, but never got around to writing the server-side code, so it didn’t get used. I’ve been using a paper-based day book effectively all year, but the book is filled up.

Today, starting a new year, I needed something quick to manage my todos. I’m on a bad internet connection, and don’t want a web-based app; even offline enabled apps are quirky. I decided to write something quick and dirty using the command line. Half an hour later, this is what I have:

  • All my todos are stored in text files in one directory.
  • Each textfile contains the things I want to accomplish in one day, named after that day in 2010-01-31 format so they show up in sorted order.
  • I edit the files in my favourite text editor and put a “*” beside ones I’ve completed.
  • I wrote some scripts to easily open “relative” names such as TODAY, YESTERDAY, TOMORROW, and TWODAYS side by side (a sketch of the idea follows at the end of this post).
  • I named each script starting with a 1 so that they show up at the beginning of the listing. This is useful in gui file managers as I can double click those scripts to open them.
  • I don’t actually use gui file managers much, but I put a link to this one on my desktop with a fancy icon so I don’t forget my tasks.
  • When I opened the directory in nautilus, I discovered that I can zoom in on the files, and actually read their contents without opening them. I switched it to compact view so I can fit more TODOs in one screen.
  • I’ll probably have one extra text file for “things that need to be done eventually.”
  • I haven’t really tested it, but I intend to use it for the next week and revise it as necessary. I may have to whip up a web.py server to give a simple interface to it from my phone, or maybe ConnectBot will suffice. It’s not important at the moment; I don’t take the phone anywhere due to a complete lack of coverage.

If it seems to be working as well as the daybook did last year, I’ll keep it up. If I tend to forget to use it, like other electronic solutions I’ve tried, I’ll get a new daybook.

What little code there is, I’ve posted to github.
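For illustration, the relative-name trick the scripts perform amounts to something like this (a hypothetical reimplementation; the real code is on github, and the choice of editor is an assumption):

import datetime, subprocess, sys

# Map relative names onto dates, then onto 2010-01-31-style filenames.
OFFSETS = {"YESTERDAY": -1, "TODAY": 0, "TOMORROW": 1, "TWODAYS": 2}

def todo_file(name):
    day = datetime.date.today() + datetime.timedelta(days=OFFSETS[name])
    return day.strftime("%Y-%m-%d")   # sorts chronologically as plain text

if __name__ == "__main__":
    names = sys.argv[1:] or ["TODAY"]
    subprocess.call(["vim", "-O"] + [todo_file(n) for n in names])   # -O: side by side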


Wednesday, December 30, 2009

GCC compound statement expressions

GCC has a rather crazy syntax that I discovered tonight for the first time, and I thought I'd share so you don't have to dig for a half hour like I did to find out what it is and how it works. The feature is compound statement expressions. I came across it in the Linux kernel code in include/linux/percpu.h:

#define per_cpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); })

The long and short of this expression is this: if you call per_cpu_ptr(myptr, 5), you will get myptr back. But the macro to do this confused me, with no explicit return.

With a compound statement expression, as opposed to just a compound statement (those things surrounded by curly braces), the last item in the block is used as the return value for the whole expression. Thus, in the above expression, the value of ptr will be returned. The key is including both the parentheses and the curly braces.

If you want to see a full example, check out the following (working) program.

#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int number;
    char *string;

    string = ({ "hidden"; "shown"; });
    number = ({ int a = 50; a += 50; a; });

    printf("%s %d\n", string, number);
    return 0;
}

If compiled with gcc test.c, it should produce the output "shown 100".

Tuesday, December 29, 2009

kernel 2.6.32 series moved to the [core] repository

Tobias Powalowski wrote:
Hi arch community,

The new 2.6.32 kernel series has moved to the [core] repository.

Upstream changes

Arch Linux bugfixes/feature requests:

  • added CONFIG_PM_DEBUG
  • added CONFIG_MMIOTRACE

KMS changes:

  • intel KMS is now enabled by default
  • radeon KMS is now disabled by default; please use the modeset option to enable it
  • Early userspace KMS support is broken at the moment for radeon and nvidia cards. There are some workarounds posted on the ML, bugtracker and forum. The next mkinitcpio update will fix this regression.

Arch Linux changes:

  • split kernel headers into an extra package; if you want to build external modules, please install: pacman -S kernel26-headers, and change your PKGBUILDs to makedepend on this package
  • added Xen support to the 64-bit kernel
  • changed to the new FireWire subsystem

Planet Archlinux

Planet Archlinux is a window into the world, work and lives of Archlinux hackers and developers.

Last updated on January 25, 2010 12:43 AM. All times are normalized to UTC time.


Arch Planet Worldwide

Other Archlinux communities around the world.

brain0 maintains a google earth map showing where in the world arch users live. Add yourself!

Colophon

Brought to you by the Planet aggregator, cron, and Python. Layout inspired by Planet Gnome. CSS tweaking and rewrite thanks to Charles Mauch

Planet Arch Linux is edited by Andrea Scarpino. Please mail him if you have a question or would like your blog added to the feed.