Older blog entries for crhodes (starting at number 34)

WARNING: If you like haiku, look away now, but I don't want these to be lost for posterity...

So this started with Raymond Toy:

<rtoy> Krystof:  Do you remember your haiku for CMUCL? If so, can I use it in my personal CMUCL startup banner?

I'd (on the spur of the moment, a few days ago) come up with a haiku that summarised CMUCL's performance:

<Krystof> cmucl / compiles beauty to fast code / life springs eternal
<Krystof> Ah yes, traditional "wise guy" cheat on the season-word 'requirement'
On the other hand...
<wnewman> steel bank common lisp / compiles beauty carefully / finished by winter
<wnewman> openmcl / spring for proprietary hardware / get lisp for free

Dan gets to be a bigger wise guy than me, though, because of these gems:

<dan_b> on axp / sbcl has type issues / is that uint? uh ...
<dan_b> cltl  was / summa theologica / but now outdated
<dan_b> frankly, I'm struggling

Ideas on a postcard for other lisp implementations... we now return you to our regular schedule.

29 Aug 2002 (updated 29 Aug 2002 at 10:14 UTC) »

So, well, a mostly-functioning HPPA port of SBCL has been merged, SBCL 0.7.7 has been released, tbmoore's linkage tables stuff has landed in CMUCL, Gerd Moellmann has been busily cleaning up PCL, Alexey Dejneka's nailed a few more obscure compiler bugs, and I've started reviving the MIPS backend. It mostly works, but sadly not entirely.

On the plus side, this backend is the last of the legacy ones from CMUCL that I care about (I am planning to abandon the support for the IBM RT; so sue me), so after this it should be much more fun... if refactoring compiler code can be described as fun, that is. Maybe I'll be able to help merge the wave of patches to the compiler codebase that have been rolling in recently, too, and even work on some of my blue-sky projects (loop invariants, mmm).

[ Edited to get names more correct ]

HPPA port of SBCL is progressing nicely. It seems to work, mostly, on the one parisc machine I have access to (thanks, James); bits and pieces probably need review (I've had to statically link it, because I haven't yet learnt enough to write trampoline functions, and so on), and I had to hold its hand while building on the machine in question as it has a distressing tendency to lock up.

However, preliminary reports seem to show that it segfaults on more modern parisc machines, probably because there's something subtly different in signal handling. Oh well. I'll cross that bridge soon enough. It's successfully saved its memory image, and it's probably time to do a complete build from scratch to check that things still work. It may be too late to make it into 0.7.7, but we shall see.

Well, since I'm now officially a wonder twin (read Dan; he's a better writer and more interesting), I feel obliged to give a status report... instead of doing something useful with my time, I've been doing what might well turn out to be the most thankless port in SBCL's brief history. How many people do you know who will want to run a native-code Common Lisp compiler on the HPPA (aka "parisc") platform? Anyone?

So why, then? Well, to understand that, you need to understand a bit of history, and a bit of software engineering. The history first: CMUCL historically supported compilation on Alpha, HPPA, MIPS, RT, SPARC and x86 platforms (as well as the PowerPC, briefly); however, partly because of motivation and partly because of CMUCL's build process, CMUCL currently only supports SPARC and x86. SBCL's build process is such that, in contrast to CMUCL, building binaries is trivial, so, since the backends are as good as they ever were, it is much easier for SBCL to support more platforms, particularly since we can piggyback on Debian's "buildds". Then, once all the viable CMUCL backends are ported, we can perform some much-needed surgery.

One strange problem is now less strange, though it remains a problem. Delegation works!

On the other hand, no-one's bitten on the strange "jump far into weeds" problem. What have we discovered so far?

call_into_lisp, the function that ends up jumping into Lispland, does so by an indirection. Relevant code snippets:


        # x86 version: fetch the lexenv argument, then call indirectly
        # through the closure's function slot
        movl    8(%ebp),%eax    # lexenv?
        call    *CLOSURE_FUN_OFFSET(%eax)

        # PPC version: compute the untagged entry point, load the count
        # register, set up the argument count
        addi    reg_LIP,reg_CODE,6*4-FUN_POINTER_LOWTAG
        mtctr   reg_LIP
        slwi    reg_NARGS,reg_NL2,2

A working Lisp image has the top-level function being referenced from very close to the top of dynamic space. My broken image has the top-level function very far from the top of dynamic space. This would tend to indicate that the PURIFY stage (when Lisp data are collected and anything remaining compacted) didn't work on the x86.

Here's where the fun begins: the changes involved didn't obviously touch the purify machinery at all. Investigations are ongoing, if hampered by the fact that I tried (three times, on three different architectures) to compile with the wrong patch installed. Hey ho.

SBCL 0.7.6 is out.

This was a more fraught release than previous ones, maybe because we're playing around with some low-level suboptimalities; obviously we want to fix them, but the time scale is quite challenging. The good news is that it seems that Dan's stack checking stuff (a) has landed and (b) is here to stay. It's a much better scheme architecturally, and it also means that I can read disassembly without having to filter out n calls to SB-KERNEL:%DETECT-STACK-EXHAUSTION...

The bad news? Well. F'rinstance, there's the vexing matter of floating point arithmetic. I was so pleased to have fixed SBCL's signal handling code on x86/Linux and PPC/Linux. Ha. One of the problems of testing on only one machine (per architecture/OS combination) is that you don't necessarily catch all your assumptions. In this case, when I tried the nice shiny new code on Dan's iMac:

* (/ 1.0 0.0)

Ouch. So, I went back over to the Sourceforge compile farm machine (IBM RS6000, running Debian), and tried it there, just to check that I wasn't going completely mad:

* (/ 1.0 0.0)
Segmentation fault

The temptation to weep and curse was fairly strong; however, before turning myself in for crimes against good programming I did note that the machine in question had just changed from running a 2.4.high kernel to a 2.2.low one; given my previous experience with signals on SPARC/Linux, I'm willing to believe that it's not my fault.

Still, on the plus side, we appear to have a fan, even if he hasn't actually used the system. Off to sing in Paris for a week, so my two outstanding strange problems are left in the capable hands of my co-maintainers. Phew.

So, obviously, my consciousness had registered that this weblogging phenomenon was taking off. From the header in Dan's to the fact that there are even mostly-Lisp weblogs, the evidence was fast becoming compelling.

However, when my significant other (one who has had a fair amount of exposure to technology over the years, bless her, but who let us say hasn't exactly enthused over it) announces that, encouraged by a Guardian competition, she has started her own weblog, I have to confess feeling that I am being left behind technologically by the great British public. Did we think that technology was going to be the great leveller, giving equality of opportunity? While this is probably still as untrue as it has been in the past, maybe, just maybe, expression of self has become more possible.

I seem to be sinking lower and lower.

As Dan mentions, we happy few in the Common Lisp world seem to be working at an absurdly low level. I mean, OK, we're compiler implementors, but the three previous sizeable improvements to SBCL from the pair of us seem to be better stack exhaustion detection, better floating point exception support, and correct undefined-function handling on the PowerPC platform. You will observe that those patches mostly don't touch any of the 100 kloc of Lisp code in the implementation.

Maybe there is something to this CLIM thing after all.

My presentation yesterday went well (there was a decent audience; not too many of them fell asleep; a couple of questions at the end), but the star of the show for me was Gilbert Baumann's demonstration of the Closure web browser.

Closure was, in 1999, the first web browser to pass the W3C CSS1 compliance test suite. Since then, all sorts of nifty things have been implemented, including a CLIM frontend and the TeX line-breaking algorithm. Certainly, his demo (and Robert Strandh's introduction to CLIM) has given me ideas for killer apps...

So, that was one conference. It's somewhat entertaining on a number of levels; firstly, being in a room with lots of really clever people is a very good thing; secondly, watching those really clever people disagree violently with each other is amusing; thirdly, getting new ideas for my own research has to help with the impending nightmare of the third year of Ph.D. studies.

Should you, the dear reader, be interested in the nature of Dark Energy, a brief summary: Monday and Tuesday were devoted to experimental techniques and observational results. It saddened me slightly to see some of the theorists take time off during these sessions, because Physics has to be driven by experiment to work (otherwise it's simply Mathematics... oh, wait, what department am I in again?) Still, I learnt a fair bit about the Cosmic Microwave Background balloon experiments (MAXIMA and BOOMERanG), the Type Ia Supernovae observations, Weak Lensing, all apparently pointing towards the ‘Concordance Cosmology’ of (Ωm, ΩΛ) = (0.3, 0.7).

The last plenary session on Tuesday was devoted to the question “Is evidence for Dark Energy compelling?” Based on the previous paragraph, one would have to say ‘yes’, as the observations strongly point towards a non-zero Cosmological Constant. But wait! The CMB results depend on assuming only adiabatic perturbations; we don't have a model for the Type Ia supernovae, and there is the problem of the cosmic distance ladder; and weak lensing observations can easily be contaminated by strong lensing effects. Is it possible that systematic experimental effects can lead to a false concordance (or, more cynically, is it possible that experimentalists will choose the method of analysis that leads to an answer close to the one that they're expecting)? Sadly, the history of science points to a ‘yes’ answer to that question, too. Based on this, I skipped Tuesday afternoon's session to go shopping.

Wednesday to Friday were more theoretical days (well, the days themselves weren't theoretical, but the talks were on theoretical subjects), so I skipped fewer talks. Highlights: Gia Dvali, not so much for his talk's content as for the way he said it – he actually made an 09:00 start tolerable; Sacha Vilenkin, for the bravery in extolling the virtues of the anthropic principle to a mostly hostile audience; and, of course, having my own work presented (all the glory and none of the responsibility). Maybe a side note about the anthropic principle is in order: it comes in a number of flavours, ranging in character from “We're here” through “We're here because we're here” to “Everything in the Universe is your fault”. As presented by Vilenkin, it was a very reasonable argument, essentially saying that, given that we exist, we have a non-uniform prior probability on cosmological parameters, so we shouldn't use a uniform prior when we do Bayesian statistics. This seemed reasonable to me (maybe he shouldn't have said that the anthropic principle ‘predicted’ an ΩΛ of 0.7) but didn't meet with much approval among my peers. It's a shame, because the anthropic principle is a useful tool in the chest of a physicist (notably used by Fred Hoyle in the prediction of the resonance in Carbon-12, at just the right energy for the triple-α collision to work...)
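In Bayesian notation (mine, not Vilenkin's), the argument is just that conditioning on the existence of observers replaces the uniform prior:

```latex
P(\Omega_\Lambda \mid D, \mathrm{obs}) \;\propto\; P(D \mid \Omega_\Lambda)\, P(\Omega_\Lambda \mid \mathrm{obs})
\qquad\text{rather than}\qquad
P(\Omega_\Lambda \mid D) \;\propto\; P(D \mid \Omega_\Lambda)\, P(\Omega_\Lambda)
```

The data term is untouched; only the prior changes.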

The conclusion from the Colloque was really along the lines of “We have no real idea what Dark Energy is like or where it comes from. But that's not a problem, because it leaves us plenty of room for writing articles which everyone else can cite.” Though I did like the attitude of the final session chair: “If I could ask God one question, it would be ‘How many dimensions does the Universe have?’; hopefully He would answer with a number... a real number... if we're really lucky, an integer...”

And now, off to Bordeaux for Libre Software Meeting. I should stop writing this diary entry, and start writing my talk on “SBCL: The best thing since sliced bread?”
