Older blog entries for chalst (starting at number 138)

11 Feb 2005 (updated 11 Feb 2005 at 09:19 UTC) »
Slow Motion Railway Disaster
Michael "Diary of a Nobody" Totten gets drunk with Christopher Hitchens. A truly excruciating story. Via Kieran Healy. Some highlights:
  • Sir Michael rides forth: But Hitchens had a defender, too. He had me;
  • He shares some down-to-earth wisdom: Believe me, you don't know what a tense political fight feels like until the person yelling at you is from a country you recently bombed and currently occupy;
  • You really had to be there: I put my face in my hands. None of this was what I wanted to hear, and it dragged on longer than I'm making it seem in the re-telling;
  • Eureka! Maybe they really didn't (and don't) completely understand how we differ from the colonialists and imperialists of the past;
  • So 'ere's to you, Fuzzy-Wuzzy: I respected them more, too, because they stood up to me and Christopher Hitchens. They are not servile people. They will never, ever, be anyone's puppets;
  • Grooovy! We're instantly on the same page on multiple levels all at once;
  • He's, like, totally tuned into where it's at: "You can't afford not to read blogs," I said. "Because of who you are and what you do for a living, you'll be hopelessly behind if you don't."
    "Like I said, Christopher," I told him. "You can't afford to be unplugged from the blogosphere."
  • A moment of intimacy... "Angel," he said. "Can I call you angel?"
American slang doesn't really have the right word for our brave correspondent, but British slang has the word pillock.
9 Feb 2005 (updated 8 May 2007 at 09:58 UTC) »
Odifreddi's book
Cameron Laird, who I believe has an advogato account though I can't find it, writes a nice plug for Piergiorgio Odifreddi's The Mathematical Century. Buy it, read it. (via lispmeister/Planet LISP). Odifreddi is also the author of Classical Recursion Theory, a masterful introduction to the area, with a very enlightening overview of the significance of Church's thesis.

Diary redux
Lots of things I've been meaning to commit to my diary, little of it said:

  • I want to respond to Bill Clementson's very information-packed post, which along the way puts his case for not wanting to follow the definitions of concurrency et al. that I proposed. I have also had a nice email discussion with Dirk Gerrits on this topic;
  • Lots of things came up on LtU that I'd like to link/comment on;
  • Warning: political content will be appearing in this diary in the next few days: left-wing vs. right-wing political cultures, the political centre, and what I regard as the failure of the anti-war left to respond in a decent manner to the Lancet study.
21 Jan 2005 (updated 21 Jan 2005 at 14:21 UTC) »
He's Back!
ncm, compassionate sociopath and resenter of garbage collection, is an advogatan again! I've recertified him as master: sorry Nathan, I know it annoys you, but you do deserve the rating according to the criteria I've been trying to consistently apply.

OOP: better in theory than in practice?
This Lambda the Ultimate story seems to have provoked a not-too-unpredictable storm...

19 Jan 2005 (updated 19 Jan 2005 at 13:40 UTC) »
Collison's new LISP
Lemonodor reports on a discussion with Patrick Collison about CROMA; Patrick won the BT Young Scientist of the Year for his efforts. I would post in the comments thread there, but I can't get my post past the syntax checker. So two points here:
  1. CROMA is a language with first-class macros, but apparently Patrick thinks that hasn't worked out. I'm guessing he might find Alan Bawden's POPL 2000 paper "First-class macros have types" [PDF] [GS citations] of interest: Bawden shows that some compile-time/run-time issues around first-class macros can be cleaned up by means of a very simple static analysis, essentially by giving a C-like type system to procedures that depend on macros.
  2. In the comments thread pfdietz asks the notoriously bad question "what's wrong with lisp-2s?" (since Patrick decided CROMA would be a lisp-1). It's almost as bad a question as "which is better, lisp-1s or lisp-2s?", to which the least bad answer I know of is "it depends, and you should read Gabriel and Pitman to find out what it depends on." One issue not discussed in that paper: a problem some applications may have with Common LISP's way of passing functions as arguments is that it depends on coercing symbols to values at run time: if you don't want an evaluator in your runtime, you have to do things differently. This, I understand, is one of the main reasons Dylan was a lisp-1.
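The run-time coercion point can be sketched outside LISP too. Here is a minimal Python analogy (the function and helper names are mine, purely illustrative): looking a function up by name at call time requires a live name-to-value environment in the runtime, whereas passing the function object itself as an ordinary value does not.

```python
def square(x):
    return x * x

def apply_by_name(name, arg):
    # Symbol-to-value coercion at run time: the runtime must be able
    # to resolve the name "square" when the call happens, roughly
    # analogous to funcalling a symbol in a lisp-2.
    return globals()[name](arg)

def apply_directly(fn, arg):
    # The function object is passed as a first-class value, as in a
    # lisp-1; no name lookup is needed at call time.
    return fn(arg)

print(apply_by_name("square", 3))   # 9
print(apply_directly(square, 3))    # 9
```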
It sounds like Patrick has done something interesting; I'd love to read a document outlining in more detail the design choices he has made.

Postscript: It looks like there won't be much information before September [cll msg].

18 Jan 2005 (updated 18 Jan 2005 at 22:27 UTC) »
Parallelism, concurrency and distribution
Bill Clementson penned part two of his series on parallel computing in LISP. It's shaping up to be a nice series, but I'm bothered that he, Dirk Gerrits and Wikipedia are using definitions of parallelism, concurrency and distribution that abolish useful distinctions which I think these terms ought to express. Here is my opinion:
  1. Parallelism is a property of a computation or a machine, which means that control flow occurs on parallel pathways. It is opposed to sequentiality as a property of computation or machines;
  2. Concurrency is a property of programs and programming languages, which means the programming language has control constructs (or the program makes use of such) that are most naturally realised by parallel computations. It is, a little confusingly, also opposed to sequentiality as a property;
  3. Parallelism and concurrency are intimately related, but it is frequently useful to distinguish them: one expresses the program view, the other the execution view. Concurrent code may be realised on a sequential machine by use of, say, time-sharing; similarly sequential code may be realised by a parallel computation by means of a behind-the-scenes program transformation, which discovers opportunity for parallelism by means of flow analysis.
  4. Distribution is something else entirely: of a computation it means that it occurs in space spread out over many locations; of code it means the code expresses that data is stored or code is to be computed at several, named, locations.
  5. Distribution normally entails parallelism/concurrency, but code may be concurrent without describing locations, and quantum computation allows parallelism to happen in one place. Furthermore, this entailment is not necessary: a machine that is spread out over many locations, but which is designed so that computation may occur at one sublocation only when computation is suspended at other locations, is not parallel.
By contrast, the Clementson/Gerrits/Wikipedia definition makes concurrency the disjunction of parallelism and distribution, and abolishes the notion of location from the definition of distribution.

There are several ways in which the definitions I gave can be tightened, but I think they are much closer to how most computer scientists use the terms. I'd be very interested to hear of better web-accessible and brief definitions of these concepts, or of serious problems with my definitions.
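To illustrate point 3 above, here is a minimal Python sketch (my own, not from Bill's series) of code that is concurrent in the program view but whose execution is essentially sequential, since CPython time-shares threads on a single interpreter:

```python
import threading

# Concurrent *code*: two logically independent tasks. Whether they run
# in parallel is a property of the machine/runtime, not of this program:
# CPython's global interpreter lock time-shares these threads, so the
# underlying computation is sequential even though the program is
# written concurrently.

results = []

def task(name, n):
    total = sum(range(n))       # some CPU-bound work
    results.append((name, total))

threads = [threading.Thread(target=task, args=(f"t{i}", 1000))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # both totals are 499500
```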

18 Jan 2005 (updated 8 May 2007 at 12:33 UTC) »
Luke's 1st Rule of Systems Hacking
lkcl's recent post in the Open group thread is excellent, and I hope he'll forgive me for sloganising his message, probably inaccurately:
Those distributed application developers who do not understand DCE/RPC are condemned to reinvent it, poorly.
...or maybe:
Any sufficiently complicated LDAP- or Samba-using distributed system contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of DCE/RPC.

Missing in Action
ncm seems to be no more, although his diary is still with us...
Postscript: Old news - I've been away from advogato for a while.

Home front

  • A couple of months ago it occurred to me that it might be a good idea to have a workshop on the kind of proof theory we like as a satellite workshop of the coming ICALP conference. Fast forward to today, and the workshop is becoming a reality: Structures and Deductions (SD05) is one of the satellite workshops of ICALP 2005.
  • Come May my household will be moving across the ocean to New Haven: my wife has a visiting professorship at Yale, and I'm hoping I can get a post teaching logic in the philosophy department.
  • Carlin will be 1 year old on Thursday.

The Overdue Links Department
Christopher Hitchens wrote an excellent obituary of Susan Sontag for Slate.

9 Dec 2004 (updated 13 Dec 2004 at 21:20 UTC) »
I've just written a shell script for reclobbering resolv.conf files mangled by dhclient, and it has brought me an enormous sense of relief:
#suclob-resolv: written Charles Stewart (C) 2004
file="$1"; test -z "$file" && file=${HOME}/doc/dns/resolv-wv-fhd.conf
grep -q -e "^nameserver " "$file" || \
    { echo "$file does not appear to name any nameservers"; exit 1; }
set -x
super cp "$file" /etc/resolv.conf
No, it won't work for you as is; it's just a personal hack. The point? Well, maybe the super command isn't well known to people who chance upon this diary snippet: it's a much better alternative to sudo that is popular amongst FreeBSD types. I recommend you take a look (it's in Debian as well).

Postscript #1: yeupou asks: why not simply edit dhclient.conf so it does not mangle your resolv.conf in an annoying way?
Some reasons: I've read the resolv.conf man page several times and didn't know you could configure this; even now that I know, I expect writing the script was faster and easier than figuring out the right way to configure the cf file; possibly dhclient needs to clobber the file to work properly; and the script I wrote solves a problem in addition to the anti-clobbering one, namely by providing a sort of wvdial localisation support.
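For completeness, the dhclient.conf route yeupou suggests looks roughly like this (a sketch based on dhclient.conf(5); the addresses and domain are placeholders, and the syntax should be checked against your installed version):

```
# /etc/dhclient.conf -- make dhclient write fixed DNS settings into
# resolv.conf instead of whatever the DHCP server hands out.
# The addresses and domain below are placeholders.
supersede domain-name-servers 192.168.0.1, 192.168.0.2;
supersede domain-name "example.org";
```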

Postscript #2: xach notes a dumb error in an earlier version of the above script: I had used a shell-style glob syntax in the -e argument to grep. Script corrected: I dropped the idea of filtering for the presence of a dotted quad: it's a mistake in any case, since one wants to see even lines that don't conform. Thanks to xach for the tip.

7 Dec 2004 (updated 7 Dec 2004 at 07:31 UTC) »
I've been spending a fair amount of time trying to improve the quality of the logic pages at wikipedia. I've just posted a slightly polemical comment in one of the talk pages about the name "Curry-Howard isomorphism":
I've got a long past with this topic, so much so that I probably can't be objective about certain details: my doctorate was on the so-called classical Curry-Howard correspondence and begins with a rant about the misleading way the topic is made use of in many works in the field. While the centre of the topic was Bill Howard's 1969 manuscript, which indeed presented an isomorphism between two formal systems, there are reasons to prefer not to call the topic the Curry-Howard isomorphism, and these are mine:
  • Howard's manuscript was called The formulae-as-types notion of construction, so the terminology formulae-as-types correspondence is closest to the source;
  • The topic does not consist in the first place of exhibiting an isomorphism; rather it consists of establishing first an intuitive observation of a correlation between certain notions of construction in type theory and certain patterns of inference in logic; the presentation of an isomorphism is rather a virtuoso flourish that adds force to this observation. To insist on talk of isomorphism obscures the most important part, which is the intuitive and, I think, necessarily informal part;
  • There are many important aspects of the theory that cannot be put into isomorphism, because what would have to be regarded as the logical counterpart of the type theoretic construction makes no logical sense: the clearest development of this problem is in Zhaohui Luo's Computation and Reasoning, where he discusses the problems applying the formulae-as-types correspondence to the Extended Calculus of Constructions (essentially one cannot mix dependent sum with System F like polymorphism and expect the result to be entirely logic-like);
  • There's something artificial about Howard's isomorphism: while the simply-typed lambda calculus is a calculus of an obviously fundamental nature, there's something forced about the logical end, namely the idea of labelling assumptions and implies introduction rules. The labels are needed for the isomorphism to work, but before the work of Curry and Howard, no-one working with natural deduction thought to keep track of these; instead the convention was simply that all assumptions matching the implies introduction rule are discharged.
As a last point, even if I am convinced that 'Curry-Howard isomorphism' is the more widely used term, the people whose judgement I respect most mostly prefer 'formulae-as-types correspondence'; and at the risk of sounding elitist, I think we should prefer the better term even when it is less popular. My preferences for this page are:
  1. Formulae-as-types correspondence;
  2. Curry-Howard correspondence;
  3. Curry-Howard isomorphism;
with the latter only under sufferance; also note that the phrase 'Curry-Howard' is disrespectful to those logicians who contributed fundamental insights; I would particularly single out William Tait in this respect.
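For readers new to the topic, the simplest instance of the correspondence, including the assumption-labelling just discussed, can be written out as follows (standard textbook material, in my own rendering):

```latex
% The identity term inhabits the type corresponding to the tautology
% A -> A: the variable x labels the assumption A, and the
% lambda-abstraction records its discharge at implies introduction.
\[
  \frac{x : A \vdash x : A}
       {\vdash (\lambda x{:}A.\, x) : A \to A}
  \qquad\longleftrightarrow\qquad
  \frac{[A]^{x} \vdash A}
       {\vdash A \supset A}\;(\supset I,\ x)
\]
```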

ncm's testimonial
A couple of days ago, a rather stressy morning was brightened by noticing that ncm posted a fun and mostly generous Orkut testimonial. But really, he's got my LISP infatuation all wrong! I guess I'll post something in my diary saying why I got into scheme/LISP...

Planet LISP ...which reminds me, this should be my first diary entry syndicated to the multitudes who read Planet LISP (though thanks to a comment of dan's, maybe my last post counts as the first?). Well, hello there... A comment with my thoughts on Kent Pitman's idea of programming languages as political parties, and what Common LISP and Scheme can learn from each other, is planned for the not too distant future.

LtU on types A lot of energy is being expended by Lambda the Ultimate regulars on various issues around type systems: not in itself surprising, but what I find a bit odd is how little contact this broad-minded group (my dogmatic self excluded, of course) is making on the fundamental issues. A list of some recent threads:

  1. Why type systems are interesting
  2. Why type systems are interesting - part II
  3. Why type systems are interesting - part III: latent types
  4. Looking for Classic Types Thread
  5. What's a definition of "type" that beginners can understand?
  6. Definition of Type
26 Oct 2004 (updated 27 Oct 2004 at 08:30 UTC) »
CLRFI Snippets
From comp.lang.lisp:
  1. It was suggested that a CLRFI for a timer facility might be worth putting together (Pascal Bourguignon, <874qkyrcq7.fsf@thalassa.informatimago.com>);
  2. Paolo Amoroso first suggested the idea of a CLRFI process back in 1999 (<36da4b51.4026275@news.mclink.it>);
  3. Threading and even mmap have been suggested as possible subject matter for a CLRFI (see, e.g., Pascal Bourguignon, <871xg5v1ea.fsf@thalassa.informatimago.com>). FWIW, I don't think CL needs a mmap CLRFI! Though maybe a general UNIX/Linux system call interface would be worthwhile.

Wondering aloud...
What are "agents"? Wikipedia says software agents are:

a piece of autonomous, or semi-autonomous proactive and reactive, computer software.
That's a lot of complex, highly technical words to define what one would hope was a simple and fundamental concept. So a start might be to ask some more elementary questions:
  1. Are they a design pattern? If so, what are its characteristics?
  2. Are they a kind of process? If so, what kind of observations distinguish agents from other processes?
  3. Or, cynically, is "agent" a buzzword that scientists use to get research proposals rubber stamped by research funding councils, and companies use to give the semblance of being up-to-date?
Maybe fxn has some thoughts...
Postscript: Indeed he does.
I wish I did that
John Quiggin posts some time management tips where he reveals:
I discipline myself by setting word targets. I try to write 500 to 750 words of new material every day. 500 words a day might not sound much, but if you can manage it 5 days a week for 40 weeks a year, you've got 100 000 words, which is enough for half a dozen journal articles and a small book. So, that's my target. If I haven't written enough one day, I try to catch it up the next day and so on.
My publication list would be a mite more impressive if I had followed this discipline in the 5 years since finishing my doctorate...
