Co-hygiene and quantum gravity

Co-hygiene and quantum gravity. Some light weekend reading by John Shutt.

The post starts with a dazzling proposition:

Gravity corresponds to pure function-application, and the other fundamental forces correspond to side-effects. ... quantum non-locality ("spooky action at a distance") is part of the analog to side-effects ...

I can't do it justice here, so if you're interested in John's fascinating take on the relationship between lambda calculus and quantum physics, hop on over!

Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89

Obituary from the New York Times.

Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
Jean E. Sammet, an early software engineer and a designer of COBOL, a programming language that brought computing into the business mainstream, died on May 20 in Maryland. She was 89.

...

Grace Hopper, a computer pioneer at Sperry Rand in the late 1950s, led the effort to bring computer makers together to collaborate on the new programming language. Ms. Hopper is often called the “mother of COBOL,” but she was not one of the six people, including Ms. Sammet, who designed the language — a fact Ms. Sammet rarely failed to point out. (Ms. Sammet worked for Sylvania Electric at the time.)

“I yield to no one in my admiration for Grace,” she said. “But she was not the mother, creator or developer of COBOL.”

Imperative Functional Programs that Explain their Work

Imperative Functional Programs that Explain their Work
Wilmer Ricciotti, Jan Stolarek, Roly Perera, James Cheney
submitted on arXiv on 22 May 2017

Program slicing provides explanations that illustrate how program outputs were produced from inputs. We build on an approach introduced in prior work by Perera et al., where dynamic slicing was defined for pure higher-order functional programs as a Galois connection between lattices of partial inputs and partial outputs. We extend this approach to imperative functional programs that combine higher-order programming with references and exceptions. We present proofs of correctness and optimality of our approach and a proof-of-concept implementation and experimental evaluation.

Dynamic slicing answers the following question: if I only care about this specific part of the trace of my program's execution, which parts of the source program do I need to look at? For example, if the output of the program is a pair, can you show me the parts of the source that affected the computation of the first component? Any part of the code that is not involved in the trace, or not in the part of the trace that you care about, is removed from the partial program returned by slicing.
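
To make the question concrete, here is a toy example of my own (not taken from the paper): a program whose output is a pair, together with the slice one would expect when asking only about the first component.

    (* A toy program whose output is a pair; only [square] and the first
       argument contribute to the first component of the result. *)
    let square n = n * n
    let cube n = n * n * n

    let program (x : int) (y : int) : int * int =
      (square x, cube y)

    (* Asking a slicer to explain only [fst (program 3 4)] should,
       conceptually, return the program with the irrelevant parts replaced
       by holes, something like:

         let square n = n * n
         let cube n = <hole>
         let program x y = (square x, <hole>)

       i.e. [cube] and the second component are not needed to explain the
       first component of the output. *)
    let () =
      let (a, _) = program 3 4 in
      Printf.printf "first component = %d\n" a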

What I like about this work is that there is a very nice algebraic characterization of what slicing is (the Galois connection), which guides the implementation of the slicing algorithm and also serves as a specification to convince yourself that it is correct -- and "optimal": it removes all the program parts that are irrelevant. This characterization already existed in previous work (Functional Programs that Explain Their Work, Roly Perera, Umut Acar, James Cheney, Paul Blain Levy, 2012), but it was done in a purely functional setting. It wasn't clear (to me) whether the nice formulation was restricted to that well-behaved language, or whether the technique would scale to a less structured one. This paper extends it to effectful ML (mutable references and exceptions), and there it is much easier to see that the approach remains elegant while scaling to typical effectful programming languages.

The key to the algebraic characterization is to recognize two order structures, one on source program fragments and the other on traces. Program fragments are programs with holes, and a fragment is smaller than another if it has more holes. You can think of a hole as "I don't know -- or I don't care -- what the program does in this part", so the order is "being more or less defined". Traces are likewise partial traces with holes, where a hole means "I don't know -- or I don't care -- what happens in this part of the trace". The double "don't know"/"don't care" nature of the ordering is essential: the Galois connection specifies a slicer (which goes from the parts of a trace you care about to the parts of the program you should care about) by relating it to an evaluator (which goes from the parts of the program you know about to the parts of the trace you can know about). This specification is simple because we are all familiar with what evaluators are.
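
As a minimal sketch of the first of these orders, in OCaml and in notation of my own choosing (the paper's language and traces are richer), here are partial programs with holes and the "more holes means less defined" order:

    (* Partial programs: expressions with holes, ordered by
       "more holes = less defined". *)
    type expr =
      | Hole                            (* don't know / don't care *)
      | Var of string
      | Lam of string * expr
      | App of expr * expr
      | Pair of expr * expr

    (* [leq e1 e2] holds when e1 is e2 with some subterms replaced by holes,
       i.e. e1 is less defined than e2. *)
    let rec leq e1 e2 =
      match e1, e2 with
      | Hole, _ -> true
      | Var x, Var y -> x = y
      | Lam (x, b1), Lam (y, b2) -> x = y && leq b1 b2
      | App (f1, a1), App (f2, a2) -> leq f1 f2 && leq a1 a2
      | Pair (a1, b1), Pair (a2, b2) -> leq a1 a2 && leq b1 b2
      | _, _ -> false

    (* The same construction applies to traces; the Galois connection then
       relates the backward slicer to the forward evaluator over these two
       orders. *)
    let () =
      assert (leq (Lam ("x", Pair (Var "x", Hole)))
                  (Lam ("x", Pair (Var "x", Var "x"))));
      print_endline "the fragment with a hole is below the full program"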

Databases from finite categories

Spivak and Kent (2011). Ologs: A categorical framework for knowledge representation:

In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research.

Ologs are essentially RDF extended to encompass commuting diagrams, so in effect a little visual language. The paper discusses how a database schema can be automatically extracted from an olog.
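
As a rough illustration of the extraction idea (a hand-rolled sketch under my own simplifying assumptions, not the paper's construction), one can read an olog as a set of objects and labelled arrows and emit one table per object with one column per outgoing arrow:

    (* Hand-rolled sketch: read an olog as objects plus labelled arrows, and
       emit one table per object with one column per outgoing arrow. *)
    type olog = {
      objects : string list;                      (* e.g. "person" *)
      arrows  : (string * string * string) list;  (* (label, source, target) *)
    }

    let schema_of_olog (o : olog) : string list =
      List.map
        (fun obj ->
          let cols =
            o.arrows
            |> List.filter (fun (_, src, _) -> src = obj)
            |> List.map (fun (label, _, tgt) ->
                   Printf.sprintf "%s REFERENCES %s" label tgt)
          in
          Printf.sprintf "TABLE %s (id%s)" obj
            (String.concat "" (List.map (fun c -> ", " ^ c) cols)))
        o.objects

    let () =
      let o = { objects = ["person"; "city"];
                arrows  = [("lives_in", "person", "city")] } in
      (* prints: TABLE person (id, lives_in REFERENCES city)
                 TABLE city (id) *)
      List.iter print_endline (schema_of_olog o)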

Type Systems as Macros

Type Systems as Macros, by Stephen Chang, Alex Knauth, Ben Greenman:

We present TURNSTILE, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. TURNSTILE critically depends on the idea of linguistic reuse. It exploits a macro system in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, TURNSTILE produces performant typed embedded languages with little effort.

This looks pretty awesome considering it's not limited to simply typed languages, but extends all the way to System F and F-omega! Even better, they can reuse previous type systems to define new ones, thereby reducing the effort needed to implement more expressive type systems. All code and further details are available here, and here's a blog post where Ben Greenman further discusses the related "type tailoring"; of course, both are directly related to Active Libraries.
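
Turnstile itself lives in Racket's macro system; purely as a sketch of the underlying idea of type checking and elaborating in a single pass, here is a toy checker in OCaml over miniature surface and target languages of my own, where every case both checks a term and rewrites it:

    (* Toy "check and rewrite at the same time": each case both type checks a
       surface term and elaborates it into a target term, the way Turnstile
       rules pair a typing judgment with an elaboration.  The two miniature
       languages here are my own, not Turnstile's. *)
    type ty = Int | Arrow of ty * ty

    type surface =
      | SVar of string
      | SLit of int
      | SLam of string * ty * surface
      | SApp of surface * surface

    type target =                       (* untyped target language *)
      | TVar of string
      | TLit of int
      | TLam of string * target
      | TApp of target * target

    exception Type_error of string

    (* A single traversal returns both the type and the elaborated term. *)
    let rec elab env = function
      | SVar x -> (List.assoc x env, TVar x)
      | SLit n -> (Int, TLit n)
      | SLam (x, t, body) ->
          let (tb, body') = elab ((x, t) :: env) body in
          (Arrow (t, tb), TLam (x, body'))
      | SApp (f, a) ->
          (match elab env f, elab env a with
           | (Arrow (t1, t2), f'), (ta, a') when t1 = ta -> (t2, TApp (f', a'))
           | _ -> raise (Type_error "ill-typed application"))

    let () =
      match elab [] (SApp (SLam ("x", Int, SVar "x"), SLit 42)) with
      | (Int, _) -> print_endline "well-typed, elaborated to the target language"
      | _ -> print_endline "unexpected type"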

Taken to its extreme: why not make an assembler with a powerful macro system of this sort your host language, and build every high-level language on top of it? I'm not sure the approach would extend that far, but it's an interesting idea. You'd need a cpp-like, highly portable macro tool, and porting to a new platform would consist of writing architecture-specific macros for some core language, like System F.

This work may also conceptually dovetail with another thread discussing fexprs and compilation.

Idris 1.0 Released

What do we mean by “1.0”?

Idris version 1.0 corresponds to the language as described in Type-Driven Development with Idris, published last week by Manning.

kdb+ 3.5 released last month

kdb+ is a real-time time series database, known in the financial services universe as the fastest tick database on the market. It was first conceived by Arthur Whitney at Morgan Stanley as a prototype, and over the last 35+ years it has grown to add many features. The database makes such aggressive use of the POSIX mmap() function for mapping file chunks into main memory that it has exposed issues with implementations of mmap itself.

Recently, the company now behind kdb+ has also built Kx for DAAS (Data-as-a-Service), which is basically a cloud-based, massively clustered version of kdb+ that works around the curious oddity that kdb+ is effectively single-threaded. For those interested in reading more about kdb+'s unique cloud architecture (as compared to "big data" solutions like Hadoop), the following whitepapers are suggestive guidelines for how the q community thinks about truly "big data" workloads several orders of magnitude faster and larger than most Hadoop data sets:

While I don't suggest these papers are a blueprint for copying or mimicking the DAAS product, they do help the LtU reader imagine a "different world" of data processing than the often-cited Map/Reduce paper and other more mainstream approaches. What is particularly striking is how tiny q.exe (the program that runs kdb+ and provides a CLI for q scripting) is. Language researchers are already looking at provably correct C compilers, and it is not a huge leap to imagine the world soon seeing provably correct real-time time series databases, with kdb+ as an inspiration.

Another curiosity, relevant to us here at LtU, is that kdb+ has its own programming language, q. q is a variant of APL with a special library for statistics. Most "big data" solutions don't have native implementations for weighted average, which is a fairly important and frequently used function in quantitative finance, useful for computing volume weighted average price (VWAP) as well as tilt and weighted spread. q is itself implemented in another language, k. The whole language of each is just a couple lines of (terse) code.
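
For a sense of scale: a volume-weighted average is just sum(price × volume) / sum(volume), which q provides as the built-in wavg; even spelled out in OCaml for illustration it is only a few lines.

    (* Volume-weighted average price: sum(price * volume) / sum(volume). *)
    let vwap (trades : (float * float) list) : float =
      let num, den =
        List.fold_left
          (fun (num, den) (price, volume) ->
            (num +. price *. volume, den +. volume))
          (0.0, 0.0) trades
      in
      num /. den

    let () =
      (* (price, volume) pairs for a handful of trades *)
      let trades = [ (10.0, 100.0); (10.5, 200.0); (9.8, 50.0) ] in
      Printf.printf "VWAP = %.4f\n" (vwap trades)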

Prolog vs mini-Kanren

There's an interesting Q&A on Stack Overflow.

The complexity of abstract machines

I previously wrote about a strand of research by Guy Blelloch on cost semantics for functional languages, which lets us make precise claims about the complexity of functional programs without leaving their usual and comfortable programming model (beta-reduction).

While the complexity behavior of weak reduction strategies, such as call-by-value and call-by-name, is by now relatively well understood, the lambda-calculus has a much richer range of reduction strategies, in particular those that reduce under lambda-abstractions, whose complexity behavior is noticeably more subtle and was, until recently, not well understood. (This has become a practical concern with the rise of proof assistants, which must implement reduction under binders and care a great deal about the complexity of their reduction strategy, since it consumes a lot of time during type- and proof-checking.)

Beniamino Accattoli, who has co-authored a lot of the work in this area, recently published on arXiv a new paper of survey quality; it is a good introduction to this line of work, with pointers to the rest of the literature.

The Complexity of Abstract Machines

Beniamino Accattoli, 2017

The lambda-calculus is a peculiar computational model whose definition does not come with a notion of machine. Unsurprisingly, implementations of the lambda-calculus have been studied for decades. Abstract machines are implementation schemas for fixed evaluation strategies that are a compromise between theory and practice: they are concrete enough to provide a notion of machine and abstract enough to avoid the many intricacies of actual implementations. There is an extensive literature about abstract machines for the lambda-calculus, and yet -- quite mysteriously -- the efficiency of these machines with respect to the strategy that they implement has almost never been studied.

This paper provides an unusual introduction to abstract machines, based on the complexity of their overhead with respect to the length of the implemented strategies. It is conceived to be a tutorial, focusing on the case study of implementing the weak head (call-by-name) strategy, and yet it is an original re-elaboration of known results. Moreover, some of the observations contained here have never appeared in print before.
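
For a flavor of the machines being analyzed, here is a bare-bones Krivine-style machine for the weak head (call-by-name) strategy in OCaml. This is the textbook construction rather than any of the paper's exact machines, whose point is precisely to bound the overhead of machines like this one relative to the number of beta steps of the strategy.

    (* A bare-bones Krivine machine for weak head call-by-name reduction,
       using de Bruijn indices.  State = code, environment, argument stack. *)
    type term =
      | Var of int                       (* de Bruijn index *)
      | Lam of term
      | App of term * term

    type closure = Clo of term * env
    and env = closure list               (* one closure per free index *)

    let rec run (t : term) (e : env) (stack : closure list) : closure =
      match t, stack with
      | Var n, _ ->
          let Clo (t', e') = List.nth e n in
          run t' e' stack                (* variable: enter its closure *)
      | Lam body, c :: rest ->
          run body (c :: e) rest         (* beta step: bind the argument *)
      | Lam _, [] ->
          Clo (t, e)                     (* weak head normal form reached *)
      | App (f, a), _ ->
          run f e (Clo (a, e) :: stack)  (* push the argument, evaluate the head *)

    let () =
      (* (\x. x) (\y. y) reduces to the weak head normal form \y. y *)
      let id = Lam (Var 0) in
      match run (App (id, id)) [] [] with
      | Clo (Lam _, _) -> print_endline "reached a weak head normal form"
      | _ -> print_endline "unexpected"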

Stroustrup's Rule and Layering Over Time

Dave Herman is the voice of the oppressed: syntax is important, contrary to what you have been told!

To illustrate, he discusses what he calls Stroustrup's Rule:

  • For new features, people insist on LOUD explicit syntax.
  • For established features, people want terse notation.