Template Haskell Tutorial

by Johannes Ahlmann
(this content is available under a simple permissive license)

Template Haskell (TH) is an extension for GHC that allows compile-time meta-programming in Haskell. A meta-program is a program that operates on other programs, such as a compiler or a documentation generator. In TH both the meta-language and the object language are Haskell, and it offers the possibility of representing Haskell code as an Abstract Syntax Tree (AST), which can then be manipulated like an ordinary data structure.

The TH code is executed at compile-time and is incrementally type checked while substitutions are taking place. This allows type-safe manipulation and creation of Haskell code at compile-time, which enables, among other things, compile-time code generation and program transformation.

Online documentation available for Template Haskell is pretty sparse, but a good overview of TH can be gathered from the original TH paper, a later report, the original authors' notes about version 2 and of course the Haddock documentation. There is also the GHC users guide, and various bits on the TemplateHaskell wiki page.

In order to gain the most from this tutorial you should have a minimal understanding of monads and the IO monad and should be fairly confident in Haskell.

Rather than being an end-all tutorial that will answer any question you'll ever have, this is a collection of more or less useful examples with some explanations and annotations. For a more structured and detailed approach to TH you might consider some of the above documentation.


Version Compatibility Note: The code examples in this tutorial work with GHC 6.4, but earlier versions of Template Haskell have differing semantics and upcoming versions will very likely see major changes too.

You will need to use the option -fth when compiling any of the following examples. If an example should unexpectedly not run, try adding "import Language.Haskell.TH.Syntax" :)

Contents

  1. Introduction
    1. Abstract Syntax Tree
    2. The Quasi Monad
  2. Quasi-Quotes
  3. Reification
  4. Splices
  5. Restrictions
  6. Compile-Time IO
  7. A Simple Code Walker
  8. Recursive Unlooping
  9. Auto-Generated Deconstructors
  10. Conclusion

1. Introduction

The central point of TH is enabling the user to convert between concrete Haskell syntax and an Abstract Syntax Tree that can be read and manipulated from Haskell. By manipulating this tree and converting it back into a compiler-internal representation, it is possible to perform arbitrary transformations on Haskell code.

1.1. Abstract Syntax Tree

An Abstract Syntax Tree is a structured representation of source code. For example the source code
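    -- a simple conditional expression
    if condition then consequent else alternative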

would yield a CondE node, which represents a Conditional Expression with three child nodes: the condition, the consequent and the alternative.

In our case condition, consequent and alternative are variables, but in a more complex example they might be expressions themselves and could have subtrees of their own. In Template Haskell such a tree is represented as a nested data structure. The top-level node types of such an AST are Expressions (Exp), Declarations (Dec), Patterns (Pat) and Types (Type).
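A rough, heavily abbreviated sketch of these types (only a handful of the real constructors are shown here):

    data Exp = VarE Name            -- a variable
             | LitE Lit             -- a literal
             | AppE Exp Exp         -- function application
             | CondE Exp Exp Exp    -- an if-then-else expression
             -- ...and many more constructors; Dec, Pat and Type
             -- are built analogously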

where CondE represents an expression and is therefore of type Exp.

You might want to have a quick look at the documentation of the Exp data type at this point, as its constructors are used heavily when writing TH code.

1.2. The Quasi Monad

The Q or Quasi monad is at the heart of Template Haskell and pretty much all TH operations are performed inside of it. It is essentially an extended IO monad that allows generation of unique names, error reporting, querying for binding types and access to the compiler's internal program representation. Tree nodes inside the monad have type synonyms ending in Q, yielding ExpQ, DecQ, PatQ, etc.

Conversion between the Q monad and the IO monad is possible to some extent using the runQ function, which makes a given AST inside the Q monad available to the IO monad, e.g. for printing. Conversely, the function runIO enables the user to call "some" IO functions from inside the Q monad. To print an AST one can either use the show function, which produces a literal representation of the syntax tree, or the pprint function from Language.Haskell.TH.Syntax to "pretty print" an AST as actual Haskell code.
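A minimal version of the printAST helper used below might look like this (the name comes from the text; the body is a straightforward reconstruction):

    import Language.Haskell.TH

    -- print the literal (Show) representation of an AST
    printAST :: ExpQ -> IO ()
    printAST ast = runQ ast >>= print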

The function printAST is passed an AST of type ExpQ (i.e. an Expression inside the Q monad), which is then transferred into the IO monad by the function runQ and finally printed.

2. Quasi-Quotes

Obviously writing complex code in AST form manually is highly impractical, and for this reason Quasi-Quotes were added to TH as syntactic sugar. They look like matching square brackets with pipes, [| |], and convert the contained code into the corresponding AST of type ExpQ.

Apart from being much easier to handle, the Quasi-Quote notation also takes care of some housekeeping that the explicit notation does not. While an explicitly constructed AST neither respects the lexical scope of the surrounding code nor is immediately statically type-checked, Quasi-Quotes preserve both properties. Whenever feasible you should therefore use Quasi-Quotes, although there are instances when they are not flexible enough and small portions of code have to be written "manually".

Note though that the type safety of the resulting program is checked at all times and cannot be bypassed by any TH feature.
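For example, in GHCi (the exact output may differ slightly between GHC versions):

    > runQ [| 5 |]
    LitE (IntegerL 5)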

This is pretty much the simplest possible AST, consisting of a Literal Expression, namely the Integer Literal "5". Note that the Quasi-Quote returns an Expression inside the Q monad.
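Another small session (the literal 4 is arbitrary, and the printed module prefixes depend on the GHC version):

    > runQ [| subtract 4 |]
    AppE (VarE GHC.Num.subtract) (LitE (IntegerL 4))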

VarE GHC.Num.subtract is a Variable Expression bound to a function. Applying a value to this curried binary function yields a new, unary function. Function bindings are represented as VarE nodes and are therefore treated no differently from other variable bindings.

Note that all Application Expressions are represented in unary form in the AST, meaning that internally functions are curried, i.e. they take a single parameter and return a function handling the remaining arguments.

A Lambda Expression (LamE) takes two parameters: a list of patterns to match its arguments against, and a body expression.
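A session illustrating the example discussed below (output re-indented for readability; the unique names and qualified operators vary):

    > runQ [| (\x -> x / 2) 6 |]
    AppE (LamE [VarP x_0]
               (InfixE (Just (VarE x_0)) (VarE GHC.Real./) (Just (LitE (IntegerL 2)))))
         (LitE (IntegerL 6))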

A Variable Pattern matches any single argument and binds its content to the given name. The above code therefore builds a Lambda Expression that takes a value and divides it by the literal "2", and then applies this function to the literal "6".

In addition to simple expressions, TH also allows Declarations [d| |] and Patterns [p| |] to be produced by Quasi-Quotes. If for example we wanted to splice a few function or type declarations at the current position, we could simply do:
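(A sketch; the helper name mkDecs and the declarations themselves are only illustrative.)

    -- defined in a separate module
    mkDecs :: Q [Dec]
    mkDecs = [d| answer :: Integer; answer = 42 |]

    -- ...and spliced in as a top-level declaration elsewhere:
    -- $(mkDecs)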

The resulting type of a Quasi-Quote is ExpQ for ordinary expression quotes, Q [Dec] for declaration quotes and PatQ for pattern quotes.

3. Reification

The function reify allows lookup of type information on any imported or local symbol. Using it, it is possible to find the fixity of a function, the types of its arguments or the internals of a data type:
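(A sketch; the helper name showInfo is illustrative, and any imported name can be inspected.)

    showInfo :: Name -> ExpQ
    showInfo name = do
        info <- reify name
        stringE (pprint info)

    -- spliced in from another module, for example:
    -- putStrLn $(showInfo 'foldr)     -- one quote resolves a function binding
    -- putStrLn $(showInfo ''Maybe)    -- two quotes resolve a type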

reify takes a symbol name and returns an Info structure from which typing information can be extracted. Obviously in the examples above we have simply converted this Info structure to a string with TH's pretty printing function, but for more serious applications the real typing data would have to be extracted from it.

As you may have noticed we introduced a new syntax in our reification examples, namely the single-quote for symbol name resolution. One such quote should be used to resolve a variable or function binding, while two quotes must be used for type bindings.

For more complex reifications see section "Auto-Generated Deconstructors".

4. Splices

A splice in TH looks like a dollar sign in front of an atomic expression, which can either be a name, $foo, or a parenthesized expression, $(function foo). It evaluates its content (usually of type ExpQ) at compile-time, converts the resulting AST into Haskell code and inserts it into the program at the location of its invocation.
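An example along the lines of what is described below (the name identityExp is illustrative):

    identityExp :: ExpQ
    identityExp = do
        n <- newName "n"
        lamE [varP n] (varE n)

    -- $(identityExp) expands to something like  \n_0 -> n_0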

The above creates a lambda expression which, given a value "n", will evaluate to the expression consisting only of "n". The function newName provides a fresh, unique name and thus prevents capturing a variable of the same name from an outer scope.

In case you were wondering why the constructors in the example above are spelled with lowercase initial letters: TH provides helper functions for all constructors that return their result already in the Q monad. This spares us a lot of lifting and returning and makes manual AST code much more readable.
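A reconstruction of the example discussed below (the name plus4 is illustrative):

    plus4 :: ExpQ
    plus4 = infixE Nothing (varE '(+)) (Just (litE (integerL 4)))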

The above code creates an addition infix expression with its first argument missing and its second argument bound to the integer literal "4".

infixE has the type infixE :: Maybe ExpQ -> ExpQ -> Maybe ExpQ -> ExpQ. The Maybe type is used here to represent sections like (+4). Any of those Maybe parameters that is set to Nothing will need to be bound to a value later on.
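Again a reconstruction (the name addTwo is illustrative):

    addTwo :: ExpQ
    addTwo = do
        n <- newName "n"
        lamE [varP n] (infixE (Just (varE n)) (varE '(+)) (Just (litE (integerL 2))))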

The above creates a lambda expression that takes an argument "n" and evaluates to the infix expression of applying the addition operator to the argument and the integer literal "2".

Splices can also be nested, which in conjunction with Quasi-Quotes can spare us a lot of typing:
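(The manually built lambda from above then shrinks to a one-liner; the name is again illustrative.)

    addTwo' :: ExpQ
    addTwo' = [| \n -> n + $(litE (integerL 2)) |]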

5. Restrictions

There are a number of restrictions in TH that can make life somewhat more difficult, but knowing about them (and their origins) might alleviate some of the pain :) The most important one, which we will run into again later, is that code used inside a top-level splice must already be compiled, i.e. it has to be defined in a different module than the one containing the splice.

6. Compile-Time IO

It is possible in TH to execute IO operations at compile-time which might even directly influence the resulting object code. One example is the Automatic Skeletons paper, where an optimised algorithm for the current environment was chosen at compile-time and then spliced into the code.
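A sketch of the example described below (askForStr is named in the text; everything else is a reconstruction):

    -- this module has to be compiled separately from the one containing the splice
    module AskStr where

    import Language.Haskell.TH
    import Language.Haskell.TH.Syntax (lift)

    askForStr :: IO String
    askForStr = do
        putStrLn "Please enter a string:"
        getLine

    compileTimeStr :: ExpQ
    compileTimeStr = do
        str <- runIO askForStr
        lift str

    -- at the splice site, in another module:
    -- main = putStrLn $(compileTimeStr)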

The code will prompt for an input at compile-time and subsequently read a line from stdin. The resulting string will then be lifted to an Expression in the Q monad and spliced into the code as an argument to putStrLn.

The function askForStr is a simple function in the IO monad, while TH's runIO allows the execution of IO operations inside the Q monad.

It is easy to see that this feature offers great possibilities, like customizing an executable to the build environment, among endless other options. But the execution of unrestrained IO by the compiler without the user's explicit consent can also present a serious security issue.

7. A Simple Code Walker

Let's say we knew about a domain-specific code transformation that would drastically improve our performance. TH allows us to write the unoptimized code and apply the transformation transparently to the resulting AST. For the sake of simplicity we will choose as our transformation the interchanging of all plus and minus operators in a given piece of code.

To visit each node of the AST we need dispatch code for all types of tree nodes, which will allow us to recursively descend, transform and reassemble the tree bottom-up. As the complete AST has nodes of different types, a clean solution would involve writing a type-class based tree walker which applies a given transformation function to all subtrees. This is exactly what was done in the Loop Unrolling paper, but for our little example we'll deal exclusively with function applications in order to process simple arithmetic expressions.

The AST of a simple arithmetic expression in Haskell is either an Infix Expression or the Application of a curried function to a value. Our codewalker should therefore accept Infix and Application Expressions and recurse over (some of) their subtrees. Whenever it reaches a Variable binding which corresponds to either addition or subtraction, it will substitute the binding by the opposite operation.

Note that our codewalker has to be defined in a separate module from the one it will be spliced into, because of the restriction on top-level splices mentioned above!
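A reconstruction of that codewalker (codeWalker is the name used in the text; the module name is illustrative):

    module CodeWalker where

    import Language.Haskell.TH

    -- swap the addition and subtraction operators in an expression tree
    codeWalker :: Exp -> Exp
    codeWalker (InfixE l op r) = InfixE (fmap codeWalker l) (codeWalker op) (fmap codeWalker r)
    codeWalker (AppE f x)      = AppE (codeWalker f) (codeWalker x)
    codeWalker (VarE name)
        | name == '(+)         = VarE '(-)
        | name == '(-)         = VarE '(+)
    codeWalker e               = e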

Given an arithmetic expression, our codewalker will traverse all its nodes and leaves, substitute the addition / subtraction operators, leave anything else intact and return the transformed AST.
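Used from another module, it might look like this (printCode is named in the text; the session output is approximate):

    import Control.Monad (liftM)
    import Language.Haskell.TH
    import CodeWalker

    printCode :: ExpQ -> IO ()
    printCode ast = runQ ast >>= putStrLn . pprint

    -- > printCode (liftM codeWalker [| (4 + 2) - 1 |])
    -- (4 - 2) + 1                   (with fully qualified operators in reality)
    --
    -- > $(liftM codeWalker [| (4 + 2) - 1 |])
    -- 3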

Applying the lifted function to a Quasi-Quote will return an Expression in the Q monad with its additive operators reversed, as can be seen in the result of printCode. The liftM function lifts our codeWalker into the Q monad and therefore lets it accept and return values of type ExpQ.

We did it! We have transformed arbitrary code at compile time, and all that in only a few lines of simple code! Of course, more elaborate transformations require a lot more supporting code, but they will basically follow the same principles as our solution.

A topic we have not touched on in our codewalker is handling the type information of tree nodes. If, for example, we wanted to transform addition operations on Integers only, we'd have to use reify to figure out the types of the arguments. For this we'd need to look a little closer at the type representation in the Template Haskell AST. See section "Auto-Generated Deconstructors" for an example.

8. Recursive Unlooping

OK, so now we can substitute single nodes of an AST, which is already pretty neat. But for serious transformations we will also need to create whole new AST subtrees... One well-known application of algebraic transformations is the expansion of loops and recursion. As an example we will try to unroll the computation of the exponentiation function n^e.

BTW, the easiest and fastest approach would obviously be to evaluate the result at compile-time and splice that single integer into the code. But then we'd miss all the fun :)

We want to write a function that, given a base and an exponent, will unroll the recursive calculation, producing code containing only the base and multiplications:
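(A reconstruction; aExp is the name used in the text.)

    -- again defined in its own module, so that it can be spliced in elsewhere
    aExp :: Integer -> Integer -> ExpQ
    aExp n 0 = [| 1 |]
    aExp n e = [| n * $(aExp n (e - 1)) |]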

That seemed easy enough... Apart from a minimal syntactic overhead for Quasi-Quotes and Splicing this looks pretty much like a function one would have written in pure Haskell to compute exponentiation. It takes as parameters a base and an exponent and returns an Expression AST in the Q monad.

The Splice inside the Quasi-Quotes works by inserting the result of the recursive call inside the code returned by aExp, thus creating a chain of multiplications:
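(Output shown in simplified form; the real pretty-printer fully qualifies the * operator.)

    > printCode (aExp 2 5)
    2 * (2 * (2 * (2 * (2 * 1))))

    > $(aExp 2 5)
    32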

Yes, that worked out nicely! With a minimal amount of effort we've just unrolled our first recursive function, potentially gaining a lot of performance... We could simply content ourselves with this, but there's more optimization to be done!

There's this neat algorithm for speeding up exponentiation, in which you calculate common subexpressions only once and thus reduce the total number of multiplications:
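(A reconstruction; the function name bExp is illustrative, while "tmp" is the name used in the text.)

    bExp :: Integer -> Integer -> ExpQ
    bExp n 0 = [| 1 |]
    bExp n 1 = [| n |]
    bExp n e
        | even e    = [| $tmp * $tmp |]
        | otherwise = [| n * ($tmp * $tmp) |]
        where tmp = bExp n (e `div` 2)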

We have specialized some base cases for simplification reasons (i.e. "n" instead of "n*1") and have calculated the common subexpression "tmp" only once, splicing its result into the Quasi-Quote as before.

But, lo and behold, we haven't actually optimized anything - the number of multiplications is still proportional to e. It turns out that we have indeed extracted common subexpressions, but instead of caching runtime values, we have cached the compile-time AST subtrees and spliced them wholesale into the resulting code...

So, back to the drawing board! We need to cache common expressions at runtime, in order to reduce the number of multiplications from e to roughly log2 e. To achieve this, we have to find a way of temporarily saving values at runtime... And what better way is there than good old let:
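(A reconstruction; the name cExp is illustrative.)

    cExp :: Integer -> Integer -> ExpQ
    cExp n 0 = [| 1 |]
    cExp n 1 = [| n |]
    cExp n e
        | even e    = [| let tmp = $(cExp n (e `div` 2)) in tmp * tmp |]
        | otherwise = [| let tmp = $(cExp n (e `div` 2)) in n * (tmp * tmp) |]

    -- $(cExp 2 5) expands, modulo the fresh names, to roughly
    --   let tmp = (let tmp' = 2 in tmp' * tmp') in 2 * (tmp * tmp)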

Beautiful! We have produced nicely readable code, which also is exceedingly fast and only uses ceil (log2 e) multiplications!

Again, we might just stop here and live happily ever after...

No, I didn't think so! What happens if we have lots of exponentiations with the same exponent but differing bases? Explicit splicing in otherwise plain code might get ugly and we can't pass arguments at runtime to a function that was run at compile-time, right? Well, there might be a way...

Given only an exponent we could return a unary function that takes a base and multiplies just the right number of times:
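(A reconstruction; the name dExp is illustrative.)

    dExp :: Integer -> ExpQ
    dExp 0 = [| \n -> 1 |]
    dExp e = [| \n -> n * $(dExp (e - 1)) n |]

    -- > $(dExp 5) 2
    -- 32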

That worked like a charm, although the resulting code is rather ugly; passing the same n from one lambda function to the next might not result in the most readable code... But then again, this code is not meant to be read! Technically this is a fine solution, but aesthetically we'd like to find a nicer one, right?

So, maybe we can pass an argument in only once and then reuse it via lexical scoping:
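(A reconstruction; the names eExp and powFun are illustrative.)

    -- the recursion now receives the *expression* for the base as a parameter
    eExp :: ExpQ -> Integer -> ExpQ
    eExp n 0 = [| 1 |]
    eExp n e = [| $n * $(eExp n (e - 1)) |]

    powFun :: Integer -> ExpQ
    powFun e = do
        n <- newName "n"
        lamE [varP n] (eExp (varE n) e)

    -- printCode (powFun 3) shows something like  \n_0 -> n_0 * (n_0 * (n_0 * 1))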

By passing the Variable Expression n to our meta-code, the object code can use the variable (which is, after all, in its lexical scope) directly. Without needing recursive lambda expressions any more, this produces beautiful, presumably fast code with little effort. A nice example of how much one can do with Template Haskell!

As a sidenote: instead of passing the Variable Expression n to the recursing meta-code, it would also have been possible to (ab)use TH's possibility of dynamic scoping:
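(A reconstruction; dyn comes from Language.Haskell.TH.Lib and simply builds a variable expression from a plain string, without any scope checking.)

    fExp :: Integer -> ExpQ
    fExp 0 = [| 1 |]
    fExp e = [| $(dyn "n") * $(fExp (e - 1)) |]

    powFun' :: Integer -> ExpQ
    powFun' e = lamE [varP (mkName "n")] (fExp e)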

But apart from saving a few keystrokes this seems like a potentially fragile solution, and using it unnecessarily in more complex functions is asking for trouble!

9. Auto-Generated Deconstructors

At times there's lots of code to be written that dispatches by type and essentially reproduces a data type's structure. If we wanted to traverse the whole TH AST, for example, we'd have to write type-class based code to recurse over the heterogeneous tree and apply our transformation to all tree nodes.

Wouldn't it be nice if we could query Haskell for the structure of a given type and let TH automatically generate dispatching code for it? Well, that's exactly what we're about to do:
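(A sketch; the helper name conInfo is illustrative. Note that the exact number of fields of DataD differs between template-haskell versions, so the wildcards may need adjusting.)

    -- list the constructors of a plain data type together with their arities
    conInfo :: Name -> Q [(String, Int)]
    conInfo typeName = do
        TyConI (DataD _ _ _ _ cons _) <- reify typeName
        return [ (nameBase name, length args) | NormalC name args <- cons ]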

TyConI is an Information structure for Type Constructors like "data X = A | B". It contains, for example, a Data Declaration DataD, which in turn holds among other things the type's data constructors.

NormalC is the most common form of data constructor, comprising the constructor's name and a list of argument types.

Obviously this is only a partial match, but for your standard data type it works just fine.

Getting an overview of this jungle of data structures is quite taxing, but after finishing this example we'll have a meta-tool for determining which constructor was used to create a given value. So in a sense we're working on meta-meta-code :)
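Calling it looks like this (reify can only run inside the compiler, so we go through a splice and stringE; output as printed in a GHCi session with the module loaded):

    > putStrLn $(stringE . show =<< conInfo ''Maybe)
    [("Nothing",0),("Just",1)]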

Calling our little function with the type name of the Maybe type constructor yields the names of its two constructors along with the number of their arguments. Just as we set out to do!

Getting type information for arbitrary symbols in any module is obviously a nice feature, but couldn't we possibly generate code from our newly gained knowledge? Let's see if we can't create pattern matching code from a constructor's name and the number of its arguments.

As we really have no idea what a case expression or a complicated pattern match looks like in AST form, we'll try to observe them in the wild with our (by now) trusted tools GHCi and TH:
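(Output re-indented and slightly simplified; the module prefixes and the exact fields of ConP differ between GHC versions.)

    > runQ [| \x -> case x of Just _ -> "Just"; Nothing -> "Nothing" |]
    LamE [VarP x_0]
         (CaseE (VarE x_0)
                [ Match (ConP GHC.Maybe.Just [WildP]) (NormalB (LitE (StringL "Just"))) []
                , Match (ConP GHC.Maybe.Nothing [])   (NormalB (LitE (StringL "Nothing"))) [] ])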

This definitely helps us along! We'll just have to reproduce such a Match line with a given name and number of arguments and we'll be one step closer to automagic deconstruction:
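(A reconstruction; mkPattern, mkLambda and consMatcher are the names used in the text, their bodies are sketches, and as before the shape of DataD depends on the template-haskell version.)

    -- one case alternative per constructor:  C _ _ ... _ -> "C"
    mkPattern :: (Name, Int) -> MatchQ
    mkPattern (name, arity) =
        match (conP name (replicate arity wildP))
              (normalB (stringE (nameBase name)))
              []

    -- wrap a list of such alternatives into  \x -> case x of ...
    mkLambda :: [MatchQ] -> ExpQ
    mkLambda matches = do
        x <- newName "x"
        lamE [varP x] (caseE (varE x) matches)

    -- glue the two together for a given type name
    consMatcher :: Name -> ExpQ
    consMatcher typeName = do
        TyConI (DataD _ _ _ _ cons _) <- reify typeName
        mkLambda [ mkPattern (name, length args) | NormalC name args <- cons ]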

By now the above code should be pretty understandable. The code of mkPattern and mkLambda is mostly copied from our GHCi session, filled in with a list of wildcard Patterns (wildP) to match against the constructor's arguments. The function nameBase extracts the string name of a binding from a Name.

While mkPattern generates a wildcard match for a single constructor, mkLambda takes a list of such matches as its argument and constructs a lambda function which uses a case expression to dispatch on the different possible constructors. All consMatcher does is glue those two functions together to return a full AST expression for a given type name.
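For example, spliced in from another module or a GHCi session:

    > $(consMatcher ''Maybe) (Just 5)
    "Just"
    > $(consMatcher ''Maybe) Nothing
    "Nothing"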

As was to be expected, our function works like a charm on the first try. It took apart the Maybe type, gathered information about all available constructors and built a function that returns the constructor's name when matched. Notably it will now work for (almost) any data type with simple constructors.

The possibilities of use and abuse are endless :D

10. Conclusion

This concludes our tour of Template Haskell. I hope you enjoyed this overview of its current features. Some more elaborate examples can be found in the papers listed on the TH site as well as in the few applications using TH, but the bleak documentation situation is a real obstacle on one's way to TH gurudom. Still, with the idioms used in this tutorial you should already be pretty well prepared for all but the most complicated meta-programming tasks.

So, write some meta programs and share them on the magnificent #haskell channel!

