Thursday, July 15, 2010

Going Meta on the Macro

Today I want to talk a little bit about what I've been researching over the summer. For a little context: I've finished the first year of the CS Ph.D. program at my university, and I have mainly focused on formal methods/automated reasoning. My passion is proving properties about software - specifically, I want to make that process easier, more natural, and commonplace. Since I did not learn how to prove theorems with transfinite induction (the most common tool for proofs about software) until grad school - even after getting B.S. degrees in pure mathematics and computer science at a very good university - I don't think the "commonplace" goal is coming soon. Perhaps I will discuss this area and the difficulties it faces in a future post.
At this university, the largest research group - and the most famous - is the programming languages (PL) group. Academia's approach to PL over the past few decades has focused mainly on functional programming languages, as far as I can tell. That was shocking to me, since just about everything I've seen in industry has been imperative. After dealing with proofs, I now understand one reason PL fancies functional languages - they use recursion and immutable values much more often than while loops and repeated variable assignment. This lets you disregard the global state of your program and focus on one function at a time. Why? If your function depends on a variable that x other functions can change, then you must keep track of how that variable changes. Most likely your proof of correctness hinges on that variable satisfying a specific property, such as being a prime number or a normalized vector. Every place that variable changes, you must make sure the property still holds; if it doesn't, you have to reestablish it before your function is called. That's a lot to keep track of, and it's not easy to maintain, especially in a team setting.
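To make that concrete, here is a tiny sketch (in Scheme, the family of languages this post is headed toward; the names and the nonzero-scale invariant are made up for illustration):

    ;; Imperative style: any caller can break the invariant scale != 0,
    ;; so a proof about `normalize` must consider every call to `rescale!`.
    (define scale 2)
    (define (rescale! n) (set! scale n))
    (define (normalize x) (/ x scale))

    ;; Functional style: the invariant is a precondition of this one
    ;; function, checkable without looking at the rest of the program.
    (define (normalize-pure x scale)
      (if (zero? scale)
          (error "scale must be nonzero")
          (/ x scale)))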
So what's cool about functional programming languages other than that fact? Well, I haven't even begun to talk about macros. There are ways of abstracting your program's behavior beyond what a function can do. Say you need the same large initialization sequence executed before some block of code in many different places. With a macro, you can make a new form that wraps your code in all that boilerplate at compile time. That's a simple example. There are others in which a macro does long, complicated analyses on its input code and then outputs optimized or annotated code that should be equivalent in functionality.
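That simple example, sketched (hedged, in Scheme - `with-init`, `open-log`, and friends are hypothetical names, not from any real library):

    ;; A macro that wraps its body in boilerplate at compile time.
    (define-syntax with-init
      (syntax-rules ()
        ((_ body ...)
         (begin
           (open-log)                   ; hypothetical initialization...
           (seed-rng)
           (let ((result (begin body ...)))
             (close-log)                ; ...and teardown
             result)))))

    ;; Every use site gets the full wrapping without repeating it:
    ;; (with-init (run-experiment)) expands to the whole sequence above.

And the expansion had better be equivalent to writing the boilerplate out by hand.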
Ooh, "equivalent" sounds like a theorem-proving word. Can we prove that these analyses find the invariants we need for the correctness of the transform? How about going further - actually proving that the macro form expands to an equivalent expression in all contexts? The answer is (you're going to love this): it depends. It depends on what you consider proof. Certainly you can write functions that describe the semantics of the language you're working in (say, with an embedded operational semantics - oh, but for a termination argument you have to explicitly limit how many evaluation steps are allowed. Yikes.) So wait, you have to embed the language you're working in... in the language you're working in? Why would you want to do that? Gross.
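In case that was too abstract, here is the kind of embedding I mean - a minimal sketch of an evaluator for a toy lambda calculus represented as s-expressions, with the step budget made explicit so the evaluator is a total function (the representation choices here are mine):

    ;; Terms: number | symbol | (lambda (x) e) | (e1 e2); env is an alist.
    (define (ev e env fuel)
      (cond ((zero? fuel) 'timeout)            ; the explicit step budget
            ((number? e) e)
            ((symbol? e) (cdr (assq e env)))   ; variable lookup
            ((eq? (car e) 'lambda) (list 'closure e env))
            (else                              ; application
             (let ((f (ev (car e) env (- fuel 1)))
                   (a (ev (cadr e) env (- fuel 1))))
               (if (or (eq? f 'timeout) (eq? a 'timeout))
                   'timeout
                   (let ((x    (caadr (cadr f)))   ; formal parameter
                         (body (caddr (cadr f)))
                         (cenv (caddr f)))
                     (ev body (cons (cons x a) cenv) (- fuel 1))))))))

    ;; (ev '((lambda (x) x) 5) '() 100) => 5, but only because 100 steps
    ;; sufficed - every theorem about `ev` now has to mention the fuel.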
So what if we want our logic to allow talking about macros inside the logic? Then we don't need that embedding, and we can actually write rewrite rules that simplify macro forms without having to expand them. This is some heavy stuff: now we are talking about quantification over syntax rather than simply over values.

Some vocabulary: while a macro expands, it is a function from a representation of syntax to a representation of syntax; when expansion completes, that representation is reified into actual syntax that can be evaluated.
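Stripped down to s-expressions (real systems pass richer syntax objects that also carry lexical information), the expansion-time view looks like this:

    ;; A macro, mid-expansion, is just a function on syntax representations:
    ;; here, rewriting (when test body ...) into an `if`.
    (define (expand-when form)
      (let ((test (cadr form))
            (body (cddr form)))
        `(if ,test (begin ,@body) #f)))

    ;; (expand-when '(when (> x 0) (display x) x))
    ;;   => (if (> x 0) (begin (display x) x) #f)
    ;; The result is still data; the expander reifies it into evaluable syntax.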

A problem enjoyed (rather, suffered) by many macro systems is the loss of scoping laws: your macro will not have the same meaning in all contexts. This is due to free and bound variable capture. A simple textual-replacement macro system, such as in LISP or C, is bound to change the meaning of your program when you happen to use a variable name that the macro redefines. In LISP, at least, you can check that the variables you introduce are not in the support (the set of free or bound variables) of the input expressions, and fail to expand if they are. This is ugly. Not only does it reveal the internals of a macro (something module abstraction is supposed to hide), but it pits the programmer against the macro - "why can't I use the variable names I want?" There is a solution, though it is not the most palatable either (just the best we've got), called hygienic macro expansion. A hygienic macro expander renames variables as it goes, to make sure variables are visible only in the scope where they are intended to be used. I can think of more ambiguous statements, but that one is surely a nuisance to anyone doing formal reasoning.
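In the same s-expression style as above, here is the capture problem and the LISP-style dodge (a sketch; `gensym` is the usual fresh-name generator):

    ;; A naive `or` macro that introduces a binding for t:
    (define (expand-or2 form)
      (let ((a (cadr form)) (b (caddr form)))
        `(let ((t ,a)) (if t t ,b))))

    ;; Fine:     (expand-or2 '(or2 (f) (g))) => (let ((t (f))) (if t t (g)))
    ;; Captured: (expand-or2 '(or2 (f) t))   => (let ((t (f))) (if t t t))
    ;; The user's t now refers to the macro's t.

    ;; The dodge: pick a fresh name (or check the support and refuse):
    (define (expand-or2/fresh form)
      (let ((a (cadr form)) (b (caddr form)) (t (gensym)))
        `(let ((,t ,a)) (if ,t ,t ,b))))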

Hygienic macros have only recently gotten a formal treatment (see Dave Herman's thesis, A Theory of Typed Hygienic Macros), and since that treatment is set in a decidable type theory, it does not deliver the whole story. It does, however, give us hope for a complete meaning. Suppose one can completely describe how a macro's input will be used to introduce and consume bindings in its expansion. Then we can show: if any two inputs with the same binding structure produce expansions with the same binding structure (more precisely, alpha-equivalent expansions), then the macro expander is hygienic. Nice. That takes care of the all-contexts problem. Now how do we express this without long-winded arguments? My idea is to have a simple base that requires long-windedness to begin with, but (haha!) use macros to conceal the detail.
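To pin down "alpha-equivalent", here is a sketch of the comparison on the toy terms from before: two terms are equal up to a consistent renaming of bound variables.

    ;; env1/env2 map bound variables to shared binder positions (levels).
    (define (alpha=? e1 e2 env1 env2)
      (cond ((and (symbol? e1) (symbol? e2))
             (let ((b1 (assq e1 env1)) (b2 (assq e2 env2)))
               (if (or b1 b2)
                   (and b1 b2 (= (cdr b1) (cdr b2)))  ; same binder
                   (eq? e1 e2))))                      ; both free: identical
            ((and (pair? e1) (pair? e2)
                  (eq? (car e1) 'lambda) (eq? (car e2) 'lambda))
             (let ((level (length env1)))              ; next fresh position
               (alpha=? (caddr e1) (caddr e2)
                        (cons (cons (caadr e1) level) env1)
                        (cons (cons (caadr e2) level) env2))))
            ((and (pair? e1) (pair? e2))
             (and (alpha=? (car e1) (car e2) env1 env2)
                  (alpha=? (cdr e1) (cdr e2) env1 env2)))
            (else (equal? e1 e2))))

    ;; (alpha=? '(lambda (x) x) '(lambda (y) y) '() '()) => #t

Note that `alpha=?` compares representations of terms - remember that.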

Notice that the hygiene property cannot be defined inside the logic, since any function describing binding structure would be manipulating only a representation of syntax, not the actual syntax the logic can reason about. Wait wait wait... we are now introducing into the logic a means of meta-reasoning at the non-meta level. That shouldn't work, right? Gödel did some work on this, I think. Well, there are theorem provers out there that allow meta-reasoning, so let's look at how they do it.

The key idea is that you don't collapse the meta-theory into your theory; you keep the two separate and start at the meta level to begin with. This is what is meant by working in a logical framework rather than in a specific logic. Thus we do have an "embedding" of the language we're working in, but it is already given, since it is defined in the logical framework. A bit of a hack, but logically necessary. Then what encoding does one choose for syntax? If you have any familiarity with the lambda calculus, you know that beta-reduction is rather involved (in call-by-value semantics), since it must rename bound variables. In hygienic macro systems, lexical information about a term must be stored somewhere, and for a practical language, so must source-code correlation information.
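Here is the bound-variable renaming that makes substitution (the engine of beta-reduction) involved, again on the toy terms; this sketch renames at every binder for simplicity and assumes a `gensym`:

    ;; e[x := v] with capture avoidance.
    ;; Terms: symbol | (lambda (x) e) | (e1 e2).
    (define (subst e x v)
      (cond ((symbol? e) (if (eq? e x) v e))
            ((eq? (car e) 'lambda)
             (let ((y (caadr e)) (body (caddr e)))
               (if (eq? y x)
                   e                                  ; x is shadowed: stop
                   (let ((y* (gensym)))               ; rename to stay safe
                     `(lambda (,y*)
                        ,(subst (subst body y y*) x v))))))
            (else (map (lambda (s) (subst s x v)) e))))

    ;; Without the renaming, reducing ((lambda (x) (lambda (y) x)) y)
    ;; would yield (lambda (y) y) - the free y gets captured. With it:
    ;; (subst '(lambda (y) x) 'x 'y) => (lambda (g123) y), g123 fresh.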

Yikes again. Source-code correlation information is purely for debugging purposes and thus logically uninteresting. We can abstract over it by defining an abstract datatype that satisfies the properties we want of syntax, and then letting an arbitrarily complicated implementation prove that it meets the specification. One way researchers have described binding in syntax is with a genuinely higher-order representation: instead of representing bindings symbolically everywhere like LISP does, use a real lambda abstraction. This is called Higher-Order Abstract Syntax (HOAS), and it has been getting lots of attention for the past, oh, 15 years. Why then aren't we sipping piña coladas while our informal, meta-reasoning-style proofs whiz through formal systems?
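The idea in one sketch: let the host language's own lambdas do the binding (here in Scheme, where HOAS needs no types; the tags are my own):

    ;; A binder holds a *real* procedure; no variable names in the syntax.
    (define (lam f)     (list 'lam f))       ; f : term -> term
    (define (app e1 e2) (list 'app e1 e2))

    ;; Evaluation never renames anything - host closures handle scope.
    (define (ev-hoas e)
      (if (eq? (car e) 'app)
          (let ((f (ev-hoas (cadr e)))
                (a (ev-hoas (caddr e))))
            ((cadr f) a))                    ; apply the embedded function
          e))

    ;; The identity applied to another identity, with zero bookkeeping:
    ;; (ev-hoas (app (lam (lambda (x) x)) (lam (lambda (y) y))))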

Remember my description of theorem proving in the beginning, and how induction is vital? Well, it turns out that in general, inductive proof over HOAS is... impossible. The reason is that its general representation is not inductively definable: a binder is represented by a function from terms to terms (variable references can be arbitrarily far down the syntax tree from the binding point), and that function space puts the syntax type in a negative position - a crippling technicality called "negative occurrence" in type theory. Though IANAH (I am not a historian), the first attempt I know of to restrict syntax for meta-reasoning was the Edinburgh LF (logical framework), actually implemented at CMU as "Twelf", based on (competes with?) a logic called FOλΔN ("fold-n") developed by some guys at École Polytechnique. There is additional work in Isabelle/HOL that builds off Twelf's idea, but that construction is a big stack of confused mess if you ask me. At the bottom of the stack is Isabelle's logical framework, then classical higher-order logic axioms, then some concrete representation of syntax, then the "specification logic," and then the "object logic." These layers are abstractly what needs to exist, but everything is logically defined, so execution isn't given (you can't just build on top of Common LISP the way ACL2 does). There is quite a bit of research into extracting programs from specifications, but what if I want programs AS specifications? As far as I know, there is no real programming-language basis for these systems that would give executable specifications and reasoning about actual runnable programs, rather than their (trusted) translation into some other logic.
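You can see the trouble by trying to write even a size function over the HOAS terms above: to look under a binder you must apply the embedded function to something (a sketch; the 'var placeholder is my own device, and the trick only "works" because Scheme is untyped):

    (define (size e)
      (cond ((eq? (car e) 'app)
             (+ 1 (size (cadr e)) (size (caddr e))))
            ((eq? (car e) 'lam)
             (+ 1 (size ((cadr e) (list 'var)))))  ; must invent an argument
            (else 1)))                             ; the placeholder

    ;; In a typed theory the problem surfaces sooner: lam's argument has
    ;; type (term -> term), putting term in a negative position, so the
    ;; datatype gets no induction principle at all.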

Perhaps it's my history with ACL2 - it is a programming language, a logic, and a theorem prover - but is that too much to ask for? This is not a solved problem. I'd like to chip out a big piece of this mystery, though. Meta-reasoning about (hygienic) macros must be a possibility. Since meta-reasoning itself has not yet been well executed, there is certainly a lot to do [collection of meta-reasoning problems for mechanized systems to tackle]. There is a chance, though, that attacking the problem from the macro setting might just simplify (and generalize) the problem space.