## Saturday, June 8, 2013

### Abstracting Abstract Machines in Maude

I've been helping a colleague with his research in analyzing languages with higher-order temporal contracts. While doing this, we ditched Promela (the language of the explicit-state model checker SPIN) in favor of the far more expressive rewriting engine Maude, which just so happens to have a pretty good LTL model checker built in.

This was my first time using Maude. It's a quirky little language, far different from PLT Redex, my semantics engineering tool of choice. For one, it is not an embedded DSL. Redex allows me to escape to Racket whenever I need to do some more complicated computation that I don't want to model using reduction relations. Maude is strictly a term rewriter, and thus certain tasks are tricky. This post is meant to be a tutorial for those of you who want to try Maude for prototyping analyses, in case its performance justifies the loss in expressiveness for you.

My intention is not to teach you all of Maude, because that would be silly. I will point out some pitfalls I fell into while getting AAM with store widening to work. We'll just model the untyped lambda calculus for brevity.

Gotcha #1: if you want to define a type and get a list of that type in the same module, you can't. The reason is that you need to prove that whatever type you want a list of fits into the theory that the LIST "functional" module expects, which just happens to be the trivial theory. This proof must be given at the top level using a "view," and cannot be nested within the defining module.

Workaround: Since type declarations and the definitions of their variants don't have to be in the same module, we can have a stub module ToDefine that declares all the types we want to have views of.

```
mod ToDefine is
  pr STRING .
  sorts VAR Addr Time Expr Value Storable Point State StateStamp Frame Kont HeapDelta .
  subsorts VAR < String .
  op mt : -> Kont [ctor] .
  op kont : Frame Addr -> Kont [ctor] .
  subsorts Kont < Value < Storable .
endm
```

I've thrown in the other types (sorts) that we'll need later. This subsort thing is another funky feature of Maude: you can monkey-patch types by just declaring that something else belongs to them. The reason we order Kont below Value, rather than making both direct subsorts of Storable, is that there should be a least sort for each term, in order to have unambiguous parsing (yes, parsing is type-directed). If you write everything in prefix form this doesn't matter, but the mix-fix notation Maude offers can get confused. A lot of the time, if your program doesn't typecheck, you get an unhelpful error saying it has no parse. The op keyword declares an operation, and the ctor attribute means we should treat it like a constructor of that type. mt is a constant, so we don't have to write it as mt().

The "pr" thing is short for "protecting," which is like a require rather than an include: it protects the equations/rules in the imported module from this monkey-patching business.

The boilerplate to show that all these sorts fit the TRIV (trivial) theory follows. Each view is named the same as its sort purely for notational convenience, so we can say Set{Value} instead of Set{ValueView}.

```
view VAR from TRIV to ToDefine is sort Elt to VAR . endv
view Addr from TRIV to ToDefine is sort Elt to Addr . endv
view Expr from TRIV to ToDefine is sort Elt to Expr . endv
view Value from TRIV to ToDefine is sort Elt to Value . endv
view Storable from TRIV to ToDefine is sort Elt to Storable . endv
view Point from TRIV to ToDefine is sort Elt to Point . endv
view State from TRIV to ToDefine is sort Elt to State . endv
view StateStamp from TRIV to ToDefine is sort Elt to StateStamp . endv
```

The way to alias instantiations of types, say a map from variables to addresses, is to define an entirely new module and use some funky import renaming syntax:

```
mod Env is
  pr ToDefine .
  pr MAP{VAR, Addr} * (sort Map{VAR, Addr} to Env) .
endm

mod AStorable is
  pr ToDefine .
  pr SET{Storable} * (sort Set{Storable} to AStorable) .
endm
```
And since we want to have a map from addresses to sets of storable entities (Abstract storables, or AStorable), we again have to do the view boilerplate:

```
view AStorable from TRIV to AStorable is sort Elt to AStorable . endv
```

Continuing on, we write our AST constructors (expressions are labeled so we can distinguish them for continuations' allocated addresses):
```
mod Syntax is
  pr Env .
  pr AStorable .
  pr INT .
  pr MAP{Addr, AStorable} * (sort Map{Addr, AStorable} to Heap) .
  op Var : Int VAR -> Expr [ctor] .
  op Lam : Int VAR Expr -> Expr [ctor] .
  op App : Int Expr Expr -> Expr [ctor] .
  op CloV : VAR Expr Env -> Value [ctor] .
endm
```

I present the semantics a bit differently from the original AAM papers, to make store-widening easier and more efficient. I separate changing the store into its own step by using an "apply this store change" state. This allows incremental updates to the shared store while stepping all the states in the worklist. Another difference is that I don't nondeterministically step to all values of a variable, since that is wasteful in both time and space - the precision gain is non-existent. The meta-functions alloc and tick from the paper are given the entire state to use, which introduces a circularity if you try to parameterize the semantics by a theory containing Addr, Time, alloc and tick, because states are defined by the semantics being instantiated. Oops. I've instead given these functions much less information, but it ends up being enough for commonly used abstract addressing schemes.

Gotcha #2: For a reason I don't fully understand, functional modules are not allowed to define rewrite rules. That means we couldn't parameterize the semantics by the AAM meta-functions and spaces even if we broke the circularity. Functional modules can, however, define conditional memberships in a sort, so one could define a ReachableStates sort and encode the semantics as a backchaining one instead of a forward-chaining one - but that defeats the purpose of a forward analysis. Due to this limitation, I present a very simple allocation and time-stamping scheme: 0CFA.

```
mod CESK is
  pr Syntax .
  pr AStorable .
  pr SET{Value} * (sort Set{Value} to AVal) .
  pr MAP{Addr, AStorable} * (sort Map{Addr, AStorable} to Heap) .
  op Eval : Expr Env Kont Time -> Point [ctor] .
  op Continue : Kont AVal Time -> Point [ctor] .
  op ar : Expr Env -> Frame [ctor] .
  op fn : AVal -> Frame [ctor] .
  op none : -> HeapDelta [ctor] .
  op binds : Addr AStorable -> HeapDelta [ctor] .
  op wn : Point Heap -> State [ctor] .
  op wd : Point Heap HeapDelta -> State [ctor] .
  op applyΔ : Heap HeapDelta -> Heap .
  op join : Heap Addr AStorable -> Heap .
```

The first part of the module defines what states look like: they are composed of a point in the abstract state space and its associated heap. We attach a heap to a point with wn ("wide to narrow"). The heap-change state (wd) also carries the heap itself so that we can apply the change as a separate step. With these declarations out of the way, we enter another boilerplate land: meta-variables. Matching equations must know what sorts the patterns they match have. Thus a variable s standing for a state must be declared either once as "always used to match a State" (var s : State) or inline in each matching pattern (s:State). I personally like to keep my equations clutter-free, relying on naming conventions to know what I'm working with. I like this feature of the language, but I'd appreciate a way to export common meta-variables. Every time I want to match against an Expr in a different module, I have to declare that e stands for an Expr. It gets copy/pasty quickly.

```
  var ℓ : Int .
  vars e e0 e1 : Expr .
  var ρ : Env .
  vars σ σ' : Heap .
  var x : VAR .
  vars t t' : Time .
  var v : Value .
  vars av av1 : AVal .
  vars ss ss' : AStorable .
  var f : Frame .
  var κ : Kont .
  vars a b c : Addr .
  var δ : HeapDelta .
  var ς̂ : Point .
```

I use unicode in my programs. That's right. The last couple of steps before giving the rewrite rules that define the semantics are the meanings of join and allocation. First, changing the store. There is a bit of a gotcha here: you can't update a map in place. You must first match against the key you wish to change, to see whether it's already there, and then extend the rest of the map with the key bound to its new value. It's not too bad with Maude's associative-commutative matching.

```
  eq join((σ, a |-> ss'), a, ss) = (σ, a |-> (ss', ss)) .
  eq join(σ, a, ss) = (σ, a |-> ss) [owise] .
  eq applyΔ(σ, none) = σ .
  eq applyΔ(σ, binds(a, ss)) = join(σ, a, ss) .
```
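For intuition, the same monotone join can be sketched outside Maude. Here is a minimal Python analogue (the names and dict-of-sets representation are my own, not part of the Maude development), mirroring the two join equations and the two applyΔ equations:

```python
def join(heap, addr, vals):
    """Monotonically join a set of storables into the abstract heap.

    Mirrors the two Maude equations: if addr is already bound, union the
    old and new sets; otherwise (the [owise] case) just add the binding.
    """
    out = dict(heap)  # rewriting is non-destructive, so copy
    if addr in out:
        out[addr] = out[addr] | set(vals)
    else:
        out[addr] = set(vals)
    return out

def apply_delta(heap, delta):
    """applyΔ: 'none' (modeled as None) leaves the heap alone;
    binds(a, ss) (modeled as a pair) joins."""
    if delta is None:
        return heap
    addr, vals = delta
    return join(heap, addr, vals)
```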

The owise attribute is nice: we can have a catch-all case that applies when no previously defined case matched - just like functional programming. Now comes allocation. In 0CFA there is no time; it's constant. That's why we add a singleton sort Unit below Time, whose only constructor is the initial time. Variables are their own addresses, and continuations are allocated at the label of the expression that creates them.

```
  op vaddr : VAR -> Addr [ctor] .
  op kaddr : Int -> Addr [ctor] .
  sort Unit .
  subsort Unit < Time .
  op t0 : -> Unit [ctor] .
  op allocVar : VAR Time -> Addr .
  op allocKont : Int Time -> Addr .
  op tick : Value AVal Kont Time -> Time .
  eq allocVar(x, t) = vaddr(x) .
  eq allocKont(ℓ, t) = kaddr(ℓ) .
  eq tick(v, av, κ, t) = t .
```
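The 0CFA scheme is small enough to state in a few lines of Python as well (a sketch with my own names, just to make the monovariance obvious - allocation ignores the time entirely):

```python
# 0CFA: time is a single constant, so tick never advances it and
# allocation never consults it.
T0 = ()  # the one and only "time", standing in for t0

def alloc_var(x, t):
    # variables are their own addresses
    return ('vaddr', x)

def alloc_kont(label, t):
    # continuations are allocated at the label of the allocating expression
    return ('kaddr', label)

def tick(v, av, kont, t):
    # time is constant in 0CFA
    return t
```

Because the time argument is ignored, every binding of the same variable lands at the same address - exactly the monovariance that makes 0CFA finite.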

Finally the semantics. Since the multitude of values at any one address aren't really needed until a function call, we keep all values in superposition until we really need to observe which value we looked up. The associative-commutative matching on the set of values in the fn frame gives us that resolution of non-determinism.

```
  rl [Var-lookup] : wn(Eval(Var(ℓ, x), ρ, κ, t), σ) => wd(Continue(κ, σ[ρ[x]], t), σ, none) .
  crl [Fn-eval] : wn(Eval(App(ℓ, e0, e1), ρ, κ, t), σ) =>
                  wd(Eval(e0, ρ, kont(ar(e1, ρ), b), t), σ, binds(b, κ))
                  if b := allocKont(ℓ, t) .
  rl [Lam-eval] : wn(Eval(Lam(ℓ, x, e), ρ, κ, t), σ) => wd(Continue(κ, CloV(x, e, ρ), t), σ, none) .
  rl [Arg-eval] : wn(Continue(kont(ar(e, ρ), a), av, t), σ) => wd(Eval(e, ρ, kont(fn(av), a), t), σ, none) .
  crl [Apply] : wn(Continue(kont(fn((CloV(x, e, ρ), av)), a), av1, t), σ) =>
                wd(Eval(e, (ρ, x |-> b), kont(f, c), t'), σ, binds(b, av1))
                if b := allocVar(x, t)
                /\ kont(f, c), ss := σ[a]
                /\ t' := tick(CloV(x, e, ρ), av1, kont(f, c), t) .
  rl [Heap-change] : wd(ς̂, σ, δ) => wn(ς̂, applyΔ(σ, δ)) .
endm
```

So that's the simple part of Maude. The fun reflective stuff comes next, when we want to widen the store to get something that runs in polynomial time. The scheme here is to represent all the states we have seen (and at which heap we saw them), the states we still need to step (the "frontier"), and the shared heap. Well, the heaps get pretty big, and checking whether a state has been seen before would mean comparing equality on a large object. Instead, we use Shivers' trick: track how many times we've changed the store, and represent the stores in the seen set as just that number. The store grows monotonically, so this is perfectly sound. This is where the StateStamp type comes in: just a point and an Int. First, a couple of aliases:

```
mod Seen is
  pr ToDefine .
  pr SET{StateStamp} * (sort Set{StateStamp} to Seen) .
endm

mod Frontier is
  pr ToDefine .
  pr LIST{Point} * (sort List{Point} to Frontier) .
endm
```
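Before diving into the metaprogramming, the overall widened loop is worth seeing in miniature. Here is a Python sketch of the worklist with Shivers' timestamp trick (everything here - the function, the `step` interface returning successors, an updated store, and a changed flag - is my own stand-in for illustration, not the Maude code):

```python
def widened_fixpoint(init, step):
    """Worklist loop over a single widened store.

    step(point, store) -> (successor points, new store, changed?).
    Seen entries are (point, stamp) pairs: comparing a small integer
    stamp stands in for comparing whole heaps, which is sound because
    the store only ever grows.
    """
    store, stamp = {}, 0
    seen = set()
    frontier = [init]
    while frontier:
        succs = []
        changed = False
        for p in frontier:
            new_points, store, ch = step(p, store)  # incremental store updates
            changed = changed or ch
            succs.extend(new_points)
        if changed:
            stamp += 1  # every previously seen state is now stale
        frontier = []
        for p in succs:
            if (p, stamp) not in seen:
                seen.add((p, stamp))
                frontier.append(p)
    return seen, store
```

When the store changes, bumping the stamp invalidates the whole seen set at once, so states get re-explored against the bigger store without ever storing (or comparing) more than one heap.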

The wide semantics will reuse the original (or narrow) semantics we previously defined. In order to do that, we need an internalized notion of what it means to search for a possible reduct of a term. To do this, we use metaSearch: we reflect up the CESK module, inject the state to step with the current shared heap, and let it search for all one-step reductions. It's pretty cool. Redex has no notion of meta-reduction/search; I instead have to escape to Racket and use apply-reduction-relation, completely thwarting the debugging tools Redex offers, like traces. But first! Some boilerplate.
```
mod WIDE is
  pr CESK .
  pr META-LEVEL .
  pr Seen .
  pr Frontier .
  pr SET{Term} * (sort Set{Term} to TermSet,
                  op _`,_ to _u_) .
  sort System .
  op sys : Frontier Heap Int Seen -> System [ctor] .
  op wt : Point Int -> StateStamp [ctor] .
  op Fail : -> State [ctor] .
  vars S S' S0 S1 SC SP : Seen .
  var ς̂ : Point .
  vars F F' F0 F1 ς̂s ς̂s' ς̂s'' : Frontier .
  vars name x y : VAR .
  vars e e0 e1 e2 : Expr .
  vars ς ς' : State .
  vars a a0 a1 : Addr .
  var ss : AStorable .
  var i : Int .
  var b : Bool .
  var v : Value .
  vars ρ ρ0 ρ1 : Env .
  vars σ σ' σ0 σ1 σc σc' σp : Heap .
  vars δ δ' : HeapDelta .
  vars κ κ0 κ1 : Kont .
  vars vs vs' vs'' : AVal .
  vars δ? δ?' : Bool .
```
The reason we add a Fail state is to give the "reflect down" operation (downTerm from META-LEVEL) a fallback in the same expected kind, in case we can't reify a term that metaSearch produces back into a State. This shouldn't happen, but it's there to keep downTerm happy. A change from the previous semantics is that we build up changes to the store instead of applying them one by one. Also, we have to keep track of whether any of these "changes" actually changes the store. We thus have new join and apply operations that also return a Boolean saying whether there was a change. We of course have to add a type and constructor for this - call it Heap/Δ?.

```
  sort Heap/Δ? .
  op hδ : Heap Bool -> Heap/Δ? [ctor] .
  op $join/δ? : Heap Addr AVal Bool -> Heap/Δ? .
  ceq $join/δ?((σ, a |-> vs'), a, vs, δ?) = hδ((σ, a |-> vs''), δ?')
      if vs'' := (vs', vs)
      /\ δ?' := δ? or-else (vs'' =/= vs') .
  eq $join/δ?(σ, a, vs, δ?) = hδ((σ, a |-> vs), true) [owise] .
  op $applyΔ/δ? : HeapDelta Heap Bool -> Heap/Δ? .
  eq $applyΔ/δ?(none, σ, δ?) = hδ(σ, δ?) .
  eq $applyΔ/δ?(binds(a, ss), σ, δ?) = $join/δ?(σ, a, ss, δ?) .
```

The magic happens next. The heavy hitter is allOneStepAux, which takes the term to step, which search solution it's on (I'll explain), the pattern to match against (just "a state"), the shared heap, the number of heap changes, whether there has been a change during this step, a partial seen set, the next frontier, and the current seen set. What I mean by "search solution" is (and this tripped me up for a couple of hours) which of the many possible solutions, since this interface doesn't return a set of all solutions. Instead, you have to keep polling for solutions until it returns "failure," which I suppose is a reasonable interface if users don't necessarily need all solutions (but really, it's unsettling to me).

```
  op $step-all : Frontier Heap Heap Int Bool Seen Frontier -> System .
  vars T P : Term .
  vars N σt σt' : Int .
  op CESKMod : -> Module .
  eq CESKMod = upModule('CESK, false) .
  sort StepTuple .
  op stpl : Frontier Heap Bool Seen -> StepTuple .
  op metaTermToStateSet : Term Heap Int Bool Seen Frontier -> StepTuple .
  op allOneStepAux : Term Int Term Heap Int Bool Seen Frontier Seen -> StepTuple .
  eq metaTermToStateSet(T, σ, σt, δ?, S, F) =
     allOneStepAux(T, 0, 'ς1:State, σ, σt, δ?, empty, F, S) .
  op retag : Seen Int -> Seen .
  eq retag(empty, σt) = empty .
  eq retag((wt(ς̂, σt), S), σt') = wt(ς̂, σt'), retag(S, σt') .
  ceq allOneStepAux(T, N, P, σ, σt, δ?, SC, F, SP) =
      stpl(F, σ, δ?, (SP, SC))
      if metaSearch(CESKMod, T, P, nil, '+, 1, N) == failure .
  ceq allOneStepAux(T, N, P, σ, σt, δ?, SC, F, SP) =
      allOneStepAux(T, N + 1, P, σ1, σt, δ?', S', F', SP)
      if wd(ς̂, σ0, δ) := downTerm(getTerm(metaSearch(CESKMod, T, P, nil, '+, 1, N)), Fail)
      /\ hδ(σ1, δ?') := $applyΔ/δ?(δ, σ, δ?)
      /\ S' := if δ? then wt(ς̂, σt + 1), SC
               else if δ?' then retag(SC, σt + 1)
                    else wt(ς̂, σt), SC fi
               fi
      /\ F' := if δ?' or-else not wt(ς̂, σt) in SP then ς̂ F else F fi [owise] .
  eq $step-all(nil, σp, σc, σt, δ?, S, F) = sys(F, σc, (if δ? then (σt + 1) else σt fi), S) .
  ceq $step-all(ς̂ ς̂s, σp, σc, σt, δ?, S, F) = $step-all(ς̂s, σp, σc', σt, δ?', (S, S'), F')
      if stpl(F', σc', δ?', S') := metaTermToStateSet(upTerm(wn(ς̂, σp)), σc, σt, δ?, S, F) .
  op CFA : System -> System [iter] .
  eq CFA(sys(F, σ, σt, S)) = $step-all(F, σ, σ, σt, false, S, nil) .
endm
```

The allOneStepAux function considers all seen states to be at the current store's timestamp until it encounters a change, at which point it does a one-time retagging of the states just marked as seen. This is why there is a partial seen set, SC: so that we only retag states seen during this step. The arguments to metaSearch are the module to search in, the starting term, the pattern to match, any side conditions on the pattern (none), a mode of search ('+ means one or more rewrites), the maximum depth of the search (1), and which search solution we're after. *phew*
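The poll-until-failure protocol itself is easy to get wrong, so here it is isolated in a Python sketch (a hypothetical `search(term, n)` stands in for metaSearch: it returns the n-th solution, or None where Maude returns "failure"):

```python
def all_one_step(search, term):
    """Collect every solution from an interface that only hands back
    the n-th solution on request, signalling exhaustion with None
    (metaSearch signals it with 'failure')."""
    solutions = []
    n = 0
    while True:
        sol = search(term, n)
        if sol is None:      # the 'failure' case: no more solutions
            return solutions
        solutions.append(sol)
        n += 1
```

allOneStepAux is this loop in equational clothing: the counter N is threaded through the recursion, and the `== failure` condition is the base case.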

Apart from the boilerplate, this was a fairly pleasant experience. I haven't scaled my toying around up to see whether it is competitive with my hand-rolled implementation of an analysis for R4RS Scheme, but I wouldn't be surprised either way. This thing is a beast for its rewriting efficiency, but needing to reflect each term at each step - considering that, as far as I can tell, they don't use direct reflection - has got to be expensive. It's been a fun experiment, though.

## Monday, February 18, 2013

In the area of program analysis, it is all too easy to marry yourself to a model of computation that does not capture the entire behavior of the language you're analyzing, before you even begin. The classical data flow analysis algorithms from the Fortran folks have been mostly successful for the compiler hackers out there, but computer security has become such a real danger that we can't just rely on the work of hackers (appreciated though it is). Before I talk more about program analysis, I want to say a few words on reliable computing.

### Reliable computing and programming languages

If we are to be confident about any property of a program that we write, not only should it pass a large test suite, but it should also have that property proven either automatically, by an interactive proof process resulting in a machine-checkable proof, or provided directly by the language it is written in.

Herein lies the problem. Computers speak only one language - machine code. In modern computers, machine code is a binary representation of some low-level imperative language. In just under 100% of the time, programs are not written in machine code, so we employ compilers - program translators - to change a nicer language to machine code. Sometimes we even chain these compilers together so that there is a higher level intermediate code that even higher level languages can target, so that different kinds of translation technology can be used at the different levels of abstraction.

How, then, do we know that properties hold of programs written in high level languages that promise to maintain certain abstractions? Can programs written in other languages break these properties? Many engineers understand languages by their implementation, but not the intended semantics of the language. After all, a language only means something if it executes as machine code, right? Well, not really. This line of thinking becomes far too intractable as high level language implementors take more liberties with their implementation strategy. Instead, if we assume a mathematically ideal computational model for the machine code (not really a problem unless you're in outer space or working on an airplane for over 12 hours), then we can employ powerful proof techniques to prove not only that such translations preserve properties, but that programs in the target language can't break them. These techniques require a model (a semantics) for the machine code AND a semantics for the high level language. What is a semantics if not an implementation?

A programming language semantics is a mathematical model of the meaning of a program in that language. Yes, an imperative implementation could be a semantics, but it's often too large and complex of a mathematical object to reason about. Instead, we have different styles for writing programming language semantics (c.f. the Redex book) that are much more amenable to standard proof techniques like induction and subject reduction. I am a personal proponent of languages maintaining a formal semantics in one of these styles. The easiest to learn and possibly the most successful is small-step operational semantics.

### Analysis of a language, not a graph

The standard model of data flow analysis is graphs: basic blocks form nodes, and control flow forms edges between them. This form of analysis can be proven correct with respect to a concrete semantics, but the path is cumbersome. Furthermore, not all languages have the standard imperative block structure. There is another way to describe a program analysis and have its correctness be an immediate result of the way it's written. This is Abstracting Abstract Machines (AAM), which shows a straightforward way to phrase your programming language semantics such that abstraction to a finite automaton (which can thus be computably explored) is dead simple.

This technique does not mark the end of research in analysis. Instead it opens possibilities and offers a new perspective: your analysis techniques should be built into your programming language semantics. That isn't to say your go-to model should be the translated form that makes abstraction simple. No, this means that you should have a repeatable, straightforward process for translating a run-of-the-mill concrete semantics into a ready-for-abstraction concrete semantics. It is illuminating to see what your analysis machinery really means in terms of concrete execution, and it can ease your explanation of the idea to someone else; we humans are good at thinking concretely, while abstraction makes things harder.

More recently, I developed a simpler translation path from semantics to a pushdown abstraction (rather than a finite one), although it assumes more of the programming model - namely, that no more than a bounded amount of the continuation can be captured at once. Languages with full continuations can enjoy the enhanced precision of the method with some extra model-specific work, but the pushdown guarantee flies out the window. Though the guarantee is theoretically broken, the use of heavy control operators tends to be controlled enough (hah) not to cause too much leaked precision.

The translation amounts to saving the state of the machine right after control transfers to a function, as a key into a table of "local continuations"; on function call, that table is extended with the continuation up to the entry of the function. This "up to" notion is easily captured by replacing the continuation, on function call, with the key into the continuation table. When it comes time to interpret that key, you just continue to all local continuations that were stored at it. This non-determinism can lead to loops where previously there weren't any (you can always choose the older of the return continuations), but reachability of states is maintained with respect to the original semantics. Abstraction follows the same AAM strategy of finitizing the address space, but continuations no longer need to be store-allocated.

This presentation immensely helped people in my department understand the work of my recently graduated colleague, Dimitrios Vardoulakis, on CFA2, the first pushdown abstraction for a higher-order language. One element that was missing from his work was a non-standard concrete semantics: the term for a concrete semantics augmented with different/extra information, which computes the answer of the analysis rather than the answer of evaluation. Indeed, this idea is not new. Rather, the push my colleagues and I are making is to make the translation from standard to non-standard concrete semantics a straightforward and intuitive process. Secondly, all the trickery of the analysis should have a concrete counterpart. Once you abstract, there should not be anything surprising. The full generality of the abstract interpretation framework is something to appreciate, but we must also respect ourselves and our audience by providing an understandable and accessible technical artifact.
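The table manipulation just described fits in a few lines. Here is a Python sketch (names and the context representation are mine, for illustration only - not code from CFA2 or any published artifact):

```python
# Local-continuation table: contexts (snapshots of the machine at function
# entry) key the continuations captured up to that entry.

def call(ctx, kont, table):
    """On function call: stash the caller's continuation under the entry
    context, and run the body with just the key as its continuation."""
    table.setdefault(ctx, set()).add(kont)
    return ctx  # the body's "continuation" is only the table key

def ret(ctx, table):
    """On return through a key: continue to every local continuation
    stored at it - the source of the extra non-determinism noted above."""
    return table[ctx]
```

Because the stored continuations are themselves bounded (they only reach back to a function entry), the table stays finite under the usual AAM address finitization.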

## Wednesday, October 24, 2012

### Docs for Racket + emacs

I have been doing all my research prototyping in the "programming language programming language" Racket. For those of you who work in emacs rather than DrRacket, you might find it fairly frustrating to move from emacs to the browser to search docs.racket-lang.org - or, if you're savvy, to go to the terminal and use raco doc.
I came up with a script that will take whatever symbol your cursor is currently on to the documentation for it, if any. It requires a function from paredit, but you should have that loaded with scheme/racket files already.
```
(setq raco-location "raco")
(require 'paredit)

(setq raco-doc-history nil)

(defun call-raco-doc (text)
  (shell-command (concat raco-location " doc -- '" text "' &")))

(defun get-current-word ()
  (paredit-handle-sexp-errors
   (save-excursion
     (backward-sexp)
     (mark-sexp)
     (buffer-substring (mark) (point)))
   (save-excursion
     (mark-sexp)
     (buffer-substring (mark) (point)))))

(defun raco-doc ()
  "Call raco doc on the symbol at point."
  (interactive)
  (call-raco-doc (get-current-word)))

(defun raco-doc-prompt ()
  "Call raco doc on prompted text."
  (interactive)
  (let ((text (read-string (if (consp raco-doc-history)
                               (concat "raco doc [" (car raco-doc-history) "]: ")
                             "raco doc: ")
                           nil
                           'raco-doc-history
                           ;; default value should be the most recent history item
                           (if (consp raco-doc-history)
                               (car raco-doc-history)
                             nil)
                           t))) ;; inherit the current input method
    (call-raco-doc text)))

(defun enable-raco-doc ()
  (define-key scheme-mode-map (kbd "C-c C-r") 'raco-doc)
  (define-key scheme-mode-map (kbd "C-c C-M-r") 'raco-doc-prompt))

(provide 'raco-doc)
```

## Monday, July 2, 2012

### Play all mp3s in subtrees

`~/bin/playall:`
```
#!/bin/bash
find . -iname '*.mp3' -print0 | sort -z | xargs -0 mplayer
```

Good for playing all albums of an artist.

But wait! This doesn't give mplayer focus, so you can't press spacebar to pause/unpause. That's extremely annoying. We can fix that by replacing mplayer with mplayer_wrapper:

```
#!/bin/bash
mplayer "$@" < /dev/tty
```

## Thursday, July 15, 2010

### Going Meta on the Macro

Today I want to talk a little bit about what I've been researching over the summer. For a little context, I've finished the first year of the CS Ph.D. program at my university, and I have mainly focused on formal methods/automated reasoning. My passion is for proving properties about software - specifically, I want to make that process easier, more natural and commonplace. Since I did not learn how to prove theorems with transfinite induction (the most common tool for proofs about software) until grad school, even after getting B.S. degrees in pure mathematics and computer science at a very good university, I don't think the "commonplace" goal is coming soon. Perhaps I will discuss this area and the difficulties it faces in a future post.
At this university, the largest research group - and the most famous - is for programming languages (PL). Academia's approach to PL over the past few decades has focused mainly on functional programming languages, as far as I can tell. That was shocking to me, since just about everything I've seen in industry has been imperative stuff. After dealing with proofs, I now understand a reason why PL fancies functional languages - they use recursion and immutable values much more often than while loops and repeated variable assignments. This approach allows you to disregard the global state of your program and focus specifically on one function at a time. Why? If your function depends on a variable that x other functions can change, then you must keep track of how that variable is changed. Most likely your proof of correctness hinges on that variable holding a specific property, such as prime number or normalized vector. Every place that variable changes, you must make sure that property holds; if it doesn't, you have to make sure it does before your function is called. That's a lot to keep track of. It's not all that easy to maintain, especially if you work in a team setting.
So what's cool about functional programming languages other than that fact? Well, I haven't begun to talk about macros. There are ways of abstracting behavior of your program beyond what a function can do. Say you need the same large initialization sequence executed before some block of code in many different places. You can make a new command that will wrap your code with all that boilerplate at compile time with a macro. That's a simple example. There are other examples in which a macro will do long and complicated analyses on input code, and then output optimized or annotated code that should be equivalent in functionality.
Ooh, "equivalent" sounds like a theorem proving word. Can we prove that these analyses find the invariants we need for correctness of the transform? How about after that - actually proving that the macro form expands to an equivalent expression in all contexts? The answer is (you're going to love this) it depends. It depends on what you consider proof. Certainly you can write functions that describe the semantics of the language you're working in (say, with an embedded operational semantics - oh but for a termination argument you have to specifically limit how many evaluation steps are allowed. Yikes.) So wait, you have to embed the language you're working in... in the language you're working in? Why would you want to do that? Gross.
So what if we want our logic to allow talking about macros inside the logic? Say we don't need that embedding, and then we can actually write rewrite rules to simplify macro forms without having to expand them. This is some heavy stuff. Now we are talking about quantification over syntax rather than simply values.

Some vocabulary - when a macro expands, it is a function from representation of syntax to representation of syntax, but when it's complete, that representation is reified to become actual syntax that will be evaluable.

A problem enjoyed (rather, suffered) by many macro systems is the loss of scoping laws. It is not possible that in all contexts your macro will have the same meaning. This is due to free and bound variable capture. A simple textual replacement macro system such as in LISP and C is bound to change the meaning of your program when you decide to use a variable name that the macro redefines. In LISP at least you can check that variables you introduce are not in the support (set of free or bound variables) of the input expressions, and fail to expand if they are. This is ugly. Not only does it reveal internals of a macro (something that module abstraction should give), but it pits the programmer against the macro - "why can't I use the variable names I want?" There is a solution, however it is not the most palatable either (just the best we've got), called hygienic macro expansion. A hygienic macro expander will rename variables as it goes to make sure variables are available only in the scope of where they are intended to be used. I can think of more ambiguous statements, but that one surely is a nuisance to anyone doing formal reasoning.

Hygienic macros have only recently gotten a formal treatment (see Dave Herman's thesis, A Theory of Typed Hygienic Macros), and since it is in the setting of a decidable type theory, the whole story is not delivered. However, it does give us hope for a complete meaning. Suppose one can completely describe how a macro's input will be used for introducing and consuming bindings in its expansion. Then if we can show that any two inputs with the same binding structure yield expansions with the same binding structure (more precisely, alpha-equivalent expansions), the macro expander is hygienic. Nice. That takes care of the all-contexts problem. Now how do we express this without long-winded arguments? My idea is to have a simple base that requires long-windedness to begin with, but (haha!) we use macros to conceal this detail.

Notice that the hygiene property cannot be defined inside the logic, since any function describing binding structure would be manipulating only a representation of syntax, and not the actual syntax the logic can reason about. Wait wait wait... we are now introducing into the logic a means of meta-reasoning at the non-meta level. That shouldn't work, right? Gödel did some work on this, I think. Well, there are theorem provers out there that allow for meta-reasoning, so let's look at how they do it.

The key idea is that you don't collapse the meta-theory into your theory; you keep the two separate and start at the meta level to begin with. This is what is meant by working in a logical framework rather than in a specific logic. Thus we do have an "embedding" of the language we're working in, but it's already given, since it is defined in the logical framework. A bit of a hack, but logically necessary. Then what encoding does one choose for syntax? If you have any familiarity with the lambda calculus, you know that beta-reduction is rather involved (in call-by-value semantics) since it must rename bound variables. In hygienic macro systems, lexical information about a term must be stored somewhere, and for a practical language, so too must source code correlation information.

Yikes again. Source code correlation information is purely for debugging purposes and thus logically uninteresting. We can abstract over it by instead defining an abstract datatype that satisfies the properties we want of syntax, and then letting an arbitrarily complicated implementation prove that it satisfies this specification. One way researchers have been able to describe binding in syntax is by actually using a higher-order representation. That is, instead of symbolically representing bindings everywhere like LISP does, use a real lambda abstraction. This is called Higher-Order Abstract Syntax (HOAS), and it has been getting lots of attention for the past, oh, 15 years. Why then aren't we sipping on piña coladas while our informal meta-reasoning style proofs whiz through formal systems?

Remember my description of theorem proving in the beginning, and how induction is vital? Well, it turns out that in general, inductive proofs over HOAS are... impossible. The reason is that the general representation is not inductively definable - the binder constructor takes a function from syntax to syntax, putting the syntax type to the left of an arrow in its own definition, a crippling technicality called "negative occurrence" in type theory. Though IANAH (I am not a historian), the first attempt I know of to restrict syntax for meta-reasoning was in Edinburgh LF (logical framework), actually implemented at CMU as "Twelf", based on (competing with?) a logic called $FO\lambda^{\Delta\mathbb{N}}$ (fold-n) developed by some folks at École Polytechnique. There is additional work in Isabelle/HOL that builds off Twelf's idea, but this building is a big, confused stack if you ask me. At the bottom of the stack is Isabelle's logical framework, then classical higher-order logic axioms, then some concrete representation of syntax, then the "specification logic," and then the "object logic." These layers are abstractly what needs to exist, but everything is logically defined, so execution isn't given (you can't just build on top of Common LISP like ACL2 does). There is quite a bit of research into extracting programs from specifications, but what if I want programs AS specifications? As far as I know, there is no real programming language basis for these systems that would give executable specifications and reasoning about actual runnable programs, rather than about their (trusted) translation into some other logic.
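
To make HOAS concrete, here is a toy sketch in Python (untyped, so the negative-occurrence restriction doesn't bite here, but the shape is visible): the object language's binders are represented by host-language functions. The `Lam` constructor stores a function from terms to terms - that function-valued field is exactly the position that would be a negative occurrence in an inductive datatype.

```python
# Higher-Order Abstract Syntax sketch: the object language reuses the
# host language's binding. A Lam node holds a real Python function from
# terms to terms -- the "Term -> Term" argument is what would be the
# negative occurrence in a typed, inductive definition of Term.

class Lam:
    def __init__(self, f):            # f: term -> term
        self.f = f

class App:
    def __init__(self, rator, rand):
        self.rator, self.rand = rator, rand

def eval_term(t):
    """Call-by-value evaluation. Substitution is just function
    application in the host language, so no renaming is ever needed."""
    if isinstance(t, App):
        fun = eval_term(t.rator)      # must evaluate to a Lam
        arg = eval_term(t.rand)
        return eval_term(fun.f(arg))  # beta-reduction = host-level call
    return t                          # a Lam is already a value

# (lambda x. x) applied to (lambda y. y) evaluates to the identity:
identity = Lam(lambda x: x)
result = eval_term(App(identity, Lam(lambda y: y)))
print(isinstance(result, Lam))  # True
```

Notice there is no variable case at all: the host language's closures carry the bindings, which is precisely why structural induction over such terms is problematic.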

Perhaps it's my history with ACL2 - it is a programming language, a logic and a theorem prover - but is that too much to ask for? This is not a solved problem. I'd like to chip out a big piece of this mystery, though. Meta-reasoning about (hygienic) macros must be a possibility. Since meta-reasoning itself has not yet been well-executed, there certainly is a lot to do [collection of meta-reasoning problems for mechanized systems to tackle]. There is a chance, though, that attacking the problem from a macro setting might just simplify (and generalize) the problem space.

## Monday, April 12, 2010

### SMT solvers hate you

And me, of course. No one is safe from their wrath of awful error messages. Boolector-1.2 was giving me "illformed 'let' or 'flet'" at the very end of my 1200 line file. Very useful. After poking around for 30 minutes or so, I found that it did not like my identifiers with hyphens (they should be underscores). Yices did just fine on the file, by the way. Boolector also does not like (let (?varname (let (... ...) ...)) ...). Nested lets are stupid. Scope is for humans.

On that note, I found that Boolector confuses nested lets with shadowed variables, so if ?x is bound twice, it incorrectly concludes that a nested let was found. Yices' behavior for shadowed variables is in my opinion far worse; it persists the first binding as if the second never happened. You will get models when you do not expect them.

Granted, this format really isn't for humans, but for the SMT solvers themselves. Let researchers munch on some NP-completeness rather than inform users. That said, there is an OCaml implementation of a well-formedness check, but it is admitted to be incomplete. The (faulty) implementations of the language specification are the real specs, so some error handling on the solvers' side is very well justified.

Just a note for some other researchers/engineers out there that run into Boolector giving some really nasty errors. I may edit this post after future interaction with more solvers like CVC3, MathSAT and Z3.

P.S. Yices doesn't care about the hundreds of close parens that are bound to conclude your file while Boolector expects all parens to match (no extra ')'s allowed). The error is the unhelpful "Unexpected EOF" message when they do not match.

## Thursday, February 18, 2010

### After Almost a Year of Graduate School

A lot has happened in the past year. I graduated UT with degrees in computer science and pure mathematics, I moved to Boston to pursue my Ph.D. in computer science at Northeastern, and I embarked on a research journey. Allow me to convey some lessons I have learned.

As you may know, Ph.D. students seldom have to pay for tuition or worry about supporting themselves, as do (some) undergraduates and master's students. No, in exchange for your education, you are required to (at first) be a teaching assistant (you are supported by your college), and later a research assistant (you are supported by your advisor). If you are clever, you might manage to get a fellowship and just give your own work attention. Perhaps next year, when I have a better idea of what is an interesting research direction, I will apply for a fellowship, but for now I am a TA for a freshman logic class.

This class is very different from my own experience with logic in undergrad, where I spent most of my time smearing glaze over my eyes and amassing a large drool stain on my book while a coke-bottle-glasses-wearing emeritus professor droned on about the most menial pedantry of set theory, functions and recursion. This class actually uses my beloved interactive theorem prover, ACL2, to teach students how to reason about their programs, and in the meantime surreptitiously teaches them rigorous fundamentals of propositional logic, induction and first-order logic.

ACL2 is a tool that has been taking up a majority of my time recently, and is essentially the reason I came to Northeastern. The name stands for A Computational Logic for Applicative Common Lisp, so the 2 is not a version number, but more of an ordinal coefficient/concatenation multiplier. My interest in ACL2 arose when a friend of mine told me about research he was doing with its inventor, J Moore. The idea for systems like ACL2, and the reason theorem proving intrigues me so much, comes from a comment John McCarthy (inventor of LISP) made in the 60s, along the lines of: "rather than debugging, programs should be proven correct, and that proof should be checked by a computer program." I had only seen reasoning about programs for a couple of lectures in my analysis of programs class (really a discrete structures class), presented in Floyd-Hoare logic, with the caveat that proving properties about programs this way does not scale, is a pain in the ass, and no one does it. Thus I had forgotten about proving properties of programs and went back to hacking and debugging as usual.

A computational logic is a logic for reasoning about algorithms - you stay in the realm of computation, instead of careening off into pure mathematics. ACL2 deals only with finite structures; there is no concept of infinity, functions cannot be values, and lists have a natural number length. All functions must be proven to terminate on any possible input (i.e. ACL2 is a total logic). These restrictions keep the logic very simple, but still very expressive. There are definitely theorem proving systems out there that allow for reasoning about infinity, higher-order functions and partially-defined functions, but proofs in these systems are much less easily automated. Because of the restrictions on the logic, some fantastic engineering and brilliant proof search heuristics, ACL2 is the most automated theorem prover in existence.

Have I sold you yet? If yes, you are just like most other people faced with ACL2 for the first time and shown what it can do. Bright eyes. Naive expectations. The reality of the matter is that proofs of interesting theorems still need mathematicians to come up with a proof strategy. Not only that though - in theorem provers, statements of truth that normally can be used the way you wish when proving theorems by hand now have extra semantics attached to them. You see, when you prove something to be true, you are programming the theorem prover to apply rules in a directed way. Typically if you prove A = B, then really what you are telling the prover is, "when you see A, replace it with B." This is called "substituting equals for equals," a special case of term rewriting. If you do not know the gory details of the prover's rewriting strategy, then you can end up proving extra very specific, horribly inelegant lemmas in order to ram your proof through the system.
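
A toy sketch of that directed use of equalities, in Python rather than ACL2: each proved equality `lhs = rhs` is oriented into a one-way rule, and the "prover" just rewrites bottom-up until no rule fires. Real rewriters match with pattern variables and side conditions; this sketch uses ground terms only, and the rule shown is a stand-in for a proved theorem like `(app xs nil) = xs`.

```python
# "Substituting equals for equals", directed: a proved equality
# lhs = rhs becomes a one-way rule rewriting lhs-shaped terms to rhs.
# Terms are nested tuples; ground matching only (no pattern variables).

def rewrite(term, rules):
    """Rewrite subterms bottom-up, then apply rules at this node
    until none fire (a tiny fixpoint at each position)."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if term == lhs:
                term, changed = rhs, True
    return term

# A theorem (app xs nil) = xs, oriented left-to-right as a rule:
rules = [(("app", "xs", "nil"), "xs")]
print(rewrite(("rev", ("app", "xs", "nil")), rules))  # ('rev', 'xs')
```

Orienting the equality the other way would make the rewriter grow terms instead of simplifying them, which is exactly why the direction you state a theorem in matters so much to the prover.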

The reasons for the prover's rewriting strategies being so directed come from years of experience and a lot of empirical data from heavy users of the system. Here is the learning curve: new to ACL2, you write some simple statement that is obvious to anyone and the prover fails - "stupid piece of crap," you say. Next you start reading and toying around with carefully crafted examples that the prover gets automatically, "oh cool, I was just doing it wrong. ACL2 is awesome!" Later you try to actually use ACL2 for arbitrary code, say you are trying to prove a flood fill algorithm makes a flooded region bigger or the same size as before (intuitively obvious), "AHHH MY EYES! MY FACE! I'M ON FIRE!" and after your Ph.D. in computer science with a focus on formal methods, "oh I get it." At least that is what I am promised. Currently I have bandages on my third degree burns, staring at a burning building, considering going back to save my dog.

My current research attempts to distribute large spanners to users of ACL2, so they have access to their fire hydrants - okay enough of this metaphor. ACL2 tries its very hardest to prove a conjecture true by applying different types of rules to rewrite the whole thing to T (true). The order in which rules are applied matters very much in the success of the proof, and ACL2's approach is to try everything, one by one, in the reverse order theorems were proved. In very directed proofs of one particular theorem, this generally works - consider in mathematics when you prove minor theorems, use those to prove stronger theorems, and then use those to prove your punchline theorem. Typically you do not have to appeal to the low level, minor theorems in your main proof.

In software? Not so much. You tend to work in layers of abstraction, and these layers have borders that allow for communication with other layers. It is this communication that destroys the reverse-chronological ordering strategy. Without communication, the prover would skip over entire layers as irrelevant to get to rules that apply - no big deal. With communication, things get hairy, and you would like to reason at the highest level you can before you have to dive deeper to resolve some technical issue. By tricking ACL2 into reasoning in different layers of abstraction, the order in which you prove theorems becomes less relevant, so you can worry about reasoning rather than rewrite strategies, and it structures proofs to progress in a more natural order.

How I trick ACL2 is the topic of a paper I am writing, and the main idea came from my mentor, Pete Manolios (who actually got his Ph.D. under J Moore's supervision, so a very relevant source of ideas). His insight into this "reasoning in layers" came from reasoning about hardware - a very layered process goes into its design. In fact, my methods for tricking ACL2 made the proof of a refinement theorem about a pipelined machine go through in 100 seconds, when previously it made my computer crash and took Pete's computer 15 minutes. This result was a marvellous fluke, since I managed to get such drastic results from a flawed implementation of my trickery. When I corrected the flaw, the proof shot up to 48 minutes on my computer. It took months of fiddling, engineering, and adding heuristics to get the corrected version to emulate the behavior of the flawed version in the case of the refinement proof. I now have a rather robust framework for reasoning in layers of abstraction. Now all I need to do is find some more problems to apply it to that give similar results.