Thursday, February 18, 2010

After Almost a Year of Graduate School

A lot has happened in the past year. I graduated from UT with degrees in computer science and pure mathematics, I moved to Boston to pursue my Ph.D. in computer science at Northeastern, and I embarked on a research journey. Allow me to convey some lessons I have learned.

As you may know, Ph.D. students seldom have to pay tuition or worry about supporting themselves, as (some) undergraduates and master's students do. No, in exchange for your education, you are required to be (at first) a teaching assistant (you are supported by your college), and later a research assistant (you are supported by your advisor). If you are clever, you might manage to get a fellowship and give your own work your full attention. Perhaps next year, when I have a better idea of what an interesting research direction looks like, I will apply for a fellowship, but for now I am a TA for a freshman logic class.

This class is very different from my own experience with logic in undergrad, where I spent most of my time smearing glaze over my eyes and amassing a large drool stain on my book while a coke-bottle-glasses-wearing emeritus professor droned on about the most menial pedantry of set theory, functions, and recursion. This class actually uses my beloved interactive theorem prover, ACL2, to teach students how to reason about their programs, and in the meantime surreptitiously teaches them the rigorous fundamentals of propositional logic, induction, and first-order logic.

ACL2 is a tool that has been taking up a majority of my time recently, and is essentially the reason I came to Northeastern. The name stands for A Computational Logic for Applicative Common Lisp, so the 2 is not a version number, but more of an ordinal coefficient/concatenation multiplier. My interest in ACL2 arose when a friend of mine told me about research he was doing with its inventor, J Moore. The idea for systems like ACL2, and the reason theorem proving intrigues me so much, comes from a comment John McCarthy (inventor of LISP) made in the 60s, along the lines of: "rather than debugging, programs should be proven correct, and that proof should be checked by a computer program." I had only seen reasoning about programs for a couple of lectures in my analysis of programs class (really a discrete structures class), presented in Floyd-Hoare logic, with the caveat that proving properties of programs this way does not scale, is a pain in the ass, and no one does it. Thus I forgot about proving programs correct and went back to hacking and debugging as usual.

A computational logic is a logic for reasoning about algorithms - you stay in the realm of computation instead of careening off into pure mathematics. ACL2 deals only with finite structures: there is no concept of infinity, functions cannot be values, and lists have a natural-number length. All functions must be proven to terminate on every possible input (i.e., ACL2 is a logic of total functions). These restrictions keep the logic very simple, but still very expressive. There are definitely theorem proving systems out there that allow reasoning about infinity, higher-order functions, and partially defined functions, but proofs in those systems are much harder to automate. Because of the restrictions on the logic, some fantastic engineering, and brilliant proof search heuristics, ACL2 is the most automated theorem prover in existence.
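
To make that concrete, here is a minimal sketch in ACL2 itself (the function and theorem names are mine; this is the list-concatenation example that shows up in every tutorial): the prover will not accept app until it has shown the recursion terminates, and the theorem relating app to the built-in len is the kind of fact it then proves on its own by induction.

    ; List concatenation: a total, first-order function over finite lists.
    ; ACL2 admits it only after showing the recursion terminates (the
    ; first argument shrinks on every recursive call).
    (defun app (x y)
      (if (endp x)
          y
        (cons (car x) (app (cdr x) y))))

    ; Lists have natural-number lengths, so statements like this make
    ; sense; ACL2 should prove it automatically by induction on x.
    (defthm len-of-app
      (equal (len (app x y))
             (+ (len x) (len y))))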

Have I sold you yet? If yes, you are just like most other people faced with ACL2 for the first time and shown what it can do. Bright eyes. Naive expectations. The reality of the matter is that proofs of interesting theorems still need mathematicians to come up with a proof strategy. Not only that, though - in theorem provers, statements of truth that you could normally use however you wish when proving theorems by hand now have extra semantics attached to them. You see, when you prove something to be true, you are programming the theorem prover to apply rules in a directed way. Typically, if you prove A = B, what you are really telling the prover is, "when you see A, replace it with B." This is called "substituting equals for equals," a special case of term rewriting. If you do not know the gory details of the prover's rewriting strategy, you can end up proving extra, very specific, horribly inelegant lemmas in order to ram your proof through the system.
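
Using the app function from the sketch above, here is a small illustration of that extra semantics (again, the names are mine). On paper this is just a true equation; to ACL2 it is also an instruction.

    ; Once proved, this is stored as a rewrite rule that fires left to
    ; right: whenever the prover sees (app x nil) and can establish
    ; (true-listp x), it replaces the term with x. It will never rewrite
    ; in the other direction, turning x into (app x nil).
    (defthm app-nil
      (implies (true-listp x)
               (equal (app x nil) x)))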

The reasons for the prover's rewriting strategies being so directed come from years of experience and a lot of empirical data from heavy users of the system. Here is the learning curve: new to ACL2, you write some simple statement that is obvious to anyone, and the prover fails - "stupid piece of crap," you say. Next you start reading and toying around with carefully crafted examples that the prover gets automatically - "oh cool, I was just doing it wrong. ACL2 is awesome!" Later you try to actually use ACL2 on arbitrary code, say to prove that a flood fill algorithm makes the flooded region bigger or the same size as before (intuitively obvious) - "AHHH MY EYES! MY FACE! I'M ON FIRE!" And after your Ph.D. in computer science with a focus on formal methods: "oh, I get it." At least that is what I am promised. Currently I have bandages on my third-degree burns, staring at a burning building, considering going back to save my dog.

My current research attempts to distribute large spanners to users of ACL2, so they have access to their fire hydrants - okay, enough of this metaphor. ACL2 tries its very hardest to prove a conjecture by applying different types of rules to rewrite the whole thing to T (true). The order in which rules are applied matters a great deal to the success of the proof, and ACL2's approach is to try everything, one by one, in the reverse of the order in which the theorems were proved. In very directed proofs of one particular theorem this generally works - consider how in mathematics you prove minor theorems, use those to prove stronger theorems, and then use those to prove your punchline theorem. Typically you do not have to appeal to the low-level, minor theorems in your main proof.
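
In ACL2 terms, the chronology looks something like this (a small sketch with my own names, reusing app from above): the minor theorem comes first, and by the time the later one is attempted, the minor one is simply another recently proved rule near the front of the queue.

    ; Minor theorem, proved first; ACL2 finds the induction on its own.
    (defthm app-assoc
      (equal (app (app x y) z)
             (app x (app y z))))

    ; A later, more directed theorem. Because recently proved rules are
    ; tried first, app-assoc fires early here - one left-to-right
    ; application of it settles the goal with no new induction.
    (defthm app-reassoc
      (equal (app (app a b) (app c d))
             (app a (app b (app c d)))))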

In software? Not so much. You tend to work in layers of abstraction, and these layers have borders that allow for communication with other layers. It is this communication that destroys the reverse-chronological ordering strategy. Without communication, the prover would skip over entire layers as irrelevant to get to the rules that apply - no big deal. With communication, things get hairy, and you would like to reason at the highest level you can before you have to dive deeper to resolve some technical issue. Tricking ACL2 into reasoning in distinct layers of abstraction makes the order in which you prove theorems less relevant, so you can worry about reasoning rather than rewrite strategies, and it structures proofs so that they progress in a more natural order.
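
I will not give away the trick itself here (that is the topic of the paper I mention below), but the flavor of layered reasoning can be conveyed with a standard ACL2 idiom that is decidedly not my technique: prove interface theorems about a layer, then disable the layer's definitions so later proofs cannot see inside it.

    ; A hypothetical two-function layer: a stack implemented as a list.
    (defun push-stk (v s) (cons v s))
    (defun pop-stk (s) (cdr s))

    ; Interface theorem for the layer.
    (defthm pop-of-push
      (equal (pop-stk (push-stk v s)) s))

    ; Hide the implementation: with the definitions disabled, later
    ; proofs can no longer expand push-stk or pop-stk and must reason
    ; through interface theorems like pop-of-push instead.
    (in-theory (disable push-stk pop-stk))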

How I trick ACL2 is the topic of a paper I am writing, and the main idea came from my mentor, Pete Manolios (who actually got his Ph.D. under J Moore's supervision, so he is a very relevant source of ideas). His insight into this "reasoning in layers" came from reasoning about hardware - a very layered process goes into hardware design. In fact, my methods for tricking ACL2 made the proof of a refinement theorem about a pipelined machine go through in 100 seconds, when previously it crashed my computer and took Pete's computer 15 minutes. This result was a marvelous fluke, since I managed to get such drastic results from a flawed implementation of my trickery. When I corrected the flaw, the proof time shot up to 48 minutes on my computer. It took months of fiddling, engineering, and adding heuristics to get the corrected version to emulate the behavior of the flawed version on the refinement proof. I now have a rather robust framework for reasoning in layers of abstraction. Now all I need to do is find some more problems to apply it to that give similar results.