The Yesod Transformer Library

This post assumes a familiarity with monad transformers.

As part of my job, I've been spending lots of time in the Yesod ecosystem - yesod-core (and yesod-*), persistent, and the shakespeare templating languages. These libraries are particularly easy to contribute back to: the efforts of Michael Snoyman and Matt Parsons, as well as everyone else who puts hard work into those communities, mean that interaction is pleasant, considerate, and meaningful. Their ambitious nature also means that small contributions are relatively easy to make - just fill any utility hole in the vast APIs that already exist.

However, Yesod's design does come with some pain points. Notably, yesod-core is in a complicated relationship with monad transformers: it used to support them, and now it doesn't.

Monad transformers in Yesod

Yesod is based around two monads: HandlerFor site, which is essentially a request handler for some foundation site type site; and WidgetFor site, which is a specialised handler for building webpages. Specifically, WidgetFor lets you use do-notation to build a page imperatively - you can write to the page piece by piece, appending HTML, CSS, and JS directly to the head or body.

Yesod also has two monad classes: MonadHandler, which can be implemented by any monad that can run a HandlerFor; and MonadWidget, which has the same property for WidgetFor. Each base monad type implements its respective monad class. But these classes also have lifting instances for loads of monad transformers. For example, if MonadHandler m, then MonadHandler (ReaderT r m).
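
To make that concrete, here's a rough sketch of the lifting idea. The names MonadHandlerLike and liftHandlerLike are mine, not the real yesod-core API (which also involves associated types and further superclasses), but the shape of the instances is the same:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT)
import Yesod.Core (HandlerFor)

-- Any monad that can lift a HandlerFor is "handler-like"...
class Monad m => MonadHandlerLike site m where
  liftHandlerLike :: HandlerFor site a -> m a

-- ...the base monad trivially is...
instance MonadHandlerLike site (HandlerFor site) where
  liftHandlerLike = id

-- ...and transformers like ReaderT lift through.
instance MonadHandlerLike site m => MonadHandlerLike site (ReaderT r m) where
  liftHandlerLike = lift . liftHandlerLike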

HandlerFor and WidgetFor used to be transformers. yesod-core still contains some references to WidgetT and HandlerT, even in non-deprecated APIs. But some years ago, they were changed to be just monads, and the result is an interesting limbo - some code cares about mtl-style MonadWidgets and MonadHandlers, and other code just uses the base monad.

This change wasn't baseless - Snoyman has written in the past about how overused he thinks full-application monad stacks are, and the value of the "ReaderT" pattern, where monad state is limited to a reader type and (typically) the IO monad. In fact, HandlerFor and WidgetFor are exactly that - IO monads which can read the foundation site.

What's so bad about that?

For the most part, nothing. Concrete types can even add value themselves - error messages from an unexpected type are often much more readable than instance resolution failures in mtl-style code.

But the pain points I encountered came from two particular places where yesod-core interacted with code for my application:

  • when running the website, and
  • when trying to make a custom widget.

Running the website

Yesod's design is very tied to the idea of a foundation site - as can be best seen in the Yesod class. Yesod is implemented for foundation site types - it describes how the corresponding site performs certain actions. For an example, take the errorHandler method:

errorHandler :: ErrorResponse -> HandlerFor site TypedContent

errorHandler defines how the server takes some kind of error and returns it to the client as content. Crucially, that response must be a HandlerFor - it cannot be a generalized MonadHandler, and it must be runnable (presumably because otherwise Yesod could not run it). Similar constraints, using WidgetFor and HandlerFor, pepper the whole class.

These choices are not a consequence of the dropping of support for monad transformers - the same design existed well before then. But they do have the same consequences - that site must do everything. In the above example, there is the constraint (from the type) that the site, as well as some internal Yesod HandlerData, must be enough to decide how to give an error response. You cannot return a ReaderT r (... TypedContent), even if you need some additional context r not found in the site. Similarly, you cannot use a WriterT w (... TypedContent) to track some error metrics w, without having to store them in the site.

This is not an obstacle most of the time - after all, it's easy to put stuff in the site. But I was interested in making a handler that would change the output of logging on my website, without needing to change all of my code. In mtl style, this would have been a monad transformer, to modify the monad stack for my handlers. In Yesod, that change isn't so easy.

Making custom widgets

Widgets, like handlers, are required to be WidgetFors at the boundary where your application code touches Yesod's. This is more sensible - rarely do you want hidden outside context on all your widgets. But that doesn't preclude transformers appearing inside some widget code: like running widgets which all need the same DB data in the same ReaderT.

For a more concrete example, consider a problem I had last week: when visualising some submitted data to the user who submitted it, some of it would be obviously erroneous - data could be missing, values would be zero where non-zero was expected, etc. The goal was to let the user know that these anomalies existed in a little report table, and have that table link to each anomaly in the view.

The code could have been ugly, because this is a clear case of mixed responsibilities - I needed to spot and track errors in the model, but report them in a way that's directly woven into the view. But there was a more elegant possibility: the view could report anomalies as they were displayed. Once the view was finished rendering, the anomaly report would already be complete. Since the view itself did the reporting, it could even generate a unique ID per report, and make that ID link to the displayed anomaly - when the report was rendered, each link's target would already be in place on the page.

The resulting approach called for a custom widget type; one which could display basic widgets (WidgetFor site, which did not report anomalies) alongside their error-reporting cousins. This custom type was a WriterT transformer on WidgetFor - and the writer type was the anomaly report itself.
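
As a rough sketch, the shape was something like the following (the names AnomalyReport, ReportingWidget, and the helpers are illustrative stand-ins, not the real application code):

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Writer (WriterT, tell)
import Yesod.Core (WidgetFor)

type AnomalyReport = [String]  -- placeholder; the real report type was richer

-- A widget that also accumulates an anomaly report as it renders.
type ReportingWidget site a = WriterT AnomalyReport (WidgetFor site) a

-- Embed a basic widget that reports nothing.
plainWidget :: WidgetFor site a -> ReportingWidget site a
plainWidget = lift

-- Record an anomaly while rendering.
reportAnomaly :: String -> ReportingWidget site ()
reportAnomaly anomaly = tell [anomaly]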

But this transformer, too, ran into problems. Yesod's whamlet quasiquoter lets you include HTML snippets as widgets in your Haskell code - and those snippets can themselves embed other widgets. For example, the snippet:

[whamlet|
  <h1>A foreboding title
    ^{anInnerWidget}
  |]

defines a widget whose layout is the <h1> header, followed by the layout of the anInnerWidget widget. What's the type of anInnerWidget? Yesod requires that it is WidgetFor, even if the outer widget type is not! In other words, custom widgets cannot use this syntax to nest each other, even though Yesod's WidgetFors can.

Custom foundation sites

Of course, Yesod still affords you plenty of control over how your handlers and widgets will run. Specifically, to allow for configuration options, DB connections, and other runtime values that would be relevant to handlers and widgets, Yesod allows your foundation site to be absolutely anything: the only thing it has to do is implement the relevant classes, like Yesod.

This flexibility on the foundation site type is much more like the polymorphism that mtl was written to take advantage of. And since Yesod code often restricts us to using HandlerFor site and WidgetFor site, there's a neat alternative to monad transformers for Yesod - site transformers.

Site transformers

A site transformer is largely what it sounds like - a wrapper for a site type which, like a monad transformer augments a monad, augments the underlying site. The transformed type still needs to implement the relevant classes, like Yesod - but it is free to do so either by delegating to the underlying site's instances or by defining its own behaviour, as it sees fit.

The power of site transformers is: while your handlers and widgets are required to depend only on your site type, and no additional context, they can depend on the site type as much as they want. Moreover, because access to the Yesod internals lets you change the site type for just a snippet of a handler, that snippet can depend on a different site type to the rest of the code. When running any snippet, you then only have to provide the additional context to use the modified site type.

To talk about this in more concrete terms, consider this example:

data ReaderSite r site = ReaderSite r site

This ReaderSite is a site transformation that adds additional reader context to handlers and widgets. It doesn't look exactly like a ReaderT, because the site itself is already part of a reader (remember, handlers are "Reader IO"s). Any handler with site type ReaderSite r site can read the transformed site type - so it can access the value with type r. This means that, by transforming a site to a ReaderSite r site, we can add reader context to a handler - without needing ReaderT.

For example, we can define:

ask :: HandlerFor (ReaderSite r site) r

which reads the reader context from the site. To run a ReaderSite, we have to supply the reader context - just like for ReaderT:

runReaderSite :: r -> HandlerFor (ReaderSite r site) a -> HandlerFor site a
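
To see how that can work, here's a minimal sketch using a simplified stand-in for HandlerFor - a handler here is just "read the site, then run IO"; the real type carries more request state, but the shape of the trick is the same. SimpleHandler and askSite are illustrative names, not the ytl API:

newtype SimpleHandler site a = SimpleHandler { runSimpleHandler :: site -> IO a }

data ReaderSite r site = ReaderSite r site

-- Read the extra context from the transformed site.
askSite :: SimpleHandler (ReaderSite r site) r
askSite = SimpleHandler (\(ReaderSite r _) -> pure r)

-- Supply the context to run a transformed handler against the base site.
runReaderSite :: r -> SimpleHandler (ReaderSite r site) a -> SimpleHandler site a
runReaderSite r handler = SimpleHandler (\site -> runSimpleHandler handler (ReaderSite r site))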

In fact, this concept goes all the way to essentially reproducing an mtl-style design in Yesod - by defining, using, and running site transformers, it's possible to do many of the things mtl would otherwise let you do: read additional data, write to additional outputs, run parts of your monad code with a modified state, etc. All the while, your code remains compatible with Yesod itself, because the foundation site type is yours to play with.

Announcing YTL

That gets me to my main point: based on the above concepts, I've written a new utility library for writing Yesod code: ytl, the Yesod Transformer Library. Like mtl for monads, ytl describes site transformers: how to define them; how to lift handlers and widgets to transformed variants; and how to run transformed handlers and widgets with the underlying site.

The library itself is already being put into use for yesod-katip, a logging bridge I wrote between Yesod webservers and Katip scribes. But the power of the library is in its extensibility - it provides the tools to define new transformers easily, and integrate them into existing code automatically. Hopefully, others will find that ytl is just what they've been looking for.

Theorem proving in Haskell

This article follows on from the previous one on intuitionistic logic in Haskell. Unlike that one, this doesn't cover any theory - instead, it's a technical exploration of a Haskell library that simulates a theorem prover.

Theorem provers

Since Haskell terms are proofs, we can try to use the Haskell compiler as a theorem prover - a program that allows the user to describe a proof as a series of steps, and then checks that the proof is correct. In Haskell's case, we describe proofs as terms, and know they are correct when the term compiles and has the expected type (the statement we are trying to prove).

Coq

This approach is one taken by many theorem provers: Coq is a theorem prover with an associated dependently-typed language. Coq's type system is powerful enough to encode much more than just the propositional logic we've seen so far: it can be used to make statements about the behaviour of functions, the existence of certain kinds of values, and even other (quantified) statements.

The Coq compiler is similarly powerful: it is able to enforce that recursive functions are structurally recursive and total - the properties that gave us terminating Haskell terms - while still allowing for a wide range of expressible terms. The power of Coq is such that you can use it to describe and prove complex statements and theorems: but its basic concepts are very similar to the ones we've seen already. Coq proofs are just code; implication terms are just functions.

Interactive theorem proving

Coq is designed around interactive theorem proving - proofs can be described as a series of steps, and an editor can step through a proof to see the effect of each step in arriving at the final result. Each step can modify the environment - the set of variables and facts known to the compiler - and the goals - the set of statements which need to be proven. For example, the type forall A B : Prop, A -> B -> A /\ B says that, from the hypotheses A and B, it is possible to prove A /\ B (this is equivalent to the Haskell type a -> b -> a /\ b). A proof of that statement might look like:

Proof.            (* env: {},                              goals: { forall A B, A -> B -> A /\ B } *)
  intros A B a b. (* env: { A, B : Prop ; a : A ; b : B }, goals: { A /\ B } *)
  split.          (* env: { A, B : Prop ; a : A ; b : B }, goals: { A ; B } *)
  - exact a.      (* env: { A, B : Prop ; a : A ; b : B }, goals: { B } *)
  - exact b.      (* env: { A, B : Prop ; a : A ; b : B }, goals: {} *)
  Qed.

At the start of the proof, the environment is empty and there is one goal - the whole statement. The first step, intros, introduces names into the environment: A, B, for the types, and a, b for the proofs - and changes the goal to A /\ B. The second step splits one goal into two - A and B. The third and fourth steps each prove a single goal by providing a term which is exactly its proof.

Coq also allows for such terms to be expressed as functions directly: and vice versa. There's no distinction between what's expressible in the "code style" or the "proof style" of Coq terms.

Proof style with monads

Looking at the proof style available in Coq and other provers, one immediate candidate for a Haskell implementation springs to mind: do-notation. Haskell's do-notation already gives syntactic support for sequences of steps and name-binding, through x <- ....

In order to leverage do-notation, we then have to find an appropriate Haskell type which supports it (or otherwise implement one ourselves). While a Monad instance would be the most obvious choice, it wouldn't be powerful enough; in order to support a changing goal, something about the Monad type must change over time. We also want to have type-level restrictions: the split step, which turns a A /\ B goal into two subgoals, shouldn't be usable on a goal which isn't an /\.

An indexed monad

It turns out that we can overcome these problems by adding some type-level state to our monad. The result is called an indexed monad, which (along with definitions for indexed functors and applicatives), has the declaration:

class (IxApplicative m) => IxMonad m where
  ibind :: (a -> m j k b) -> m i j a -> m i k b

An IxMonad instance is a type constructor with 3 arguments: the old typestate, the new typestate, and the value type. When sequencing indexed monad values, the typestates must "align": the new typestate of the first value must be the old typestate of the second. This is easier to see in the IxApplicative definition (slightly modified here for readability):

class IxFunctor m => IxApplicative m where
  ipure :: a -> m i i a
  iap :: m i j (a -> b) -> m j k a -> m i k b

iap, as well as applying the function type, also "composes" the typestates: the resulting monad value has the old typestate of the first argument, and the new typestate of the second.

The goal as typestate

So we might want an indexed monad; but what should its typestate be? The type-level information we care about are the environment and the goal. However, the environment (a collection of variables and definitions) is already handled by Haskell as a programming language: do-notation even has a way to bind variables, with the arrow <-.

It follows that the typestate of our indexed monad turns out to be the goal. Proof steps, as monad values, are really then goal transformations. Such a monad value is itself a proof that, in a certain environment, some goal transformation is valid: a proof step that goes from a goal of a to a goal of b must also prove that from a proof of b it's possible to get a proof of a.

The Tactic monad

How then do we actually define the monad? We can do that by thinking about what valid goal transformations we will want to have.

If a monad value has some final type variable a, then it will be possible to bind that value to a variable which then has type a:

x <- myTransformation -- myTransformation has type m i j a
...                   -- from here on, x has type a

The remaining proof with goal j will have access to a proof of a through the variable x. Such a goal transformation introduces a as a hypothesis for the rest of the proof; if the goal from that point on is j, then the proof shows that a -> j.

But the original goal was i, not j - so the goal transformation has to justify why a proof of a -> j gives a proof of i. In other words, the goal transformation is an inhabitant of (a -> j) -> i.

This gives our monad definition:

data Tactic i j a
  = Tactic ((a -> j) -> i)

The name Tactic comes from the term used for such proof steps in many theorem provers.
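
For completeness, here is a sketch of the indexed instances for Tactic, assuming the IxApplicative and IxMonad classes from above (IxFunctor's imap, the indexed analogue of fmap, wasn't shown earlier, so its name is assumed):

instance IxFunctor Tactic where
  imap f (Tactic run) = Tactic (\goal -> run (goal . f))

instance IxApplicative Tactic where
  ipure x = Tactic (\goal -> goal x)
  iap (Tactic runF) (Tactic runA) =
    Tactic (\goal -> runF (\f -> runA (goal . f)))

instance IxMonad Tactic where
  ibind f (Tactic runA) =
    Tactic (\goal -> runA (\a -> let Tactic runB = f a in runB goal))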

Some tactics

Now that we know the shape of the monad, it's very easy to start writing tactic instances like the ones we would expect to see in any theorem prover (if you're not interested in examples, skip this section).

Coq's intro, for example, allows you to give a name to the left side of an ->:

intro :: Tactic (a -> b) b a

Such a goal transformation requires an inhabitant for (a -> b) -> (a -> b) - so the definition is pretty obvious:

intro = Tactic id

The left tactic simplifies proving an \/:

left :: Tactic (a \/ b) a ()

This tactic doesn't introduce any hypothesis - and that won't be a problem later, since we can always construct a value of type () when necessary.

left = Tactic \f -> Left (f ())

Canonical truth

The Tactic monad (once you define the necessary typeclass instances) lets you do proof steps with do notation: but we still can't describe a whole proof. Specifically, we don't know when we are "done" proving a statement - after all, we could always apply further goal transformations.

We could be satisfied by reducing the proof of a statement to the proof of some other statement which we've already proved - i.e. producing a term of type Tactic a b (), where we know b and want to prove a. Such a proof would inhabit (() -> b) -> a - and since () -> b is inhabited (since we know b to be true), we can get an inhabitant for a.

In practice, however, it's useful to have a single choice for such a known statement - and that statement represents "truth" in the type system. Additionally, it would be nice for this choice of truth to have a single, canonical constructor - so that all proofs of truth are the same. Together, these justify choosing () as the representation of truth.

Even more tactics

Using this representation of truth, we can describe tactics that solve goals - that is, transform goals into the goal of (). (Again, if you don't want examples, skip this section.)

Coq's exact tactic solves a goal by giving a term of the exact type of the goal:

exact :: a -> Tactic a () ()
exact proof = Tactic \_ -> proof

The split tactic in Coq turns a goal of A /\ B into two subgoals of A and B. However, the Tactic monad doesn't support multiple goals: so how can we represent it? The answer is through subproofs:

split :: Tactic a () () -> Tactic b () () -> Tactic (a /\ b) () ()

These subproofs require solving their respective subgoals. The tactic then allows you to solve a goal of a /\ b by providing a proof for each of a and b. The resulting proofs can finally be combined:

split (Tactic getA) (Tactic getB) = Tactic \trivial -> (getA trivial, getB trivial)

This technique of subproofs allows for defining lots of other useful tactics. The assert tactic in Coq allows for stating a subgoal, proving it, and giving the resulting proof a name to use later.

assert :: Tactic a () () -> Tactic i i a

assert takes the proof of the subgoal, and allows it to be bound in do-notation. Using the bound variable later on just uses the subproof.

assert (Tactic getSubproof) = Tactic \continue -> continue (getSubproof id)

A complete proof

We can then define a complete proof as one which solves its goal:

data Proof a = Proof (Tactic a () ())

Of course, such a definition is justified only if we can use it to get an inhabitant of a. In fact, this defines a complete proof as inhabitant of (() -> ()) -> a, which can very easily be used to obtain an inhabitant of a:

useProof :: Proof a -> a
useProof (Proof (Tactic transformation)) = transformation id
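
To tie it all together, here is a complete worked proof, assuming the Tactic machinery above and the /\ type alias from the intuitionistic-logic article. (Real do-notation for indexed monads needs RebindableSyntax or QualifiedDo, so the binds are spelled out here with a local andThen helper.)

conjIntro :: Proof (a -> b -> a /\ b)
conjIntro = Proof $
  intro `andThen` \a ->        -- goal becomes b -> a /\ b, hypothesis a is bound
  intro `andThen` \b ->        -- goal becomes a /\ b, hypothesis b is bound
  split (exact a) (exact b)    -- solve each half of the conjunction
  where andThen tactic continue = ibind continue tactic

-- Running the proof recovers the underlying term:
pair :: a -> b -> a /\ b
pair = useProof conjIntro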

Intuitionistic logic in Haskell

This article assumes a familiarity with Haskell, and some basic knowledge of classical logic (if you don't know about different varieties of logic, the one you do know is probably classical).

Terminating Haskell

The Void datatype

The Void datatype is part of the Haskell standard library, and should be well-known by most Haskellers. In short, Void has the following declaration

data Void

That is: it's a datatype, with an empty collection of constructors (you may be surprised this is a valid declaration). The consequence is that it's impossible to construct any value with type Void, a fact that both programmers and the compiler can exploit.

Though a Void value is unconstructable, it is still very simple to write a valid Haskell term which has the Void type.

aVoidTerm :: Void
aVoidTerm = aVoidTerm

-- Alternatively:
aVoidTerm = undefined
-- Or even:
aVoidTerm = error "Tried to evaluate a `Void` term"

These terms all share the property of being non-terminating. While lazy evaluation lets them appear in programs without any problem, any attempt to evaluate these terms will fail: either because of an infinite loop or a runtime error.

Terminating terms

Terminating terms are the subset of all Haskell terms which can be evaluated in finite time without error. While it's impossible to decide in general if a particular term is terminating, we can restrict our language so that we can only write terminating terms. One way to do that is to require that recursive functions are structurally recursive - so all recursive calls are with arguments that are "smaller" than the current argument - and total - defined for every possible argument.

If we implement those two restrictions in Haskell, we get a subset of the language that can only express terminating Haskell terms - though it can't express every terminating Haskell term. In terminating Haskell, we can no longer write terms with a Void type: there's no way to define such a term recursively, because it won't get "smaller"; and there's no way to leave the value undefined.

If you force the evaluation of a Void term, you can never get a terminating value, so there is no well-defined "after" the evaluation. The compiler even lets you exploit this fact to write a terminating term that evaluates a Void value, and then returns a value of any type.

{-# LANGUAGE EmptyCase #-}
useAVoid :: Void -> a
useAVoid void = case void of

This code uses the EmptyCase language extension to allow us to write a case statement with no branches - remembering that case statements have to be total, this means that the above code only compiles because Void has no cases to branch on. While the function definition itself is terminating, the result can have type a, for any choice of a: evaluating the argument can never terminate, so the function is never forced to produce an a.

Inhabited types

Void has the property of being uninhabited, because it has no "inhabitants" - valid terminating terms which have the Void type. Types with inhabitants are, unsurprisingly, said to be inhabited.

We have already seen that Void -> a is inhabited for any choice of a, even uninhabited choices - in fact, this is only because Void is uninhabited. For a (terminating) function with type a -> b, b can be uninhabited only if a is uninhabited - otherwise the function could evaluate the argument of type a, have it terminate, and be forced to produce a terminating value of type b: an impossibility.

A consequence is that the type a -> Void is inhabited only for choices of a which are uninhabited. Moreover, if a is uninhabited, then we can write a terminating term with type a -> Void - just like we did for Void -> a - by exploiting the fact that a has no inhabitants. The result is: a -> Void is inhabited if and only if a is uninhabited, and vice versa.

We can extend this reasoning about inhabitants to many other basic Haskell types. Maybe a, for example, is always inhabited by the terminating term Nothing, even for uninhabited choices of a. Either a b is inhabited provided one of a or b is inhabited, because you could wrap the terminating term with type a (or b) in a Left (or Right) constructor to give a terminating term of type Either a b. Conversely, if Either a b is inhabited, then at least one of a or b must be inhabited (though the proof is much more difficult to summarize). In a similar vein, the tuple type (a, b) is inhabited if and only if both a, b are inhabited.
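
Each of those claims about inhabitedness can be witnessed directly with a terminating term (the names below are just illustrative):

maybeInhabitant :: Maybe a
maybeInhabitant = Nothing          -- Maybe a is inhabited for any a

leftInhabitant :: a -> Either a b
leftInhabitant = Left              -- an inhabitant of a gives one of Either a b

pairInhabitant :: a -> b -> (a, b)
pairInhabitant x y = (x, y)        -- inhabitants of both a and b give one of (a, b)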

Types and logic

Astute readers may have already spotted the point of the above discussion: this reasoning about inhabited types looks a lot like formal logic.

If inhabitedness is "truth", then uninhabitedness is "falsehood". Continuing the comparison, a -> Void is the "negation" of a - since it is uninhabited if and only if a is inhabited, and vice versa; Either a b is the "or" of a and b, since it is inhabited if and only if at least one of a, b is; and (a, b) is the "and".

It remains to check that a -> b follows the expected behaviour of "implies" - that a -> b is uninhabited (aka false) if and only if a is inhabited (true) and b is uninhabited (false). In fact, due to arguments which include the ones made earlier on, we find that this is the case - and that ->, as well as resembling the "implies" arrow, also seems to act like one.

The similarities don't stop there. If we introduce some type aliases to make our types easier to read:

{-# LANGUAGE TypeOperators #-}
type Not a = a -> Void
type a /\ b = (a, b)
type a \/ b = Either a b

Then we can wonder if some inhabitants exist for certain types which correspond to logical statements we know to be true. One example would be the commutativity of /\ - that a /\ b -> b /\ a. A programmer could quickly produce such an inhabitant:

andComm :: a /\ b -> b /\ a
-- i.e. :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

We can also wonder if types which corresponding to false logical statements are uninhabited. For example, a /\ Not a should never hold, so the type should be uninhabited. It follows that Not (a /\ Not a) is inhabited: in fact it is.

explosion :: Not (a /\ Not a)
--   i.e. :: (a, a -> Void) -> Void
explosion (x, f) = f x

Programmers who struggle with logic may notice that rewriting the type in terms of Haskell types makes for much simpler reading: it's pretty easy to spot that (a, a -> Void) -> Void is inhabited, and even to immediately write out the above inhabitant.

The excluded middle

A pretty important rule for classical logic is the "law of the excluded middle" - that every statement is either definitely true, or definitely false.

We can express this in our types as a \/ Not a. But can we inhabit it? For particular choices of a, sure: if we knew that a was Int, or Void, or even Maybe b, we would be able to produce an inhabitant easily:

intLOEM :: Int \/ Not Int
intLOEM = Left 1

voidLOEM :: Void \/ Not Void
--  i.e. :: Either Void (Void -> Void)
voidLOEM = Right id

maybeLOEM :: (Maybe b) \/ Not (Maybe b)
maybeLOEM = Left (Nothing)

But for a polymorphic a, we struggle. While every type in Haskell is definitely inhabited or uninhabited, we can't in general produce an inhabitant of the type, or an inhabitant for the type's "negation", without knowing which type we're dealing with.

loem :: a \/ Not a
loem = ??? -- uh oh

Does this mean that our strange similarities are just a coincidence? After all, a \/ Not a is certainly true - so we expect to be able to produce an inhabitant. Notably, we also fail to "negate" the law of excluded middle - we can't produce an inhabitant for Not (a \/ Not a) either. In the logic of our types, we don't know if a \/ Not a is "true" or "false".

What has actually happened is that we've departed from classical logic: our types do behave like a logical system, just a different one.

What makes classical logic?

While most users of logic - particularly mathematicians - take the law of excluded middle as a universal truth, and apply it liberally in proofs, it's not a necessary part of any logical system. Classical logic is actually defined in part by the existence of this law, and double negation: Not (Not a) -> a (interested readers can read more about these laws).

Double negation is another statement that doesn't have an inhabitant in Haskell - and nor does its negation. In a logical system without either law, several statements which are true in classical logic also no longer hold: Peirce's law, ((a -> b) -> a) -> a; the double contrapositive, (Not b -> Not a) -> (a -> b); the implies equivalence (a -> b) -> (Not a \/ b); de Morgan's laws, such as Not (Not a /\ Not b) -> (a \/ b); and many more. Similarly, none of these Haskell types have inhabitants.

Intuitionistic logic

If the logic that we see in Haskell types isn't classical logic, what then is it?

The feature of Haskell that prevents all these types from having inhabitants is that, in Haskell, we must construct terms explicitly. In classical logic, the statement a \/ b holds if and only if at least one of a, b holds - but our knowledge of a \/ b doesn't necessarily tell us which one it is. In Haskell, on the other hand, you can use an Either a b to get either an a or a b, by deconstructing the term. In other words, Haskell requires that every term of type a \/ b be explicitly constructed from a term of type a or a term of type b.

This limitation doesn't just appear in Haskell. In fact, this exact behaviour describes intuitionistic logic - a subset of classical logic. Intuitionistic logic is precisely what you get when you get rid of the laws of the excluded middle and double negation from classical logic. The resulting system still makes sense - it's just less powerful. Lots of statements with classical proofs have no equivalent intuitionistic proof.

Constructive mathematics and decidability

Just like classical logic tries to formalise the reasoning of classical mathematics, intuitionistic logic tries to formalise the reasoning of constructive mathematics (for this reason, it is sometimes called constructive logic).

Gödel's proof of the first incompleteness theorem showed that every (sufficiently powerful, consistent) system of reasoning about numbers includes statements that are undecidable. These statements couldn't be proven, and their negations couldn't be proven - in other words, for some statement a, it was impossible to prove either a or Not a.

As a result, several mathematicians became dissatisfied with classical logic: a binary idea of truth no longer seemed correct. They devised a new mathematics without the use of the law of excluded middle or double negation in proofs - since a proof of Not (Not a) only showed that a wasn't false: not that it was true. This system was termed constructive mathematics.

In constructive mathematics, as in intuitionistic logic, proofs are only possible by explicit construction. To prove a \/ b, it is necessary to give a proof of a or a proof of b - it's not enough to argue that at least one must be true. This system ends up being weaker - several statements which are provable in classical mathematics are undecidable in constructive mathematics.

Decidability as a constraint

Returning to Haskell for a moment, we can use this insight to produce a weaker version of the law of the excluded middle, just for decidable statements. Haskell allows us to express decidability as a constraint on a logical statement (a type), by using a typeclass:

class Decidable a where
  decide :: a \/ Not a

loem :: (Decidable a) => a \/ Not a
loem = decide
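
For instance, the earlier concrete cases become instances straightforwardly (a sketch, mirroring intLOEM and voidLOEM above):

instance Decidable Int where
  decide = Left 1        -- Int is inhabited

instance Decidable Void where
  decide = Right id      -- Not Void = Void -> Void is inhabited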

Restricting ourselves to decidable statements is even enough to re-introduce double negation:

{-# LANGUAGE ScopedTypeVariables #-}
doubleNegation :: forall a. (Decidable a) => Not (Not a) -> a
doubleNegation doubleNegative
  = case (decide :: a \/ Not a) of
      Left a -> a
      Right notA -> useAVoid (doubleNegative notA)

The Curry-Howard isomorphism

So Haskell types look like logical statements - and in particular, it looks like statements provable in intuitionistic logic correspond to inhabited Haskell types. But what does that mean for Haskell terms?

Code as proof

To the Haskell compiler, a (terminating) term is a demonstration that a particular type is inhabited. In declaring the type signature for a definition, you claim the type is inhabited: in the definition itself, you prove it. When you consider types as a logic, this analogy makes even more sense: where a type is a logical statement, a term is a proof that the type is inhabited, and so that the statement is true. Proving the negation of a statement is done by showing that the negated type is inhabited.

This matchup - types as statements, terms as proofs - hasn't gone unnoticed among computer scientists: it's been described in papers since the 70s. Named the Curry-Howard Correspondence (or isomorphism), it was first spotted in the lambda calculus - the abstract model of computation that inspired Haskell.

Computing proofs

One of the most interesting parts of the CHC comes from considering -> - both implication and function. The type a -> b is both the logical statement that a implies b, and the type of transformations of values of type a into values of type b. But since a value of type a is a proof for a, such a transformation also transforms proofs: given a proof for a, it yields a proof for b.

And since proofs are code, that means that evaluating a proof involves running those proof transformations. A Haskell term like f x with type b is made of a proof f of a -> b, and a proof x of a (for some a). In running f x, we use the proof of a -> b combined with the proof of a to get a proof for b: and the resulting value has the same type. By evaluating a proof, we get a new proof for the same statement. In other words, computation simplifies proofs.

Cut-free proofs

But what does this "simplification" actually mean in a proof? How can proofs transform other proofs? The answer comes from considering what our logical types actually mean. a -> b is a representation of the statement "from the hypothesis a, it is possible to prove b". Inhabitants of a -> b are proofs which assume that a is true, and use it to construct a proof of b. But in order to use such a proof to prove b, it is necessary to produce a proof of a: and the resulting proof has some redundancy. If you can prove a, there's no need for another proof to assume that a is true - it could just use the proof of a.

Consider the same situation in Haskell. If you apply a function to an argument

(\x -> ...) t

then computing the function application substitutes the argument value into the function body.

let x = t in ...

This term could be written both ways: the behaviour is the same. The only difference is that the first version, when computed, turns into the second.

This construction - proving both an implication and its assumption - is called a cut in a proof. When we evaluate a proof as a Haskell term, we actually eliminate the cuts in the proof: since a cut is just a function application, and computing function applications is how Haskell code is executed. Since we are limited to terminating terms, executing proofs must eventually give us a simplified proof with no cuts in it (because it cannot be simplified any more).

This line of reasoning leads us to a very significant result about intuitionistic (and classical) proofs: the cut elimination theorem. Shown by Gentzen in 1935, the theorem states that every statement with a proof that contains cuts, also has a proof that contains no cuts. We have the framework to prove the same result: if every intuitionistic proof corresponds to a terminating Haskell term, where cuts are function application, then computation does the rest of the work. To get a cut-free version of an existing proof, it is sufficient to run the proof fully.