Personal Website (https://ivanbakel.github.io/): a portfolio and content website. Contents © 2020 <a href="mailto:ivb@vanbakel.io">Isaac van Bakel</a>. Last updated Tue, 29 Dec 2020 00:22:57 GMT. Generated by Nikola (getnikola.com).
- The Yesod Transformer Library (https://ivanbakel.github.io/posts/yesod-transformer-library-announced/), by Isaac van Bakel<div><h2>The Yesod Transformer Library</h2>
<p>This post assumes a familiarity with monad transformers.</p>
<p>As part of my job, I've been spending lots of time in the <a href="https://www.yesodweb.com/">Yesod</a> ecosystem - <code>yesod-core</code> (and <code>yesod-*</code>), <code>persistent</code>, and the <code>shakespeare</code> templating languages. These libraries are particularly easy to contribute back to: the efforts of Michael Snoyman and Matt Parsons, as well as everyone else who puts hard work into those communities, mean that interaction is pleasant, considerate, and meaningful. Their ambitious nature also means that small contributions are relatively easy to make - just fill any utility hole in the vast APIs that already exist.</p>
<p>However, Yesod's design does come with some pain points. Notably, <code>yesod-core</code> is in a complicated relationship with <a href="https://hackage.haskell.org/package/mtl">monad transformers</a>: it used to support them, and now it doesn't.</p>
<h3>Monad transformers in Yesod</h3>
<p>Yesod is based around two monads: <code>HandlerFor site</code>, which is essentially a request handler for some <em>foundation site</em> type <code>site</code>; and <code>WidgetFor site</code>, which is a specialised handler for building webpages. Specifically, <code>WidgetFor</code> lets you use do-notation to build a page imperatively - you can write to the page piece by piece, appending HTML, CSS, and JS directly to the head or body.</p>
<p>Yesod <em>also</em> has two monad classes: <code>MonadHandler</code>, which can be implemented by any monad that can run a <code>HandlerFor</code>; and <code>MonadWidget</code>, which has the same property for <code>WidgetFor</code>. Each base monad type implements its respective monad class. But these classes also have lifting instances for loads of monad transformers. For example, if <code>MonadHandler m</code>, then <code>MonadHandler (ReaderT r m)</code>.</p>
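<p>As a sketch of how that lifting works, here is a simplified model: the class below is a stand-in for <code>yesod-core</code>'s real <code>MonadHandler</code> (which recovers the site type with an associated type family rather than a second class parameter), with a toy <code>HandlerFor</code> and a hand-rolled <code>ReaderT</code>.</p>

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

-- Toy model: a handler is an IO action with access to the site.
newtype HandlerFor site a = HandlerFor (site -> IO a)

runHandler :: HandlerFor site a -> site -> IO a
runHandler (HandlerFor f) = f

-- Simplified stand-in for MonadHandler: the real class uses an
-- associated HandlerSite type family instead of a second parameter.
class MonadHandler site m where
  liftHandler :: HandlerFor site a -> m a

-- The base monad trivially implements its own class ...
instance MonadHandler site (HandlerFor site) where
  liftHandler = id

-- ... and a hand-rolled ReaderT gets a lifting instance, so any
-- MonadHandler wrapped in a reader is still a MonadHandler.
newtype ReaderT r m a = ReaderT (r -> m a)

runReaderT' :: ReaderT r m a -> r -> m a
runReaderT' (ReaderT f) = f

instance MonadHandler site m => MonadHandler site (ReaderT r m) where
  liftHandler handler = ReaderT (\_ -> liftHandler handler)
```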
<p><a href="https://github.com/yesodweb/yesod/commit/47ee7384ea123135d090c4e931657cb11c583b94"><code>HandlerFor</code> and <code>WidgetFor</code> used to be transformers</a>. <code>yesod-core</code> still contains some references to <code>WidgetT</code> and <code>HandlerT</code>, even in non-deprecated APIs. But some years ago, they were changed to be just monads, and the result is an interesting limbo - some code cares about <code>mtl</code>-style <code>MonadWidget</code>s and <code>MonadHandler</code>s, and other code just uses the base monad.</p>
<p>This change wasn't baseless - Snoyman has written in the past about how overused he thinks full-application monad stacks are, and the value of the <a href="https://www.fpcomplete.com/blog/2017/06/readert-design-pattern/">"ReaderT" pattern</a>, where monad state is limited to a reader type and (typically) the <code>IO</code> monad. In fact, <code>HandlerFor</code> and <code>WidgetFor</code> are exactly that - <code>IO</code> monads which can read the foundation site.</p>
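<p>As a rough sketch (the real <code>yesod-core</code> definitions also thread per-request state, omitted here), that pattern boils down to the following, where the monad is nothing more than a reader over <code>IO</code>:</p>

```haskell
-- A rough model of HandlerFor as "ReaderT site IO": just an IO action
-- that can read the foundation site.
newtype HandlerFor site a = HandlerFor { runHandlerFor :: site -> IO a }

instance Functor (HandlerFor site) where
  fmap f (HandlerFor g) = HandlerFor (fmap f . g)

instance Applicative (HandlerFor site) where
  pure x = HandlerFor (\_ -> pure x)
  HandlerFor f <*> HandlerFor x = HandlerFor (\site -> f site <*> x site)

instance Monad (HandlerFor site) where
  HandlerFor x >>= f =
    HandlerFor (\site -> x site >>= \a -> runHandlerFor (f a) site)

-- Reading the foundation site is just `ask` under another name.
getYesod :: HandlerFor site site
getYesod = HandlerFor pure
```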
<h4>What's so bad about that?</h4>
<p>For the most part, nothing. Concrete types can even add value themselves - error messages from an unexpected type are often much more readable than instance resolution failures in <code>mtl</code>-style code.</p>
<p>But the pain points I encountered came from two particular places where <code>yesod-core</code> interacted with code for my application:</p>
<ul>
<li>when running the website, and</li>
<li>when trying to make a custom widget</li>
</ul>
<h5>Running the website</h5>
<p>Yesod's design is very tied to the idea of a foundation site - as can be best seen in <a href="https://hackage.haskell.org/package/yesod-core-1.6.18.8/docs/Yesod-Core.html#t:Yesod">the <code>Yesod</code> class</a>. <code>Yesod</code> is implemented for foundation site types - it describes how the corresponding site performs certain actions. For an example, take the <code>errorHandler</code> method:</p>
<pre class="code literal-block"><span class="nf">errorHandler</span> <span class="ow">::</span> <span class="kt">ErrorResponse</span> <span class="ow">-></span> <span class="kt">HandlerFor</span> <span class="n">site</span> <span class="kt">TypedContent</span>
</pre>
<p><code>errorHandler</code> defines how the server takes some kind of error and returns it to the client as content. Crucially, that response <em>must</em> be a <code>HandlerFor</code> - it cannot be a generalized <code>MonadHandler</code>, and it must be runnable (presumably because otherwise Yesod could not run it). Similar constraints, using <code>WidgetFor</code> and <code>HandlerFor</code>, pepper the whole class.</p>
<p>These choices are not a consequence of the dropping of support for monad transformers - the same design existed well before then. But they do have the same consequences - that <code>site</code> must do everything. In the above example, there is the constraint (from the type) that the <code>site</code>, as well as some internal Yesod <code>HandlerData</code>, must be enough to decide how to give an error response. You cannot return a <code>ReaderT r (... TypedContent)</code>, even if you need some additional context <code>r</code> not found in the <code>site</code>. Similarly, you cannot use a <code>WriterT w (... TypedContent)</code> to track some error metrics <code>w</code>, without having to store them in the <code>site</code>.</p>
<p>This is not an obstacle most of the time - after all, it's easy to put stuff in the site. But I was interested in making a handler that would change the output of logging on my website, without needing to change all of my code. In <code>mtl</code> style, this would have been a monad transformer, to modify the monad stack for my handlers. In Yesod, that change isn't so easy.</p>
<h5>Making custom widgets</h5>
<p>Widgets, like handlers, are required to be <code>WidgetFor</code>s at the boundary where your application code touches Yesod's. This is more sensible - rarely do you want hidden outside context on all your widgets. But that doesn't preclude transformers appearing <em>inside</em> some widget code: like running widgets which all need the same DB data in the same <code>ReaderT</code>.</p>
<p>For a more concrete example, consider a problem I had last week: when visualising some submitted data to the user who submitted it, some of it would be obviously erroneous - data could be missing, values would be zero where non-zero was expected, etc. The goal was to let the user know that these anomalies existed in a little report table, <em>and</em> have that table link to each anomaly in the view.</p>
<p>The code could have been ugly, because this is a clear case of mixed responsibilities - I needed to spot and track errors in the model, but report them in a way that's directly woven into the view. But there was a more elegant possibility: the view could report anomalies as they were displayed. Once the view was finished rendering, the anomaly report would already be complete. Since the view itself did the reporting, it could even generate a unique ID per report, and make that ID link to the displayed anomaly - when the report was rendered, each link's target would already be in place on the page.</p>
<p>The resulting approach called for a custom widget type; one which could display basic widgets (<code>WidgetFor site</code>, which did not report anomalies) alongside their error-reporting cousins. This custom type was a <code>WriterT</code> transformer on <code>WidgetFor</code> - and the writer type was the anomaly report itself.</p>
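<p>A hedged sketch of that design, with a toy widget monad standing in for <code>WidgetFor</code> (the names <code>Anomaly</code>, <code>displayValue</code>, and so on are illustrative, not from the real codebase):</p>

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Writer (Writer, WriterT, runWriter, runWriterT, tell)

-- Toy stand-in for WidgetFor site: a widget just accumulates page HTML.
type Widget = Writer [String]

appendHtml :: String -> Widget ()
appendHtml html = tell [html]

-- One row of the anomaly report: an element ID to link to, plus a message.
data Anomaly = Anomaly { anomalyId :: String, message :: String }

-- The custom widget type: a WriterT over the base widget, where the
-- writer output is the anomaly report itself.
type ReportWidget = WriterT [Anomaly] Widget

-- Display a value; when it is anomalous, render it under a linkable ID
-- and report the anomaly in the same breath.
displayValue :: String -> Int -> ReportWidget ()
displayValue name value
  | value == 0 = do
      let elemId = "anomaly-" ++ name
      lift (appendHtml ("<td id=" ++ elemId ++ ">0</td>"))
      tell [Anomaly elemId (name ++ " is unexpectedly zero")]
  | otherwise = lift (appendHtml ("<td>" ++ show value ++ "</td>"))
```

Running the outer <code>WriterT</code> yields both the rendered page and the finished anomaly report, whose entries already point at IDs present in that page.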
<p>But this transformer, too, ran into problems. Yesod's <code>whamlet</code> quasiquoter lets you include HTML snippets as widgets in your Haskell code - and those snippets can themselves embed other widgets. For example, the snippet:</p>
<pre class="code literal-block"><span class="p">[</span><span class="n">whamlet</span><span class="o">|</span>
<span class="o"><</span><span class="n">h1</span><span class="o">></span><span class="kt">A</span> <span class="n">foreboding</span> <span class="n">title</span>
<span class="o">^</span><span class="p">{</span><span class="n">anInnerWidget</span><span class="p">}</span>
<span class="o">|</span><span class="p">]</span>
</pre>
<p>defines a widget whose layout is the <code><h1></code> header, followed by the layout of the <code>anInnerWidget</code> widget. What's the type of <code>anInnerWidget</code>? Yesod requires that it is <code>WidgetFor</code>, <em>even if</em> the outer widget type is not! In other words, custom widgets cannot use this syntax to nest each other, even though Yesod's <code>WidgetFor</code>s can.</p>
<h3>Custom foundation sites</h3>
<p>Of course, Yesod still affords you plenty of control over how your handlers and widgets will run. Specifically, to allow for configuration options, DB connections, and other runtime values that would be relevant to handlers and widgets, Yesod allows your foundation site to be absolutely anything: the only thing it has to do is implement the relevant classes, like <code>Yesod</code>.</p>
<p>This flexibility on the foundation site type is much more like the polymorphism that <code>mtl</code> was written to take advantage of. And since Yesod code often restricts us to using <code>HandlerFor site</code> and <code>WidgetFor site</code>, there's a neat alternative to monad transformers for Yesod - <strong>site transformers</strong>.</p>
<h4>Site transformers</h4>
<p>A site transformer is largely what it sounds like - a wrapper for a <code>site</code> type which, like a monad transformer augments a monad, augments the underlying <code>site</code>. The transformed type still needs to implement relevant classes, like <code>Yesod</code> - but it is free to implement them by delegating to the underlying site's instance, or by defining its own behaviour, as it sees fit.</p>
<p>The power of site transformers is that while your handlers and widgets are required to depend only on your <code>site</code> type, and no additional context, they can depend on the site type <em>as much as they want</em>. Moreover, because access to the Yesod internals lets you change the site type for just a snippet of a handler, that snippet can depend on a different site type from the rest of the code. When running any snippet, you then only have to provide the additional context needed by the modified site type.</p>
<p>To talk about this in more concrete terms, consider this example:</p>
<pre class="code literal-block"><span class="kr">data</span> <span class="kt">ReaderSite</span> <span class="n">r</span> <span class="n">site</span> <span class="ow">=</span> <span class="kt">ReaderSite</span> <span class="n">r</span> <span class="n">site</span>
</pre>
<p>This <code>ReaderSite</code> is a site transformation that adds additional reader context to handlers and widgets. It doesn't look exactly like a <code>ReaderT</code>, because the <code>site</code> itself is already part of a reader (remember, handlers are "Reader IO"s). Any handler with site type <code>ReaderSite r site</code> can read the transformed site type - so it can access the value with type <code>r</code>. This means that, by transforming a <code>site</code> to a <code>ReaderSite r site</code>, we can <em>add reader context</em> to a handler - without needing <code>ReaderT</code>.</p>
<p>For example, we can define:</p>
<pre class="code literal-block"><span class="nf">ask</span> <span class="ow">::</span> <span class="kt">HandlerFor</span> <span class="p">(</span><span class="kt">ReaderSite</span> <span class="n">r</span> <span class="n">site</span><span class="p">)</span> <span class="n">r</span>
</pre>
<p>which reads the reader context from the site. To <em>run</em> a <code>ReaderSite</code>, we have to supply the reader context - just like for <code>ReaderT</code>:</p>
<pre class="code literal-block"><span class="nf">runReaderSite</span> <span class="ow">::</span> <span class="n">r</span> <span class="ow">-></span> <span class="kt">HandlerFor</span> <span class="p">(</span><span class="kt">ReaderSite</span> <span class="n">r</span> <span class="n">site</span><span class="p">)</span> <span class="n">a</span> <span class="ow">-></span> <span class="kt">HandlerFor</span> <span class="n">site</span> <span class="n">a</span>
</pre>
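<p>Under the toy model of <code>HandlerFor</code> as a reader over <code>IO</code>, both functions have short implementations (a sketch only; the real versions in <code>ytl</code> work against <code>yesod-core</code>'s actual internals):</p>

```haskell
-- Toy model of HandlerFor; yesod-core's real type carries more state.
newtype HandlerFor site a = HandlerFor { runHandlerFor :: site -> IO a }

data ReaderSite r site = ReaderSite r site

-- Read the extra context straight out of the transformed site.
ask :: HandlerFor (ReaderSite r site) r
ask = HandlerFor (\(ReaderSite r _) -> pure r)

-- Run a handler over the transformed site by supplying the context,
-- exactly as runReaderT supplies the environment of a ReaderT.
runReaderSite :: r -> HandlerFor (ReaderSite r site) a -> HandlerFor site a
runReaderSite r (HandlerFor f) = HandlerFor (\site -> f (ReaderSite r site))
```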
<p>In fact, this concept goes all the way to essentially reproducing an <code>mtl</code>-style design in Yesod - by defining, using, and running site transformers, it's possible to do many of the things <code>mtl</code> would otherwise let you do: read additional data, write to additional outputs, run parts of your monad code with a modified state, etc. All the while, your code remains compatible with <code>Yesod</code> itself, because the foundation site type is yours to play with.</p>
<h3>Announcing YTL</h3>
<p>That gets me to my main point: based on the above concepts, I've written a new utility library for writing Yesod code: <a href="https://github.com/ivanbakel/ytl"><code>ytl</code>, the Yesod Transformer Library</a>. Like <code>mtl</code> for monads, <code>ytl</code> describes site transformers: how to define them; how to lift handlers and widgets to transformed variants; and how to run transformed handlers and widgets with the underlying site.</p>
<p>The library itself is already being put into use for <a href="https://github.com/ivanbakel/yesod-katip"><code>yesod-katip</code>, a logging bridge I wrote between Yesod webservers and Katip scribes</a>. But the power of the library is in its extensibility - it provides the tools to define new transformers easily, and integrate them into existing code automatically. Hopefully, others will find that <code>ytl</code> is just what they've been looking for.</p></div>https://ivanbakel.github.io/posts/yesod-transformer-library-announced/Mon, 28 Dec 2020 18:01:36 GMT
- Theorem proving in Haskell (https://ivanbakel.github.io/posts/theorem-proving-in-haskell/), by Isaac van Bakel<div><h2>Theorem proving in Haskell</h2>
<p>This article follows on from <a href="https://ivanbakel.github.io/posts/intuitionistic-logic-in-haskell/">the previous one</a> on intuitionistic logic in Haskell. Unlike that one, this one doesn't cover any theory - instead, it's a technical exploration of a Haskell library that simulates a theorem prover.</p>
<h3>Theorem provers</h3>
<p>Since Haskell terms are proofs, we can try to use the Haskell compiler as a <strong>theorem prover</strong> - a program that allows the user to describe a proof as a series of steps, and then checks that the proof is correct. In Haskell's case, we describe proofs as terms, and know they are correct when the term compiles and has the expected type (the statement we are trying to prove).</p>
<h4>Coq</h4>
<p>This approach is one taken by many theorem provers: <a href="https://coq.inria.fr/">Coq</a> is a theorem prover with an associated dependently-typed language. Coq's type system is powerful enough to encode much more than just the propositional logic we've seen so far: it can be used to make statements about the behaviour of functions, the existence of certain kinds of values, and even other (quantified) statements.</p>
<p>The Coq compiler is similarly powerful: it is able to enforce that functions are <em>strictly positive</em> - the property that gave us terminating Haskell terms - while still allowing for a wide range of expressible terms. The power of Coq is such that you can use it to describe and prove complex statements and theorems: but its basic concepts are very similar to the ones we've seen already. Coq proofs are just code; implication terms are just functions.</p>
<h4>Interactive theorem proving</h4>
<p>Coq is designed around <em>interactive theorem proving</em> - proofs can be described as a series of steps, and an editor can step through a proof to see the effect of each step in arriving at the final result. Each step can modify the <strong>environment</strong> - the set of variables and facts known to the compiler - and the <strong>goals</strong> - the set of statements which need to be proven. For example, the type <code>forall A B : Prop, A -> B -> A /\ B</code> says that, from the hypotheses <code>A</code> and <code>B</code>, it is possible to prove <code>A /\ B</code> (this is equivalent to the Haskell type <code>a -> b -> a /\ b</code>). A proof of that statement might look like:</p>
<pre class="code literal-block"><span class="kn">Proof</span><span class="o">.</span> <span class="c">(* env: {}, goals: { forall A B, A -> B -> A /\ B } *)</span>
<span class="k">intros</span> <span class="n">A</span> <span class="n">B</span> <span class="n">a</span> <span class="n">b</span><span class="o">.</span> <span class="c">(* env: { A, B : Prop ; a : A ; b : B }, goals: { A /\ B } *)</span>
<span class="k">split</span><span class="o">.</span> <span class="c">(* env: { A, B : Prop ; a : A ; b : B }, goals: { A ; B } *)</span>
<span class="o">-</span> <span class="kp">exact</span> <span class="n">a</span><span class="o">.</span> <span class="c">(* env: { A, B : Prop ; a : A ; b : B }, goals: { B } *)</span>
<span class="o">-</span> <span class="kp">exact</span> <span class="n">b</span><span class="o">.</span> <span class="c">(* env: { A, B : Prop ; a : A ; b : B }, goals: {} *)</span>
<span class="kn">Qed</span><span class="o">.</span>
</pre>
<p>At the start of the proof, the environment is empty and there is one goal - the whole statement. The first step, <code>intros</code>, introduces names into the environment: <code>A</code>, <code>B</code>, for the types, and <code>a</code>, <code>b</code> for the proofs - and changes the goal to <code>A /\ B</code>. The second step <code>split</code>s one goal into two - <code>A</code> and <code>B</code>. The third and fourth steps each prove a single goal by providing a term which is <code>exact</code>ly its proof.</p>
<p>Coq also allows for such terms to be expressed as functions directly: and vice versa. There's no distinction between what's expressible in the "code style" or the "proof style" of Coq terms.</p>
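<p>For comparison, the whole Coq proof above collapses, in Haskell, to the pairing function (taking <code>/\</code> to be the pair type, as in the previous article):</p>

```haskell
{-# LANGUAGE TypeOperators #-}

-- Conjunction as the pair type, following the previous article.
type (/\) a b = (a, b)

-- The "code style" proof of forall A B, A -> B -> A /\ B:
-- introduce the two hypotheses, then pair them.
andIntro :: a -> b -> a /\ b
andIntro a b = (a, b)
```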
<h3>Proof style with monads</h3>
<p>Looking at the proof style available in Coq and other provers, one immediate candidate for a Haskell implementation springs to mind: do-notation. Haskell's do-notation already gives syntactic support for sequences of steps and name-binding, through <code>x <- ...</code>.</p>
<p>In order to leverage do-notation, we then have to find an appropriate Haskell type which supports it (or otherwise implement one ourselves). While a <code>Monad</code> instance would be the most obvious choice, it wouldn't be powerful enough; in order to support a changing goal, something about the <code>Monad</code> type must change over time. We also want to have type-level <em>restrictions</em>: the <code>split</code> step, which turns a <code>A /\ B</code> goal into two subgoals, shouldn't be usable on a goal which isn't an <code>/\</code>.</p>
<h4>An indexed monad</h4>
<p>It turns out that we can overcome these problems by adding some <em>type-level state</em> to our monad. The result is called an <a href="https://hackage.haskell.org/package/indexed"><strong>indexed monad</strong></a>, which (along with definitions for indexed functors and applicatives), has the declaration:</p>
<pre class="code literal-block"><span class="kr">class</span> <span class="p">(</span><span class="kt">IxApplicative</span> <span class="n">m</span><span class="p">)</span> <span class="ow">=></span> <span class="kt">IxMonad</span> <span class="n">m</span> <span class="kr">where</span>
<span class="n">ibind</span> <span class="ow">::</span> <span class="p">(</span><span class="n">a</span> <span class="ow">-></span> <span class="n">m</span> <span class="n">j</span> <span class="n">k</span> <span class="n">b</span><span class="p">)</span> <span class="ow">-></span> <span class="n">m</span> <span class="n">i</span> <span class="n">j</span> <span class="n">a</span> <span class="ow">-></span> <span class="n">m</span> <span class="n">i</span> <span class="n">k</span> <span class="n">b</span>
</pre>
<p>An <code>IxMonad</code> instance is a type constructor with 3 arguments: the old typestate, the new typestate, and the monad argument. When sequencing indexed monad values, the typestates must "align": the new typestate of the first value must be the old typestate of the second. This is easier to see in the <code>IxApplicative</code> definition (slightly modified here for readability):</p>
<pre class="code literal-block"><span class="kr">class</span> <span class="kt">IxFunctor</span> <span class="n">m</span> <span class="ow">=></span> <span class="kt">IxApplicative</span> <span class="n">m</span> <span class="kr">where</span>
<span class="n">ipure</span> <span class="ow">::</span> <span class="n">a</span> <span class="ow">-></span> <span class="n">m</span> <span class="n">i</span> <span class="n">i</span> <span class="n">a</span>
<span class="n">iap</span> <span class="ow">::</span> <span class="n">m</span> <span class="n">i</span> <span class="n">j</span> <span class="p">(</span><span class="n">a</span> <span class="ow">-></span> <span class="n">b</span><span class="p">)</span> <span class="ow">-></span> <span class="n">m</span> <span class="n">j</span> <span class="n">k</span> <span class="n">a</span> <span class="ow">-></span> <span class="n">m</span> <span class="n">i</span> <span class="n">k</span> <span class="n">b</span>
</pre>
<p><code>iap</code>, as well as applying the function type, also "composes" the typestates: the resulting monad value has the old typestate of the first argument, and the new typestate of the second.</p>
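<p>A standard toy example (not from the <code>indexed</code> package) makes the alignment concrete: a door whose open/closed state lives in the type, so that a sequence of operations only compiles when the states line up.</p>

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Type-level states.
data Open
data Closed

-- An indexed action: moves the typestate from i to j while computing
-- an a. (The Ix* class instances are omitted for brevity.)
newtype Door i j a = Door a

open :: Door Closed Open ()
open = Door ()

close :: Door Open Closed ()
close = Door ()

-- ibind with the same shape as the class method above: sequencing
-- makes the typestates align, so closing an open door compiles, while
-- e.g. `ibind (\() -> open) open` is a type error.
ibind :: (a -> Door j k b) -> Door i j a -> Door i k b
ibind f (Door a) = let Door b = f a in Door b

-- Only a door that ends up closed again can be "run".
runDoor :: Door Closed Closed a -> a
runDoor (Door a) = a
```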
<h4>The goal as typestate</h4>
<p>So we might want an indexed monad; but what should its typestate be? The type-level information we care about is the environment and the goal. However, the environment (a collection of variables and definitions) is already handled by Haskell as a programming language: do-notation even has a way to bind variables, with the arrow <code><-</code>.</p>
<p>It follows that the typestate of our indexed monad turns out to be the <em>goal</em>. Proof steps, as monad values, are really then <em>goal transformations</em>. Such a monad value is itself a proof that, in a certain environment, some goal transformation is valid: a proof step that goes from a goal of <code>a</code> to a goal of <code>b</code> must also prove that from a proof of <code>b</code> it's possible to get a proof of <code>a</code>.</p>
<h4>The <code>Tactic</code> monad</h4>
<p>How then do we actually define the monad? We can do that by thinking about what valid goal transformations we will want to have.</p>
<p>If a monad value has some final type variable <code>a</code>, then it will be possible to bind that value to a variable which then has type <code>a</code>:</p>
<pre class="code literal-block"><span class="nf">x</span> <span class="ow"><-</span> <span class="n">myTransformation</span> <span class="c1">-- myTransformation has type m i j a</span>
<span class="o">...</span> <span class="c1">-- from here on, x has type a</span>
</pre>
<p>The remaining proof with goal <code>j</code> will have access to a proof of <code>a</code> through the variable <code>x</code>. Such a goal transformation <em>introduces</em> <code>a</code> as a hypothesis for the rest of the proof; if the goal from that point on is <code>j</code>, then the proof shows that <code>a -> j</code>.</p>
<p>But the original goal was <code>i</code>, not <code>j</code> - so the goal transformation has to justify why a proof of <code>a -> j</code> gives a proof of <code>i</code>. In other words, the goal transformation is an inhabitant of <code>(a -> j) -> i</code>.</p>
<p>This gives our monad definition:</p>
<pre class="code literal-block"><span class="kr">data</span> <span class="kt">Tactic</span> <span class="n">i</span> <span class="n">j</span> <span class="n">a</span>
<span class="ow">=</span> <span class="kt">Tactic</span> <span class="p">((</span><span class="n">a</span> <span class="ow">-></span> <span class="n">j</span><span class="p">)</span> <span class="ow">-></span> <span class="n">i</span><span class="p">)</span>
</pre>
<p>The name <code>Tactic</code> comes from the term used for such proof steps in many theorem provers.</p>
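<p>The <code>IxFunctor</code>, <code>IxApplicative</code>, and <code>IxMonad</code> instances all follow mechanically from the continuation-passing shape of the type. A sketch, using the class shapes quoted above (the <code>indexed</code> package spells some method names slightly differently):</p>

```haskell
data Tactic i j a = Tactic ((a -> j) -> i)

class IxFunctor m where
  imap :: (a -> b) -> m i j a -> m i j b

class IxFunctor m => IxApplicative m where
  ipure :: a -> m i i a
  iap :: m i j (a -> b) -> m j k a -> m i k b

class IxApplicative m => IxMonad m where
  ibind :: (a -> m j k b) -> m i j a -> m i k b

instance IxFunctor Tactic where
  imap f (Tactic t) = Tactic (\k -> t (k . f))

instance IxApplicative Tactic where
  ipure a = Tactic (\k -> k a)
  iap (Tactic tf) (Tactic ta) = Tactic (\k -> tf (\f -> ta (k . f)))

instance IxMonad Tactic where
  ibind f (Tactic t) =
    Tactic (\k -> t (\a -> let Tactic t' = f a in t' k))
```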
<h4>Some tactics</h4>
<p>Now that we know the shape of the monad, it's very easy to start writing tactic instances like the ones we would expect to see in any theorem prover (if you're not interested in examples, skip this section).</p>
<p>Coq's <code>intro</code>, for example, allows you to give a name to the left side of an <code>-></code>:</p>
<pre class="code literal-block"><span class="nf">intro</span> <span class="ow">::</span> <span class="kt">Tactic</span> <span class="p">(</span><span class="n">a</span> <span class="ow">-></span> <span class="n">b</span><span class="p">)</span> <span class="n">b</span> <span class="n">a</span>
</pre>
<p>Such a goal transformation requires an inhabitant for <code>(a -> b) -> (a -> b)</code> - so the definition is pretty obvious:</p>
<pre class="code literal-block"><span class="nf">intro</span> <span class="ow">=</span> <span class="kt">Tactic</span> <span class="n">id</span>
</pre>
<p>The <code>left</code> tactic simplifies proving an <code>\/</code>:</p>
<pre class="code literal-block"><span class="nf">left</span> <span class="ow">::</span> <span class="kt">Tactic</span> <span class="p">(</span><span class="n">a</span> <span class="o">\/</span> <span class="n">b</span><span class="p">)</span> <span class="n">a</span> <span class="nb">()</span>
</pre>
<p>This tactic doesn't introduce any hypothesis - and that won't be a problem later, since we can always construct a value of type <code>()</code> when necessary.</p>
<pre class="code literal-block"><span class="nf">left</span> <span class="ow">=</span> <span class="kt">Tactic</span> <span class="nf">\</span><span class="n">f</span> <span class="ow">-></span> <span class="kt">Left</span> <span class="p">(</span><span class="n">f</span> <span class="nb">()</span><span class="p">)</span>
</pre>
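<p>Its mirror image, a <code>right</code> tactic, follows the same shape (with <code>\/</code> as <code>Either</code>, following the previous article):</p>

```haskell
{-# LANGUAGE TypeOperators #-}

data Tactic i j a = Tactic ((a -> j) -> i)

-- Disjunction as Either, following the previous article.
type (\/) a b = Either a b

-- Prove an \/ by proving its right side instead.
right :: Tactic (a \/ b) b ()
right = Tactic (\f -> Right (f ()))
```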
<h3>Canonical truth</h3>
<p>The <code>Tactic</code> monad (once you define the necessary typeclass instances) lets you do proof steps with do notation: but we still can't describe a whole proof. Specifically, we don't know when we are "done" proving a statement - after all, we could always apply further goal transformations.</p>
<p>We could be satisfied by reducing the proof of a statement to the proof of some other statement which we've already proved, i.e. producing a term of type <code>Tactic a b ()</code>, where we know <code>b</code> and want to prove <code>a</code>. Such a proof would inhabit <code>(() -> b) -> a</code> - and since <code>() -> b</code> is inhabited (since we know <code>b</code> to be true), we can get an inhabitant for <code>a</code>.</p>
<p>In practice, however, it's useful to have a <em>single</em> choice for such a known statement - and that statement represents "truth" in the type system. Additionally, it would be nice for this choice of truth to have a single, <strong>canonical</strong> constructor - so that all proofs of truth are the same. Together, these justify choosing <code>()</code> as the representation of truth.</p>
<h4>Even more tactics</h4>
<p>Using this representation of truth, we can describe tactics that <em>solve</em> goals - that is, transform goals into the goal of <code>()</code>. (Again, if you don't want examples, skip this section.)</p>
<p>Coq's <code>exact</code> tactic solves a goal by giving a term of the exact type of the goal:</p>
<pre class="code literal-block"><span class="nf">exact</span> <span class="ow">::</span> <span class="n">a</span> <span class="ow">-></span> <span class="kt">Tactic</span> <span class="n">a</span> <span class="nb">()</span> <span class="nb">()</span>
<span class="nf">exact</span> <span class="n">proof</span> <span class="ow">=</span> <span class="kt">Tactic</span> <span class="nf">\</span><span class="kr">_</span> <span class="ow">-></span> <span class="n">proof</span>
</pre>
<p>The <code>split</code> tactic in Coq turns a goal of <code>A /\ B</code> into two subgoals of <code>A</code> and <code>B</code>. However, the <code>Tactic</code> monad doesn't support multiple goals: so how can we represent it? The answer is through <em>subproofs</em>:</p>
<pre class="code literal-block"><span class="nf">split</span> <span class="ow">::</span> <span class="kt">Tactic</span> <span class="n">a</span> <span class="nb">()</span> <span class="nb">()</span> <span class="ow">-></span> <span class="kt">Tactic</span> <span class="n">b</span> <span class="nb">()</span> <span class="nb">()</span> <span class="ow">-></span> <span class="kt">Tactic</span> <span class="p">(</span><span class="n">a</span> <span class="o">/\</span> <span class="n">b</span><span class="p">)</span> <span class="nb">()</span> <span class="nb">()</span>
</pre>
<p>These subproofs require solving their respective subgoals. The tactic then allows you to solve a goal of <code>a /\ b</code> by providing a proof for each of <code>a</code> and <code>b</code>. The resulting proofs can finally be combined:</p>
<pre class="code literal-block"><span class="nf">split</span> <span class="p">(</span><span class="kt">Tactic</span> <span class="n">getA</span><span class="p">)</span> <span class="p">(</span><span class="kt">Tactic</span> <span class="n">getB</span><span class="p">)</span> <span class="ow">=</span> <span class="kt">Tactic</span> <span class="nf">\</span><span class="n">trivial</span> <span class="ow">-></span> <span class="p">(</span><span class="n">getA</span> <span class="n">trivial</span><span class="p">,</span> <span class="n">getB</span> <span class="n">trivial</span><span class="p">)</span>
</pre>
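<p>Even without do-notation sugar, chaining these tactics with <code>ibind</code> by hand is enough to mirror the Coq proof of <code>A -> B -> A /\ B</code> from earlier. This self-contained sketch repeats the definitions above so that it stands alone:</p>

```haskell
{-# LANGUAGE TypeOperators #-}

data Tactic i j a = Tactic ((a -> j) -> i)

type (/\) a b = (a, b)

-- The IxMonad bind for Tactic, written directly.
ibind :: (a -> Tactic j k b) -> Tactic i j a -> Tactic i k b
ibind f (Tactic t) = Tactic (\k -> t (\a -> let Tactic t' = f a in t' k))

intro :: Tactic (a -> b) b a
intro = Tactic id

exact :: a -> Tactic a () ()
exact proof = Tactic (\_ -> proof)

split :: Tactic a () () -> Tactic b () () -> Tactic (a /\ b) () ()
split (Tactic getA) (Tactic getB) =
  Tactic (\trivial -> (getA trivial, getB trivial))

-- The analogue of `intros a b. split. - exact a. - exact b.`
conjProof :: Tactic (a -> b -> a /\ b) () ()
conjProof =
  ibind (\a -> ibind (\b -> split (exact a) (exact b)) intro) intro

-- Extracting the underlying term recovers the pairing function.
runProof :: Tactic i () () -> i
runProof (Tactic t) = t id
```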
<p>This technique of subproofs allows for defining lots of other useful tactics. The <code>assert</code> tactic in Coq allows for stating a subgoal, proving it, and giving the resulting proof a name to use later.</p>
<pre class="code literal-block"><span class="nf">assert</span> <span class="ow">::</span> <span class="kt">Tactic</span> <span class="n">a</span> <span class="nb">()</span> <span class="nb">()</span> <span class="ow">-></span> <span class="kt">Tactic</span> <span class="n">i</span> <span class="n">i</span> <span class="n">a</span>
</pre>
<p><code>assert</code> takes the proof of the subgoal, and allows it to be bound in do-notation. Using the bound variable later on just uses the subproof.</p>
<pre class="code literal-block"><span class="nf">assert</span> <span class="p">(</span><span class="kt">Tactic</span> <span class="n">getSubproof</span><span class="p">)</span> <span class="ow">=</span> <span class="kt">Tactic</span> <span class="nf">\</span><span class="kr">_</span> <span class="ow">-></span> <span class="n">getSubproof</span> <span class="n">id</span>
</pre>
<h3>A complete proof</h3>
<p>We can then define a complete proof as one which solves its goal:</p>
<pre class="code literal-block"><span class="kr">data</span> <span class="kt">Proof</span> <span class="n">a</span> <span class="ow">=</span> <span class="kt">Proof</span> <span class="p">(</span><span class="kt">Tactic</span> <span class="n">a</span> <span class="nb">()</span> <span class="nb">()</span><span class="p">)</span>
</pre>
<p>Of course, such a definition is justified only if we can use it to get an inhabitant of <code>a</code>. In fact, this defines a complete proof as an inhabitant of <code>(() -> ()) -> a</code>, which can very easily be used to obtain an inhabitant of <code>a</code>:</p>
<pre class="code literal-block"><span class="nf">useProof</span> <span class="ow">::</span> <span class="kt">Proof</span> <span class="n">a</span> <span class="ow">-></span> <span class="n">a</span>
<span class="nf">useProof</span> <span class="p">(</span><span class="kt">Proof</span> <span class="p">(</span><span class="kt">Tactic</span> <span class="n">transformation</span><span class="p">))</span> <span class="ow">=</span> <span class="n">transformation</span> <span class="n">id</span>
</pre></div>https://ivanbakel.github.io/posts/theorem-proving-in-haskell/Sun, 20 Sep 2020 11:35:13 GMT
- Intuitionistic logic in Haskellhttps://ivanbakel.github.io/posts/intuitionistic-logic-in-haskell/Isaac van Bakel<div><h2>Intuitionistic logic in Haskell</h2>
<p>This article assumes a familiarity with Haskell, and some basic knowledge of classical logic (if you don't know about different varieties of logic, the one you <em>do</em> know is probably classical).</p>
<h3>Terminating Haskell</h3>
<h4>The <code>Void</code> datatype</h4>
<p>The <code>Void</code> datatype is part of the Haskell standard library, and should be well-known by most Haskellers. In short, <code>Void</code> has the following declaration</p>
<pre class="code literal-block"><span class="kr">data</span> <span class="kt">Void</span>
</pre>
<p>That is: it's a datatype, with an empty collection of constructors (you may be surprised this is a valid declaration). The consequence is that it's impossible to construct any value with type <code>Void</code>, a fact that both programmers and the compiler can exploit.</p>
<p>Though a <code>Void</code> value is <em>unconstructable</em>, it is still very simple to write a valid Haskell term which has the <code>Void</code> type.</p>
<pre class="code literal-block"><span class="nf">aVoidTerm</span> <span class="ow">::</span> <span class="kt">Void</span>
<span class="nf">aVoidTerm</span> <span class="ow">=</span> <span class="n">aVoidTerm</span>
<span class="c1">-- Alternatively:</span>
<span class="nf">aVoidTerm</span> <span class="ow">=</span> <span class="n">undefined</span>
<span class="c1">-- Or even:</span>
<span class="nf">aVoidTerm</span> <span class="ow">=</span> <span class="ne">error</span> <span class="s">"Tried to evaluate a `Void` term"</span>
</pre>
<p>These terms all share the property of being <strong>non-terminating</strong>. While lazy evaluation lets them appear in programs without any problem, any attempt to evaluate these terms will fail: either because of an infinite loop or a runtime error.</p>
<h4>Terminating terms</h4>
<p><strong>Terminating</strong> terms are the subset of all Haskell terms which can be evaluated in finite time without error. While it's impossible to decide in general if a particular term is terminating, we can restrict our language so that we can <em>only</em> write terminating terms. One way to do that is to require that recursive functions are <strong>structurally decreasing</strong> - so all recursive calls are on arguments that are "smaller" than the current argument - and <strong>total</strong> - defined for every possible argument.</p>
<p>If we implement those two restrictions in Haskell, we get a subset of the language that can only express terminating Haskell terms - though it can't express every terminating Haskell term. In terminating Haskell, we can <em>no longer</em> write terms with a <code>Void</code> type: there's no way to define such a term recursively, because it won't get "smaller"; and there's no way to leave the value <code>undefined</code>.</p>
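<p>As a small illustration (the name <code>len</code> is invented for this example), a list-length function sits comfortably inside this terminating fragment: the only recursive call is on the tail, a strictly smaller argument, and both list constructors are handled.</p>

```haskell
-- Total: both constructors of [a] are covered.
-- Structurally decreasing: the only recursive call is on `xs`,
-- a strict subterm of the argument, so evaluation must terminate.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs
```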
<p>If you force the evaluation of a <code>Void</code> term, you can never get a terminating value, so there is no well-defined "after" the evaluation. The compiler even lets you exploit this fact to write a terminating term that evaluates a <code>Void</code> value and then returns a value of <em>any type</em>.</p>
<pre class="code literal-block"><span class="cm">{-# LANGUAGE EmptyCase #-}</span>
<span class="nf">useAVoid</span> <span class="ow">::</span> <span class="kt">Void</span> <span class="ow">-></span> <span class="n">a</span>
<span class="nf">useAVoid</span> <span class="n">void</span> <span class="ow">=</span> <span class="kr">case</span> <span class="n">void</span> <span class="kr">of</span>
</pre>
<p>This code uses the <code>EmptyCase</code> language extension to allow us to write a <code>case</code> statement with no branches - remembering that <code>case</code> statements have to be total, this means that the above code only compiles because <code>Void</code> has <em>no cases</em> to branch on. While the function definition itself is terminating, the result can have type <code>a</code> for any choice of <code>a</code>, because evaluating the argument will never terminate, so the function is never actually forced to produce an <code>a</code>.</p>
<h4>Inhabited types</h4>
<p><code>Void</code> has the property of being <strong>uninhabited</strong>, because it has no "inhabitants" - valid terminating terms which have the <code>Void</code> type. Types with inhabitants are, unsurprisingly, said to be <strong>inhabited</strong>.</p>
<p>We have already seen that <code>Void -> a</code> is inhabited for any choice of <code>a</code>, even uninhabited choices - in fact, this is only because <code>Void</code> is uninhabited. For a (terminating) function with type <code>a -> b</code>, <code>b</code> can be uninhabited only if <code>a</code> is uninhabited - otherwise the function could evaluate the argument of type <code>a</code>, have it terminate, and be forced to produce a terminating value of type <code>b</code>: an impossibility.</p>
<p>A consequence is that the type <code>a -> Void</code> is inhabited only for choices of <code>a</code> which are uninhabited. Moreover, if <code>a</code> is uninhabited, then we can write a terminating term with type <code>a -> Void</code> - just like we did for <code>Void -> a</code> - by exploiting the fact that <code>a</code> has no inhabitants. The result is: <code>a -> Void</code> is inhabited if and only if <code>a</code> is uninhabited.</p>
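<p>As a sketch of both directions (the type <code>Empty</code> and the function names are invented for this example): we can declare another uninhabited type and reach <code>Void</code> from it with an empty case, while <code>Void -> Void</code> is inhabited by the identity function.</p>

```haskell
{-# LANGUAGE EmptyCase #-}
import Data.Void (Void)

-- Another uninhabited type, declared exactly like Void.
data Empty

-- Empty has no inhabitants, so a case with no branches is total,
-- and the result can be Void (or anything else).
emptyToVoid :: Empty -> Void
emptyToVoid e = case e of

-- In the other direction, Void -> Void is inhabited by id,
-- witnessing that Void is uninhabited.
voidToVoid :: Void -> Void
voidToVoid = id
```

<p>(The standard library already exports the more general <code>absurd :: Void -> a</code> from <code>Data.Void</code>.)</p>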
<p>We can extend this reasoning about inhabitants to many other basic Haskell types. <code>Maybe a</code>, for example, is always inhabited by the terminating term <code>Nothing</code>, even for uninhabited choices of <code>a</code>. <code>Either a b</code> is inhabited provided one of <code>a</code> or <code>b</code> is inhabited, because you could wrap the terminating term with type <code>a</code> (or <code>b</code>) in a <code>Left</code> (or <code>Right</code>) constructor to give a terminating term of type <code>Either a b</code>. Conversely, if <code>Either a b</code> is inhabited, then <em>at least one</em> of <code>a</code> or <code>b</code> must be inhabited (though the proof is much more difficult to summarize). In a similar vein, the tuple type <code>(a, b)</code> is inhabited if and only if <em>both</em> <code>a</code>, <code>b</code> are inhabited.</p>
<h3>Types and logic</h3>
<p>Astute readers may have already spotted the point of the above discussion: this reasoning about inhabited types looks a lot like formal logic.</p>
<p>If inhabitedness is "truth", then uninhabitedness is "falsehood". Continuing the comparison, <code>a -> Void</code> is the "negation" of <code>a</code> - since it is uninhabited if and only if <code>a</code> is inhabited, and vice versa; <code>Either a b</code> is the "or" of <code>a</code> and <code>b</code>, since it is inhabited if and only if at least one of <code>a</code>, <code>b</code> is; and <code>(a, b)</code> is the "and".</p>
<p>It remains to check that <code>a -> b</code> follows the expected behaviour of "implies" - that <code>a -> b</code> is uninhabited (aka false) if and only if <code>a</code> is inhabited (true) and <code>b</code> is uninhabited (false). By the arguments made earlier, this is indeed the case - so <code>-></code>, as well as resembling the "implies" arrow, also acts like one.</p>
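<p>Two tiny witnesses of this behaviour (the names are invented for the example): any inhabitant of <code>b</code> gives an inhabitant of <code>a -> b</code> regardless of <code>a</code>, and modus ponens is just function application.</p>

```haskell
-- "b implies (a implies b)": a true conclusion is implied by anything.
weaken :: b -> (a -> b)
weaken y _ = y

-- Modus ponens: from "a implies b" and "a", conclude "b" -
-- logically a rule of inference, computationally just application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x
```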
<p>The similarities don't stop there. If we introduce some type aliases to make our types easier to read:</p>
<pre class="code literal-block"><span class="cm">{-# LANGUAGE TypeOperators #-}</span>
<span class="kr">type</span> <span class="kt">Not</span> <span class="n">a</span> <span class="ow">=</span> <span class="n">a</span> <span class="ow">-></span> <span class="kt">Void</span>
<span class="kr">type</span> <span class="n">a</span> <span class="o">/\</span> <span class="n">b</span> <span class="ow">=</span> <span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span>
<span class="kr">type</span> <span class="n">a</span> <span class="o">\/</span> <span class="n">b</span> <span class="ow">=</span> <span class="kt">Either</span> <span class="n">a</span> <span class="n">b</span>
</pre>
<p>Then we can wonder if some inhabitants exist for certain types which correspond to logical statements we know to be true. One example would be the commutativity of <code>/\</code> - that <code>a /\ b -> b /\ a</code>. A programmer could quickly produce such an inhabitant:</p>
<pre class="code literal-block"><span class="nf">andComm</span> <span class="ow">::</span> <span class="n">a</span> <span class="o">/\</span> <span class="n">b</span> <span class="ow">-></span> <span class="n">b</span> <span class="o">/\</span> <span class="n">a</span>
<span class="c1">-- i.e. :: (a, b) -> (b, a)</span>
<span class="nf">andComm</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="ow">=</span> <span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">x</span><span class="p">)</span>
</pre>
<p>We can also wonder if types which correspond to false logical statements are <em>uninhabited</em>. For example, <code>a /\ Not a</code> should never hold, so the type should be uninhabited. It follows that <code>Not (a /\ Not a)</code> should be inhabited - and in fact it is.</p>
<pre class="code literal-block"><span class="nf">explosion</span> <span class="ow">::</span> <span class="kt">Not</span> <span class="p">(</span><span class="n">a</span> <span class="o">/\</span> <span class="kt">Not</span> <span class="n">a</span><span class="p">)</span>
<span class="c1">-- i.e. :: (a, a -> Void) -> Void</span>
<span class="nf">explosion</span> <span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">f</span><span class="p">)</span> <span class="ow">=</span> <span class="n">f</span> <span class="n">x</span>
</pre>
<p>Programmers who struggle with logic may notice that rewriting the type in terms of Haskell types makes for much simpler reading: it's pretty easy to spot that <code>(a, a -> Void) -> Void</code> is inhabited, and even to immediately write out the above inhabitant.</p>
<h4>The excluded middle</h4>
<p>A pretty important rule for classical logic is the "law of the excluded middle" - that every statement is either definitely true, or definitely false. </p>
<p>We can express this in our types as <code>a \/ Not a</code>. But can we inhabit it? For particular choices of <code>a</code>, sure: if we knew that <code>a</code> was <code>Int</code>, or <code>Void</code>, or even <code>Maybe b</code>, we would be able to produce an inhabitant easily:</p>
<pre class="code literal-block"><span class="nf">intLOEM</span> <span class="ow">::</span> <span class="kt">Int</span> <span class="o">\/</span> <span class="kt">Not</span> <span class="kt">Int</span>
<span class="nf">intLOEM</span> <span class="ow">=</span> <span class="kt">Left</span> <span class="mi">1</span>
<span class="nf">voidLOEM</span> <span class="ow">::</span> <span class="kt">Void</span> <span class="o">\/</span> <span class="kt">Not</span> <span class="kt">Void</span>
<span class="c1">-- i.e. :: Either Void (Void -> Void)</span>
<span class="nf">voidLOEM</span> <span class="ow">=</span> <span class="kt">Right</span> <span class="n">id</span>
<span class="nf">maybeLOEM</span> <span class="ow">::</span> <span class="p">(</span><span class="kt">Maybe</span> <span class="n">b</span><span class="p">)</span> <span class="o">\/</span> <span class="kt">Not</span> <span class="p">(</span><span class="kt">Maybe</span> <span class="n">b</span><span class="p">)</span>
<span class="nf">maybeLOEM</span> <span class="ow">=</span> <span class="kt">Left</span> <span class="p">(</span><span class="kt">Nothing</span><span class="p">)</span>
</pre>
<p>But for a <em>polymorphic</em> <code>a</code>, we struggle. While every type in Haskell is definitely inhabited or uninhabited, we can't in general produce an inhabitant of the type, or an inhabitant for the type's "negation", without knowing <em>which</em> type we're dealing with.</p>
<pre class="code literal-block"><span class="nf">loem</span> <span class="ow">::</span> <span class="n">a</span> <span class="o">\/</span> <span class="kt">Not</span> <span class="n">a</span>
<span class="nf">loem</span> <span class="ow">=</span> <span class="o">???</span> <span class="c1">-- uh oh</span>
</pre>
<p>Does this mean that our strange similarities are just a coincidence? After all, <code>a \/ Not a</code> is certainly true - so we expect to be able to produce an inhabitant. Notably, we also fail to "negate" the law of excluded middle - we can't produce an inhabitant for <code>Not (a \/ Not a)</code> either. In the logic of our types, we don't know if <code>a \/ Not a</code> is "true" or "false".</p>
<p>What has actually happened is that we've departed from <em>classical</em> logic: our types do behave like a logical system, just a different one.</p>
<h4>What makes classical logic?</h4>
<p>While most users of logic - particularly mathematicians - take the law of excluded middle as a universal truth, and apply it liberally in proofs, it's not a necessary part of any logical system. Classical logic is actually defined in part by the existence of this law, and double negation: <code>Not (Not a) -> a</code> (interested readers can <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.6941">read more about these laws</a>).</p>
<p>Double negation is another statement that doesn't have an inhabitant in Haskell - and nor does its negation. In a logical system without either law, several statements which are true in classical logic also no longer hold: Peirce's law, <code>((a -> b) -> a) -> a</code>; the double contrapositive, <code>(Not b -> Not a) -> (a -> b)</code>; the implies equivalence <code>(a -> b) -> (Not a \/ b)</code>; de Morgan's laws, such as <code>Not (Not a /\ Not b) -> (a \/ b)</code>; and many more. Similarly, none of these Haskell types have inhabitants.</p>
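<p>By contrast, plenty of classical theorems do survive intuitionistically. As a sketch (repeating the earlier aliases so the snippet stands alone): one direction of de Morgan's laws holds, and triple negation collapses to single negation even though double negation fails.</p>

```haskell
{-# LANGUAGE TypeOperators #-}
import Data.Void (Void)

type Not a = a -> Void
type a /\ b = (a, b)
type a \/ b = Either a b

-- The de Morgan direction that holds intuitionistically:
-- "not (a or b)" implies "(not a) and (not b)".
deMorgan :: Not (a \/ b) -> (Not a /\ Not b)
deMorgan notEither = (notEither . Left, notEither . Right)

-- Triple negation implies single negation, even though
-- Not (Not a) -> a has no inhabitant.
tripleNeg :: Not (Not (Not a)) -> Not a
tripleNeg notNotNot x = notNotNot (\notA -> notA x)
```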
<h4>Intuitionistic logic</h4>
<p>If the logic that we see in Haskell types isn't classical logic, what then is it?</p>
<p>The feature of Haskell that prevents all these types from having inhabitants is that, in Haskell, we must <em>construct terms explicitly</em>. In classical logic, the statement <code>a \/ b</code> holds if and only if at least one of <code>a</code>, <code>b</code> holds - but our knowledge of <code>a \/ b</code> doesn't necessarily tell us which one it is. In Haskell, on the other hand, you can use an <code>Either a b</code> to get either an <code>a</code> or a <code>b</code>, by <em>deconstructing</em> the term. In other words, Haskell requires that every term of type <code>a \/ b</code> only be <em>constructed</em>, from a term with type <code>a</code> or <code>b</code>.</p>
<p>This limitation doesn't just appear in Haskell. In fact, this exact behaviour describes <strong>intuitionistic logic</strong> - a logic whose provable statements form a subset of classical logic's. Intuitionistic logic is precisely what you get when you remove the laws of the excluded middle and double negation from classical logic. The resulting system still makes sense - it's just less powerful. Lots of statements with classical proofs have no equivalent intuitionistic proof.</p>
<h4>Constructive mathematics and decidability</h4>
<p>Just like classical logic tries to formalise the reasoning of classical mathematics, intuitionistic logic tries to formalise the reasoning of <strong>constructive mathematics</strong> (for this reason, it is sometimes called <em>constructive logic</em>).</p>
<p>Gödel's proof of the first incompleteness theorem showed that every sufficiently powerful, consistent system of reasoning about numbers includes statements that are <strong>undecidable</strong>. These statements can't be proven, and their negations can't be proven - in other words, for <em>some</em> statement <code>a</code>, it is impossible to prove either <code>a</code> or <code>Not a</code>.</p>
<p>Several mathematicians were dissatisfied with classical logic: a binary idea of truth no longer seemed correct. They devised a new mathematics that avoids the law of excluded middle and double negation in proofs - since a proof of <code>Not (Not a)</code> only shows that <code>a</code> isn't false, not that it is true. This system was termed <strong>constructive mathematics</strong>.</p>
<p>In constructive mathematics, as in intuitionistic logic, proofs are only possible by explicit construction. To prove <code>a \/ b</code>, it is necessary to give a proof of <code>a</code> or a proof of <code>b</code> - it's not enough to argue that at least one must be true. This system ends up being weaker - several statements which are provable in classical mathematics are undecidable in constructive mathematics.</p>
<h4>Decidability as a constraint</h4>
<p>Returning to Haskell for a moment, we can use this insight to produce a weaker version of the law of the excluded middle, just for <em>decidable</em> statements. Haskell allows us to express decidability as a constraint on a logical statement (a type), by using a typeclass:</p>
<pre class="code literal-block"><span class="kr">class</span> <span class="kt">Decidable</span> <span class="n">a</span> <span class="kr">where</span>
  <span class="n">decide</span> <span class="ow">::</span> <span class="n">a</span> <span class="o">\/</span> <span class="kt">Not</span> <span class="n">a</span>

<span class="nf">loem</span> <span class="ow">::</span> <span class="p">(</span><span class="kt">Decidable</span> <span class="n">a</span><span class="p">)</span> <span class="ow">=></span> <span class="n">a</span> <span class="o">\/</span> <span class="kt">Not</span> <span class="n">a</span>
<span class="nf">loem</span> <span class="ow">=</span> <span class="n">decide</span>
</pre>
<p>Restricting ourselves to decidable statements is even enough to re-introduce double negation:</p>
<pre class="code literal-block"><span class="cm">{-# LANGUAGE ScopedTypeVariables #-}</span>
<span class="nf">doubleNegation</span> <span class="ow">::</span> <span class="kr">forall</span> <span class="n">a</span><span class="o">.</span> <span class="p">(</span><span class="kt">Decidable</span> <span class="n">a</span><span class="p">)</span> <span class="ow">=></span> <span class="kt">Not</span> <span class="p">(</span><span class="kt">Not</span> <span class="n">a</span><span class="p">)</span> <span class="ow">-></span> <span class="n">a</span>
<span class="nf">doubleNegation</span> <span class="n">doubleNegative</span>
  <span class="ow">=</span> <span class="kr">case</span> <span class="p">(</span><span class="n">decide</span> <span class="ow">::</span> <span class="n">a</span> <span class="o">\/</span> <span class="kt">Not</span> <span class="n">a</span><span class="p">)</span> <span class="kr">of</span>
      <span class="kt">Left</span> <span class="n">a</span> <span class="ow">-></span> <span class="n">a</span>
      <span class="kt">Right</span> <span class="n">notA</span> <span class="ow">-></span> <span class="n">useAVoid</span> <span class="p">(</span><span class="n">doubleNegative</span> <span class="n">notA</span><span class="p">)</span>
</pre>
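<p>Some illustrative instances of the sketch above (the <code>Decidable</code> class is this article's own, not a library class): <code>()</code> is decidably true, <code>Void</code> is decidably false, and decidability is preserved by pairing.</p>

```haskell
{-# LANGUAGE TypeOperators #-}
import Data.Void (Void)

type Not a = a -> Void
type a \/ b = Either a b

class Decidable a where
  decide :: a \/ Not a

-- () is inhabited, so it is decidably "true".
instance Decidable () where
  decide = Left ()

-- Void is uninhabited, so it is decidably "false",
-- witnessed by the identity function on Void.
instance Decidable Void where
  decide = Right id

-- Decidability is preserved by pairing: (a, b) is "true"
-- exactly when both components are.
instance (Decidable a, Decidable b) => Decidable (a, b) where
  decide = case (decide, decide) of
    (Left x,  Left y)  -> Left (x, y)
    (Right f, _)       -> Right (\(x, _) -> f x)
    (_,       Right g) -> Right (\(_, y) -> g y)
```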
<h3>The Curry-Howard isomorphism</h3>
<p>So Haskell types look like logical statements - and in particular, it looks like statements provable in intuitionistic logic correspond to inhabited Haskell types. But what does that mean for Haskell <em>terms</em>?</p>
<h4>Code as proof</h4>
<p>To the Haskell compiler, a (terminating) term is a demonstration that a particular type is inhabited. In declaring the type signature for a definition, you claim the type is inhabited: in the definition itself, you <em>prove</em> it. When you consider types as a logic, this analogy makes even more sense: where a type is a logical statement, a term is a proof that the type is inhabited, and so that the statement is true. Proving the negation of a statement is done by showing that the negated type is inhabited.</p>
<p>This matchup - types as statements, terms as proof - hasn't gone unnoticed among computer scientists: it's been described in <a href="http://www.dcc.fc.up.pt/~acm/howard.pdf">informal papers</a> since the 70s. Named the <strong>Curry-Howard Correspondence</strong> (or isomorphism), it was first spotted in the lambda calculus - the abstract model of functional computation that inspired Haskell.</p>
<h4>Computing proofs</h4>
<p>One of the most interesting parts of the CHC comes from considering <code>-></code> - both implication and function. The type <code>a -> b</code> is both the logical statement that <code>a</code> implies <code>b</code>, and the type of transformations of values of type <code>a</code> into values of type <code>b</code>. But since a value of type <code>a</code> is a proof for <code>a</code>, such a transformation also <em>transforms proofs</em>: given a proof for <code>a</code>, it yields a proof for <code>b</code>.</p>
<p>And since proofs are code, that means that evaluating a proof involves running those proof transformations. A Haskell term like <code>f x</code> with type <code>b</code> is made of a proof <code>f</code> of <code>a -> b</code>, and a proof <code>x</code> of <code>a</code> (for some <code>a</code>). In <em>running</em> <code>f x</code>, we use the proof of <code>a -> b</code> combined with the proof of <code>a</code> to get a proof for <code>b</code>: and the resulting value has the same type. By evaluating a proof, we get a new proof for the same statement. In other words, computation <em>simplifies</em> proofs.</p>
<h4>Cut-free proofs</h4>
<p>But what does this "simplification" actually mean in a proof? How can proofs transform other proofs? The answer comes from considering what our logical types actually mean. <code>a -> b</code> is a representation of the statement "from the hypothesis a, it is possible to prove b". Inhabitants of <code>a -> b</code> are proofs which assume that <code>a</code> is true, and use it to construct a proof of <code>b</code>. But in order to <em>use</em> such a proof to prove <code>b</code>, it is necessary to produce a proof of <code>a</code>: and the resulting proof has some redundancy. If you can prove <code>a</code>, there's no need for another proof to <em>assume</em> that <code>a</code> is true - it could just use the proof of <code>a</code>.</p>
<p>Consider the same situation in Haskell. If you apply a function to an argument</p>
<pre class="code literal-block"><span class="p">(</span><span class="nf">\</span><span class="n">x</span> <span class="ow">-></span> <span class="o">...</span><span class="p">)</span> <span class="n">t</span>
</pre>
<p>then computing the function application substitutes the argument value into the function body.</p>
<pre class="code literal-block"><span class="kr">let</span> <span class="n">x</span> <span class="ow">=</span> <span class="n">t</span> <span class="kr">in</span> <span class="o">...</span>
</pre>
<p>This term could be written both ways: the behaviour is the same. The only difference is that the first version, when computed, turns into the second.</p>
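<p>A concrete instance of this pair (the names are invented for the example): the first definition proves <code>(a, a)</code> from <code>a</code> by proving the intermediate implication <code>\x -> (x, x)</code> and immediately applying it - a cut. Evaluation substitutes the argument, leaving the cut-free second definition.</p>

```haskell
-- Contains a cut: the implication \x -> (x, x) is proved
-- and immediately applied to the hypothesis t.
withCut :: a -> (a, a)
withCut t = (\x -> (x, x)) t

-- The cut-free proof that evaluation produces: the hypothesis
-- is used directly, with no intermediate implication.
cutFree :: a -> (a, a)
cutFree t = (t, t)
```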
<p>This construction - proving both an implication and its assumption - is called a <strong>cut</strong> in a proof. When we evaluate a proof as a Haskell term, we actually <em>eliminate</em> the cuts in the proof: a cut is just a function application, and computing function applications is exactly how Haskell code is executed. Since we are limited to terminating terms, executing a proof must eventually give us a simplified proof with <em>no</em> cuts in it (because it cannot be simplified any more).</p>
<p>This line of reasoning leads us to a very significant result about intuitionistic (and classical) proofs: the <strong>cut elimination theorem</strong>. Shown by <a href="https://link.springer.com/article/10.1007/BF01201363">Gentzen</a> in 1935, the theorem states that every statement with a proof that contains cuts also has a proof that contains <em>no</em> cuts. We now have the framework to see why the same result holds: if every intuitionistic proof corresponds to a terminating Haskell term, where cuts are function applications, then computation does the rest of the work. To get a cut-free version of an existing proof, it is sufficient to run the proof fully.</p></div>https://ivanbakel.github.io/posts/intuitionistic-logic-in-haskell/Fri, 18 Sep 2020 23:00:00 GMT