Sunday, 6 July 2014

The Logic Behind Types


Let me begin with a disclaimer: I'm not a mathematician, or an expert of any sort in anything. I'm simply trying to impress as many people as possible with big fancy words... and maybe explain some of the awesome stuff I read along the way.

In truth I'm simply trying to clarify some of this stuff for myself, so bear that in mind if some of the 'conclusions' seem naive.

Enough posturing, you say? Ok, let's dive right in.

Mathematical Abstractions

Let's look at two common instances of algebras: the integers with (+) and (*), and truth values with (∧ - logical and) and (∨ - logical or).
They seem suspiciously similar. In particular, it's the properties of the operations and the interactions between these operations that match up against each other.
  • Commutativity of (+) and (∨)
  • Commutativity of (*) and (∧)
  • Distributivity of (*) over (+) and of (∧) over (∨)
  • Identity of (+) - 0 and of (∨) - false
  • Identity of (*) - 1 and of (∧) - true
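These matching laws are easy to spot-check concretely; here is a quick sketch (the definition names are mine, and a handful of test cases is of course evidence, not a proof):

```haskell
-- Commutativity of (+) on Int and of (||) on Bool.
commPlus, commOr :: Bool
commPlus = (2 + 3) == (3 + 2)
commOr   = (True || False) == (False || True)

-- (*) distributes over (+); (&&) distributes over (||).
distMul, distAnd :: Bool
distMul = 2 * (3 + 4) == 2 * 3 + 2 * 4
distAnd = (True && (False || True)) == ((True && False) || (True && True))

-- Identities: 0 for (+) and False for (||); 1 for (*) and True for (&&).
identities :: Bool
identities = (5 + 0 == 5) && ((True || False) == True)
          && (5 * 1 == 5) && ((False && True) == False)
```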

What does this have to do with Haskell? We're getting on to that.

As programmers we'd abstract this behaviour behind an interface.
For this interface we have -

  • A set of elements
  • Two binary operators 
  • Special elements associated with the binary operators
  • Law 1 : The operators must be commutative and associative
  • Law 2 : The product operator must distribute over the sum operator
  • Law 3 : Identity elements must exist for the operators : a op identity = identity op a = a
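As a Haskell sketch of this interface (the class name Semiring and the method names are my own invention; the laws can only live in comments, since the compiler cannot enforce them):

```haskell
-- A sketch of the interface. The laws are obligations on instances,
-- not something the type checker verifies.
class Semiring a where
  zero :: a           -- identity of `add`
  one  :: a           -- identity of `mul`
  add  :: a -> a -> a
  mul  :: a -> a -> a
  -- Law 1: add and mul are commutative and associative
  -- Law 2: mul distributes over add
  -- Law 3: add x zero == x and mul x one == x

instance Semiring Int where
  zero = 0; one = 1; add = (+); mul = (*)

instance Semiring Bool where
  zero = False; one = True; add = (||); mul = (&&)
```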

It is the laws that exert some structure on the set making it more than just a set. This type of mathematical structure is actually quite amazing. It is amazing because of the way it unifies different mathematical objects as we shall see in a moment. 

Haskell Types - Mathematical Objects

So let's look at some Haskell type constructors:

data Either a b = Left a | Right b
data (,) a b = (,) a b
Here Either and (,) are type constructors. The injections Left and Right of Either, and the first and second components of (,), carry no implicit semantic meaning, so it makes sense that these type constructors are commutative. We can formalise this by showing the types equal modulo isomorphism: the sets of values these types define are isomorphic if there is a bijection between them.
Either a b ~ Either b a where
  Left a  -> Right a
  Right b -> Left b

The next property we notice of Either is its associativity.  
Either a (Either b c) ~ Either (Either a b) c
-- Bijection
Left a ~ Left (Left a)
Right (Left b) ~ Left (Right b)
Right (Right c) ~ Right c
The value of an Either type will only be one of its constituent types, so the position of a type in a nesting of Either constructors makes no difference. In fact Either looks like non-determinism at the type level: it accumulates types, and when we want to work with its values it yields up one of them. Thus using the Either type implies we must pattern match on all of its constituents to handle its non-determinism, which (spoiler alert) bears quite a resemblance to 'or elimination' in natural deduction.

The (,) type has exactly the same properties as Either. As with Either, this arises from the fact that the position of a type within a nesting of (,) constructors is irrelevant.

(a, b) ~ (b, a)
(a, (b, c)) ~ ((a, b), c)

It's irrelevant because the position holds no semantic meaning.
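These two isomorphisms can be witnessed directly; the functions below are one-liners (the names are mine):

```haskell
-- Commutativity of (,): swap the components.
swapPair :: (a, b) -> (b, a)
swapPair (a, b) = (b, a)

-- Associativity of (,): re-nest the components.
assocPair :: (a, (b, c)) -> ((a, b), c)
assocPair (a, (b, c)) = ((a, b), c)
```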
The next thing would be to look at the interaction between these type constructors. What values can Either (a, b) (a, c) take? Left (a, b) and Right (a, c). So we have an 'a', and either a 'b' or a 'c': in other words, (a, Either b c). Thus the next property exhibited is the distributivity of the product type (,) over the sum type Either.
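We can witness this distributivity in both directions (the names factor and distribute are mine):

```haskell
-- (a, Either b c)  ~  Either (a, b) (a, c)
factor :: Either (a, b) (a, c) -> (a, Either b c)
factor (Left  (a, b)) = (a, Left b)
factor (Right (a, c)) = (a, Right c)

distribute :: (a, Either b c) -> Either (a, b) (a, c)
distribute (a, Left b)  = Left  (a, b)
distribute (a, Right c) = Right (a, c)
```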

But these aren't the only type constructors. We also have the → type constructor. Let's look at how it interacts with the other types. (a, b) → c. 
f :: a -> b -> c
uncurry f :: (a, b) -> c
The second is simply the uncurried version of the first, so we can establish a bijection, leaving them isomorphic. Let's look at another example: c → (a, b). This function says that given a 'c', we can produce both an 'a' and a 'b'. Thus we must have c → a and c → b, the type of which is simply (c → a, c → b).
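Both isomorphisms can be written down as ordinary functions: curry and uncurry are already in the Prelude, and the pairing direction is a two-liner (the names pairUp and splitUp are mine):

```haskell
-- (a, b) -> c  ~  a -> b -> c   is given by Prelude's curry / uncurry.
-- c -> (a, b)  ~  (c -> a, c -> b):
pairUp :: (c -> a, c -> b) -> (c -> (a, b))
pairUp (f, g) = \c -> (f c, g c)

splitUp :: (c -> (a, b)) -> (c -> a, c -> b)
splitUp h = (fst . h, snd . h)
```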
The relation between these operators is in fact extremely similar to the relationship between (*) and exponentiation in arithmetic (c ^ (a * b) = (c ^ b) ^ a, and (a * b) ^ c = a ^ c * b ^ c), as well as between logical and and implication.

One can verify that the relationships between ∨, ∧ and → do in fact mirror those between Either, (,) and → in types.

What does this all mean?

Does this mean that all the frameworks we have for logic are applicable for programming with types?
Of course it does.
In fact there is a precise correspondence between natural deduction proofs and programs. Consider the proposition

a ∧ b → a ∨ b

Using natural deduction we assume (a ∧ b), perform ∧-elimination, and then ∨-introduction.
The type -   
(a, b) -> Either a b
We assume the existence of a value for the parameter, pattern match to extract a and b, and wrap one in a constructor, giving us Either a b. Now that clear correspondence is weirdly, awesomely beautiful. Have we proved our program? Well, we seem to have proved that a value of this type can exist.
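The proof really is a program; a value of this type can be written directly (the name proof is mine):

```haskell
-- ∧-elimination (pattern match on the pair), then ∨-introduction (Left).
proof :: (a, b) -> Either a b
proof (a, _) = Left a
```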
And this is known as the Curry-Howard isomorphism: a logical statement is provable ↔ a value inhabits the type that corresponds to that statement.

'So..' you say, 'is there an isomorphic procedure to natural deduction with regard to Haskell programs?' Yep, it's quite popularly known as programming.
The relationships could be - 

  • And Elimination - Pattern matching within an equation
  • Or Elimination  - Pattern matching across separate equations (cases)
  • Modus Ponens    - Function application
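Each correspondence can be seen in a tiny program (all of the names here are mine):

```haskell
-- And elimination: pattern matching within one equation.
andElim :: (a, b) -> a
andElim (a, _) = a

-- Or elimination: pattern matching across separate cases.
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim f _ (Left a)  = f a
orElim _ g (Right b) = g b

-- Modus ponens: from (a -> b) and a, conclude b. Plain application.
modusPonens :: (a -> b) -> a -> b
modusPonens f a = f a
```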
So basically writing a functional program is analogous to writing proofs. Of course I haven't talked about higher-order logics, falsity, etc. The Haskell wiki is an excellent source of information to start with.


Type Algebra

One last note on type algebra: it can reveal incredibly meaningful truths. For example, consider the type below:

((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
Yeah, it's a bit of a monster so let's use some of the properties we have figured out so far.

((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
((((a, b -> r) -> r), a -> r) -> r, a -> r) -> r -- Uncurrying: a -> b -> r ~ (a, b) -> r, i.e. (r ^ b) ^ a ~ r ^ (a * b)
((Either (a, b -> r) a -> r), a -> r) -> r       -- (p -> r, q -> r) ~ Either p q -> r, applied to the inner pair
(Either (a, b -> r) (Either a a) -> r) -> r      -- Reapplying the same property to the outer pair
(Either (a, b -> r) a -> r) -> r                 -- Either a a is logically equivalent to a
((a, b -> r) -> r, a -> r) -> r                  -- Reversing the Either property
(a -> r) -> r                                    -- (a -> r) already gives (a, b -> r) -> r, so that component is redundant

After all that, we've reduced the original type to that incredibly small-looking thing?
What if I told you that the above could be used to write something that would allow you to break out of any computation, no matter how many levels deep you are? It would allow you to write exception handlers and schedulers as natural extensions of the language.
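To back up that claim slightly without spoiling the continuations post: the original monster type is inhabited, and one inhabitant has exactly the shape of the 'break out' operator. The name and implementation below are my sketch, not a standard library function:

```haskell
-- The inner function is handed an escape continuation (\a _ -> k a)
-- that ignores its local continuation and jumps straight back out to k.
breakOut :: ((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
breakOut f k = f (\a _ -> k a) k
```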
Having come to the above result, I will suspend the explanation here and resume it in a later post about continuations.