Prelude
Let me begin with a disclaimer: I'm not a mathematician, or any sort of authority on anything. I'm simply trying to impress as many people as possible with big fancy words... and maybe explain some of the awesome stuff I read along the way. In truth I'm simply trying to clarify some of this stuff for myself, so bear that in mind if some of the 'conclusions' seem naive.
Enough posturing, you say? Ok, let's dive right in.
Mathematical Abstractions
Let's look at two common instances of algebras: the integers with (+) and (*), and truth values with (∧, logical and) and (∨, logical or). They seem suspiciously similar. In particular, it's the properties of the operations and the interaction between these operations that seem to match up against each other:
- Commutativity of (+) and (∨)
- Commutativity of (*) and (∧)
- Distributivity of (*) over (+), and of (∧) over (∨)
- Identity of (+) is 0, and of (∨) is false
- Identity of (*) is 1, and of (∧) is true
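These parallels can be spot-checked directly in Haskell; a handful of sample instances (not a proof, just a sanity check):

```haskell
-- Spot-checks of the shared laws: the same equations hold in both algebras.
commutativity :: Bool
commutativity = (2 + 3 :: Int) == 3 + 2 && (True || False) == (False || True)
             && (2 * 3 :: Int) == 3 * 2 && (True && False) == (False && True)

distributivity :: Bool
distributivity = (2 * (3 + 4) :: Int) == 2 * 3 + 2 * 4
              && (True && (False || True)) == ((True && False) || (True && True))

identities :: Bool
identities = (5 + 0 :: Int) == 5 && (True || False) == True
          && (5 * 1 :: Int) == 5 && (True && True) == True
```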
What does this have to do with Haskell? We're getting on to that.
As programmers we'd abstract this behaviour behind an interface.
For this interface we have:
- A set of elements
- Two binary operators (a 'sum' and a 'product')
- Special elements associated with the binary operators
- Law 1: The operators must be commutative and associative
- Law 2: The product operator must distribute over the sum operator
- Law 3: An identity element must exist for each operator: a op identity = identity op a = a
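In Haskell, this interface could be sketched as a typeclass. The names here (`Semiring`, `add`, `mul`, `zero`, `one`) are my own choice, and since the compiler can't enforce the laws, they live in comments:

```haskell
-- A sketch of the interface described above.
-- Laws (not machine-checked):
--   1. add and mul are commutative and associative
--   2. mul distributes over add: a `mul` (b `add` c) == (a `mul` b) `add` (a `mul` c)
--   3. zero is the identity of add; one is the identity of mul
class Semiring a where
  add  :: a -> a -> a
  mul  :: a -> a -> a
  zero :: a
  one  :: a

instance Semiring Int where
  add = (+); mul = (*); zero = 0; one = 1

instance Semiring Bool where
  add = (||); mul = (&&); zero = False; one = True
```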
It is the laws that exert some structure on the set making it more than just a set. This type of mathematical structure is actually quite amazing. It is amazing because of the way it unifies different mathematical objects as we shall see in a moment.
Haskell Types as Mathematical Objects
So let's look at some Haskell type constructors:
```haskell
data Either a b = Left a | Right b
data (,) a b = (a, b)
```
Here Either and (,) are type constructors. The projections Left and Right of Either, and the first and second elements of (,), do not carry any implicit semantic meaning, so it sort of makes sense that these type constructors are commutative (we can formalise this by showing them equal modulo isomorphism: the sets these types define are isomorphic if there is a bijection between them).
The first property we notice of Either is its commutativity.
```haskell
Either a b ~ Either b a where
  Left a  -> Right a
  Right b -> Left b
```
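This bijection can be written as an ordinary function (`swapEither` is my own name for it; applying it twice gets you back where you started, which is what makes it a witness of the isomorphism):

```haskell
-- Witness of commutativity of Either.
swapEither :: Either a b -> Either b a
swapEither (Left a)  = Right a
swapEither (Right b) = Left b
```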
The next property we notice of Either is its associativity.
The value of an Either type will only be one of its constituent types, thus the position of a type in a stream of Either constructors does not make a difference. In fact Either seems like nondeterminism at the type level: it accumulates types, and when we want to work with its values it yields up one of them. Thus using the Either type implies we must pattern match on all of its constituents to handle its nondeterminism, which (spoiler alert) bears quite a resemblance to 'or elimination' in natural deduction.
```haskell
Either a (Either b c) ~ Either (Either a b) c

-- Bijection
Left a          ~ Left (Left a)
Right (Left b)  ~ Left (Right b)
Right (Right c) ~ Right c
```
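Again, the bijection is just a pair of functions (names mine); composing them in either order is the identity:

```haskell
-- Witnesses of associativity of Either.
assocL :: Either a (Either b c) -> Either (Either a b) c
assocL (Left a)          = Left (Left a)
assocL (Right (Left b))  = Left (Right b)
assocL (Right (Right c)) = Right c

assocR :: Either (Either a b) c -> Either a (Either b c)
assocR (Left (Left a))  = Left a
assocR (Left (Right b)) = Right (Left b)
assocR (Right c)        = Right (Right c)
```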
The (,) type has the exact same properties as Either. As with Either, this arises from the fact that the position of a type within a stream of (,) is irrelevant, and it's irrelevant because the position holds no semantic meaning.
But these aren't the only type constructors. We also have the → type constructor. Let's look at how it interacts with the other types: the type (a, b) → c is isomorphic to a → (b → c), a fact better known as currying. The relation between these operators is in fact extremely similar to the relationship between (*) and (^) (exponentiation), where c^(a×b) = (c^b)^a, as well as between ∧ (logical and) and → (implication), where (a ∧ b) → c is equivalent to a → (b → c).
One can verify that the relationships between ∨, ∧ and → in logic do in fact mirror those between Either, (,) and → in types.
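The currying isomorphism is witnessed by the Prelude's own `curry` and `uncurry`; a tiny example (`addPair` and `addCurried` are made-up names for illustration):

```haskell
-- curry and uncurry witness (a, b) -> c  ~  a -> (b -> c),
-- mirroring c^(a*b) = (c^b)^a.
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

addCurried :: Int -> Int -> Int
addCurried = curry addPair
```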
What does this all mean?
Does this mean that all the frameworks we have for logic are applicable to programming with types? Of course it does.
In fact there is a bijection between natural deduction proofs and well-typed programs. Consider the proposition
a ∧ b → a ∨ b
Using natural deduction we assume (a ∧ b), perform ∧-elimination and then ∨-introduction.
For the corresponding type, we assume the existence of a value for the parameter, then pattern match to extract a and b, and wrap one of them in a type constructor, giving us Either a b. Now that clear bijection is weirdly, awesomely beautiful. Have we proved our program? Well, we seem to have proved that a value of this type can exist:

```haskell
(a, b) -> Either a b
```
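Written out as a Haskell program, the proof looks like this (I've chosen the left projection; the right would do just as well):

```haskell
-- ∧-elimination is the pattern match; ∨-introduction is the Left constructor.
proof :: (a, b) -> Either a b
proof (a, _b) = Left a
```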
And this is known as the Curry-Howard isomorphism: a logical statement is provable (constructively, that is, in intuitionistic logic) ↔ a value inhabits the type that corresponds to the logical statement.
'So...' you say, 'is there an isomorphic procedure to natural deduction with regard to Haskell programs?' Yep, it's quite popularly known as programming.
The correspondences could be:
- And elimination: pattern matching within an equation
- Or elimination: pattern matching across separate equations (cases)
- Modus ponens: function application
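The middle correspondence in code (this is essentially the Prelude's `either`; `orElim` is my own name):

```haskell
-- Or-elimination: one equation per case of the disjunction.
-- To use an Either a b, supply a proof a -> c and a proof b -> c.
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim f _ (Left a)  = f a
orElim _ g (Right b) = g b
```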
Type Algebra
One last note on type algebra: it can reveal incredibly meaningful truths. For example, consider the type below:

```haskell
((a, b -> r) -> r, a -> r) -> r
```
After all that we reduced the original type to that incredibly small looking thing?
What if I told you that the above could be used to write something that allows you to break out of any computation, no matter how many levels deep you are? It would allow you to write exception handlers and schedulers as natural extensions of the language.
Having come to the above result, I will suspend the explanation here and resume it in a later post about continuations.
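As a teaser in the meantime, here's a minimal sketch of the continuation monad that the `(a -> r) -> r` shape above hints at (this is essentially Control.Monad.Cont from the transformers library, reimplemented here to stay self-contained; `safeProduct` is a made-up example):

```haskell
-- A computation that, given "the rest of the program" (a -> r), produces r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> c (k . f)

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont cf <*> Cont ca = Cont $ \k -> cf (\f -> ca (k . f))

instance Monad (Cont r) where
  Cont c >>= f = Cont $ \k -> c (\a -> runCont (f a) k)

-- callCC hands the computation its own continuation; calling it discards
-- whatever work is pending, which is exactly "breaking out" from any depth.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont $ \k -> runCont (f (\a -> Cont $ \_ -> k a)) k

-- Escapes with 0 as soon as a 0 is seen, skipping the remaining multiplications.
safeProduct :: [Int] -> Int
safeProduct xs = runCont (callCC $ \exit -> go exit xs) id
  where
    go _    []       = pure 1
    go exit (0 : _)  = exit 0
    go exit (y : ys) = (y *) <$> go exit ys
```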