(TL;DR the entire blog post is a tangent explaining linear logic, scroll to the bottom for the code)
I've been learning linear logic recently. Linear logic is a fascinating kind of logic that deals with the flow of data or resources; its key feature is that it forbids arbitrary copying of data and forgetting about existing data.
Sometimes, learning more about certain topics in computing can lead to new insights about programming in general. And the same happened to me with linear logic.
Ignoring some details, linear logic has the following conjunction types:

- A ⊗ B (both A and B can be produced, the consumer has to use both)
- A & B (A can be produced and B can be produced, but not at the same time; the consumer has to pick between them)

And the following disjunction types:

- A ⊕ B (the consumer must be ready to handle both A and B, since it's unknown which one the producer will give)
- A ⅋ B (the consumer must be ready to handle both A and B; the producer will give both simultaneously, where "simultaneously" means each has to be handled independently by the consumer with the resources they have available)

And the following "exponential" types for encoding intuitionistic or classical logic:

- !A - an arbitrary amount of A (the consumer can use it as much as they want; the producer guarantees it can be used arbitrarily)
- ?A - 0 or more of A (this is not arbitrary, since it's a certain amount that the producer rather than the consumer controls)

Additionally, in linear logic there is A⊥ (negative A), meaning "the use of A" (in programming, you can think of it as a continuation or as a singular function argument).
In general, if you have a proposition X, you can use it with a proposition X⊥ (like in programming: if you have an object X, you can pass it to a function that takes X).
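To make that concrete in programming terms, here's a minimal sketch - loose, since Nix isn't linear, and the names are made up for illustration:

let
  x = 42;                              # a proposition X we have produced
  useX = n: "consumed ${toString n}";  # plays the role of X⊥, "the use of X"
in
useX x  # using X with X⊥ - evaluates to "consumed 42"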
Just like in classical logic ¬(A ∧ B) = ¬A ∨ ¬B, in linear logic there are similar rules:

(!A)⊥ = ?(A⊥) / (?A)⊥ = !(A⊥)
(A ⊗ B)⊥ = A⊥ ⅋ B⊥ / A⊥ ⊗ B⊥ = (A ⅋ B)⊥
(A ⊕ B)⊥ = A⊥ & B⊥ / A⊥ ⊕ B⊥ = (A & B)⊥
The first rule says that ?(A⊥) handles !A. In a computational interpretation, this would mean that a function that uses a !A would use A as much as it needs (zero or more), i.e. an amount that the function caller can't control - that is precisely what ?(A⊥) encodes (? - an unknown amount of, A⊥ - uses of A).
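As a loose illustration: in a non-linear language like Nix, every function argument effectively behaves like a !A - the function body alone decides how many times the argument is used, which is exactly the "amount the caller can't control" that ?(A⊥) describes. A hypothetical sketch:

let
  f = a: a + a + a;  # uses its argument three times - the caller has no say
  g = a: 0;          # uses it zero times - also fine for a !A
in
[ (f 7) (g 7) ]  # produces [ 21 0 ]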
The second rule says that A⊥ ⅋ B⊥ handles A ⊗ B. In a computational interpretation, A⊥ ⅋ B⊥ is just multiple function arguments (of types A and B). Notice that this is different from A⊥ ⊗ B⊥, as A⊥ ⊗ B⊥ would be two separate functions (or rather continuations) with a single argument each, while A⊥ ⅋ B⊥ is a single continuation with two inputs.
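In programming terms (a hedged sketch with invented names): A⊥ ⅋ B⊥ is one curried function that needs both inputs before it can produce anything, while A⊥ ⊗ B⊥ is a pair of independent single-argument functions:

let
  parConsumer = a: b: a + b;  # A⊥ ⅋ B⊥: one continuation, two inputs
  tensorConsumers = {         # A⊥ ⊗ B⊥: two continuations, one input each
    useA = a: a * 2;
    useB = b: b - 1;
  };
in
[ (parConsumer 1 2) (tensorConsumers.useA 1) (tensorConsumers.useB 2) ]
# produces [ 3 2 1 ]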
The third rule says that A⊥ & B⊥ handles A ⊕ B. A⊥ & B⊥ represents a choice between the use of A and the use of B, and the producer of A ⊕ B makes that choice by providing either A or B.
It will probably take some time to get used to these rules (especially since what I provided is barely an explanation), so it's normal if you still don't really have an intuition of how linear logic works. There are also multiple uses/interpretations of linear logic that may help you understand it better, but I'm not gonna cover any of that.
It's interesting that we can safely use A with A⊥ & B⊥, or A ⊕ B with A⊥ & B⊥ & C⊥ - i.e. we can add branches we don't use to the &. This is reminiscent of the fact that in non-linear logic, A ∧ B can be "downgraded" to A (and note that just like A ∧ B in classical logic, A & B in linear logic is a product type).
Most importantly for this post, this third rule gives types to sum types and pattern matches. If x is either Nothing or Just 5 (an additive disjunction Nothing ⊕ Just Int), then it must be handled with an additive conjunction Nothing⊥ & (Just Int)⊥. Here, the negative type provides two options for handling either of Nothing and Just Int, and the positive type picks one of them.
This instantly reminded me of Scott encoding in lambda calculus.
You can represent Maybe T in lambda calculus as follows:

- Nothing: λn j. n
- Just x: λn j. j x (so the constructor Just itself is λx. λn j. j x)

Notice that here, both Nothing and Just x take two arguments, but only use one of them. In fact, with beta reduction rules, the other argument can just be discarded. The sum type, quite literally, picks between two options, and it's up to the user of that sum type to provide both options.
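A direct, positional transcription of this into Nix might look like the following sketch (the primed names are mine, to keep them apart from the version below):

let
  # Scott encoding with positional branches
  Nothing' = n: j: n;
  Just' = x: n: j: j x;
in
[ (Nothing' 5 (x: x + 6))    # uses the first branch, discards the second: 5
  (Just' 36 5 (x: x + 6)) ]  # uses the second branch, discards the first: 42
# produces [ 5 42 ]

With positional arguments, though, every variant has to know the order of all the branches.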
This finally leads us to the end of the post - how would that translate to Nix more idiomatically? This is how:
let
  # each variant is a function from an attrset of branches to a result;
  # the pattern binds the branch with the same name (shadowing the outer one)
  None = { None, ... }: None;
  Some = x: { Some, ... }: Some x;
  # match is just application: the value itself picks its branch
  match = x: x;
in
map
  (n: match n {
    None = 5;
    Some = x: x + 6;
  })
  [ None
    (Some 5)
    (Some 6) ]
# produces [ 5 11 12 ]
This program uses an attrset instead of multiple arguments, which allows arbitrary composition of different variants, merging "match" branches using regular attrset merging (//) - see the sketch below. It will not evaluate the extraneous "match" branches, since Nix is a lazily evaluated language. And match can be omitted, but it makes the code funnier.
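For example (a sketch reusing the definitions from the snippet above, with a hypothetical defaults set):

let
  None = { None, ... }: None;
  Some = x: { Some, ... }: Some x;
  match = x: x;
  defaults = { None = 0; Some = x: x; };  # a reusable set of branches
in
[
  (match None defaults)                                 # 0
  (match (Some 41) (defaults // { Some = x: x + 1; }))  # 42
  # the None branch here is a throw, but laziness means it's never forced
  (match (Some 1) (defaults // { None = throw "unreachable"; }))  # 1
]
# produces [ 0 42 1 ]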
In summary, linear logic helped me understand why sum types aren't and can't be the same as TypeScript-like union types: sum types must unambiguously pick a specific branch, while union types are more arbitrary and ad-hoc. Sum types represent a particular choice, just like accessing a single element of a non-linear product type chooses that particular element out of many; and matches represent multiple options, like products give multiple values to use. Products and sums are very symmetrical, but how they are created and how they are used is flipped.