I’ve spent a lot of time on graphics programming in Haskell. An uncommonly large amount of time. Few Haskell programmers work on graphics, and very few graphics programmers use Haskell. Talking to the overlap on the internet feels the same way that reaching someone on ham radio must feel, except less mainstream.

Despite the division, the two worlds share common ground. In particular, at the core of each lies a rigorous mathematical theory. Haskell culture descends from algebra and logic, whereas graphics culture comes largely from geometry and physics. Both fields, however, have been uninhibited about building strong foundations with math.

One topic in math is a part of many such foundations: linear algebra. It’s equally useful and interesting to particle physicists, mechanical engineers, computer vision researchers, game programmers ^{1}, and other tribes who might otherwise never talk to each other. Everybody has a linear problem—or uses one to approximate the real problem.

No surprise then that the divided Haskell and graphics cultures are alloyed in a Haskell package for linear algebra called Linear. My first impression was that it must be the weirdest vector library in the world, but like many things in Haskell it’s grown on me. It’s quite usable in most situations in spite of its sometimes daunting documentation.

It’s also pretty flexible, so there are many ways to use it. I’ll explain what I’ve found most useful for workaday tasks, but there seem to be more cool techniques, like sparse representation, available to the fully committed Haskell/graphics dhampir. All of this is about version 1.21.1 of Linear.

Vector constructors are defined in the modules `Linear.V2`, `Linear.V3`, and `Linear.V4` (there are also 1D vectors for weirdos). Their parameters are just the components of a vector, so `V3 1 0 0` denotes the vector

\[ \begin{bmatrix} 1\\ 0\\ 0\\ \end{bmatrix} \]

There’s nothing immediately surprising about these. You can use them in normal expressions and in patterns:

```
let v = V3 2 2 2
V2 x y = position
in v ^+^ V3 (x + y) (x + y) 0
```

The operator `(^+^)` denotes vector addition in any dimension, but if you scroll through any of those constructor modules you won’t find it. That’s because, in Linear, vector addition is defined more generally than on just these `V` types. It’s actually a method of a type class called `Additive`, and the `V` types are instances of this class. In fact, other types, such as lists and integer maps, are instances of `Additive` as well. That means that these types too are vectors, and we can add them like vectors with `(^+^)`.
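
To make this concrete, here’s a small sketch (assuming the `linear` package is in scope) showing `(^+^)` adding both `V3` vectors and plain lists:

```haskell
import Linear (V3 (..), (^+^))

-- Componentwise addition on a fixed-size vector:
sumV :: V3 Double
sumV = V3 1 2 3 ^+^ V3 10 20 30 -- V3 11.0 22.0 33.0

-- Lists are an Additive instance too, so they add componentwise as well:
sumL :: [Double]
sumL = [1, 2, 3] ^+^ [10, 20, 30] -- [11.0,22.0,33.0]
```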

Where is `Additive` defined? Why, right there in the module `Linear.Vector`, of course. Yes, the names are confusing.

Other common \(n\)-dimensional vector operations appear as methods of `Additive`. Some examples are `zero`, the zero vector, and `(^-^)`, vector subtraction.
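
For instance (a sketch assuming the `linear` package):

```haskell
import Linear (V2 (..), zero, (^-^))

-- Vector subtraction:
diffV :: V2 Int
diffV = V2 5 7 ^-^ V2 1 2 -- V2 4 5

-- The zero vector, at whatever dimension the type calls for:
zeroV :: V2 Int
zeroV = zero -- V2 0 0
```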

Some of the operations in `Linear.Vector` appear instead as standalone functions, like `(^*)` and `(*^)`—both scalar multiplication. These operations are defined even more generally than addition and subtraction: they work on any instance of `Functor`. The `Additive` class is defined as a subclass of `Functor`, and herein lies an insight that’s vital for reading the docs: *vectors are functors*.

In this library, the type of vectors is `f a`, where `f` is a functor. You can see this in the type signature of `(*^)`:

`(*^) :: (Functor f, Num a) => a -> f a -> f a`

If the type `a` represents the real numbers, then `f a` is the type of the regular old vectors in \(\mathbb{R}^n\). But the type `a` could be a vector instead, in which case `f a` is the type of a matrix (see below).
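
Here’s a sketch of that polymorphism (assuming the `linear` package): the same `(*^)` scales a plain vector by a number, and scales a vector of vectors by a vector, because `V2 Double` itself has a `Num` instance:

```haskell
import Linear (V2 (..), (*^))

-- f = V2, a = Double: a regular vector scaled by a number.
scaledV :: V2 Double
scaledV = 3 *^ V2 1 2 -- V2 3.0 6.0

-- f = V2, a = V2 Double: each row of the "matrix" is multiplied
-- (elementwise, via the Num instance of V2) by the scalar vector.
scaledM :: V2 (V2 Double)
scaledM = V2 2 3 *^ V2 (V2 1 1) (V2 1 1) -- V2 (V2 2.0 3.0) (V2 2.0 3.0)
```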

What about a dot product? Well, only some vectors have dot products; specifically, only those that are instances of a subclass of `Additive` called `Metric`. Don’t worry, the `V` types are again instances of this type class. Methods of `Metric` define some more common vector operations:

- `dot`, the dot product
- `norm`, the \(\ell^2\) norm/magnitude/length
- `quadrance`, the squared norm (there’s a word for it!)

So `Metric` is another piece of how Linear defines operations on vectors. Is it defined in the `Linear.Vector` module? No! It’s in the module `Linear.Metric`, along with a couple of useful standalone functions: `normalize`, a normalized vector; and `project`, vector projection.
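
A quick sketch of these `Metric` operations (assuming the `linear` package):

```haskell
import Linear (V2 (..), V3 (..), dot, norm, normalize, quadrance)

d :: Double
d = dot (V3 1 2 3) (V3 4 5 6) -- 1*4 + 2*5 + 3*6 = 32

len :: Double
len = norm (V2 3 4) -- 5.0

q :: Double
q = quadrance (V2 3 4) -- 25.0, the norm squared (no square root taken)

unit :: V2 Double
unit = normalize (V2 3 4) -- approximately V2 0.6 0.8
```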

What about a cross product? Is that in some *other* type class? No, it’s just a special function in `Linear.V3`. That makes too much sense, I hear you complain, where is the mind-blowing algebra? Shockingly, that might be beyond the scope of this library; Linear knows that the cross product is a pragmatist’s tool and exempts it from pontification.
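
For completeness, a sketch (assuming the `linear` package):

```haskell
import Linear (V3 (..), cross)

-- The right-handed cross product of the x and y basis vectors is z:
zAxis :: V3 Double
zAxis = cross (V3 1 0 0) (V3 0 1 0) -- V3 0.0 0.0 1.0
```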

In an affine space you can, if you want to, distinguish between vectors and points, where vectors are strictly differences between points. Linear makes this distinction in types: the type `Point f a` wraps a vector type `f a` to make it a point. The vector doesn’t change; we just label it as a point with the `Point` constructor and then try not to look at the numbers inside.

Instead, we apply functions defined in `Linear.Affine` to it. Some of these work only on the type `Point f a`, like `origin`: the origin of the wrapped vectors `f a`. But most of them also work for any instance of the `Affine` type class, of which `Point f a` is one example. Some are class methods, like `(.-.)`, the vector difference between two points, and `(.+^)` and `(.-^)`, positive and negative vector offsets to a point. Others are standalone terms, like `distanceA` and `qdA`, respectively the distance and quadrance between points. To use any of these, you must import `Linear.Affine` separately from `Linear`:

```
import Linear
import Linear.Affine
```

An `Affine` instance `p` has an associated type `Diff p`, which is the type of vectors between points and must be an instance of `Additive`. Other instances of `Affine` include…all of those instances of `Additive` provided in `Linear.Vector`, including the `V` types. That means that vectors can not only *represent* points (via the `Point` constructor) but also actually *be* points.

For example, `V2` is an instance of `Affine`, and its associated vector type `Diff V2` is just itself, `V2`. That means that, in this instance, the point difference `(.-.)` is the same as the vector subtraction `(^-^)`:

```
(.-.) :: V2 a -> V2 a -> V2 a
(^-^) :: V2 a -> V2 a -> V2 a
```

So Linear teaches us that all of this point versus vector stuff is purely a matter of convention. You can do whatever you want.
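
Here’s a sketch of the affine operations in action (assuming the `linear` package); note the separate `Linear.Affine` import:

```haskell
import Linear
import Linear.Affine

p1, p2 :: Point V2 Double
p1 = P (V2 1 2)
p2 = P (V2 4 6)

-- The difference of two points is a vector:
displacement :: V2 Double
displacement = p2 .-. p1 -- V2 3.0 4.0

-- A point offset by a vector is another point:
p3 :: Point V2 Double
p3 = p1 .+^ V2 10 0 -- P (V2 11.0 2.0)

-- Distance between two points:
dist :: Double
dist = distanceA p1 p2 -- 5.0
```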

One final detail about vectors in Linear proves that this library is truly unique. As we saw, vectors have `Functor` types, so we can map functions over them with `fmap`. The idea here is that `fmap` applies a function to each component of a vector. The implementation of scalar multiplication demonstrates this:

`(*^) a = fmap (a*)`
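
So, for example, a function mapped over a `V3` hits every component (a sketch assuming the `linear` package):

```haskell
import Linear (V3 (..))

-- fmap applies (* 2) to each component:
doubled :: V3 Int
doubled = fmap (* 2) (V3 1 2 3) -- V3 2 4 6
```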

Here’s where the pool gets deep, where the cultures meet, where dogs and cats are living together. This is what amazes me about programming in Haskell.

The `V` types have not only `Functor` instances, but `Applicative` instances as well. You can apply a vector of functions to a vector of arguments with `(<*>)`. You can even apply a binary function to two vectors with `liftA2`. In fact, that is precisely how the `Num` instance for `V` types is implemented. You can add and subtract vectors using ordinary numerical operators from the `Num` class:

```
x + y = x ^+^ y
x - y = x ^-^ y
```

Other `Num` methods are implemented with `liftA2` as well, though these operations aren’t all just their vector-space analogues as with addition and subtraction. You might expect `x * y` to be the outer product of `x` and `y`: a matrix. Perhaps then you’d quickly remember that `(*)` is a method of `Num`, and its result has the same type as its arguments. Perhaps *then* you would *surmise* that it gives you the vector of elementwise products (A.K.A. the Hadamard product?).
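
A quick sketch confirming that surmise (assuming the `linear` package):

```haskell
import Linear (V3 (..))

-- (*) from the Num instance is liftA2 (*): the elementwise (Hadamard)
-- product, not an outer-product matrix.
hadamard :: V3 Int
hadamard = V3 1 2 3 * V3 4 5 6 -- V3 4 10 18
```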

Also, a numeral like `10` can be a `V`-type vector. If given the type `V3 a`, it denotes a 3-vector all of whose components are `10`—and *that* `10` means whatever the `Num` instance for `a` defines it to be. We can therefore write a concise, familiar, and type-kooky expression of scalar multiplication:

`10 * V2 x y = V2 (10 * x) (10 * y)`

In another language, this would all read almost like undocumented, use-at-your-own-risk functionality. But here we have an automatically checked paper trail in the types that’s not too hard to navigate—especially if you are a full-time mathematician.

A matrix in Linear is just a vector of vectors. You construct one with nested vector expressions:

`V3 (V3 1 0 x) (V3 0 1 y) (V3 0 0 1)`

The convention here is that a matrix is a vector of row vectors. The module `Linear.Matrix` provides functions that operate on matrices in row-major order, so the expression above denotes the matrix

\[ \begin{bmatrix} 1 & 0 & x\\ 0 & 1 & y\\ 0 & 0 & 1\\ \end{bmatrix} \]

There are also some type synonyms that give us a shorthand for matrix types. These are definitions like

`type M44 a = V4 (V4 a)`

that make other type signatures easier to read. They’re only for 2-, 3-, and 4-dimensional matrices, which are the matrices we often see in graphics. A few of these common matrices have convenient definitions.

`identity` denotes the identity matrix. This is polymorphic in the dimension—it can produce a matrix of any size—as long as the vector type is `Traversable` and `Applicative` (`pure` must produce a vector of all 1’s, and `traverse` must help turn that vector into a diagonal matrix). The `V` types are instances of both `Traversable` and `Applicative`.
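
For example (a sketch assuming the `linear` package), the same `identity` can be a 2×2 or a 3×3 matrix depending only on its type:

```haskell
import Linear (M22, M33, V2 (..), V3 (..), identity)

i2 :: M22 Double
i2 = identity -- V2 (V2 1 0) (V2 0 1)

i3 :: M33 Double
i3 = identity -- V3 (V3 1 0 0) (V3 0 1 0) (V3 0 0 1)
```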

We can use `fromQuaternion` to make a 3×3 rotation matrix from a quaternion (see below). But if we need a 4×4 matrix, representing a 3D affine transformation in homogeneous coordinates, it’s easiest to use `mkTransformation` instead.

`mkTransformation` takes a quaternion and a 3-vector, and it produces a 4×4 matrix representing the corresponding rotation and translation. It does this the way you would by hand: make the 3×3 rotation matrix with `fromQuaternion`, tack on the translation as a fourth column, and then add \(\begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix}\) as a fourth row. To get just a rotation, you make the translation zero. To get just a translation, you make the rotation zero (that is, the quaternion \(1 + 0\mathbf{i} + 0\mathbf{j} + 0\mathbf{k}\)):

```
myRotation r = mkTransformation r zero
myTranslation v = mkTransformation (Quaternion 1 zero) v
```

Beyond these model/view matrices, `Linear.Projection` provides the usual projection matrices. These are `ortho`, the orthographic projection, and `perspective`, the perspective projection. Of course, these projection matrices are actually invertible to retain a projected vector’s depth component. So, the module provides analytically derived inverses of these matrices as well: `inverseOrtho` and `inversePerspective`. These are useful, because if you have a depth value associated with a point on the screen, you can un-project this point back into your scene by transforming it with the inverse of your projection.

Defining all of these matrices is so much fun, you might forget to actually use them. As a reminder to do so, the multiplication operators are right at the top of `Linear.Matrix`: `(!*!)`, the product of two matrices, and `(!*)`, the transformation of a vector.

It’s important to get these straight, because their types are so polymorphic that it may not be immediately obvious that you’re using the wrong one.

```
(!*!) :: (Functor m, Foldable t, Additive t, Additive n, Num a)
=> m (t a) -> t (n a) -> m (n a)
(!*) :: (Functor m, Foldable r, Additive r, Num a)
=> m (r a) -> r a -> m a
```
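
A sketch of both operators on small matrices (assuming the `linear` package):

```haskell
import Linear (M22, V2 (..), (!*!), (!*))

-- A matrix that scales the y component by 2:
scale2 :: M22 Double
scale2 = V2 (V2 1 0) (V2 0 2)

-- Matrix times matrix:
prod :: M22 Double
prod = scale2 !*! V2 (V2 2 0) (V2 0 1) -- V2 (V2 2 0) (V2 0 2)

-- Matrix times vector:
transformed :: V2 Double
transformed = scale2 !* V2 3 4 -- V2 3.0 8.0
```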

I once mistakenly wrote a vector transformation as a matrix product with `(!*!)`, as in

`scaled (V3 2 2 2) !*! V3 1 0 1 -- wrong!`

instead of

`scaled (V3 2 2 2) !* V3 1 0 1 -- right`

But rather than finding my error immediately, the type checker must have assumed that the vector I meant to transform was a matrix—matrices are vectors, after all. This incorrect assumption propagated, and the final error message was that something unrelated was seriously fucked up. It took me a long time to discover that the only problem was an extra exclamation point.

Haskell programmers know this experience too well. This deductive debugging through the types can be very efficient, but sometimes it’s nearly as bad as printing variables at run time. Avoiding those situations is an art.

Besides defining and applying matrices, we can also transform them into other matrices. `transpose` gives us the transpose of a matrix, and there are inverses defined for 2-, 3-, and 4-dimensional square matrices. Respectively, these are `inv22`, `inv33`, and `inv44`.

That’s assuming they’re invertible, of course, which we can confirm with their respective determinants: `det22`, `det33`, and `det44`. We can extract other data from a matrix too. `diagonal` gives the diagonal vector. `trace` ^{2} computes the trace, or the sum of the components of the diagonal. OK, so maybe these aren’t your bread and butter, but isn’t it nice to know that they’re there?

Finally, if you’re comfortable with lenses and the Lens package ^{3}, you can access a bunch of different subsections of a matrix using lenses like `column`, which gives you any column vector; `translation`, which is the 3-vector representing the translation part of an affine transformation matrix; and `_m22` and all its cousins, which give you the lower-dimensional rows and columns of larger matrices.

It’s worth noting that you can get a column without using lenses by matching row patterns with the transpose:

`let V4 _ column2 _ _ = transpose m in column2`

Ideally, laziness would let the process skip unnecessary work here, but I haven’t measured it.

It’s also worth noting that there’s a whole LU decomposition section in `Linear.Matrix` that you can use to solve linear systems, but I haven’t tried it out yet. It just goes to show how much is packed into this library.

Ugh, quaternions.

Man, didn’t Heaviside solve all this? —Tyler Hennen

I have been, until lately, very tender and merciful towards quaternionic fads, thinking it possible that Prof. Tait might modify his obstructive attitude. But there is seemingly no chance of that. Whether this be so or not, I think it is practically certain that there is no chance whatever for Quaternions as a practical system of mathematics for the use of physicists. —Oliver Heaviside

Let us indulge obstructive attitudes and accept that many graphics people represent rotations with quaternions. Linear, for one, is here to support them.

The `Linear.Quaternion` module begins by defining a data type `Quaternion` in somewhat strange fashion, as a scalar and a 3-vector:

`data Quaternion a = Quaternion !a !(V3 a)`

These are *not* an angle and an axis of a rotation in \(\mathbb{R}^3\). Rather, these are the four components of the complex number space that give the quaternion its name. The scalar is the real part, and the vector is the three imaginary parts.

From the source code this seems to be a convenient structure for implementing a litany of functions on quaternions, including a `Floating` instance that defines sines and exponentials of quaternions. It also supports the implementation of branch cuts in the inverse trigonometric functions, which Linear’s original author and maintainer Edward Kmett has said were the original motivation for shipping the library in the first place. Here, as elsewhere, strangeness has a purpose.

Scroll past all of that (keep scrolling) and you will eventually find some more familiar fare for treating quaternions as 3D orientations. Here is where you can define a quaternion from an axis and an angle with `axisAngle`. You can rotate a vector with a quaternion using `rotate`. You can perform proper spherical interpolation between two quaternions with `slerp`. And you can obtain a rotation matrix from a quaternion with `fromQuaternion` (see above).
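
A sketch of that familiar fare (assuming the `linear` package): build a quarter turn about the z axis and apply it to a vector:

```haskell
import Linear (Quaternion, V3 (..), axisAngle, rotate)

-- A rotation of pi/2 radians about the z axis:
quarterTurn :: Quaternion Double
quarterTurn = axisAngle (V3 0 0 1) (pi / 2)

-- Rotating the x axis by a quarter turn yields (up to floating-point
-- error) the y axis:
rotated :: V3 Double
rotated = rotate quarterTurn (V3 1 0 0) -- approximately V3 0 1 0
```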

In case all this still looks too difficult, remember that at first I didn’t want to mess with it either. Initially, I learned it all begrudgingly.

Linear came to my attention as a dependency of other graphics libraries I’ve used, namely:

- Sdl2, foreign bindings to SDL
- GPipe, a functional and (very) strongly typed interface to GPU programming

Wherever these libraries deal with vectors, they use Linear’s vectors. But my application code already used a different vector type that served me well. Why should I adopt this weird thing, I snarled, particles of spit lashing my screen, where I can’t even find the type of a vector?

And at first, I didn’t adopt it. Instead I translated between my application’s representation and Linear’s whenever I needed to send vectors to Sdl2 or GPipe. Say I needed a `Rectangle` in Sdl2 based on `position`, a vector of my preferred type `Point2 Double`. I would instead base it on an equivalent vector `position'`, whose type was `V2 Double` from Linear. Case by case this wasn’t a great burden, especially with pattern matching:

```
let Point2 x y = position
position' = P (V2 x y)
...
in Rectangle position' ...
```

Too often, though, I had to qualify or selectively import names that both vector systems defined, as in

`import Linear (V2 (..), V3 (..), V4 (..), Quaternion (..))`

or

```
(xform Linear.!*! Linear.scaled (V4 1 1 1 (1/20)))
Linear.!* Linear.point (V3 450 50 0)
```

Gross. It all added up to a chunk of work that I just don’t have to do anymore. My application code now uses Linear freely in place of any other vector stuff ^{4}. Everything works smoothly. And I’m more confident in my understanding of some details about linear algebra that no one ever really talks about.

My only big gripe is that its dependency list is too long: 23 packages, including Lens, which itself has 34 dependencies (with some overlap). It’s acceptable to foist this upon application programmers as a curated experience, but Linear can and does support library programmers as well. Thankfully Sdl2 lets you opt out of adding Linear to your dependency list with the `no-linear` flag, meaning you don’t have to pull in all of Lens just to get pixels on the screen. Maybe there should be a Microlinear package, analogous to the Microlens package.

There’s more to Linear, but we’ve covered most of what I’ve done with it. Hopefully this has piqued your interest and helps get you started using the library. If you get really frustrated, remember that programming sucks and go do something else.

While it is indeed useful, its practicality just makes me more excited by the ideas inside it. This is a graphics programmer’s Haskell package, a functional programmer’s graphics library. The fusion of these two cultures points toward more interesting futures for both of them. Linear lets us peek into a world of functional graphics programming that is still being born.

1. Some in the game industry refer to linear algebra vaguely as “3D math.” Maybe they use that term to include some geometry too. The world may never know.↩︎
2. This conflicts annoyingly with `trace` in the `Debug.Trace` module. The `Trace` class should be in its own module that’s not imported from the omnibus `Linear` module, as `Affine` is.↩︎
3. Lenses can be quite convenient, and while they might be weird in advanced cases they aren’t hard to get started with. I’ve had some type inference problems using lenses from the Lens library, but the lens concept is well worth putting in your toolbox regardless of the implementation.↩︎
4. Admittedly, to use some parts of Yampa with Linear’s types, I had to invest some energy in declaring them instances of the vector and affine spaces that Yampa uses.↩︎