Polymorphism
A polymorphic function (say, the identity function, fn x -> x) has type variables in its type (in this case, the type looks exactly like the function, fn a -> a). This means the function does not know what type its argument is, and doesn't need to know: it treats it as an opaque value that, apart from being moved around, is not used in any way.
Two ways of compiling such functions suggest themselves. The most obvious way is to make sure all values have the same size, so that it is always clear what a value looks like, and what it takes to move it around. The drawback of this approach is that it requires "boxing" of data that doesn't have the expected size. A pair of integers, for example, cannot be passed around in registers, but has to be allocated on the heap and passed by pointer (or, if you want to get clever, you can combine two 32-bit integers into a 64-bit value). This appears to mean that all "slots" in composite datatypes must also always have the same size. For example, a polymorphic length function on lists of any type has to be able to rely on the layout of the cons objects being the same for every element type.
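To make that concrete, here is a minimal sketch of the uniform-representation idea, written in C++ rather than the language this compiler handles; the Cons struct and length function are invented for illustration, and "boxing" is simulated with plain heap allocation.

    #include <cstdio>

    // Every value slot is exactly one machine word; anything bigger lives
    // on the heap behind a pointer.
    struct Cons {
        void *head;   // always word-sized, whatever the element type is
        Cons *tail;   // nullptr stands in for the empty list
    };

    // A single compiled length works for lists of any element type, because
    // the layout of a cons cell never varies.
    int length(Cons *list) {
        int n = 0;
        for (; list != nullptr; list = list->tail)
            n++;
        return n;
    }

    int main() {
        // A pair of integers doesn't fit in one word, so it has to be boxed
        // (heap-allocated) before it can go into a cons cell.
        int *pair = new int[2]{3, 4};
        Cons second = {pair, nullptr};
        Cons first = {new int(42), &second};
        std::printf("%d\n", length(&first));   // prints 2
        // (leaks are ignored here; this is a sketch, not runtime code)
    }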
Thus, you'll either end up boxing everything, or implementing a system where stuff is boxed before being passed to a polymorphic function, and unboxed after it is returned from such a function. The latter is quite complex, and can easily lead to situations where the code is wasting huge amounts of time (and generating lots of garbage) on boxing. All of this seems rather distasteful.
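The box-at-the-boundary variant would look roughly like this hedged C++ sketch; identity stands in for a polymorphic function compiled once against word-sized arguments, and the boxing around the call is exactly the wasted allocation described above.

    #include <cstdio>

    // A polymorphic function, compiled once, over word-sized (boxed) values.
    void *identity(void *x) { return x; }

    int main() {
        int n = 7;
        int *box = new int(n);                            // box before the call
        int result = *static_cast<int *>(identity(box));  // unbox after the call
        delete box;                                       // the box is already garbage
        std::printf("%d\n", result);                      // prints 7
    }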
So that brings us to the other approach. In this approach, like in C++ templates, an instance of a polymorphic function is compiled for every size of argument (type) that it is applied to. This is a bit of a nightmare when it comes to separate compilation of modules (I haven't gotten around to implementing modules yet, but I hope I can work out something sane), but it produces really nice code. The quadratic (or even exponential, in the number of polymorphic parameters) code blowup that it seems to imply probably isn't too bad in practice. The number of different argument sizes is limited, since we'll want to allocate big objects on the heap anyway.
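C++ templates show what that instantiation-per-type scheme looks like in practice; the single swapValues definition below (an illustrative example, not code from this compiler) is compiled into one specialized copy per concrete argument type, each knowing the exact size of what it moves around.

    #include <cstdio>

    // One source-level definition...
    template <typename T>
    void swapValues(T &a, T &b) {
        T tmp = a;   // tmp has the exact size of T in each instantiation
        a = b;
        b = tmp;
    }

    int main() {
        int i = 1, j = 2;
        double x = 1.5, y = 2.5;
        swapValues(i, j);   // ...but swapValues<int> and swapValues<double>
        swapValues(x, y);   // end up as two separate functions in the object code
        std::printf("%d %d %.1f %.1f\n", i, j, x, y);   // prints 2 1 2.5 1.5
    }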
I have to admit, at this point, that you're listening to someone who hasn't implemented a garbage collector for his language yet, and while I have some ideas about what my collector will look like, it is entirely possible that this whole approach breaks down when I have to be able to write code that finds pointers on the stack.
In any case, I now have a compiler that "lazily" compiles functions: it only compiles them when it comes across an actual reference to them, which not only adds a primitive form of dead-code elimination (unused functions don't show up in the object code), but also makes it much easier to compile specific incarnations of polymorphic functions, since the types the function operates on are known at the point where it is referenced. The compiler (currently only capable of compiling single files) keeps a cache of compiled functions, and when it is asked for a function that it already has, it just returns the existing one.
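A rough sketch of that cache, again in C++ and with invented names (getInstance, compileBody) rather than the compiler's actual internals: a function is compiled the first time it is requested at a given set of argument types, and every later request for the same combination gets the cached result.

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    using CompiledCode = std::string;   // stand-in for actual object code

    // Pretend compilation of one incarnation of a function.
    CompiledCode compileBody(const std::string &name, const std::string &argTypes) {
        std::cout << "compiling " << name << " at " << argTypes << "\n";
        return name + "<" + argTypes + ">";
    }

    // Compile on first reference, return the cached instance afterwards.
    const CompiledCode &getInstance(const std::string &name, const std::string &argTypes) {
        static std::map<std::pair<std::string, std::string>, CompiledCode> cache;
        auto key = std::make_pair(name, argTypes);
        auto found = cache.find(key);
        if (found == cache.end())
            found = cache.emplace(key, compileBody(name, argTypes)).first;
        return found->second;
    }

    int main() {
        getInstance("length", "List Int");          // compiles a first incarnation
        getInstance("length", "List (Int, Int)");   // different element size: compiles again
        getInstance("length", "List Int");          // cache hit, nothing compiled
    }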
... I claimed above that the type of a function is known at the point where it is referenced. This is not always the case. There are pathological programs, such as def foo x -> foo x. foo 1, and also some non-pathological ones, such as type List a = Cons a (List a), Nil. Nil, that can't be conclusively typed. Note that the above infinite loop would run perfectly fine, if compiled. It never terminates, but it is a valid program.
I am currently just resolving any unbound type variable to a 32-bit opaque type, which is unlikely to be sound, but allows the above programs to compile and run. I'll have to dive a bit further into type theory before I really understand the implications here.
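As a rough picture of what that fallback could look like (with made-up type names; the real compiler's representation is certainly different), this sketch walks an inferred type after unification and turns any variable that was never bound into a fixed 32-bit opaque type, so that code generation at least knows a size.

    #include <cstdio>
    #include <memory>

    // A deliberately tiny type representation: either a concrete type or a
    // variable that unification may or may not have bound to something.
    struct Type {
        enum Kind { Var, Int32, Opaque32 } kind;
        std::shared_ptr<Type> binding;   // set when a Var has been unified
    };

    // Resolve a type after inference: follow bindings, and default any
    // variable that is still unbound to a 32-bit opaque type.
    Type *resolve(Type *t) {
        while (t->kind == Type::Var && t->binding)
            t = t->binding.get();
        if (t->kind == Type::Var)
            t->kind = Type::Opaque32;    // the "just call it 32 bits" fallback
        return t;
    }

    int main() {
        Type unbound{Type::Var, nullptr};
        std::printf("%s\n", resolve(&unbound)->kind == Type::Opaque32 ? "opaque32" : "other");
    }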