# Arithmetic types as lexically scoped declarations

**Posted:** January 15th, 2015

**Author:** Mars

**Filed under:** Design

There are many algorithms for evaluating arithmetic operations, each useful in different situations. Sometimes you want simple, machine-word integers, and sometimes you want floating-point doubles; but sometimes it would be nice to have infinite-precision fixed-point arithmetic instead.

The usual solution for this problem is a system of related numeric types, with various rules about implicit or explicit conversions between those types, such that the output type for any given operation may be inferred from its inputs. Radian does exactly this, though its type implementations are nameless internal details and not explicitly declared or referenced.

This all works well enough, but every design has its tradeoffs, and I wonder if another strategy might be more convenient. Since Radian types are implicit, there’s no straightforward way for a programmer to specify the kind of computation they want, and thus the math package has to maintain as much precision as it can – whether that precision is actually useful or not. For the great majority of arithmetic operations, simple machine-word integers are plenty, but the Radian library has to overflow into bignums just in case the programmer later decides to care.

Another weakness of the traditional arithmetic model is that there is no interface for specifying behavior when there are multiple valid possibilities. What should the math package do when a program divides by zero? Should it raise an exception, return some special NaN code, or merely approximate infinity and get on with life? Each could be the right answer for some situation, but language designers are generally obligated to pick one and hope it will work for everyone.

What if we separated the ideas of numeric type and arithmetic type? What if, instead of delegating arithmetic operations to number objects, we delegated them to some “calculator” object? One might have a “machine-word integer” calculator, an “IEEE double” calculator, or a “4x IEEE single vector” calculator, each one implementing the various arithmetic operators using some consistent mechanism. Perhaps “calculator” is an interface, with methods named “add”, “subtract”, “multiply”, and so on. The standard library might provide some common calculators, as listed above, but programmers with specialized needs could implement their own calculators in any fashion they saw fit, with any specialized rounding, approximation, or exception-handling behavior they might happen to need. Instead of specifying the types of variables, then, one would specify the types of computations.
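The shape of such an interface is easy to sketch outside of Radian. Here is a hypothetical Python version – the class and method names are my own invention, not anything from the Radian library – showing two calculators that implement the same operations with different divide-by-zero policies:

```python
# Hypothetical sketch of the "calculator" idea: arithmetic is delegated
# to a calculator object rather than to the number objects themselves.

class IntCalculator:
    """Plain integer arithmetic; division by zero raises an exception."""
    def add(self, a, b):
        return a + b
    def divide(self, a, b):
        return a // b  # raises ZeroDivisionError when b == 0

class SaturatingCalculator:
    """Same interface, different policy: approximate infinity and move on."""
    def add(self, a, b):
        return a + b
    def divide(self, a, b):
        if b == 0:
            return float("inf") if a >= 0 else float("-inf")
        return a / b

def mean_rate(calc, total, count):
    # The caller chooses the computation's behavior by choosing the calculator.
    return calc.divide(total, count)

print(mean_rate(SaturatingCalculator(), 10, 0))  # inf, rather than an exception
```

A program with specialized needs would supply its own object implementing the same method names, and everything written against the interface would pick up the new behavior.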

Let’s explore how this might work in Radian. At present, arithmetic operators are syntactic sugar for a method call on the left operand, where each operator has a specific name, so these statements are equivalent:

```
def foo = bar + baz
def foo = bar.add(baz)
```
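Python’s operator protocol works the same way: `bar + baz` is (roughly – ignoring the `__radd__` fallback) a method call on the left operand. A quick illustration, with a made-up `Meters` class:

```python
class Meters:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        # Invoked for "self + other"; this is the method the operator sugars over.
        return Meters(self.n + other.n)

bar, baz = Meters(2), Meters(3)
foo = bar + baz              # sugar...
also_foo = bar.__add__(baz)  # ...for this explicit method call
assert foo.n == also_foo.n == 5
```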

Let’s do something else instead: we’ll still call an “add” method, but we’ll imagine that there is some object named “arithmetic” which implements it. These two statements would then become equivalent:

```
def foo = bar + baz
def foo = arithmetic.add(bar, baz)
```

Perhaps the language would provide an implicit global definition for “arithmetic” which links against the existing standard library code. Outer-scope symbols can always be overridden by local symbols, so any function or object or control structure would be free to apply its own calculator object by merely defining its own “arithmetic” symbol:

```
function float_add(x, y):
  import fancy_arithmetic
  def arithmetic = fancy_arithmetic.configure(my_handler, 42, true)
  result = x + y
end function
```

The addition function still compiles to the same `arithmetic.add(x, y)` as ever, but the function has provided its own definition of `arithmetic`, so it can control the algorithm used. In this example I am imagining that its configuration might include some parameters detailing the desired exception behavior and precision.
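Python name resolution is lexical too, so the mechanism can be mimicked there: a module-level `arithmetic` supplies the default, and a function that binds its own `arithmetic` shadows it for every operation in its body. The `Checked` class and the explicit `arithmetic.add` calls below are stand-ins for what the compiler would generate from `x + y`:

```python
class Default:
    """The implicit global calculator: ordinary unbounded addition."""
    def add(self, a, b):
        return a + b

class Checked:
    """An example local policy: refuse results that overflow 32 bits."""
    def add(self, a, b):
        r = a + b
        if not -2**31 <= r < 2**31:
            raise OverflowError("sum does not fit in 32 bits")
        return r

arithmetic = Default()  # the implicit global definition

def plain_add(x, y):
    return arithmetic.add(x, y)   # resolves lexically to the global Default

def checked_add(x, y):
    arithmetic = Checked()        # local definition shadows the global one
    return arithmetic.add(x, y)

assert plain_add(2**31, 1) == 2**31 + 1   # Default happily grows
# checked_add(2**31, 1) would raise OverflowError instead
```

Because the shadowing is resolved by lexical scope, a reader (or a compiler) can tell which calculator governs any given expression just by looking at the enclosing definitions, with no need to trace the call stack.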

Since this is a lexical structure, not a dynamic part of the call stack, it is still possible to determine a value’s type at compile time – in fact it becomes much easier to determine what kind of result a given arithmetic operation will have, since the compiler can always tell which calculator object is currently in play. If the language library defined some standard calculator type which could be implemented efficiently using hardware primitives, the compiler might be able to detect that and generate correspondingly more efficient code, instead of leaving all the decisions to runtime as it currently must.

It seems unlikely that this is a new idea, but I haven’t been able to find any references to previous experiments along these lines. If you’re familiar with such a project I would love to hear about it.