
De Bruijn indices for recursive functions

$\gdef\fix{\text{fix}} \gdef\step{\Rightarrow} \gdef\subst#1#2#3{#1[#2\mapsto #3]} \gdef\substt#1#2#3#4#5{#1[#2\mapsto #3, #4\mapsto #5]} \gdef\upup{\Uparrow\Uparrow}$

$\lambda$-calculus and operational semantics

Operational semantics is a way to specify the behavior of a programming language by viewing each program as an abstract expression, which can “reduce” or “unfold” to other (not necessarily simpler) abstract expressions via a series of transition steps.

For example, consider the very famous lambda calculus, with syntax for variables, abstractions, and applications:

$$e\;\coloneqq\;x\;|\;\lambda x.~e\;|\;e_1~e_2$$

Suppose for the sake of example that we also have numbers and arithmetic operators in the lambda calculus. Just intuitively, we know that the expression $(\lambda x.~x+1)~42$ should reduce to $43$. When we see this function application syntax, we bind the argument $42$ to the variable $x$ introduced by the lambda, and when we evaluate the body $x+1$, we know that $42$ should be substituted for $x$ in the body.

One way to rigorously define such an operation is via a meta-procedure called substitution. It is denoted $\subst{e_0}{x}{e'}$ and is defined inductively on the grammar.
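
One standard presentation of the clauses, as a sketch (writing $\mathrm{FV}(e')$ for the set of free variables of $e'$):

$$\begin{align*} \subst{x}{x}{e'} &\triangleq e'\\ \subst{y}{x}{e'} &\triangleq y \qquad (y \neq x)\\ \subst{(e_1~e_2)}{x}{e'} &\triangleq (\subst{e_1}{x}{e'})~(\subst{e_2}{x}{e'})\\ \subst{(\lambda y.~e)}{x}{e'} &\triangleq \lambda y.~(\subst{e}{x}{e'}) \qquad (y \neq x,\; y \notin \mathrm{FV}(e')) \end{align*}$$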

The case of substituting under a lambda can be tricky, as one might run into variable capture. The problem is solved by using capture-avoiding substitution, which renames bound variables to fresh names where necessary. Details can be found in this lecture note.
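
With substitution in hand, the intuition from the earlier example can be spelled out as a short reduction (extending substitution to the arithmetic operators in the obvious way, and assuming a primitive rule for evaluating $+$):

$$(\lambda x.~x+1)~42 \step \subst{(x+1)}{x}{42} = 42+1 \step 43$$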

There are various reduction strategies, all of which involve substitution. For our purposes, we will stick to call-by-value (CBV) semantics, which forces arguments to be fully evaluated before any substitution is performed.
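
Concretely, one standard CBV presentation (a sketch; $v$ ranges over values, i.e. lambdas and numbers here) has a rule for reducing the function position, a rule for reducing the argument once the function is a lambda, and the $\beta$ rule itself:

$$\frac{e_1 \step e_1'}{e_1~e_2 \step e_1'~e_2}\quad \frac{e_2 \step e_2'}{(\lambda x.~e_1)~e_2 \step (\lambda x.~e_1)~e_2'}\quad \frac{\;}{(\lambda x.~e)~v \step \subst{e}{x}{v}}$$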

Semantics for recursive functions

The classic way to encode recursive functions in lambda calculus is through fixed-point combinators. For example, see the CBV Y-combinator:

$$Y \triangleq \lambda t.~(\lambda f.~t(\lambda z.~ffz))(\lambda f.~t(\lambda z.~ffz))$$

Being a fixed-point combinator, it satisfies the equation that for any function $F$, $F(YF)=YF$. To define the recursive factorial function using the Y-combinator, for example, one needs to define a “pre-factorial” function first, and apply the combinator to that. However, while this is a perfectly valid way to represent recursion, it is not close to how we actually write programs in practice. An alternative is to use explicit recursion, by augmenting the syntax of the calculus with a new construct

$$e~\coloneqq \;\dots\;|\;\fix~f~x.~e$$

We also need a notion of parallel substitution, which extends the single substitution definition and substitutes multiple variables simultaneously. For example, the reduction rule for applying an argument to a fix-expression is defined as

$$(\fix~f~x.~e)~e' \step \substt{e}{f}{(\fix~f~x.~e)}{x}{e'}$$

where in addition to substituting $e'$ for $x$, we also expand free occurrences of $f$ in $e$ to copies of the fix-expression itself.
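
For instance, assuming we also add conditionals alongside the numbers and arithmetic from earlier, factorial can be written directly as a fix-expression, with no separate “pre-factorial” step, and a single application step unfolds the recursion:

$$\mathrm{fact} \triangleq \fix~f~x.~\text{if } x = 0 \text{ then } 1 \text{ else } x \times f~(x-1)$$

$$\mathrm{fact}~2 \step \text{if } 2 = 0 \text{ then } 1 \text{ else } 2 \times \mathrm{fact}~(2-1)$$

where the occurrence of $\mathrm{fact}$ on the right-hand side stands for a literal copy of the fix-expression.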

De Bruijn indices

Although explicit variable names are essential for the practical use of programming languages, they introduce a lot of headaches, as we previously hinted. For example, many metatheoretic results about the lambda calculus (and its derivatives) are often framed with the caveat of “up to $\alpha$-equivalence”. In the definition of substitution above, we also relied on a dubious renaming operation, which needs to be able to come up with fresh variable names.

A common solution is to switch to a nameless representation of bound variables using De Bruijn indices. In this representation, there are no variable names; each bound variable is replaced by a natural number indicating how many levels of lambda binders outward its binder lies. For example, the expression $\lambda x.~\lambda y.~x$ is written as $\lambda.~\lambda.~1$ in the nameless style.

Since there are no more variable names, the substitution operation needs to take care of re-indexing the variables. See this lecture note for a detailed definition of how to do so.

One of the main advantages of De Bruijn indices is that $\alpha$-equivalence is no longer a concern at all. This makes the representation amenable to implementing proof assistants or other dependently typed languages. Yet another use case is in mechanized verification of programming language semantics in proof assistants like Coq or Agda. In fact, this blog post is inspired by a problem I encountered when trying to formalize the semantics of a language for distributed systems in Coq.

Nameless recursive functions

Say we want to formalize the operational semantics of a Turing-complete programming language in a proof assistant. It would be nice to be able to combine De Bruijn indices with the explicit recursion we defined above. It turns out this is not so hard to do! The grammar is just ordinary nameless-style lambda calculus:

$$e\;\coloneqq\;x\;|\;\lambda.~e\;|\;e_1~e_2$$
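
In an OCaml mechanization, this grammar might be represented by a datatype like the following (a sketch; the constructor names Lam and App are assumptions, while Var matches the constructor used in the snippet further down):

type expr =
  | Var of int           (* a De Bruijn index *)
  | Lam of expr          (* a nameless lambda; as explained below, it binds two indices *)
  | App of expr * expr   (* application *)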

Although syntactically unchanged, the $\lambda$ case is semantically different. Instead of binding one index, it now binds two: index $0$ corresponds to the argument and index $1$ to the lambda itself. This is manifested in the new CBV semantics:

$$\frac{e_1 \step e_1'}{e_1~e_2 \step e_1'~e_2}\quad \frac{e_2 \step e_2'}{(\lambda.~e_1)~e_2 \step (\lambda.~e_1)~e_2'}\quad \frac{\;}{(\lambda.~e)~v \step \substt{e}{0}{v}{1}{(\lambda.~e)}}$$

A quick note on notation: $[m\mapsto v_1,~m+1\mapsto v_2]$ denotes the infinite parallel substitution that maps the index $m$ to $v_1$, the index $m+1$ to $v_2$, and leaves all other indices unchanged. Given an infinite parallel substitution $\sigma$, which is just a function from natural numbers to expressions, the “double up” operator $\upup$ applied to $\sigma$ yields a new substitution that maps the two newly bound indices to themselves and shifts the remaining entries of $\sigma$ up by two. Concretely, this can be written as a fairly short inductive definition in languages like OCaml or Gallina:

(* Leave the two indices bound by the enclosing lambda untouched,
   and defer to sigma for everything else, shifted down by two. *)
let upup (sigma : int -> expr) : int -> expr = function
  | 0 -> Var 0
  | 1 -> Var 1
  | n -> sigma (n - 2)

We call it “double up” because it is nothing more than applying the conventional $\Uparrow$ twice.
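
The finite-looking map $[0\mapsto v,~1\mapsto (\lambda.~e)]$ from the $\beta$ rule above is itself just such a function from indices to expressions; a possible sketch, with the hypothetical name beta_subst:

(* The substitution [0 ↦ v, 1 ↦ fn]: index 0 becomes the argument,
   index 1 becomes the lambda itself, everything else is unchanged. *)
let beta_subst (v : expr) (fn : expr) : int -> expr = function
  | 0 -> v
  | 1 -> fn
  | n -> Var n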

Now, applying substitutions to expressions is very straightforward. Since this is CBV and not full reduction, the expressions in the range of a substitution are closed, so we don’t need to shift open indices when going under a binder:

$$\begin{align*} n[\sigma] &\triangleq \sigma(n)\\ (\lambda.~e)[\sigma] &\triangleq \lambda.~(e[\upup\sigma])\\ (e_1~e_2)[\sigma] &\triangleq (e_1[\sigma])~(e_2[\sigma]) \end{align*}$$
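
These three equations translate almost verbatim to OCaml, reusing upup from above (a sketch; the function name subst is an assumption):

(* Apply a parallel substitution sigma to an expression. *)
let rec subst (sigma : int -> expr) (e : expr) : expr =
  match e with
  | Var n -> sigma n                            (* n[sigma] = sigma(n) *)
  | Lam body -> Lam (subst (upup sigma) body)   (* double up under the binder *)
  | App (e1, e2) -> App (subst sigma e1, subst sigma e2)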

For example, the OCaml recursive function let rec f x = f x can be expressed in our calculus as $\lambda.~1~0$. As a sanity check, let’s try to $\beta$-reduce the expression applied to some argument:

$$\begin{align*} &(\lambda.~1~0)~v \\ \step~ &\substt{(1~0)}{0}{v}{1}{(\lambda.~1~0)}\\ =~ &(\substt{1}{0}{v}{1}{(\lambda.~1~0)})~(\substt{0}{0}{v}{1}{(\lambda.~1~0)})\\ =~ &(\lambda.~1~0)~v \end{align*}$$
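
The same sanity check can be replayed against the OCaml sketches above, using the hypothetical beta_subst and subst from earlier:

(* λ. 1 0, i.e. the nameless version of let rec f x = f x *)
let loop : expr = Lam (App (Var 1, Var 0))

(* One beta step of (λ. 1 0) v: substitute [0 ↦ v, 1 ↦ λ. 1 0] into the body. *)
let step_loop (v : expr) : expr = subst (beta_subst v loop) (App (Var 1, Var 0))

(* step_loop v = App (loop, v), i.e. (λ. 1 0) v again, as expected. *)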

Note that since introducing a binder increments the indices by $2$, the function $\lambda f.~\lambda x.~(f~x)$ (which is not recursive) in regular lambda calculus is represented as $\lambda.~\lambda.~(2~0)$ in our formulation.
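
In the assumed OCaml representation, that term would read as follows (the name apply is hypothetical):

(* λ. λ. (2 0): index 0 is the inner argument x, index 1 would be the inner
   lambda itself, and index 2 reaches the outer argument f. *)
let apply : expr = Lam (Lam (App (Var 2, Var 0)))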