Split-and-Fold in FRI

Divide and Conquer for Efficient Proximity Testing

PUBLISHED ON NOV 5, 2024

Notation

Let $\mathbb F$ denote a finite field.

The Fast Reed-Solomon Interactive Oracle Proof of Proximity (FRI) allows a prover to convince a verifier that a given function $$f:D\subseteq\mathbb F\longrightarrow\mathbb F$$ is a low-degree polynomial function, or at least “close” (in the sense of proximity) to such a low-degree polynomial function. More formally and in the language of codes, the FRI protocol enables a prover to convince a verifier that $$ \Delta\left(f,\text{RS}[\mathbb F,D,\rho]\right)<\delta, $$ where $\text{RS}[\mathbb F,D,\rho]$ is a Reed-Solomon code with rate $\rho$ and $\Delta$ is a suitable distance measure.1 Essentially, FRI is a statistical test for the claim that a function $f$ is “close” to $\text{RS}[\mathbb F,D,\rho]$. How exactly the distance measure $\Delta$ is defined and what “close” (or “proximal”) means is not important for this article. For a more comprehensive treatment of Reed-Solomon codes, the FRI protocol, and a discussion of the concept of proximity, see the article on The FRI Protocol.

In this article, we focus on the main component of FRI, i.e., the split-and-fold procedure. In particular, we discuss how FRI reduces the problem of proving proximity for a function $f=f^{(0)}$ over $D=D^{(0)}$ to the problem of proving proximity for a function $f^{(1)}$ over $D^{(1)}$, where the size of $D^{(1)}$ is at most half2 the size of $D^{(0)}$.

The reduction approach in FRI strongly resembles the one applied by the Fast Fourier Transform (FFT), which iteratively halves the problem size. The efficiency of this divide-and-conquer approach, both in FRI as well as in the FFT, relies on a special “symmetry” property of the underlying domain $D$.3

Specification

From now on and for a more streamlined presentation, we assume that $D=D^{(0)}$ is the set of $n$-th roots of unity, where $n=2^k$, for $k\in\mathbb N$. Regarding the existence of such roots of unity, we state: if $q=|\mathbb F|$ then $$ \text{a primitive $2^k$-th root of unity exists}\Longleftrightarrow 2^{k}\mid q-1. $$ This is not a restriction. Although the split-and-fold approach in FRI also works in a more general setting for, e.g., $n=3^k$ or $n=5^k$ (or even mixed factors), in practice, using roots of unity with respect to a large power of $2$ suffices.

Hence, from now on let $D^{(0)}=\langle \omega \rangle = \{\omega,\omega^2,\ldots,\omega^n\}$ for $\omega$ a primitive $n$-th root of unity, where $n=2^k$.

Before we dive into the actual split-and-fold procedure in FRI, it is insightful to consider the correspondence between functions on $D$ and polynomial residue classes modulo $Z_D(X)\coloneqq \prod_{d\in D}(X-d)=X^n-1$. Expressed more formally, there is a ring isomorphism $$ \psi: \mathbb F[X]/ \langle X^n-1\rangle \longrightarrow \mathbb F^D\coloneqq\{f:D\rightarrow\mathbb F\mid f\text{ a function}\}. $$ This follows immediately from the (First) Isomorphism Theorem for rings. Indeed, there is a ring homomorphism $$ \begin{alignat*}{2} \psi':~ &\mathbb F[X] &&\longrightarrow \mathbb F^D\\ &p(X) &&\longmapsto (d\mapsto p(d)) \end{alignat*} $$ such that $\ker\psi'=\langle Z_D(X)\rangle$ and $\text{im}\,\psi' = \mathbb F^D$ (check!). As remarked above, $D$ is the set of $n=2^k$-th roots of unity and therefore $Z_D(X) = X^n-1$. As a consequence, by the Isomorphism Theorem, we get $$ \mathbb F[X]/\langle X^n-1\rangle \cong \mathbb F^D. $$ In other words, whenever we speak of a function $f:D\rightarrow\mathbb F$, we may treat it as a (univariate) polynomial $f(X)$ with maximum degree $n-1$, i.e., as an element in the quotient ring $\mathbb F[X]/ \langle X^n-1\rangle$.
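This correspondence is easy to check numerically. The following Python sketch works over the small toy field $\mathbb F_{17}$ (the field, $n$, and the coefficients are our own illustrative choices, not fixed by the protocol): it reduces a polynomial modulo $X^n-1$ and verifies that both representatives define the same function on $D$.

```python
# Toy check of F[X]/<X^n - 1> ≅ F^D over F_17 (17 - 1 = 16 = 2^4,
# so 2^k-th roots of unity exist for k <= 4).
p = 17
n = 8                                  # n | p - 1
omega = pow(3, (p - 1) // n, p)        # 3 generates F_17^*, so omega has order n
D = [pow(omega, i, p) for i in range(1, n + 1)]
assert len(set(D)) == n                # D really is the full group of n-th roots of unity

def evaluate(coeffs, x):
    """Evaluate a polynomial (coefficient list, lowest degree first) at x in F_p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

poly = [5, 1, 0, 2, 7, 0, 0, 0, 3, 4]  # degree 9 >= n
# Reduce modulo X^n - 1: on D we have X^n = 1, so coefficient i folds into slot i mod n.
reduced = [0] * n
for i, c in enumerate(poly):
    reduced[i % n] = (reduced[i % n] + c) % p

# Both representatives define the same function on D.
assert all(evaluate(poly, d) == evaluate(reduced, d) for d in D)
```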

Notation

We use the same letter $f$ to denote both a function $f:D\rightarrow\mathbb F$ and the unique polynomial $f(X)$ in $\mathbb F[X]/ \langle X^n-1\rangle$ corresponding to $f$.

An important parameter of the split-and-fold process in FRI is the splitting constant $\eta$. For $\eta=1$ the initial problem size is split into halves, for $\eta=2$ into fourths, for $\eta=3$ into eighths, and so on. The case $\eta=0$ is the trivial case, where no split (and no fold) happens.

Different splitting parameters

The main technique for folding (or compressing) several instances into a single instance is to take a random algebraic combination of the input instances. This technique is successfully used in (many) different contexts, not only in FRI but also in the Inner Product Argument (which is used in Halo and Halo2) or for efficient batch opening and verifying commitments in PLONK. See the article on Batching Techniques in PLONK for a more detailed discussion of the latter context.

The following illustration visualizes a single iteration of the split-and-fold process in FRI for $\eta=1$. Split-and-fold in FRI

The high-level approach for several iterations of the split-and-fold process is as follows. Let $f:D\longrightarrow \mathbb F$ be the function for which we want to establish a low-degree claim.

  1. Split the initial instance $f$ into $2^\eta$ smaller instances $f_0,\ldots,f_{2^\eta-1}$, each of relative size $\frac{1}{2^\eta}$ compared to the initial size.
  2. Fold (or compress) the $2^\eta$ smaller instances $f_0,\ldots,f_{2^\eta-1}$ into a single instance by taking a random algebraic combination $f_0 + \alpha \cdot f_1 + \cdots + \alpha^{2^\eta-1}\cdot f_{2^\eta-1}$ of the smaller instances, for $\alpha\in\mathbb F$ uniformly random.
  3. Repeat the process for the folded instance.

How often the split-and-fold process is repeated is a parameter of the FRI protocol. In general, a logarithmic number of iterations is enough (logarithmic in $|D|$, or, to be more precise, logarithmic in the low-degree claim $k$).
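The three steps above can be sketched in a few lines of Python. This is a toy run for $\eta=1$ over $\mathbb F_{17}$ with hard-coded challenges; in FRI the challenges would be sampled by the verifier, and all concrete values here are our own choices.

```python
# Toy run of iterated split-and-fold for eta = 1 over F_17.
p = 17

def fold(coeffs, alpha):
    """One eta = 1 round: f = f_0(X^2) + X*f_1(X^2) is folded into f_0 + alpha*f_1."""
    return [(a + alpha * b) % p for a, b in zip(coeffs[0::2], coeffs[1::2])]

f = [3, 1, 4, 1, 5, 9, 2, 6]   # degree-7 instance
for alpha in [7, 5, 11]:       # one challenge per round (fixed for the sketch)
    f = fold(f, alpha)

# After log2(8) = 3 rounds only a single coefficient (a constant) remains.
assert len(f) == 1
```

Each round halves the coefficient list, mirroring the halving of the domain discussed below.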

Let us elaborate on this process. For the sake of a better understanding, the following explanation will be more verbose for the case $\eta=1$. This case should provide enough intuition for the reasoning and proofs behind the general case.

Splitting

Conceptually, the first step in the split-and-fold technique is to split the function $f$ into several different parts. For $\eta=1$, we can write $f:D\longrightarrow\mathbb F$ as the following polynomial:4 $$ \begin{aligned} f(X) &= \sum_{i=0}^{n-1} a_i X^i = \sum_{i=0}^{n/2-1}a_{2i} X^{2i} + \sum_{i=0}^{n/2-1}a_{2i+1} X^{2i+1}\\ &= \underbrace{\sum_{i=0}^{n/2-1}a_{2i} X^{2i}}_{\eqqcolon f_0(X^2)} + X\cdot \underbrace{\sum_{i=0}^{n/2-1}a_{2i+1} X^{2i}}_{\eqqcolon f_1(X^2)}\\ &= f_0(X^2)+X\cdot f_1(X^2). \end{aligned} $$ This is the same approach as in the FFT. The polynomial $f_0(X^2)$ collects all terms in $f(X)$ with even exponents and $X\cdot f_1(X^2)$ collects all terms with odd exponents. We note, the polynomials $f_0(X),f_1(X)$ are polynomials of degree at most $\frac{n}{2}-1$. In other words, for $\eta=1$, we partition the set of all possible exponents $E=\{0,1,2,\ldots,n-1\}$ into two sets $$E_0^{(1)}\coloneqq \{0,2,\ldots,n-2\}$$ and $$E_1^{(1)}\coloneqq \{1,3,\ldots,n-1\}$$.
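In code, the $\eta=1$ split is just two slices: even-index coefficients form $f_0$, odd-index ones $f_1$, and $f(x)=f_0(x^2)+x\cdot f_1(x^2)$ holds at every point. A minimal sketch over the toy field $\mathbb F_{17}$ (field and coefficients are our own choices):

```python
# Even/odd coefficient split for eta = 1 over F_17.
p = 17

def evaluate(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

f = [3, 1, 4, 1, 5, 9, 2, 6]   # a_0, ..., a_7
f0 = f[0::2]                   # a_0, a_2, a_4, a_6  -> f_0(X)
f1 = f[1::2]                   # a_1, a_3, a_5, a_7  -> f_1(X)

# Verify f(x) = f_0(x^2) + x * f_1(x^2) at every nonzero point of F_17.
for x in range(1, p):
    x2 = pow(x, 2, p)
    assert evaluate(f, x) == (evaluate(f0, x2) + x * evaluate(f1, x2)) % p
```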

In a similar way, for $\eta=2$ we arrive at $$ \begin{aligned} f(X) = f_0(X^4) + X\cdot f_1(X^4) + X^2\cdot f_2(X^4) + X^3\cdot f_3(X^4), \end{aligned} $$ and for $\eta=3$ at $$ \begin{aligned} f(X) = f_0(X^8) + X\cdot f_1(X^8) + \cdots + X^7\cdot f_7(X^8). \end{aligned} $$ For general $\eta$ we get5 $$ \begin{alignat}{1} \label{eq:sum-decomposition} f(X) = \sum_{i=0}^{2^\eta-1} X^{i}\cdot f_i(X^{2^\eta}). \end{alignat} $$ As indicated above for the case $\eta=1$, a way to visualize this splitting is by having a look at how the set of exponents $E\subseteq\{0,1,\ldots,n-1\}$ in $f$ is partitioned. We partition $E$ as follows: $$ \begin{aligned} \eta=1:\; E &= E_0^{(1)}\cup E_1^{(1)}, \\ \eta=2:\; E &= E_0^{(2)}\cup E_1^{(2)}\cup E_2^{(2)} \cup E_3^{(2)}, \\ \eta=3:\; E &= E_0^{(3)}\cup E_1^{(3)}\cup \ldots \cup E_7^{(3)}, \\ \end{aligned} $$ where $$ E_i^{(\eta)}\coloneqq \{e\in E: e\equiv i\bmod 2^{\eta}\}. $$ In the general case, we get $$ E = \bigcup_{i=0}^{2^{\eta}-1} E_i^{(\eta)}. $$ We remark, by definition it holds $|E_i^{(\eta)}|\leq \frac{n}{2^{\eta}}$. These observations yield $$ \begin{alignat}{2} f(X) &= \sum_{i=0}^{n-1} a_i X^i = \sum_{j=0}^{2^{\eta}-1}\sum_{i\in E_j^{(\eta)}} a_i X^{i} = \sum_{j=0}^{2^{\eta}-1} X^{j}\cdot\sum_{i\in E_j^{(\eta)}} a_i X^{i-j}\\ \label{eq:sum-partitions} &= \sum_{j=0}^{2^{\eta}-1} X^{j}\cdot \underbrace{\left(\sum_{i=0}^{n/2^{\eta}-1} a_{i\cdot 2^{\eta}+j}\cdot X^{i\cdot 2^{\eta}}\right)}_{=f_j(X^{2^{\eta}})}. \end{alignat} $$

In particular, we point out the equality between \eqref{eq:sum-decomposition} and \eqref{eq:sum-partitions}.
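This decomposition can be checked numerically: coefficient $a_i$ belongs to $f_{i\bmod 2^{\eta}}$, which in Python is an extended slice. A minimal sketch over the toy field $\mathbb F_{17}$ with $\eta=2$ (all concrete values are our own choices):

```python
# General splitting f(X) = sum_j X^j * f_j(X^(2^eta)) over F_17 with eta = 2.
p = 17
eta = 2
m = 2 ** eta                          # number of parts

def evaluate(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

f = [3, 1, 4, 1, 5, 9, 2, 6]          # degree 7
parts = [f[j::m] for j in range(m)]   # f_0, ..., f_{2^eta - 1}

# Verify f(x) = sum_j x^j * f_j(x^(2^eta)) at every nonzero point.
for x in range(1, p):
    xm = pow(x, m, p)
    assert evaluate(f, x) == sum(pow(x, j, p) * evaluate(parts[j], xm) for j in range(m)) % p
```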

Example. Let’s say $\deg(f)=11$ and $f(X) = \sum_{i=0}^{11} a_i X^i$. For $\eta=0,1,2,3$, the figure below depicts how the set of exponents is partitioned. For each $\eta$, same colors (or, equivalently, same numbers) indicate that the corresponding exponents lie in the same partition set. Partitioning of exponents

For $\eta=1$, we just split exponents into even and odd ones, i.e., the sets of exponents that are congruent to $0$ and $1$ modulo $2$. For $\eta=2$, we have four partition sets, i.e., the sets of exponents congruent to $0$, $1$, $2$, and $3$ modulo $4$. Writing down the resulting partition for $\eta=1$ gives us $$ \begin{aligned} f(X) &=\sum_{i=0}^{11} a_i X^i \\ &= (a_0+a_2 X^2+\cdots+a_{10} X^{10}) + X\cdot (a_1+a_3X^2 + \cdots + a_{11} X^{10})\\ &= f_0(X^2) + X\cdot f_1(X^2). \end{aligned} $$ Here, $f_0(X)$ and $f_1(X)$ are given by $$ f_0(X) = a_0 + a_2 X + a_4 X^2 + a_6 X^3 + a_8 X^4 + a_{10} X^5, $$ and $$ f_1(X) = a_1 + a_3 X + a_5 X^2 + a_7 X^3 + a_9 X^4 + a_{11} X^5.\qex $$

Notation

Since the split-and-fold process is iterated several times, we also need to reflect this in our notation. We, therefore, write $f^{(i)}$, $f_j^{(i)}$, and $D^{(i)}$ for the input polynomial, the (intermediate) split instances, and the domain in iteration $i=0,1,\ldots$, respectively.

Folding

With our discussion of the splitting procedure at hand, we may proceed to the folding part. In the first iteration of the split-and-fold procedure, we fold (or compress) all the polynomials $f_0^{(0)},\ldots,f_{2^{\eta}-1}^{(0)}$ into a single new polynomial $f^{(1)}$ by taking a random algebraic combination $$ f^{(1)}(X)\coloneqq \sum_{i=0}^{2^{\eta}-1} \alpha_0^{i}\cdot f_i^{(0)}(X), $$ for $\alpha_0\in\mathbb F$ uniformly random. The true power of folding comes from the fact that the domain $D^{(1)}$ of $f^{(1)}$ has only a fraction of the size of $D^{(0)}$. To be more precise, we have $$ \frac{|D^{(1)}|}{|D^{(0)}|} = \frac{1}{2^{\eta}}. $$

To see this, consider the following. As stated above, $D=D^{(0)}$ is the set of $n$-th roots of unity, where $n=2^k$ for some $k\in\mathbb N$. Consequently, we have $$ D^{(0)} = \langle \omega\rangle, $$ for $\omega$ a primitive $n$-th root of unity. Let us assume we have $\eta=1$, i.e., we iteratively split the problem size in half. For $f^{(0)} = f$ this yields $$ f^{(0)}(X) = f_0^{(0)}(X^2) + X\cdot f_1^{(0)}(X^2). $$ Therefore, the “folding” of $f^{(0)}$ satisfies $$ f^{(1)}(X^2) = f_0^{(0)}(X^2) + \alpha_0\cdot f_1^{(0)}(X^2), $$ for $\alpha_0\in\mathbb F$ uniformly random. The salient point of folding is the fact that only squared values enter into $f_0$ and $f_1$. Because of the symmetry of the $n$-th roots of unity we have $$ D^{(1)} \coloneqq \{d^2 : d\in D^{(0)}\} = \{\omega^2,\omega^4,\omega^6,\ldots,\omega^n\} = \langle \omega^2\rangle. $$ This point is crucial. It means that the domain over which $f_0$ and $f_1$ operate is, in fact, $D^{(1)}$. And $D^{(1)}$ has only half the size of $D^{(0)}$. The same reasoning applies for general $\eta$. In the general case, we have $$ f^{(1)}\left(X^{2^{\eta}}\right) = f_0^{(0)}\left(X^{2^{\eta}}\right) + \alpha_0\cdot f_1^{(0)}\left(X^{2^{\eta}}\right) + \cdots + \alpha_0^{2^{\eta}-1}\cdot f_{2^{\eta}-1}^{(0)}\left(X^{2^{\eta}}\right), $$ and, thus, $$ D^{(1)} \coloneqq \{d^{2^{\eta}} : d\in D^{(0)}\} = \langle \omega^{2^{\eta}}\rangle. $$
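The domain halving is concrete enough to verify in a few lines. The following Python sketch (toy field $\mathbb F_{17}$, $n=8$; our own choices) squares every element of $D^{(0)}$ and confirms that the result is exactly $\langle\omega^2\rangle$, a set of half the size:

```python
# Squaring the 8th roots of unity in F_17 yields the 4th roots of unity.
p = 17
n = 8
omega = pow(3, (p - 1) // n, p)        # primitive n-th root of unity
D0 = {pow(omega, i, p) for i in range(n)}
D1 = {pow(d, 2, p) for d in D0}        # squares of all elements of D^(0)

assert len(D0) == n and len(D1) == n // 2
assert D1 == {pow(omega, 2 * i, p) for i in range(n // 2)}   # D^(1) = <omega^2>
```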

Several Iterations

So far we have described the split-and-fold process for a single iteration. In FRI, this process would be iterated several times. The iteration is straightforward: we apply the whole process again to $f^{(1)}$ and get $f^{(2)}$, then to $f^{(2)}$ and get $f^{(3)}$, and so forth.

Consistency Checks

What do we gain from the split-and-fold process? In essence, we reduce the problem of proving proximity for $f=f^{(0)}$ to the problem of proving proximity for $f^{(1)}$. Roughly, the logic is as follows:

  • if $f^{(1)}$ is close to $\text{RS}[\mathbb F,D^{(1)},\rho]$, and
  • if $f^{(1)}$ was “folded correctly” (i.e., $f^{(1)}$ was, indeed, constructed from $f^{(0)}$ as described above),
  • then with high likelihood also $f^{(0)}$ is close to $\text{RS}[\mathbb F,D^{(0)},\rho]$.

For a concrete analysis of how large the likelihood will be (i.e., how large the soundness error of above inference is) and a more detailed discussion of the complete FRI protocol, we refer to the write-up on The FRI Protocol. In the last iteration of the split-and-fold procedure, the prover provides the terminal function in full and the verifier constructs a Reed-Solomon codeword from this terminal function. Intuitively, a cheating prover can therefore be detected with high confidence by checking the consistency of the folding in each iteration.

The remainder of this article will focus on how we can check that $f^{(1)}$ has been “folded correctly”, i.e., that it holds $$ f^{(1)}(X) = \sum_{i=0}^{2^{\eta}-1} \alpha_0^{i}\cdot f_i^{(0)}(X). $$ We may call this a consistency check. The above polynomial identity can be checked probabilistically by evaluating it at (a few) random points. However, in FRI, the verifier is only given query access to the functions $f^{(0)}$ and $f^{(1)}$, but not to the functions $f_i^{(0)}$. Hence, we need a way to conduct the consistency check only in terms of the functions $f^{(0)}$ and $f^{(1)}$, without the intermediate functions $f_i^{(0)}$.

Let us first discuss how we achieve this for the case $\eta=1$. In this case, the consistency check is given by $$ f^{(1)}(X) = f_0^{(0)}(X)+\alpha_0\cdot f_1^{(0)}(X). $$ We define the bivariate polynomial $$ Q(X,Y)\coloneqq f_0^{(0)}(X)+Y\cdot f_1^{(0)}(X)\in \mathbb F[X,Y]. $$ Then, we have the equalities $$ \begin{alignat*}{3} Q(X^2,X) &= f_0^{(0)}(X^2) + X\cdot f_1^{(0)}(X^2) &&= f^{(0)}(X),\\ Q(X,\alpha_0) &= f_0^{(0)}(X) + \alpha_0\cdot f_1^{(0)}(X) &&= f^{(1)}(X),\\ Q(X^2,\alpha_0) &= f_0^{(0)}(X^2) + \alpha_0\cdot f_1^{(0)}(X^2) &&= f^{(1)}(X^2),\\ \end{alignat*} $$ where $\alpha_0$ is the same $\alpha_0$ that has been used in the construction of $f^{(1)}$. Let $\varphi$ be a primitive second root of unity, i.e., a primitive root of $X^2-1$. We define the rational Lagrange functions $L_0, L_1\in\mathbb F(X,Y)$ as follows: $$ \begin{aligned} L_0(X,Y)\coloneqq \frac{Y-\varphi X}{X-\varphi X},\\ L_1(X,Y)\coloneqq \frac{Y-X}{\varphi X- X}. \end{aligned} $$ For notational purposes, it will be convenient to work with the rational function $L\in\mathbb F(X,Y)$ given by $$ L(X,Y)\coloneqq f^{(0)}(X)\cdot L_0(X,Y) + f^{(0)}(\varphi X)\cdot L_1(X,Y). $$ Let us assume we can show that it holds6 $$ \begin{alignat*}{2} Q(X^{2},Y) &= L(X,Y)\\ &= \frac{f^{(0)}(X)+f^{(0)}(\varphi X)}{2} + Y\cdot\frac{f^{(0)}(X)+\varphi\cdot f^{(0)}(\varphi X)}{2X}. \end{alignat*} $$ Then, we substitute $Y=\alpha_0$ and easily derive $$ \begin{alignat}{2} f^{(1)}(X^2) &= Q(X^2,\alpha_0)\\ &= L(X,\alpha_0)\\ \label{eq:bivariate-identity} &= f^{(0)}(X)\cdot L_0(X,\alpha_0) + f^{(0)}(\varphi X)\cdot L_1(X,\alpha_0)\\ &= \frac{f^{(0)}(X)+f^{(0)}(\varphi X)}{2} + \alpha_0\cdot\frac{f^{(0)}(X)+\varphi\cdot f^{(0)}(\varphi X)}{2X}. \end{alignat} $$ The equality in \eqref{eq:bivariate-identity} only contains the functions $f^{(0)}$ and $f^{(1)}$.
Hence, our sought consistency check is $$ f^{(1)}(X^2) = f^{(0)}(X)\cdot L_0(X,\alpha_0) + f^{(0)}(\varphi X)\cdot L_1(X,\alpha_0). $$

To conduct the consistency check in the case $\eta=1$, we sample a uniformly random $x\in D^{(0)}$ and query $f^{(1)}$ at $x^{2}$, and $f^{(0)}$ at $x$ and $\varphi x$, where $\varphi$ is a primitive root of $X^2-1$ (i.e., $\varphi=-1$). Subsequently, we check if $$ f^{(1)}\left(x^{2}\right) \stackrel{?}{=} f^{(0)}(x)\cdot L_0(x,\alpha_0) + f^{(0)}(\varphi x)\cdot L_1(x,\alpha_0). $$
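This check is easy to run end-to-end in code. A minimal Python sketch over the toy field $\mathbb F_{17}$ (field, polynomial, and the fixed challenge $\alpha_0=7$ are our own choices; in FRI the challenge is random): it folds $f$ and then verifies the identity above for every $x$ in the domain.

```python
# eta = 1 consistency check over F_17.
p = 17
n = 8
omega = pow(3, (p - 1) // n, p)
D0 = [pow(omega, i, p) for i in range(n)]
phi = p - 1                            # primitive 2nd root of unity, phi = -1

def evaluate(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

def inv(a):
    return pow(a, p - 2, p)            # inverse via Fermat's little theorem

f = [3, 1, 4, 1, 5, 9, 2, 6]
alpha = 7                              # verifier challenge (fixed for the sketch)
folded = [(a + alpha * b) % p for a, b in zip(f[0::2], f[1::2])]   # f^(1) = f_0 + alpha*f_1

for x in D0:
    L0 = (alpha - phi * x) * inv((x - phi * x) % p) % p    # L_0(x, alpha)
    L1 = (alpha - x) * inv((phi * x - x) % p) % p          # L_1(x, alpha)
    lhs = evaluate(folded, pow(x, 2, p))
    rhs = (evaluate(f, x) * L0 + evaluate(f, phi * x % p) * L1) % p
    assert lhs == rhs
```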

For a general splitting parameter $\eta$ we need $\varphi$ to be a primitive $2^{\eta}$-th root of unity, i.e., a primitive root of $X^{2^{\eta}}-1$. In the general case, we split $f^{(0)}$ as $$ f^{(0)}(X) = \sum_{i=0}^{2^{\eta}-1} X^{i}\cdot f_i^{(0)}(X^{2^{\eta}}) . $$ We define $$ Q(X,Y)\coloneqq \sum_{i=0}^{2^{\eta}-1} Y^i\cdot f_i^{(0)}(X), $$ and $$ L(X,Y)\coloneqq \sum_{i=0}^{2^{\eta}-1} f^{(0)}(\varphi^{i} X)\cdot L_i(X,Y), $$ where $$ \begin{aligned} L_i(X,Y) \coloneqq \prod_{j=0,\ldots,2^{\eta}-1:\;j\neq i} \frac{Y-\varphi^j X }{\varphi^i X - \varphi^j X}. \end{aligned} $$ Then, we assume it holds $$ \begin{alignat}{2} \label{eq:general-bivariate-identity}Q(X^{2^{\eta}},Y) &\stackrel{}{=} L(X,Y)\\ \label{eq:general-lagrange}&= \sum_{i=0}^{2^{\eta}-1} f^{(0)}(\varphi^{i} X)\cdot L_i(X,Y).\\ \end{alignat} $$

It is interesting to note that the right side of \eqref{eq:general-lagrange} resembles Lagrange interpolation over the supporting points $$ (X,f^{(0)}(X)),(\varphi X,f^{(0)}(\varphi X)),\ldots,(\varphi^{2^{\eta}-1}X,f^{(0)}(\varphi^{2^{\eta}-1}X)). $$ Hence, to conduct the consistency check in the general case, we sample a uniformly random $x\in D^{(0)}$ and query $f^{(1)}$ at $x^{2^{\eta}}$, and $f^{(0)}$ at $x,\varphi x,\ldots, \varphi^{2^{\eta}-1} x$. Subsequently, we check if $$ f^{(1)}\left(x^{2^{\eta}}\right) \stackrel{?}{=} \sum_{i=0}^{2^{\eta}-1} f^{(0)}(\varphi^{i} x)\cdot L_i(x,\alpha_0). $$
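The general check can be sketched in the same way. The following Python toy example uses $\mathbb F_{17}$, $\eta=2$, and a fixed challenge (all our own choices): it folds $f$ with $\alpha_0$ and verifies $f^{(1)}(x^{2^{\eta}})=\sum_i f^{(0)}(\varphi^i x)\cdot L_i(x,\alpha_0)$ for every $x$ in the domain.

```python
# General consistency check over F_17 with eta = 2.
p = 17
eta = 2
m = 2 ** eta                           # 2^eta = 4
n = 8
omega = pow(3, (p - 1) // n, p)        # primitive 8th root of unity
phi = pow(3, (p - 1) // m, p)          # primitive 2^eta-th root of unity

def evaluate(coeffs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

def inv(a):
    return pow(a, p - 2, p)

f = [3, 1, 4, 1, 5, 9, 2, 6]
parts = [f[j::m] for j in range(m)]    # f_0, ..., f_{2^eta - 1}
alpha = 7                              # verifier challenge (fixed for the sketch)
folded = [sum(pow(alpha, j, p) * parts[j][i] for j in range(m)) % p
          for i in range(len(parts[0]))]           # f^(1) = sum_j alpha^j * f_j

for x in [pow(omega, i, p) for i in range(n)]:
    pts = [pow(phi, i, p) * x % p for i in range(m)]       # x, phi*x, ..., phi^(m-1)*x
    rhs = 0
    for i in range(m):
        Li = 1                                             # L_i(x, alpha)
        for j in range(m):
            if j != i:
                Li = Li * (alpha - pts[j]) * inv((pts[i] - pts[j]) % p) % p
        rhs = (rhs + evaluate(f, pts[i]) * Li) % p
    assert evaluate(folded, pow(x, m, p)) == rhs
```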

The only aspect which remains to be shown is a proof of $\eqref{eq:general-bivariate-identity}$. We carry it out here.

Claim. It holds $ Q(X^{2^{\eta}},Y) = L(X,Y). $

Proof. Let $\varphi\in\mathbb F$ be a primitive $2^{\eta}$-th root of unity. The polynomial $$ Q(X^{2^{\eta}},Y) = \sum_{i=0}^{2^{\eta}-1} Y^i\cdot f_i^{(0)}(X^{2^{\eta}})\in\mathbb F(X)[Y] $$ is a polynomial of degree $2^{\eta}-1$ in $Y$ over the rational function field $\mathbb F(X)$7 with the property $$ Q(X^{2^{\eta}},X) = f^{(0)}(X),\ldots, Q(X^{2^{\eta}},\varphi^{2^{\eta}-1} X) = f^{(0)}(\varphi^{2^{\eta}-1} X). $$ Furthermore, the polynomial $$ L(X,Y) = \sum_{i=0}^{2^{\eta}-1} f^{(0)}(\varphi^{i} X)\cdot L_i(X,Y)\in\mathbb F(X)[Y] $$ is also a polynomial in $Y$ of degree $2^{\eta}-1$ over $\mathbb F(X)$ with $$ L(X,X) = f^{(0)}(X),\ldots, L(X,\varphi^{2^{\eta}-1} X) = f^{(0)}(\varphi^{2^{\eta}-1} X). $$ We observe that both $Q(X^{2^{\eta}},Y)$ and $L(X,Y)$ are of degree $2^{\eta}-1$ in $Y$ and they agree on the set $\{ X,\varphi X,\ldots,\varphi^{2^{\eta}-1}X\}$ of $2^{\eta}$ distinct points in $\mathbb F(X)$. Hence $$ Q(X^{2^{\eta}},Y) = L(X,Y).\qed $$


  1. The used distance measure is the (relative) Hamming distance. For $f,g:D\subseteq\mathbb F\longrightarrow\mathbb F$, it is defined by $$ \Delta(f,g)=\frac{|\{x\in D: f(x)\neq g(x)\}|}{|D|}. $$ The distance of a function $f$ from the code $V\coloneqq\text{RS}[\mathbb F,D,\rho]$ is defined as $$ \Delta(f,V)\coloneqq\min_{v\in V} \Delta(f,v). $$ ↩︎

  2. More generally, we could also reduce the problem size to a third, a fourth, etc. ↩︎

  3. Intuitively, if $D=\{a_1,\ldots,a_n\}$, for $n\in\mathbb N$ even, this “symmetry” property can be described as $$ \forall i=1,\ldots,n/2: a_i = -a_{i+(n/2)}. $$ Indeed, the $n$-th roots of unity $\{\omega,\omega^2,\ldots,\omega^n=1\}$ satisfy this property. The $n$-th roots of unity are the roots of the polynomial $X^n-1$ in $\mathbb F$. They form a cyclic subgroup of the multiplicative group of $\mathbb F$ (i.e., the multiplicative group of all non-zero elements in $\mathbb F$). ↩︎

  4. We note, every function $f:D\longrightarrow\mathbb F$ can be uniquely represented by a polynomial of degree at most $|D|-1$. ↩︎

  5. Here, we overload notation by using the same symbol $f_i$ for several different values of $\eta$. However, for each $\eta$, the polynomial $f_i$ would be a different one. For full formal disambiguation, we would need to index the function $f_i$ also with the splitting parameter $\eta$, i.e., we would need to write $f_{i,\eta}$. The same holds true for $f^{(i)}$ and $D^{(i)}$ defined later in the article. For the sake of a simpler notation and presentation, we opt to be less precise and believe this causes no confusion. ↩︎

  6. The polynomial $f^{(0)}(X)\cdot L_0(X,Y) + f^{(0)}(\varphi X)\cdot L_1(X,Y)$ resembles the Lagrange interpolation polynomial over $\mathbb F(X)$ with points $(X,f^{(0)}(X))$ and $(\varphi X,f^{(0)}(\varphi X))$. Furthermore, the equality $$ \begin{alignat*}{2} Q(X^2,Y) &= \frac{f^{(0)}(X)+f^{(0)}(\varphi X)}{2} + Y\cdot\frac{f^{(0)}(X)+\varphi\cdot f^{(0)}(\varphi X)}{2X}\\ &= \frac{f^{(0)}(X)+f^{(0)}(-X)}{2} + Y\cdot\frac{f^{(0)}(X) - f^{(0)}(-X)}{2X} \end{alignat*} $$ is an additional way to carry out the consistency check. This additional way is often stated as an explanatory example of the consistency check for the special case $\eta=1$. While it can be generalized as well, the equality $$ Q(X^2,Y) = f^{(0)}(X)\cdot L_0(X,Y) + f^{(0)}(\varphi X)\cdot L_1(X,Y) $$ generalizes in a more straightforward manner. ↩︎

  7. $\mathbb F(X)$ is the field of rational functions over $\mathbb F$ in the variable $X$. ↩︎