Category Archives: Algebraic Geometry

Errata: Zariski – Samuel

$\newcommand{\aaa}{\mathfrak{a}} \newcommand{\mmm}{\mathfrak{m}} \newcommand{\qqq}{\mathfrak{q}} $

  1. Volume 2, Chapter VIII, Section 10 (Theory of Multiplicities), Proof of Theorem 22: in the case of zero-divisors in dimension 1, the setup is as follows:
    • $A$ is a local ring with maximal ideal $\mmm$ such that $A/\mmm$ is an infinite field,
    • $\qqq$ is an ideal of $A$ with $\sqrt{\qqq} = \mmm$,
    • $x \in A$ is outside of all isolated prime ideals of $0$,
    • $x$ is also superficial of order $1$ for $\qqq$, i.e. there is a nonnegative integer $c$ such that $(\qqq^n: Ax) \cap \qqq^c = \qqq^{n-1}$ for $n \gg 1$,
    • $\dim(A) = 1$,
    • $\mmm$ is an embedded prime ideal of $0$, i.e. all elements of $\mmm$ are zero divisors.
      With this, they define $\aaa := (0: Ax)$ and at some point claim that $x$ is a non-zero-divisor in $A/\aaa$. This is false, as seen from the following example: $A := k[[x, y]]/\langle x^2y, y^2 \rangle$, where $k$ is an infinite field, and $\qqq = \mmm = \langle x, y \rangle$. Then every element of $A$ can be represented as $f(x) + ay + bxy$ with $f \in k[[x]]$, $a, b \in k$. It is straightforward to check that $\langle x^2y, y^2 \rangle = \langle y \rangle \cap \langle x^2, y^2 \rangle$ induces a primary decomposition of the zero ideal in $A$, and that $x$ satisfies all the above properties. However, in this case $\aaa = Axy$, and $x$ is a zero-divisor in $A/\aaa$, since $xy = 0$ in $A/\aaa$. This gives the required counterexample. (The proof can be salvaged by defining $\aaa$ as $\bigcup_n (0: Ax^n)$.)
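
The computation behind this counterexample can be verified mechanically. The following Python snippet (an illustration, not part of the argument) models polynomial representatives of elements of $A = k[[x, y]]/\langle x^2y, y^2 \rangle$ and confirms that $xy$ is killed by $x$ while $y$ is not, so that $x$ becomes a zero-divisor modulo $\aaa = Axy$.

```python
# Elements of A are dicts mapping a monomial (i, j) ~ x^i y^j (with j in {0, 1})
# to its coefficient; the relations x^2 y = 0 and y^2 = 0 are applied on the fly.
# Working over k = Q, with finite polynomial representatives (enough for the check).

def mul(f, g):
    """Multiply two elements of A given as {(i, j): coeff} dicts."""
    h = {}
    for (i1, j1), c1 in f.items():
        for (i2, j2), c2 in g.items():
            i, j = i1 + i2, j1 + j2
            if j >= 2:             # relation y^2 = 0
                continue
            if j == 1 and i >= 2:  # relation x^2 y = 0
                continue
            h[(i, j)] = h.get((i, j), 0) + c1 * c2
    return {m: c for m, c in h.items() if c != 0}

x  = {(1, 0): 1}
y  = {(0, 1): 1}
xy = {(1, 1): 1}

# xy is killed by x, so xy lies in a = (0 : Ax) ...
assert mul(x, xy) == {}
# ... but y is not killed by x:
assert mul(x, y) == {(1, 1): 1}
# hence x * y = xy lies in a while y does not: x is a zero-divisor in A/a.
```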

BKK bound – an “elementary proof”

Some comments of Jonathan Korman made me realize that it would have been good to include in How Many Zeroes? an “elementary” proof of the Bernstein-Kushnirenko formula via Hilbert polynomials. This approach yields only a “weak” version of the bound, since it applies only to the case that the number of solutions is finite. Nevertheless, it is a rewarding approach for two reasons. First, it shows how useful it can be to interpret polynomials, or more generally regular functions on a variety, as linear sections after an appropriate embedding, so that the number of solutions of a system of polynomials can be interpreted as the degree of a variety. Second, it shows how the geometric concept of the degree of a variety can be interpreted in terms of its Hilbert polynomial, an extremely fruitful algebraic tool which by now occupies a central role in algebraic geometry.

The following posts describe this approach:

  1. The first post introduces the degree of a projective variety,
  2. the second describes the connection between the degree of a projective variety and its Hilbert polynomial, and
  3. the third proves the weak version of Kushnirenko’s formula using Hilbert polynomials and sketches the derivation of Bernstein’s and Bézout’s formulae.

Bernstein-Kushnirenko and Bézout’s theorems (weak version)

$\DeclareMathOperator{\conv}{conv} \newcommand{\dprime}{^{\prime\prime}} \DeclareMathOperator{\interior}{interior} \newcommand{\kk}{\mathbb{K}} \newcommand{\kstar}{\kk^*} \newcommand{\kstarn}{(\kk^*)^n} \newcommand{\kstarnn}[1]{(\kk^*)^{#1}} \DeclareMathOperator{\mv}{MV} \newcommand{\pp}{\mathbb{P}} \newcommand{\qq}{\mathbb{Q}} \newcommand{\rnonnegs}{\mathbb{R}^s_{\geq 0}} \newcommand{\rr}{\mathbb{R}} \newcommand{\scrA}{\mathcal{A}} \newcommand{\scrL}{\mathcal{L}} \newcommand{\scrP}{\mathcal{P}} \DeclareMathOperator{\supp}{supp} \DeclareMathOperator{\vol}{vol} \newcommand{\znonneg}{\mathbb{Z}_{\geq 0}} \newcommand{\znonnegs}{\mathbb{Z}^s_{\geq 0}} \newcommand{\zz}{\mathbb{Z}}$
In earlier posts we defined the degree of a projective variety (defined over an algebraically closed field $\kk$) and showed that it can be determined from its Hilbert polynomial. In this post we use these notions to

  1. obtain Kushnirenko’s formula for the number of solutions of $n$ generic polynomials on $(\kk \setminus \{0\})^n$,
  2. sketch a derivation of the formula in Bernstein’s theorem using Kushnirenko’s formula, and
  3. deduce the formula in Bézout’s theorem from Bernstein’s formula.

For the proof of Kushnirenko’s formula we use a beautiful argument of Khovanskii (A. G. Khovanskii, Newton Polyhedron, Hilbert Polynomial, and Sums of Finite Sets, Functional Analysis and Its Applications, vol 26, 1992). Bernstein’s formula then follows from multi-additivity properties of intersection numbers and mixed volumes. The formula from Bézout’s theorem is in turn a special case of Bernstein’s formula. Note that the statements we prove in this post are “weak” forms of these formulae in the sense that we only consider the case that the intersection is finite. In the “strong” form (as e.g. in Sections VII.3 and VIII.3 of How Many Zeroes?) these provide an upper bound for the number of isolated solutions, even if the solution set contains non-isolated points.

Following the “toric” tradition, we write $\kstar$ to denote the “torus” $\kk \setminus \{0\}$. Consider the $n$-dimensional torus $\kstarn$ with coordinates $(x_1, \ldots, x_n)$. Every $\alpha = (\alpha_1, \ldots, \alpha_n) \in \zz^n$ corresponds to a monomial $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ which we denote by $x^\alpha$. A regular function on $\kstarn$, i.e. a Laurent polynomial in $(x_1, \ldots, x_n)$, is a polynomial in $x_1, x_1^{-1}, \ldots, x_n, x_n^{-1}$, and can be expressed as a sum
\[f = \sum_\alpha c_\alpha x^\alpha\]
such that $c_\alpha \neq 0$ for at most finitely many $\alpha \in \zz^n$; the support of $f$, denoted $\supp(f)$, is the finite set of all $\alpha$ such that $c_\alpha \neq 0$. We say that $f$ is supported at $\scrA$ if $\supp(f) \subseteq \scrA$.

Theorem 1 (Kushnirenko). Given a finite subset $\scrA$ of $\zz^n$, the number of solutions on $\kstarn$ of $n$ generic Laurent polynomials supported at $\scrA$, counted with appropriate multiplicity, is $n!$ times the volume of the convex hull of $\scrA$.
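
Before the proof, here is a quick numerical illustration of the formula for $n = 2$ (the supports below are chosen only for this sanity check): for $\scrA = \{(0,0), (1,0), (0,1)\}$ the polynomials supported at $\scrA$ are affine-linear, and two generic lines meet at $2! \cdot \frac{1}{2} = 1$ point; for the unit square, two generic “bilinear” curves meet at $2! \cdot 1 = 2$ points.

```python
# Sanity check of Theorem 1 in the plane: the predicted count of solutions
# is 2! * area(conv(A)), computed here via convex hull + shoelace formula.
from math import factorial

def hull(points):
    """Andrew's monotone chain convex hull; vertices in ccw order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    return half(pts)[:-1] + half(list(reversed(pts)))[:-1]

def area(points):
    """Shoelace area of the convex hull of a finite planar point set."""
    v = hull(points)
    s = sum(v[i][0]*v[(i+1) % len(v)][1] - v[(i+1) % len(v)][0]*v[i][1]
            for i in range(len(v)))
    return abs(s) / 2

triangle = [(0, 0), (1, 0), (0, 1)]           # supports of linear polynomials
square   = [(0, 0), (1, 0), (0, 1), (1, 1)]   # supports of bilinear polynomials
assert factorial(2) * area(triangle) == 1     # two generic lines: 1 solution
assert factorial(2) * area(square) == 2       # two generic bilinear curves: 2
```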

Proof of Theorem 1: Step 1 – reduction to count of $|s\scrA|$

Let $\alpha_0, \ldots, \alpha_N$ be the elements of $\scrA$. Consider the map
\[\phi_\scrA: \kstarn \to \pp^N\]
which maps
\[x \mapsto [x^{\alpha_0}: \cdots : x^{\alpha_N}]\]
Let $X_\scrA$ be the closure of $\phi_\scrA(\kstarn)$ in $\pp^N$. A linear polynomial $\sum_i c_iz_i$, where $[z_0: \cdots :z_N]$ are homogeneous coordinates on $\pp^N$, restricts on the image of $\phi_\scrA$ to a Laurent polynomial $\sum_i c_ix^{\alpha_i}$ supported at $\scrA$. Since $\dim(X_\scrA) = \dim(\phi_\scrA(\kstarn)) \leq n$, a generic codimension $n$ linear subspace of $\pp^N$ either does not intersect $X_\scrA$ (when $\dim(X_\scrA) < n$), or, when $\dim(X_\scrA) = n$, intersects $X_\scrA$ at $\deg(X_\scrA)$ generic points on the image of $\phi_\scrA$, each with multiplicity $1$ (Theorem 4 of Degree of a projective variety). Since in the latter case, for a generic point $z$ on the image of $\phi_\scrA$ there are precisely $\deg(\phi_\scrA)$ points (counted with appropriate multiplicity) in $\phi_\scrA^{-1}(z)$, we have the following:

Observation 2. The number (counted with appropriate multiplicity) of solutions on $\kstarn$ of $n$ generic Laurent polynomials supported at $\scrA$ is either zero (when $\dim(\phi_\scrA(\kstarn)) < n$), or the degree of $X_\scrA$ times the degree of the map $\phi_\scrA$ (when $\dim(\phi_\scrA(\kstarn)) = n$).

Claim 3. Let $G$ be the subgroup of $\zz^n$ generated by $\scrA - \alpha_0 = \{\alpha_i - \alpha_0: i = 0, \ldots, N\}$. Then

  1. there is $r \geq 0$, called the rank of $G$, such that $G \cong \zz^r$,
  2. $\dim(\phi_\scrA(\kstarn)) = r$,
  3. if $r = n$, then $\deg(\phi_\scrA)$ is the index (as a subgroup) of $G$ in $\zz^n$.

Proof. In the coordinates $(z_1/z_0, \ldots, z_N/z_0)$ on $U_0 := \pp^N \setminus V(z_0)$, the map $\phi_\scrA$ reduces to the map $\kstarn \to \kstarnn{N}$ given by
\[x \mapsto (x^{\alpha_1 - \alpha_0}, \ldots, x^{\alpha_N - \alpha_0})\]
Now Claim 3 follows from Proposition VI.1 of How Many Zeroes? (the main ingredient in the proof is the observation that there is a basis $\beta_1, \ldots, \beta_n$ of $\zz^n$ such that $m_1\beta_1, \ldots, m_r\beta_r$ is a basis of $G$ for some $r \geq 0$ and positive integers $m_1, \ldots, m_r$).

In the case that $r < n$, it is clear that the volume of the convex hull $\conv(\scrA)$ of $\scrA$ is zero, and therefore Theorem 1 follows from Observation 2 and Claim 3. So from now on, assume $r = n$. Due to Observation 2 and Claim 3, in order to prove Theorem 1 it remains to compute $\deg(X_\scrA)$. Let $R = \kk[z_0, \ldots, z_N]/I(X_\scrA)$ be the homogeneous coordinate ring of $X_\scrA$. The theory of Hilbert polynomials (Theorems 1 and 2 of the preceding post) implies the following:

Observation 4. For $s \gg 1$, the dimension (as a vector space over $\kk$) of the degree $s$ graded component $R_s$ of $R$ is given by a polynomial $P_\scrA(s)$ of the form
\[P_\scrA(s) = \deg(X_\scrA)\frac{s^n}{n!} + \text{terms of degree} < n\]
In particular,
\[\deg(X_\scrA) = n! \lim_{s \to \infty} \frac{\dim_\kk(R_s)}{s^n}\]

Claim 5. For each $s \geq 0$,
\[\dim_\kk(R_s) = |s\scrA|\]
where $s\scrA$ is the set of all possible $s$-fold sums $\alpha_{i_1} + \cdots + \alpha_{i_s}$ with $\alpha_{i_j} \in \scrA$.

Proof. Indeed, for each $s \geq 0$, the substitutions $z_i = x^{\alpha_i}$, $i = 0, \ldots, N$, induce a $\kk$-linear map
\[\sigma:\kk[z_0, \ldots, z_N]_s \to \scrL_{s\scrA}\]
where $\kk[z_0, \ldots, z_N]_s$ is the set of homogeneous polynomials of degree $s$ in $(z_0, \ldots, z_N)$ and $\scrL_{s\scrA}$ is the set of Laurent polynomials supported at $s\scrA$. It is straightforward to check that $\sigma$ is surjective and $\ker(\sigma)$ is the set $I(X_\scrA)_s$ of homogeneous polynomials of degree $s$ in $I(X_\scrA)$. Consequently, as a vector space over $\kk$, $R_s$ is isomorphic to $\scrL_{s\scrA}$. In particular,
\[\dim_\kk(R_s) = \dim_\kk(\scrL_{s\scrA}) =|s\scrA|\]
which completes the proof of Claim 5.
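
For a concrete instance of Claim 5 (chosen here only as an illustration), take $n = 1$ and $\scrA = \{0, 1, 2, 3\} \subseteq \zz$: then $X_\scrA$ is the twisted cubic in $\pp^3$, and $|s\scrA| = 3s + 1$, a polynomial in $s$ whose leading coefficient $3$ equals $\deg(X_\scrA) = 1! \cdot \vol(\conv(\scrA))$, exactly as Observation 4 predicts.

```python
# dim R_s = |sA| for A = {0, 1, 2, 3}: here sA = {0, 1, ..., 3s},
# so the Hilbert function of the twisted cubic is 3s + 1.
A = {0, 1, 2, 3}

def s_fold_sum(A, s):
    """The set sA of all s-fold sums of elements of A (s >= 1)."""
    out = A
    for _ in range(s - 1):
        out = {a + b for a in out for b in A}
    return out

for s in range(1, 8):
    assert len(s_fold_sum(A, s)) == 3 * s + 1  # leading coefficient 3 = deg(X_A)
```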

Proof of Theorem 1: Step 2 – $|s\scrA|$ in terms of $\vol(\conv(\scrA))$

The following lemma is a special case of Proposition 3.7 from Discriminants, Resultants, and Multidimensional Determinants by Gelfand, Kapranov and Zelevinsky.

Lemma 6. Let $\scrP$ be the closure of a bounded open subset of $\rr^n$ such that $\scrP$ contains the origin and the boundary of $\scrP$ is piecewise linear. Then
\[\lim_{s \to \infty} \frac{|s\scrP \cap \zz^n|}{s^n} = \vol(\scrP)\]
where by $s\scrP$ we denote the “homothetic” set $\{s\alpha : \alpha \in \scrP\}$.

Proof. We proceed by induction on $n$. If $n = 1$, $\scrP$ is a closed interval, say of length $l$, and the lemma follows from the observation that
\[sl - 1 \leq |s\scrP \cap \zz| \leq sl + 1\]
In the general case we associate to each point $\alpha$ in $\zz^n$ a “lattice cube” which is the translation $\alpha + I^n$ of the standard cube
\[I^n := \{(x_1, \ldots, x_n) \in \rr^n: 0 \leq x_i \leq 1\ \text{for each}\ i\}\]
Let $N(s)$ be the number of lattice cubes contained in $s\scrP$. Since the diameter of $I^n$ is $\sqrt{n}$, it follows that
\[0 \leq |s\scrP \cap \zz^n| - N(s) \leq a(s)\]
where $a(s)$ is the number of elements in $\zz^n \cap s\scrP$ with distance $\leq \sqrt{n}$ from the boundary of $s\scrP$. Similarly, since the volume of $I^n$ is 1,
\[0 \leq \vol(s\scrP) - N(s) \leq b(s)\]
where $b(s)$ is the volume of the set of all points in $s\scrP$ with distance $\leq \sqrt{n}$ from the boundary of $s\scrP$. It follows from the induction hypothesis and basic properties of volume that both $a(s)$ and $b(s)$ are bounded by a constant times $s^{n-1}$ as $s \to \infty$. Consequently,
\[
\lim_{s \to \infty} \frac{|s\scrP \cap \zz^n|}{s^n}
=\lim_{s \to \infty} \frac{\vol(s\scrP)}{s^n}
=\lim_{s \to \infty} \frac{s^n\vol(\scrP)}{s^n}
= \vol(\scrP)
\]
which completes the proof of the lemma.
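
A quick numeric instance of Lemma 6 (for illustration only): with $n = 2$ and $\scrP$ the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$, we have $|s\scrP \cap \zz^2| = (s+1)(s+2)/2$, and the ratio $|s\scrP \cap \zz^2|/s^2$ indeed approaches $\vol(\scrP) = 1/2$.

```python
# sP is the triangle x + y <= s, x, y >= 0; count its lattice points
# by brute force and compare with the predicted limit vol(P) = 1/2.
def lattice_points_in_sP(s):
    return sum(1 for x in range(s + 1) for y in range(s + 1) if x + y <= s)

for s in (1, 10, 100):
    assert lattice_points_in_sP(s) == (s + 1) * (s + 2) // 2

# the ratio approaches vol(P) = 0.5:
assert abs(lattice_points_in_sP(1000) / 1000**2 - 0.5) < 0.002
```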

We formally state a corollary of the observation that $a(s)$ in the proof of Lemma 6 is bounded by a constant times $s^{n-1}$ as $s \to \infty$.

Proposition 7. Let $\scrP$ and $s\scrP$ be as in Lemma 6. For a fixed $r \in \rr$, let $\scrP'_{s, r}$ be the subset of $s\scrP$ consisting of all points with distance $\leq r$ from the boundary of $s\scrP$. Then
\[\lim_{s \to \infty} \frac{|\scrP'_{s, r} \cap \zz^n|}{s^n} = 0\]

The main idea of the proof of the following lemma is the same as that of the standard proof of Gordan’s lemma.

Lemma 8. Pick $\alpha_1, \ldots, \alpha_q \in \zz^n$ such that the subgroup generated by the $\alpha_i$ coincides with $\zz^n$. Then there is a constant $C$ with the following property: for every collection of real numbers $\lambda_1, \ldots, \lambda_q$ such that $\sum_i \lambda_i \alpha_i \in \zz^n$, there are integers $l_1, \ldots, l_q$ such that
\[
\begin{align*}
&\sum_i l_i\alpha_i = \sum_i \lambda_i \alpha_i,\ \text{and} \\
&\sum_i |\lambda_i - l_i| < C
\end{align*}
\]

Proof. Let
\[\scrP := \{\sum_i \lambda_i\alpha_i: 0 \leq \lambda_i \leq 1\} \subseteq \rr^n\]
Since $\scrP$ is compact and $\zz^n$ is discrete, the intersection $\scrP \cap \zz^n$ is finite. Since the $\alpha_i$ generate $\zz^n$, for each $\alpha \in \scrP \cap \zz^n$, we can fix a representation $\alpha = \sum_i l_i(\alpha)\alpha_i$ with $l_i(\alpha) \in \zz$. Now given $\beta \in \zz^n$ such that $\beta = \sum_i \lambda_i \alpha_i$ with $\lambda_i \in \rr$,
\[\alpha := \sum_i (\lambda_i - \lfloor \lambda_i \rfloor) \alpha_i \in \scrP \cap \zz^n\]
where $\lfloor \lambda_i \rfloor$ is the greatest integer $\leq \lambda_i$. Consequently,
\[
\beta
= \sum_{i=1}^q \lfloor \lambda_i \rfloor \alpha_i + \alpha
= \sum_{i=1}^q (\lfloor \lambda_i \rfloor + l_i(\alpha)) \alpha_i
\]
with
\[
\sum_{i=1}^q |\lambda_i - \lfloor \lambda_i \rfloor - l_i(\alpha)|
\leq \sum_{i=1}^q |\lambda_i - \lfloor \lambda_i \rfloor| + \sum_{i=1}^q|l_i(\alpha)|
\leq q + Q
\]
where $Q$ is the maximum of $\sum_i|l_i(\alpha)|$ over all $\alpha \in \scrP \cap \zz^n$. Therefore the lemma holds with $C := q + Q$.
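
The proof above is constructive and can be run directly. The following Python sketch (the generators and the fixed representations are choices made only for this illustration) carries it out in $\zz^2$ for $\alpha_1 = (1,0)$, $\alpha_2 = (0,1)$, $\alpha_3 = (1,1)$: every lattice point $(x, y)$ of $\scrP$ admits the fixed representation $x\alpha_1 + y\alpha_2$, so $Q \leq 4$ and the lemma holds with $C = q + Q = 7$.

```python
# Lemma 8, run constructively: floor the real coefficients, then replace the
# fractional residue (a lattice point of P) by its fixed integer representation.
import math
import random

alphas = [(1, 0), (0, 1), (1, 1)]   # generate Z^2; q = 3
C = 7                                # q + Q with Q <= 4 for this choice

def integer_rep(lam):
    """Round a real combination sum_i lam_i alpha_i in Z^2 to an integer one."""
    floors = [math.floor(t) for t in lam]
    frac = [t - f for t, f in zip(lam, floors)]
    # alpha = sum of fractional parts times alpha_i, a lattice point of P:
    x = frac[0] + frac[2]
    y = frac[1] + frac[2]
    assert abs(x - round(x)) < 1e-9 and abs(y - round(y)) < 1e-9
    # fixed representation of alpha = (x, y): x*alpha_1 + y*alpha_2 + 0*alpha_3
    return [floors[0] + round(x), floors[1] + round(y), floors[2]]

random.seed(0)
for _ in range(100):
    # build lam with sum_i lam_i alpha_i in Z^2: start from random reals and
    # adjust lam_1, lam_2 so that both coordinates become integers
    lam = [random.uniform(0, 10) for _ in range(3)]
    lam[0] = round(lam[0] + lam[2]) - lam[2]
    lam[1] = round(lam[1] + lam[2]) - lam[2]
    ls = integer_rep(lam)
    beta = (round(lam[0] + lam[2]), round(lam[1] + lam[2]))
    assert (ls[0] + ls[2], ls[1] + ls[2]) == beta    # same lattice point
    assert sum(abs(t - l) for t, l in zip(lam, ls)) < C
```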

Theorem 9. Let $\scrA = \{\alpha_1, \ldots, \alpha_q\}$ be a finite subset of $\zz^n$ and $\scrP := \conv(\scrA) \subseteq \rr^n$. Assume the subgroup of $\zz^n$ generated by all the pairwise differences $\alpha_i – \alpha_j$ coincides with $\zz^n$. Then there is $r \in \rr$ with the following property: for each positive integer $s$, every element of $s\scrP \cap \zz^n$ whose distance from the boundary of $s\scrP$ is greater than or equal to $r$ belongs to the set $s\scrA$ of all $s$-fold sums $\alpha_{i_1} + \cdots + \alpha_{i_s}$ of elements from $\scrA$.

Proof. Fix a nonnegative integer $s$. We want to understand which points of $s\scrP \cap \zz^n$ can be represented as elements of $s\scrA$. Replacing each $\alpha_i$ by $\alpha_i - \alpha_1$ if necessary, we may assume that one of the $\alpha_i$ is the origin. Then
\[s\scrA = \{\sum_i l_i\alpha_i:\ l_i \in \znonneg,\ \sum_i l_i \leq s\}\]
Moreover, the $\alpha_i$ satisfy the hypothesis of Lemma 8. Let $C$ be the constant prescribed by Lemma 8. If $\alpha \in s\scrP \cap \zz^n$ then
\[\alpha = \sum_i \lambda_i \alpha_i\]
with nonnegative real numbers $\lambda_i$ such that $\sum_i \lambda_i \leq s$. By Lemma 8, there are $l_i \in \zz$ such that
\[\alpha = \sum_i l_i \alpha_i,\ \text{and}\ \sum_i|\lambda_i - l_i| < C\]
Consequently, if the $\lambda_i$ further satisfy the following inequalities:
\[\lambda_i \geq C,\ \text{and}\ \sum_i \lambda_i \leq s - C\]
then the condition $\sum_i|\lambda_i - l_i| < C$ implies that
\[l_i > 0,\ \text{and}\ \sum_i l_i < s\]
i.e. $\alpha \in s\scrA$. In short, we proved the following:

Observation 10. Let $\scrP_{s,C}$ be the subset of $s\scrP$ consisting of all $\beta = \sum_i \lambda_i \alpha_i$ such that each $\lambda_i \geq C$ and $\sum_i \lambda_i \leq s – C$. Then
\[\scrP_{s,C} \cap \zz^n \subseteq s\scrA\]

Now we go back to the proof of Theorem 9. Consider the map $\pi: \rr^q \to \rr^n$ that maps the $i$-th standard unit element in $\rr^q$ to $\alpha_i$ for each $i$. Let
\[
\Delta := \{(\lambda_1, \ldots, \lambda_q): \lambda_i \geq 0\ \text{for each}\ i,\ \sum_i \lambda_i \leq 1\}
\subseteq \rr^q
\]
be the standard simplex in $\rr^q$. Let $\Delta_{s,C}$ be the sub-simplex of $s\Delta$ defined by the inequalities $\lambda_i \geq C$ for each $i$, and $\sum_i \lambda_i \leq s – C$. Then
\[\pi(s\Delta) = s\scrP,\ \text{and}\ \pi(\Delta_{s, C}) = \scrP_{s,C}\]
Note that every point in $s\Delta$ whose distance from the boundary of $s\Delta$ is $\geq C\sqrt{q}$ belongs to $\Delta_{s,C}$. Khovanskii then argues (in the proof of Theorem 3 of Newton Polyhedron, Hilbert Polynomial, and Sums of Finite Sets) that this implies that “every point in $s\scrP$ whose distance from the boundary of $s\scrP$ is $\geq C\sqrt{q}||\pi||$ is in $\scrP_{s, C}$”, where
\[||\pi|| := \sup_{v \neq 0} \frac{||\pi(v)||}{||v||}\]
is the “norm” of $\pi$, which equals the largest singular value of $\pi$. However, it is not clear how this conclusion follows from the preceding statement: indeed, it is possible to have $\alpha' \in s\Delta$ close to the boundary of $s\Delta$ while $\pi(\alpha')$ has large distance from the boundary of $s\scrP$; e.g. if one of the $\alpha_i$ is in the interior of $\scrP$, then $se_i$ is in the boundary of $s\Delta$, but $\pi(se_i) = s\alpha_i$ is far away from the boundary of $s\scrP$ when $s$ is large. We are going to close this gap by a more precise modification of the argument.

Indeed, consider the collection $S$ of all sets $\scrA' \subseteq \scrA$ such that $\conv(\scrA')$ intersects the interior of $\scrP$. For each such $\scrA'$, we fix one element
\[ \alpha_{\scrA'} \in \conv(\scrA') \cap \interior(\scrP) \]
and two representations of $\alpha_{\scrA'}$ as convex combinations of $\scrA'$ and $\scrA$:
\[\alpha_{\scrA'}
= \sum_{\alpha_i \in \scrA'} \epsilon'_{\scrA', i} \alpha_i
= \sum_{\alpha_i \in \scrA} \epsilon_{\scrA', i} \alpha_i
\]
where
\[
\sum_{\alpha_i \in \scrA} \epsilon_{\scrA', i}
= \sum_{\alpha_i \in \scrA'} \epsilon'_{\scrA', i}
= 1
\]
and in addition, all $\epsilon_{\scrA', i}$ and $\epsilon'_{\scrA', j}$ are positive. For each $\scrA'$, we also fix a large constant $C_{\scrA'}$, which we will specify precisely a bit further down. Consider a nonnegative $\rr$-linear combination
\[\alpha= \sum_i \lambda_i \alpha_i \in s\scrP\]
such that
\[\sum_i \lambda_i \leq s\]
Assume that the $\lambda_i$ do not satisfy the defining conditions of $\scrP_{s,C}$ from Observation 10, i.e. either some $\lambda_i < C$ or $\sum_i \lambda_i > s - C$. We want to find conditions under which it is nonetheless possible to find a different representation of $\alpha$ which does satisfy the defining conditions of $\scrP_{s,C}$.

Claim 11. Pick $\scrA' \in S$. If $C_{\scrA'}$ is chosen sufficiently large (independently of $s$), then the following holds: if $\lambda_i > \epsilon'_{\scrA',i}C_{\scrA'}$ for each $\alpha_i \in \scrA'$, then there is a representation
\[\alpha = \sum_i \lambda'_i \alpha_i\]
with $\lambda'_i \geq C$ for each $i$, and $\sum_i \lambda'_i \leq s - C$.

Proof. Assume $\lambda_i > \epsilon'_{\scrA',i}C_{\scrA'}$ for each $\alpha_i \in \scrA'$. Then
\[
\begin{align*}
\alpha
&= \sum_{\alpha_i \in \scrA'} (\lambda_i - \epsilon'_{\scrA',i}C_{\scrA'}) \alpha_i
+ C_{\scrA'} \sum_{\alpha_i \in \scrA'} \epsilon'_{\scrA', i} \alpha_i
+ \sum_{\alpha_i \not\in \scrA'}\lambda_i \alpha_i \\
&= \sum_{\alpha_i \in \scrA'} (\lambda_i - \epsilon'_{\scrA',i}C_{\scrA'}) \alpha_i
+ C_{\scrA'}\sum_{\alpha_i \in \scrA} \epsilon_{\scrA', i} \alpha_i
+ \sum_{\alpha_i \not\in \scrA'}\lambda_i \alpha_i \\
&= \sum_{\alpha_i \in \scrA'} ((\lambda_i - \epsilon'_{\scrA',i}C_{\scrA'}) + \epsilon_{\scrA', i}C_{\scrA'}) \alpha_i
+ \sum_{\alpha_i \not\in \scrA'}(\lambda_i + \epsilon_{\scrA',i}C_{\scrA'}) \alpha_i
\end{align*}
\]
If $C_{\scrA'}$ is chosen such that
\[\epsilon_{\scrA', i}C_{\scrA'} \geq 2C\ \text{for each}\ \alpha_i \in \scrA\]
then in the final representation of $\alpha$ above, each coefficient is $\geq 2C \geq C$, so to ensure that $\alpha \in \scrP_{s, C}$ it only remains to bound the sum of the coefficients by $s – C$. However, that is easy: first note that during the above change of representation for $\alpha$, the sum of the coefficients is unchanged, since we replace a convex combination (where the sum of coefficients is $1$) by another convex combination. Denote the coefficients of the final representation of $\alpha$ above by $\tilde \lambda_i$. Since we originally had $\sum_i \lambda_i \leq s$, after the change of representation we have
\[\sum_i \tilde \lambda_i \leq s,\ \text{and}\ \tilde \lambda_i \geq 2C\ \text{for each}\ i\]
Now recall that there is $i_0$ such that $\alpha_{i_0}$ is the origin. Consequently we can simply reduce $\tilde \lambda_{i_0}$ to $\tilde \lambda_{i_0} - C$ to satisfy all defining conditions of $\scrP_{s, C}$. This completes the proof of Claim 11.

Now we are ready to prove Theorem 9. Replacing $\scrA$ by $\scrA - \alpha_i$ for some $i$ if necessary, we may assume that the origin is a vertex of $\scrP$. Due to Observation 10, it suffices to show that there is $r$ independent of $s$ such that every element of $s\scrP \setminus \scrP_{s, C}$ has distance smaller than $r$ from the boundary of $s\scrP$. Pick $\alpha \in s\scrP \setminus \scrP_{s, C}$ and a representation
\[\alpha = \sum_i \lambda_i \alpha_i\]
such that each $\lambda_i$ is nonnegative and $\sum_i \lambda_i \leq s$. Claim 11 implies that for each subset $\scrA'$ of $\scrA$ such that $\conv(\scrA')$ intersects the interior of $\scrP$, there is at least one $\alpha_i \in \scrA'$ with
\[\lambda_i \leq \epsilon'_{\scrA', i}C_{\scrA'}\]
Consequently, if
\[C' := \max_{\scrA', i} \epsilon'_{\scrA', i} C_{\scrA'} \]
then there are two possible cases:

Case 1: $\lambda_i < C'$ for each $i$. Since by our assumption the origin is a vertex of $s\scrP$, it follows that the distance of $\alpha$ from the boundary is bounded by
\[r_1 := \max\{||\sum_i \epsilon_i\alpha_i || : 0 \leq \epsilon_i \leq C'\}\]

Case 2: there is $i$ such that $\lambda_i \geq C'$. In that case Claim 11 implies that all $\alpha_i$ such that $\lambda_i \geq C'$ are contained in a proper face $\scrP'$ of $\scrP$. We consider two subcases:

Case 2.1: $\scrP'$ contains the origin. In this case $s'\scrP' \subseteq s\scrP'$ whenever $0 \leq s' \leq s$, and consequently,
\[\alpha' := \sum_{\alpha_i \in \scrP'} \lambda_i \alpha_i \in s\scrP'\]
In particular, $\alpha'$ is on the boundary of $s\scrP$ and
\[||\alpha - \alpha'|| = || \sum_{\alpha_i \not\in \scrP'}\lambda_i\alpha_i|| \leq r_1 \]

Case 2.2: $\scrP'$ does not contain the origin. Let $\scrA^* \subseteq \scrP'$ be the set of all $\alpha_i$ such that $\lambda_i \geq C'$. We are going to show that
\[\sum_{\alpha_i \in \scrA^*} \lambda_i \geq s - qC'\]
(recall that $q := |\scrA|$). Indeed, otherwise we would have
\[
\sum_i \lambda_i
= \sum_{\alpha_i \in \scrA^*} \lambda_i + \sum_{\alpha_i \not\in \scrA^*} \lambda_i
< s - qC' + (q-1)C'
= s - C'
\]
Consequently, if $\alpha_{i_0} $ is the origin, then we have another representation for $\alpha$ for which the sum of the coefficients is $\leq s$:
\[\alpha = \sum_{i \neq i_0} \lambda_i \alpha_i + (\lambda_{i_0} + C')\alpha_{i_0}\]
However, since $\alpha_{i_0} \not\in \scrP'$, it follows that $\scrA' := \scrA^* \cup \{\alpha_{i_0}\}$ is one of the sets considered in Claim 11, and therefore Claim 11 would imply that $\alpha \in \scrP_{s, C}$, contradicting the choice of $\alpha$. This proves that
\[\sum_{\alpha_i \in \scrA^*} \lambda_i \geq s – qC’\]
Pick $\alpha_{i'} \in \scrP'$ and let
\[
\alpha' := \sum_{\alpha_i \in \scrP'} \lambda_i \alpha_i
+ (s - \sum_{\alpha_i \in \scrP'} \lambda_i) \alpha_{i'}
\in s\scrP'
\]
Then $\alpha’$ is in the boundary of $s\scrP$ and
\[
\begin{align*}
||\alpha - \alpha'||
&= || \sum_{\alpha_i \not\in \scrP'}\lambda_i\alpha_i
- (s - \sum_{\alpha_i \in \scrP'} \lambda_i) \alpha_{i'} ||
\leq r
\end{align*}
\]
where
\[r := \max\{||\sum_i \epsilon_i\alpha_i || : -qC' \leq \epsilon_i \leq C'\}\]

Since $r \geq r_1$, we have shown that in all cases the distance of $\alpha$ from the boundary of $s\scrP$ is $\leq r$, as required to complete the proof of Theorem 9.

The following result describes the connection between $|s\scrA|$ and the volume of the convex hull of $\scrA$ that we were aiming for in this section:

Corollary 12. Let $\scrA = \{\alpha_1, \ldots, \alpha_q\}$ be a finite subset of $\zz^n$. Assume the subgroup $G$ of $\zz^n$ generated by all the pairwise differences $\alpha_i – \alpha_j$ is a subgroup of finite index $m$ in $\zz^n$. Then
\[
\lim_{s \to \infty} \frac{|s\scrA|}{s^n} = \frac{1}{m}\vol(\conv(\scrA))
\]
where $s\scrA$ is the set of all $s$-fold sums $\alpha_{i_1} + \cdots + \alpha_{i_s}$ of elements from $\scrA$.

Proof. In the case that $m = 1$, the result follows immediately from combining Lemma 6, Proposition 7 and Theorem 9. In the general case, there is a basis $\beta_1, \ldots, \beta_n$ of $\zz^n$ such that $m_1\beta_1, \ldots, m_n\beta_n$ is a basis of $G$ with
\[m = \prod_i m_i\]
Note that $\beta_1, \ldots, \beta_n$ is also a basis of $\rr^n$, so the map $\psi:\rr^n \to \rr^n$ that sends
\[\beta_i \mapsto \frac{1}{m_i}\beta_i\]
is well defined. Note that $\psi$ is an isomorphism over $\rr$ and maps $G$ onto $\zz^n$. The $m = 1$ case of the corollary then implies that
\[
\lim_{s \to \infty} \frac{|s\scrA|}{s^n} = \vol(\conv(\psi(\scrA)))
\]
Now note that the change of coordinates between the standard basis of $\rr^n$ and $\beta_1, \ldots, \beta_n$ is given by a matrix with determinant $\pm 1$ (since it is a matrix with integer entries and its inverse also has integer entries), and therefore it preserves volumes. However, after this change of coordinates $\psi$ corresponds to a diagonal matrix with diagonal $(1/m_1, \ldots, 1/m_n)$ and determinant $1/\prod_i m_i$. Consequently,
\[
\vol(\conv(\psi(\scrA))) = \vol(\conv(\scrA))/\prod_i m_i = \vol(\conv(\scrA))/m
\]
which completes the proof of Corollary 12.
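
Here is a computational check of Corollary 12 with a nontrivial index (the example is chosen only for illustration): for $\scrA = \{(0,0), (2,0), (0,2)\} \subseteq \zz^2$ the pairwise differences generate $G = (2\zz)^2$ of index $m = 4$, and $\vol(\conv(\scrA)) = 2$, so the corollary predicts $|s\scrA|/s^2 \to 1/2$.

```python
# |sA| for A = {(0,0), (2,0), (0,2)}: sA consists of the points 2(a, b) with
# a, b >= 0 and a + b <= s, so |sA| = (s+1)(s+2)/2 ~ s^2/2 = vol(conv(A))/m * s^2.
A = [(0, 0), (2, 0), (0, 2)]

def s_fold_sum(A, s):
    out = {(0, 0)}
    for _ in range(s):
        out = {(p[0] + a[0], p[1] + a[1]) for p in out for a in A}
    return out

for s in (1, 5, 20):
    assert len(s_fold_sum(A, s)) == (s + 1) * (s + 2) // 2

# the ratio approaches vol(conv(A)) / m = 2 / 4 = 1/2:
assert abs(len(s_fold_sum(A, 60)) / 60**2 - 0.5) < 0.03
```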

Proof of Theorem 1: Step 3 – conclusion

Given a finite subset $\scrA$ of $\zz^n$, let $M$ be the number of solutions on $\kstarn$ of $n$ generic Laurent polynomials supported at $\scrA$, counted with appropriate multiplicity. Let $G$ be the subgroup of $\zz^n$ generated by all the pairwise differences of elements of $\scrA$. As explained in the paragraph preceding Observation 4, if the rank of $G$ is smaller than $n$, then both $M$ and the volume of the convex hull of $\scrA$ are zero, so that Theorem 1 is true. On the other hand, if rank equals $n$, then the index of $G$ in $\zz^n$ is finite; call it $m$. Then Observation 2, Claim 3, Observation 4 and Claim 5 imply that
\[M = n!m\lim_{s \to \infty} \frac{|s\scrA|}{s^n}\]
Corollary 12 then implies that
\[M = n! \vol(\conv(\scrA))\]
which proves Theorem 1.

Bernstein and Bézout’s formulae

Given bounded convex sets $\scrP_1, \ldots, \scrP_s$ in $\rr^n$, the function $\rnonnegs \to \rr$ which maps
\[
(\lambda_1, \ldots, \lambda_s) \mapsto \vol(\lambda_1 \scrP_1 + \cdots + \lambda_s \scrP_s)
\]
is given by a (necessarily unique) homogeneous polynomial of degree $n$ in the $\lambda_i$ (see e.g. Theorem V.39 of How Many Zeroes? for the case that each $\scrP_i$ is a polytope). For each $\alpha = (\alpha_1, \ldots, \alpha_s) \in \znonnegs$ such that $\sum_i \alpha_i = n$, we write $\nu_\alpha(\scrP_1, \ldots, \scrP_s)$ for the coefficient of $\lambda^\alpha$ in the expression of $\vol(\lambda_1 \scrP_1 + \cdots + \lambda_s \scrP_s)$ as a polynomial in $\lambda_1, \ldots, \lambda_s$. It then follows from elementary properties of commutative semigroups that the map that sends
\[(\scrP_1, \ldots, \scrP_n) \mapsto \nu_{(1, \ldots, 1)}(\scrP_1, \ldots, \scrP_n)\]
is a symmetric multiadditive function on the set of $n$-tuples of bounded convex bodies in $\rr^n$ (see e.g. How Many Zeroes?, Lemma B.59); it is called the mixed volume of $\scrP_1, \ldots, \scrP_n$ and we denote it by $\mv(\scrP_1, \ldots, \scrP_n)$.

Lemma 13. Any symmetric multiadditive function $\rho: G^n \to \rr$, where $G$ is a commutative semigroup, is uniquely determined by its “diagonal part”, i.e. the map $G \to \rr$ that maps
\[g \in G \mapsto \rho(g, \ldots, g)\]
More precisely,
\[
\rho(g_1, \ldots, g_n)
= \frac{1}{n!}\sum_{I \subseteq \{1, \ldots, n\}}
(-1)^{n-|I|}
\rho(\sum_{i \in I}g_i, \ldots, \sum_{i \in I}g_i)
\]
In particular, $\mv$ is the unique symmetric multiadditive function on the set of $n$-tuples of bounded convex bodies (or the set of $n$-tuples of convex polytopes) such that $\mv(\scrP, \ldots, \scrP) = n! \vol(\scrP)$.

For a proof of Lemma 13, see e.g. this MathOverflow post or How Many Zeroes?, Corollary B.62.
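
For $n = 2$ the polarization formula of Lemma 13 reads $\mv(\scrP_1, \scrP_2) = \vol(\scrP_1 + \scrP_2) - \vol(\scrP_1) - \vol(\scrP_2)$, where $+$ denotes the Minkowski sum. The following Python sketch (a sanity check, not part of the argument) computes this for the Newton polytopes of a generic line and a generic conic and recovers the expected count of $2$ intersection points.

```python
# Mixed volume in the plane via polarization: MV(P, Q) = vol(P+Q) - vol(P) - vol(Q).
def hull(points):
    """Andrew's monotone chain convex hull; vertices in ccw order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    return half(pts)[:-1] + half(list(reversed(pts)))[:-1]

def vol(points):
    """Shoelace area of the convex hull of a finite planar point set."""
    v = hull(points)
    s = sum(v[i][0]*v[(i+1) % len(v)][1] - v[(i+1) % len(v)][0]*v[i][1]
            for i in range(len(v)))
    return abs(s) / 2

def minkowski(P, Q):
    """Vertex set whose convex hull is the Minkowski sum conv(P) + conv(Q)."""
    return [(p[0]+q[0], p[1]+q[1]) for p in P for q in Q]

def mixed_volume(P, Q):
    return vol(minkowski(P, Q)) - vol(P) - vol(Q)

P1 = [(0, 0), (1, 0), (0, 1)]   # Newton polytope of a generic line
P2 = [(0, 0), (2, 0), (0, 2)]   # Newton polytope of a generic conic
assert mixed_volume(P1, P2) == 2             # line meets conic in 2 points
assert mixed_volume(P1, P1) == 2 * vol(P1)   # diagonal: MV(P, P) = 2! vol(P)
```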

The following is the weak form of the theorem of David Bernstein that we were after.

Theorem 14 (Bernstein). Given finite subsets $\scrA_1, \ldots, \scrA_n$ of $\zz^n$, the number (counted with appropriate multiplicity) of solutions on $\kstarn$ of $f_1, \ldots, f_n$, where each $f_i$ is a generic Laurent polynomial supported at $\scrA_i$, is the mixed volume of the convex hulls of the $\scrA_i$.

Proof (sketch). Consider the map $\rho$ that sends $(\scrA_1, \ldots, \scrA_n)$ to the number (counted with appropriate multiplicity) of solutions on $\kstarn$ of generic Laurent polynomials supported at $\scrA_i$. Due to Theorem 1 and Lemma 13, it suffices to show that $\rho$ is symmetric and multiadditive. That can be proved easily once one proves the correct “non-degeneracy” condition that identifies when $f_1, \ldots, f_n$ are “sufficiently generic” (see How Many Zeroes?, Claim VII.28).

As a corollary of Theorem 14 we almost immediately obtain a weak form of Bézout’s theorem.

Theorem 15 (Bézout). The number of solutions (counted with appropriate multiplicity) on $\kk^n$ of generic polynomials $f_1, \ldots, f_n$ of degree respectively $d_1, \ldots, d_n$ is $\prod_i d_i$.

Proof (sketch). Every polynomial $f_i$ of degree $d_i$ in $n$ variables is supported at the simplex $\scrP_i \subseteq \rr^n$ with vertices at the origin and at $d_ie_j$, $j = 1, \ldots, n$ (where the $e_j$ are the standard unit vectors along the axes of $\rr^n$). If the $f_i$ are generic, then all solutions of $f_1, \ldots, f_n$ on $\kk^n$ actually belong to $\kstarn$. Consequently, Theorem 15 follows from Theorem 14 if we can show that the mixed volume of $\scrP_1, \ldots, \scrP_n$ is $\prod_i d_i$. However, that follows from an easy computation (see How Many Zeroes?, Claim VIII.2.1).
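
The “easy computation” can also be checked mechanically: $\vol(\lambda_1\scrP_1 + \cdots + \lambda_n\scrP_n) = (\sum_i \lambda_i d_i)^n/n!$, and the coefficient of $\lambda_1 \cdots \lambda_n$ can be extracted by the finite-difference (polarization) formula of Lemma 13. The snippet below (an illustration only) does this and recovers the Bézout number $\prod_i d_i$.

```python
# Extract the coefficient of l_1 * ... * l_n in vol(l_1 P_1 + ... + l_n P_n)
# = (sum l_i d_i)^n / n! by summing f over 0/1 vectors with alternating signs.
from itertools import combinations
from math import factorial

def bezout_mixed_volume(ds):
    n = len(ds)
    def f(indicator):                # volume of the sum of the chosen P_i
        return sum(ds[i] for i in indicator) ** n / factorial(n)
    return round(sum((-1) ** (n - len(I)) * f(I)
                     for k in range(n + 1)
                     for I in combinations(range(n), k)))

assert bezout_mixed_volume([2, 3]) == 6          # conic and cubic curve
assert bezout_mixed_volume([1, 2, 3]) == 6       # plane, quadric, cubic surface
assert bezout_mixed_volume([2, 2, 2, 2]) == 16
```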

Degree of a variety via Hilbert polynomial

\(\newcommand{\dprime}{^{\prime\prime}} \newcommand{\kk}{\mathbb{K}} \newcommand{\pp}{\mathbb{P}} \newcommand{\qq}{\mathbb{Q}} \newcommand{\rr}{\mathbb{R}} \newcommand{\zz}{\mathbb{Z}}\)
As presented in the preceding post, the degree of a subvariety \(X\) of a projective space \(\pp^n\) is the number of points of intersection of $X$ and a “generic” \((n - m)\)-dimensional linear subspace of $\pp^n$, where $m:= \dim(X)$. In this post we show that the degree of a variety can be computed in terms of the Hilbert Polynomial of its homogeneous coordinate ring. The presentation follows Mumford’s Algebraic Geometry I: Complex Projective Varieties, Section 6C.

The basis of this theory, as in many parts of algebraic geometry, is a beautiful result of Hilbert.

Theorem 1. Let \(M\) be a finitely generated graded module over \(R_n := \kk[x_0, \ldots, x_n]\), i.e. \(M\) is an \(R_n\)-module and there is a direct sum decomposition \(M = \bigoplus_{k \geq k_0} M_k\) such that for every homogeneous polynomial \(f\) in \(R_n\), \[fM_k \subseteq M_{k + \deg(f)}\]
Then there is a polynomial \(P_M(t) \in \qq[t]\) of degree at most \(n\) such that \(\dim(M_k) = P_M(k)\) for all sufficiently large \(k\).

$P_M$ as in Theorem 1 is called the Hilbert polynomial of $M$. If $M = \kk[x_0, \ldots, x_n]/I(X)$, where $X$ is a subvariety of $\pp^n$ and $I(X)$ is the homogeneous ideal of $X$ (so that $M$ is the homogeneous coordinate ring of $X$), then $P_M$ is called the Hilbert polynomial of $X$.

Proof of Theorem 1. Proceed by induction on \(n\). If \(n = -1\), then \(R_n = \kk\), and \(M\), being a finitely generated module over \(\kk\), must be a finite dimensional vector space over \(\kk\). Consequently, \(M_k = 0\) for all sufficiently large \(k\), and the theorem holds with \(P_M \equiv 0\). In the general case, consider the map \(M \to M\) given by \(m \mapsto x_nm\). The kernel of the map is
\[M' := \{m \in M: x_nm = 0\}\]
and the cokernel is
\[M\dprime := M/x_nM\]
Both \(M’\) and \(M\dprime\) are finitely generated graded \(R_n\)-modules in which multiplication by \(x_n\) is \(0\). Consequently, these are finitely generated graded \(R_{n-1}\)-modules and by the induction hypothesis, there are polynomials \(P’\) and \(P\dprime\) of degree at most \(n-1\) such that \(\dim(M’_k) = P'(k)\) and \(\dim(M\dprime_k) = P\dprime(k)\) for \(k \gg 0\). Now for all \(k\) the multiplication by \(x_n\) maps \(M_k\) to \(M_{k+1}\) and induces an exact sequence of vector spaces over \(\kk\):
\[0 \to M’_k \to M_k \to M_{k+1} \to M\dprime_{k+1} \to 0\]
Consequently,
\[
\begin{align*}
\dim(M_{k+1}) - \dim(M_k)
&= \dim(M\dprime_{k+1}) - \dim(M'_k) \\
&= P\dprime(k+1) - P'(k)
\end{align*}
\]
where the first equality holds for all \(k\) and the second equality holds for \(k \gg 0\). Now, given any \(f(t) \in \qq[t]\) of degree \(d\), there is a polynomial \(g(t)\) of degree \(d+1\) such that
\[g(t+1) - g(t) \equiv f(t)\]
(note that \((t+1)^{d+1} - t^{d+1} = (d+1)t^d +\) a polynomial of degree smaller than \(d\), and use induction on \(d\)). Consequently there is a polynomial \(Q(t) \in \qq[t]\) of degree at most \(n\) such that
\[Q(k+1) - Q(k) \equiv P\dprime(k+1) - P'(k)\]
Then for \(k \gg 0\),
\[\dim(M_{k+1}) - Q(k+1) = \dim(M_k) - Q(k)\]
In other words,
\[\dim(M_k) = Q(k) + \text{a constant}\]
for \(k \gg 0\), as required to complete the proof of Theorem 1.
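The antidifference step in the proof above (finding \(g\) with \(g(t+1) - g(t) = f(t)\)) can be made completely explicit in the binomial-coefficient basis, since \(\binom{t+1}{i+1} - \binom{t}{i+1} = \binom{t}{i}\). A small Python sketch (function names are mine):

```python
from fractions import Fraction
from math import factorial

def C(t, i):
    """The binomial coefficient C(t, i) = t(t-1)...(t-i+1)/i!, as a
    polynomial in t evaluated at any integer t (possibly negative)."""
    prod = Fraction(1)
    for j in range(i):
        prod *= t - j
    return prod / factorial(i)

def antidifference(f, d):
    """Given a polynomial function f of degree <= d, return g of degree d+1
    with g(t+1) - g(t) = f(t): expand f in the basis C(t, i) via finite
    differences (c_i = Delta^i f(0)), then shift each basis element using
    C(t+1, i+1) - C(t, i+1) = C(t, i)."""
    vals = [Fraction(f(j)) for j in range(d + 2)]
    coeffs = []
    for _ in range(d + 1):
        coeffs.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return lambda t: sum(c * C(t, i + 1) for i, c in enumerate(coeffs))

f = lambda t: 3 * t ** 2 - t + 7   # an arbitrary quadratic
g = antidifference(f, 2)
```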

Example 2. If \(M = \kk[x_0, \ldots, x_n]\) itself, then \(\dim(M_k)\) is the dimension of the vector space of homogeneous polynomials of degree \(k\), which is equal to the number of distinct monomials of degree \(k\) in \(n+1\) variables. The latter number is the same as the number of ways to place \(n\) identical dividers in \(k+n\) positions, which is
\[
\begin{align*}
\binom{k+n}{n}
&= \frac{(k+n)(k+n-1) \cdots (k+1)}{n!} \\
&= \frac{k^n}{n!} + \text{a polynomial of degree lower than}\ n
\end{align*}
\]
In particular, the Hilbert polynomial of \(M\) is
\[P_M(t) = \frac{(t+n)(t+n-1) \cdots (t+1)}{n!}\]
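The monomial count in Example 2 can be sanity-checked by brute force; a tiny Python sketch (the helper name is mine):

```python
from itertools import combinations_with_replacement
from math import comb

def num_monomials(n, k):
    """Number of monomials of degree k in the n+1 variables x_0,...,x_n,
    counted directly: a monomial is a size-k multiset of variables."""
    return sum(1 for _ in combinations_with_replacement(range(n + 1), k))
```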

Example 3. If \(M = \kk[x_0, \ldots, x_n]/\langle f \rangle\), where \(f\) is homogeneous of degree \(d\), then multiplication by \(f\) induces an exact sequence
\[0 \to \kk[x]_{k-d} \to \kk[x]_k \to M_k \to 0\]
where we write \(\kk[x]\) for \(\kk[x_0, \ldots, x_n]\). Then for all \(k\)
\[
\begin{align*}
\dim(M_k)
&= \dim(\kk[x]_k) - \dim(\kk[x]_{k-d}) \\
&= \binom{k+n}{n} - \binom{k+n-d}{n}
\end{align*}
\]
Write \(\binom{k+n}{n} = \sum_{j=0}^n a_jk^j\). In Example 2 above we have seen that \(a_n = 1/n!\). Consequently,
\[
\begin{align*}
P_M(t)
&= \sum_{j=0}^n a_jt^j - \sum_{j=0}^n a_j(t-d)^j \\
&= \frac{t^n - (t-d)^n}{n!} + \sum_{j=0}^{n-1} a_j(t^j - (t-d)^j) \\
&= \frac{dt^{n-1}}{(n-1)!} + \text{a polynomial of degree lower than}\ n - 1
\end{align*}
\]
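Both the exact formula and the leading term of Example 3 are easy to test numerically. A short Python sketch (names mine); the second helper checks the leading coefficient via the fact that the $(n-1)$-st finite difference of a degree-$(n-1)$ polynomial with leading coefficient $a$ is the constant $a \cdot (n-1)!$, which here equals $d$:

```python
from math import comb

def hilbert_fn_hypersurface(n, d, k):
    """dim M_k for M = K[x_0,...,x_n]/<f> with f homogeneous of degree d,
    from the exact sequence 0 -> K[x]_{k-d} -> K[x]_k -> M_k -> 0."""
    lower = comb(k + n - d, n) if k + n - d >= 0 else 0
    return comb(k + n, n) - lower

def finite_diff(f, order, k):
    """Iterated finite difference (Delta^order f)(k)."""
    if order == 0:
        return f(k)
    return finite_diff(f, order - 1, k + 1) - finite_diff(f, order - 1, k)
```

For instance, for $n = 2$, $d = 3$ the formula collapses to exactly $3k$ (matching the predicted leading term $dt^{n-1}/(n-1)!$), and for any $n, d$ the $(n-1)$-st finite difference recovers the integer $d$.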

Note that in both the above examples, the Hilbert polynomial equals \(\dim(M_k)\) for all \(k\), not only when \(k\) is large. The following is an example where the equality holds only when \(k\) is sufficiently large.

Example 4. Let \(M = \kk[x_0, \ldots, x_n]/I(S)\) where \(S = \{P_1, \ldots, P_s\}\) is a finite set of points in \(\pp^n\) and \(I(S)\) is the ideal of all homogeneous polynomials vanishing on all points of \(S\). After an appropriate (linear) change of coordinates if necessary, we may assume that \(x_0|_{P_i} \neq 0\) for each \(i\). Then for all \(k\) there is an exact sequence:
\[0 \to I(S)_k \to \kk[x]_k \xrightarrow{\phi_k} \kk^s\]
where we write \(\kk[x]\) for \(\kk[x_0, \ldots, x_n]\) and the map \(\phi_k\) is defined as follows:
\[\phi_k(f) := (\frac{f}{x_0^k}(P_1), \ldots, \frac{f}{x_0^k}(P_s))\]
Consequently, \(\dim(M_k)\) is the dimension of the image of \(\phi_k\). Now, assume
\[|\kk| \gg s\]
Under this condition we are going to show that for \(k \geq s-1\), there are homogeneous polynomials \(f_1, \ldots, f_s\) of degree \(k\) such that
\[
\begin{align*}
\frac{f_i}{x_0^k}(P_j) &= 0,\ i \neq j \\
\frac{f_i}{x_0^k}(P_j) &\neq 0,\ i = j
\end{align*}
\]
Indeed, consider the set $E_{i,j}$ of all linear polynomials $h$ such that $(h/x_0)(P_i) = (h/x_0)(P_j)$. Then $E_{i,j}$ is a proper codimension one linear subspace of the $\kk$-vector space $\kk[x]_1$ of linear homogeneous polynomials in $(x_0, \ldots, x_n)$. We claim that if $|\kk|$ is sufficiently large, then
\[\kk[x]_1 \neq \bigcup_{i,j}E_{i,j}\]
In fact we show more generally (following a MathOverflow answer) that an affine space $V$ over $\kk$ (recall that an affine space is a set which satisfies all the axioms of a vector space except those involving the “zero element”) cannot be covered by the union of finitely many proper affine hyperplanes $H_1, \ldots, H_r$ if $|\kk| > r$. Indeed, this is clearly true for $r = 1$. For $r > 1$, fix a vector $v$ not parallel to $H_1$ and consider the translates $H_1 + av$ with $a \in \kk$. Since $|\kk| > r$, there is $a \in \kk\setminus\{0\}$ such that $H_1 + av$ is not equal to any $H_j$ for $j > 1$. Consequently, $H_j \cap (H_1 + av)$, $j = 2, \ldots, r$, are either empty or proper affine hyperplanes of the affine space $H_1 + av$, and we are done by induction. Consequently, if $|\kk| > \binom{s}{2}$, we can choose
\[h \in \kk[x]_1 \setminus \bigcup_{i,j}E_{i,j}\]
Then \(a_i := (h/x_0)(P_i)\) are pairwise distinct, and for $k \geq s-1$ we can construct the \(f_i\) as follows:
\[f_i := x_0^{k-s + 1}\prod_{j \neq i} (h-a_jx_0)\]
In any event, this implies that \(\dim(M_k) = s\) for \(k \geq s-1\). In particular, \(P_M(t) = s\) is a constant polynomial.
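Example 4 can be checked by machine in the simplest case: for $s$ points $[1:a_i]$ in $\pp^1$ with pairwise distinct $a_i$, the image of $\phi_k$ is the column space of the Vandermonde-type matrix $(a_i^j)$, so $\dim(M_k) = \min(k+1, s)$, which equals $s$ only once $k \geq s-1$. A Python sketch over $\qq$ (code mine):

```python
from fractions import Fraction

def rank(mat):
    """Row rank of a matrix over Q, by Gaussian elimination."""
    m = [list(row) for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                t = m[i][c] / m[r][c]
                m[i] = [x - t * y for x, y in zip(m[i], m[r])]
        r += 1
        if r == len(m):
            break
    return r

# s = 4 points [1 : a_i] in P^1; the Hilbert function of S is the rank of
# the evaluation matrix (a_i^j) of the monomials x_0^(k-j) x_1^j, j = 0,...,k.
pts = [Fraction(a) for a in (0, 1, 2, 3)]
def hilbert_fn(k):
    return rank([[a ** j for a in pts] for j in range(k + 1)])
```

Here the Hilbert function takes the values $1, 2, 3, 4, 4, 4, \ldots$, agreeing with $P_M(t) = s = 4$ only from $k = s-1 = 3$ on.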

The examples above suggest the following pattern: the leading term of the Hilbert polynomial $P_X$ of a variety $X$ is of the form $dt^m/m!$ for some positive integer $d$, where $m = \dim(X)$; in particular, the degree of $P_X$ equals $\dim(X)$. As the following result shows, this is true in general, and moreover, the integer $d$ equals $\deg(X)$.

Theorem 2. Let $X$ be an irreducible variety in $\pp^n$ of dimension $m$ and degree $d$. Then its Hilbert polynomial has the form
\[P_X(t) = d\frac{t^m}{m!} + \text{terms of lower degree}\]
In particular, both dimension and degree of $X$ can be determined from the leading term of $P_X$.

Proof. Example 3 above and Proposition 1 in the preceding post show that the theorem holds when $X$ is a hypersurface. Now choose linear subspaces $H', H$ of $\pp^n$ of dimension respectively $n-m-2$ and $m+1$ such that $H' \cap X = H' \cap H = \emptyset$ and the projection $\pi : \pp^n\setminus H' \to H$ (which we defined in the preceding post) restricts to a birational map on $X$ (this is possible due to Lemma 2 of the preceding post). Since $\deg(X) = \deg(\pi(X))$ (Theorem 3 of the preceding post) and $\pi(X)$ is a hypersurface in $H$, it suffices to show that $P_X$ and $P_{\pi(X)}$ differ by terms of degree $< m$. Indeed, choose coordinates $[x_0: \cdots :x_n]$ on $\pp^n$ such that $H' = V(x_0, \ldots, x_{m+1})$ and
\[ \pi: [x_0: \cdots :x_n] \in \pp^n \setminus H' \mapsto [x_0: \cdots: x_{m+1}] \in \pp^{m+1}\]
where we identify $H$ with $\pp^{m+1}$. Let $R, R'$ respectively be the homogeneous coordinate rings of $X$ and $X' := \pi(X)$. Both $R$ and $R'$ are integral domains and
\[
\begin{align*}
R
&= \kk[x_0, \ldots, x_n]/I(X) \\
&\supseteq \kk[x_0, \ldots, x_{m+1}]/(I(X) \cap \kk[x_0, \ldots, x_{m+1}]) \\
&= \kk[x_0, \ldots, x_{m+1}]/I(X') \\
&= R'
\end{align*}
\]
Claim 2.1. $R$ is integral over $R’$.

Proof. Fix an arbitrary $i$, $m+1 < i \leq n$, and factor $\pi|_X$ as $\psi'_i \circ \psi_i$ where
\[
\begin{align*}
\psi_i &: [x_0: \cdots : x_n] \in X \mapsto [x_0: \cdots :x_{m+1}: x_i] \in \pp^{m+2}\\
\psi'_i &: [x_0: \cdots : x_{m+1}: x_i] \in \pp^{m+2} \setminus V(x_i) \mapsto [x_0: \cdots :x_{m+1}] \in \pp^{m+1}
\end{align*}
\]
Since $H' \cap X = \emptyset$, $[0: \cdots : 0: 1] \not\in \psi_i(X)$ and therefore
\[x_i \in \sqrt{\langle x_0, \ldots, x_{m+1} \rangle + I(\psi_i(X))}\]
Since $I(\psi_i(X)) = I(X) \cap \kk[x_0, \ldots, x_{m+1}, x_i]$ is homogeneous, it follows that there is $k \geq 1$ such that
\[(x_i)^k + \sum_{j=1}^k h_j(x_0, \ldots, x_{m+1})(x_i)^{k-j} \in I(X)\]
for certain homogeneous polynomials $h_j \in \kk[x_0, \ldots, x_{m+1}]$. In particular, the image of $x_i$ in $R = \kk[x_0, \ldots, x_n]/I(X)$ is integral over $R'$. Since $i$ was arbitrary, it follows that $R$ is integral over $R'$, as claimed.

Claim 2.2. $R$ and $R'$ have the same quotient field.

Proof. In general, if $Y$ is an irreducible subvariety of $\pp^n$ such that $Y \not\subseteq V(x_0)$, then there is a map from the homogeneous coordinate ring $S$ of $Y$ to the ring of polynomials in $x_0$ over its coordinate ring $\kk[Y]$ which maps a homogeneous element $f \in S$ of degree $s$ to $(f/x_0^s)\,x_0^s \in \kk[Y][x_0]$. This map is clearly injective, and since it also maps $x_0 \mapsto x_0$, it induces an isomorphism between the quotient field of $S$ and $\kk(Y)(x_0)$. Since $X$ and $\pi(X)$ are birational and the quotient field of $R$ contains that of $R'$, this implies that these quotient fields are equal: both map isomorphically to $\kk(X)(x_0)$ via the above construction.

Now we go back to the proof of Theorem 2. Since both $R$ and $R'$ are Noetherian, Claim 2.1 implies that $R$ is a finitely generated module over $R'$. Choose homogeneous generators $f_1, \ldots, f_r$ of $R$ as a module over $R'$. By Claim 2.2 each $f_i$ is of the form $g_i/h_i$ with $g_i, h_i \in R'$. Moreover, since both $R$ and $R'$ are graded, and the inclusion $R' \subseteq R$ preserves the grading, it follows that the $g_i, h_i$ can be chosen to be homogeneous. Let $h := \prod_i h_i$. Then
\[
hR \subseteq R'
\]
If $\deg(h) = s$, then it follows that
\[
hR_{k-s} \subseteq R'_k \subseteq R_k
\]
for each $k \geq s$. Consequently,
\[
P_X(k-s) \leq P_{\pi(X)}(k) \leq P_X(k)
\]
for all $k \geq s$. Since $P_X(k-s)$ and $P_X(k)$ have the same degree and the same leading coefficient, it follows that the leading terms of $P_X$ and $P_{\pi(X)}$ must be the same. This concludes the proof of Theorem 2.

Degree of a projective variety

$\DeclareMathOperator{\codim}{codim} \newcommand{\dprime}{^{\prime\prime}}\newcommand{\kk}{\mathbb{K}} \newcommand{\local}[2]{\mathcal{O}_{#2, #1}} \newcommand{\pp}{\mathbb{P}} \newcommand{\qq}{\mathbb{Q}} \newcommand{\rr}{\mathbb{R}} \DeclareMathOperator{\res}{Res} \newcommand{\scrL}{\mathcal{L}} \DeclareMathOperator{\sing}{Sing} \newcommand{\zz}{\mathbb{Z}}$
The degree of a subvariety $X$ of a projective space $\pp^n$ defined over an algebraically closed field $\kk$ is the number of points of intersection of $X$ and a “generic” linear subspace of $\pp^n$ of “complementary dimension”, i.e. of dimension equal to $n - \dim(X)$. The goal of this post is to show that this notion is well defined: this is the statement of Theorem 3 below. We also show (in Theorem 4 below) that the degree equals the number of points of such a generic intersection even when every point is counted with an appropriate “multiplicity” (in particular, all points of intersection of $X$ and a generic complementary dimensional linear subspace have intersection multiplicity one). First we clarify what “generic” means in this context.

Generic Linear Subspaces. Let $\scrL_m$ be the set of $m+1$ points in “general position” in $\pp^n$, i.e. the set of all $(P_0, \ldots, P_m) \in ((\pp^n)^*)^{m+1}$ such that every $(m+1) \times (m+1)$ minor of the $(n+1) \times (m+1)$ matrix whose columns are (the homogeneous coordinates of) $P_0, \ldots, P_m$ is nonzero. Then $\scrL_m$ is a nonempty Zariski open subset of $((\pp^n)^*)^{m+1}$ and every $(P_0, \ldots, P_m) \in \scrL_m$ determines a unique $m$-dimensional linear subspace of $\pp^n$, which is the span of the $P_i$. We say that a property holds for a “generic linear subspace of dimension $m$” if it holds for the subspaces corresponding to all elements of a nonempty Zariski open subset of $\scrL_m$.

It is clear that if $X$ is a linear subspace of $\pp^n$, then the degree of $X$ is well defined and equals $1$. The first non-trivial case is the following:

Proposition 1 (Degree of a hypersurface). Let $X = V(f)$ be a hypersurface in $\pp^n$ defined by an irreducible homogeneous polynomial $f$ of degree $d$. Then a generic line intersects $X$ at $d$ points. In particular, $\deg(X)$ is well defined and equals $\deg(f)$.

Proof. Consider a line $L := \{t_0P_0 + t_1P_1: (t_0,t_1) \in \kk^2\setminus\{(0,0)\}\}$. For generic $P_0, P_1$, $f|_L$ is a homogeneous polynomial of degree $d$ in $(t_0, t_1)$, and if we count each point of $L \cap V(f)$ with its multiplicity as a root of $f|_L$, then the number of points in $L \cap V(f)$ is $d$ (since $\kk$ is algebraically closed). However, we are going to show that $f|_L$ has no repeated root for generic choices of $P_0, P_1$. Indeed, choose homogeneous coordinates $[x'_0: \cdots: x'_n]$ on $\pp^n$ such that $L$ is parallel to the $x'_n$-axis. Then in the affine coordinates $u_i := x'_i/x'_0$, $i = 1, \ldots, n$, $L$ has a parametrization of the form
\[\{(a'_1, \ldots, a'_n) + t(0, \ldots, 0, 1): t \in \kk\}\]
Consider the dehomogenization $\tilde f := f/(x'_0)^d$ of $f$. For a generic choice of $L$ one can ensure that

  • $\deg(\tilde f|_L) = \deg(\tilde f) = \deg(f) = d$,
  • $\partial \tilde f/\partial u_n \not\equiv 0$

Since $\tilde f$ is irreducible, $\dim(X \cap V(\partial \tilde f/\partial u_n)) < \dim(X) = n -1$. Consequently, the complement in $\kk^{n-1}$ of the image of the projection of $X \cap V(\partial \tilde f/\partial u_n)$ onto the first $n-1$ coordinates contains a nonempty Zariski open subset of $\kk^{n-1}$. Therefore, for a generic choice of $L$ one can also ensure that

  • $(\partial \tilde f/\partial u_n)(a'_1, \ldots, a'_{n-1}, t) \neq 0$ for each $t \in \kk$ such that $\tilde f(a'_1, \ldots, a'_{n-1}, t) = 0$.

It follows that for generic $L$ there are precisely $d$ elements in $L \cap X$, as required to complete the proof of Proposition 1.
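For a concrete instance of Proposition 1, take the Fermat cubic. Restricting $f = u^3 + v^3 - 1$ (an affine model) to a parametrized line gives a univariate cubic, and the line meets the curve in $3$ distinct points exactly when the discriminant of that cubic is nonzero. A Python sketch (the particular line and the helper names are my own choices; the discriminant formula is the standard one for cubics):

```python
from fractions import Fraction

def cubic_disc(a, b, c, d):
    """Discriminant of a*t^3 + b*t^2 + c*t + d; it is nonzero precisely when
    the three roots (over the algebraic closure) are pairwise distinct."""
    return (18 * a * b * c * d - 4 * b ** 3 * d + b ** 2 * c ** 2
            - 4 * a * c ** 3 - 27 * a ** 2 * d ** 2)

def restrict_fermat_cubic(a0, b0, c):
    """Coefficients in t of f(a0 + t, b0 + c*t), where f(u, v) = u^3 + v^3 - 1
    is an affine model of the Fermat cubic."""
    A = 1 + c ** 3
    B = 3 * a0 + 3 * b0 * c ** 2
    C = 3 * a0 ** 2 + 3 * b0 ** 2 * c
    D = a0 ** 3 + b0 ** 3 - 1
    return (A, B, C, D)

# a sample line through (1, 2) with direction (1, 3)
coeffs = restrict_fermat_cubic(Fraction(1), Fraction(2), Fraction(3))
disc = cubic_disc(*coeffs)
```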

We reduce the general case to hypersurfaces using generic linear projections: given a hyperplane $H$ in $\pp^n$, every point $P \in \pp^n \setminus H$ defines a projection $\pp^n \setminus \{P\} \to H$ which maps a point $x$ to the (unique) point of intersection of $H$ and the line joining $x$ and $P$. Consequently, we can identify the set of “projections onto a hyperplane of $\pp^n$” with the set of pairs $(P, H)$ such that $H$ is a hyperplane of $\pp^n$ and $P \in \pp^n \setminus H$. In general, every pair $(H',H)$ of linear subspaces of $\pp^n$ such that

  • $H' \cap H = \emptyset$, and
  • $\dim(H') + \dim(H) = n-1$

defines a projection map $\pi_{H',H}: \pp^n \setminus H' \to H$ which maps $x \in \pp^n \setminus H'$ to the unique point where $H$ intersects the complementary dimensional linear subspace of $\pp^n$ spanned by $x$ and $H'$. Note that we can choose coordinates $[x_0: \cdots: x_n]$ on $\pp^n$ such that $H' = \{x_0 = \cdots = x_k=0\}$ and $H$ is the coordinate subspace spanned by $x_0, \ldots, x_k$; in that case
\[\pi_{H',H}: [x_0: \cdots :x_n] \in \pp^n \setminus H' \mapsto [x_0: \cdots : x_k]\]
We say that a property holds for a “generic linear projection onto a $k$-dimensional subspace” if it holds for the projection map corresponding to pairs of generic $n-k-1$ and $k$ dimensional subspaces in the sense defined above.

Linear projections can be used to make the standard result that every variety is birational to a hypersurface a bit more precise:

Lemma 2. Let $X$ be an irreducible subvariety of $\pp^n$ of dimension $m \leq n - 2$. Then a generic $m$-dimensional linear subspace $H_0$ satisfies the following properties:

  1. If $u_0, \ldots, u_m$ are linearly independent linear forms on $H_0$, then $u_1/u_0, \ldots, u_m/u_0$ restrict to algebraically independent elements in $\kk(X)$;
  2. if $H$ is a generic $m+1$ dimensional linear subspace of $\pp^n$ containing $H_0$ and $H'$ is a generic $n-m-2$ dimensional linear subspace of $\pp^n$ such that $H' \cap X = H' \cap H = \emptyset$, then
    • the image of $X$ under the linear projection $\pi_{H',H}: \pp^n \setminus H' \to H$ is birational to $X$, and
    • the degree $d$ of the polynomial defining $\pi_{H',H}(X)$ as a hypersurface of $H$ equals the degree of the field extension $[\kk(X): \kk(u_1/u_0, \ldots, u_m/u_0)]$. In particular, $d$ does not depend on $H$ or $H'$.

We sketch a proof of Lemma 2 at the end of this post (the arguments from the sketch show that for Lemma 2 it suffices to have $|\kk| = \infty$ in place of $\kk$ being algebraically closed). Now we use the lemma to show that the degree is well defined.

Indeed, fix generic $H_0 \subseteq H$ as in Lemma 2. For generic $H'$ as in Lemma 2, let $U_{H', H}$ be the Zariski open subset of $X$ such that

  • $\pi = \pi_{H', H}$ induces an isomorphism between $U_{H', H}$ and $\pi(U_{H', H})$, and
  • $U_{H',H} = \pi^{-1}(\pi(U_{H',H})) \cap X$.

Observation. Since $\pi(U_{H',H})$ is open in $\pi(X)$ and $\pi(X)$ is irreducible of dimension $m$, the complement $\pi(X) \setminus \pi(U_{H',H})$ has dimension less than $m$; consequently, if $L$ is a generic $n-m$ dimensional linear subspace of $\pp^n$ containing $H'$, then $\pi(L\setminus H')$ is a line which does not intersect $\pi(X) \setminus \pi(U_{H',H})$.

We now reformulate this observation in a more elaborate way: let $S$ be the subset of
\[\scrL := \scrL_{n-m} \times \scrL_{n-m-2} \times \scrL_{m+1}\]
consisting of all $(L, H',H)$ such that $L \supseteq H'$. Note that $S$ is a subvariety of $\scrL$, since the condition $L \supseteq H'$ can be defined by the vanishing of certain polynomials in the coordinates of $\scrL$. Let $S'$ be the subset of $S \times X$ consisting of all $(L,H',H,x)$ such that

  • $H' \cap X = H' \cap H = \emptyset$,
  • $x \in U_{H',H}$,
  • $\pi_{H',H}(L\setminus H') \cap \left(\pi_{H',H}(X) \setminus \pi_{H',H}(U_{H',H})\right) = \emptyset$

(Here is a bit of hand waving:) $S'$ can be defined as a subset of $S \times X$ by certain polynomial equalities and inequalities, and consequently, $S'$ is a constructible subset of $S \times X$.

Therefore, by Chevalley’s theorem on images of morphisms, the projection $S\dprime$ of $S'$ to $S$ is also constructible. Consider the projection
\[\psi: S \subseteq \scrL_{n-m} \times \scrL_{n-m-2} \times \scrL_{m+1}
\to \scrL_{n-m-2} \times \scrL_{m+1}\]
The “observation” above says that for all $(H', H)$ in a nonempty Zariski open subset of $\scrL_{n-m-2} \times \scrL_{m+1}$, a nonempty Zariski open subset of $\psi^{-1}(H',H)$ is included in $S\dprime$. Since $S\dprime$ is constructible, this is only possible if $S\dprime$ contains a nonempty Zariski open subset of $S$. Consequently, for a generic $n-m$ dimensional linear subspace $L$ of $\pp^n$ one can pick an $n-m-2$ dimensional linear subspace $H'$ of $L$ which satisfies the second assertion of Lemma 2, and in addition, $L \cap (X \setminus U_{H', H}) = \emptyset$. Then Proposition 1 and Lemma 2 together imply that the number of points in $L \cap X$ does not depend on $L$. Consequently, we have proved the following result in the case that $X$ is irreducible:

Theorem 3. The degree $\deg(X)$ of a “pure dimensional” subvariety $X$ of $\pp^n$ is well defined (recall that a variety is “pure dimensional” if each of its irreducible components has the same dimension). In addition, $\deg(X) = \deg(\pi(X))$ for a generic linear projection $\pi$ onto a linear subspace of dimension $\dim(X) + 1$ in $\pp^n$.

Proof. As noted above, we already proved the theorem in the case that $X$ is irreducible. In general, if $\dim(X) = m$, we can take an $n-m$ dimensional linear subspace of $\pp^n$ which is generic for each irreducible component of $X$ (since “generic” properties hold for nonempty Zariski open subsets of $\scrL_{n-m}$ and since $\scrL_{n-m}$ itself is irreducible, the intersection of finitely many generic conditions is also generic). This completes the proof of the theorem.

Recall that the intersection multiplicity of $n$ hypersurfaces $H_i := \{f_i = 0\}$, $i = 1, \ldots, n$, on an $n$ dimensional variety $X$ (where $f_i$ are regular functions on $X$) at a nonsingular point $x \in X$ is the dimension (as a vector space over $\kk$) of
\[\local{X}{x}/\langle f_1, \ldots, f_n \rangle\]

Theorem 4. Let $X$ be a subvariety of $\pp^n$ of pure dimension $m$ and degree $d$. Then for generic linear homogeneous polynomials $l_1, \ldots, l_m$, the hypersurfaces of $X$ defined by $l_i = 0$ intersect at precisely $d$ points, each of which is a nonsingular point of $X$, and the intersection multiplicity of $\{l_1|_X = 0\}, \ldots, \{l_m|_X = 0\}$ at each point of intersection is one.

Proof. If $l_1, \ldots, l_m$ are generic, we already know that $L := \{l_1 = \cdots = l_m = 0\}$ intersects $X$ in precisely $d$ points (Theorem 3), and since the set $\sing(X)$ of singular points of $X$ has dimension $\leq m - 1 < m$, in the generic case $L$ does not intersect $\sing(X)$. It remains to show that at each point of intersection, the intersection multiplicity is one in the generic case. However, the arguments from the proof of Proposition 1 show that this is true when $X$ is a hypersurface, and then the general case follows from choosing $(L,H',H)$ from the constructible subset $S\dprime$ defined above, since in that case the projection $X \to \pi_{H',H}(X)$ is an isomorphism near every point of $L \cap X$. This completes the proof of the theorem.

Sketch of a Proof of Lemma 2. Since $\dim(X) = m$, if $u_0, \ldots, u_m$ are generic (homogeneous) linear forms in $(x_0, \ldots, x_n)$, then the restrictions of $u_i/u_0$, $i = 1, \ldots, m$, to $X$ are algebraically independent over $\kk$. Consequently, a generic $m$-dimensional linear subspace $H_0$ of $\pp^n$ satisfies assertion 1 of Lemma 2. For the second assertion, choose homogeneous coordinates $[x_0: \cdots :x_n]$ on $\pp^n$ such that $X \not\subseteq V(x_0)$. Write $x'_i := x_i/x_0$, $i = 1, \ldots, n$. Arguments from standard proofs of the primitive element theorem and Schmidt’s “separable extension theorem” (see e.g. How Many Zeroes?, Theorem B.35 and Corollary B.37) show that

  1. There are linear combinations $v_i := \sum_j \lambda_{i,j} x'_j$ with $\lambda_{i,j} \in \kk$ such that $v_1|_X, \ldots, v_m|_X$ are algebraically independent over $\kk$ and $\kk(X) = \kk(v_1, \ldots, v_m)(v_{m+1})$ (here one needs to follow the arguments of the proofs of Theorem B.35 and Corollary B.37 in How Many Zeroes? and observe that one can choose the $\lambda_{i,j}$ from $\kk$ since $|\kk| = \infty$).
  2. To see that the above can be achieved with generic $\lambda_{i,j}$, observe the following from the proof of Theorem B.35 in How Many Zeroes?: the requirement on the $\lambda_{i,j}$ boils down to a finite sequence of conditions of the following form: given a certain field $F$ containing $\kk$ and $\alpha_1, \alpha_2 \in F$ and $f_1, f_2 \in F[t]$, where $t$ is an indeterminate, such that $f_1(\alpha_1) = f_2(\alpha_2) = 0$, there is $\lambda \in \kk$ such that
    \[\gcd(f_1, f_2(\lambda \alpha_1 + \alpha_2 - \lambda t)) = t - \alpha_1\]
    (where the $\gcd$ is computed in $F[t]$). Since $t - \alpha_1$ divides both of these polynomials, the above condition is equivalent to requiring that
    \[\res(f'_1, f'_2) \neq 0\]
    where
    \[
    \begin{align*}
    f'_1 &:= \frac{f_1}{t-\alpha_1} \\
    f'_2 &:= \frac{f_2(\lambda \alpha_1 + \alpha_2 - \lambda t)}{t - \alpha_1}
    \end{align*}
    \]
    and $\res$ denotes the resultant. Since the resultant is a polynomial in the coefficients, its non-vanishing is a Zariski open condition on the coefficients. Since the condition is satisfied for at least one $\lambda \in \kk$ (that is the first observation above), it is satisfied for all $\lambda$ in a nonempty Zariski open subset of $\kk$. It follows that the full set of conditions holds for all $\lambda_{i,j}$ in a nonempty Zariski open subset of $\kk^{n^2}$.

Polynomial division over valued fields – Part I (Chan-Maclagan’s Algorithm for Homogeneous Divisors)

$\DeclareMathOperator{\In}{In} \DeclareMathOperator{\Incoeff}{In_coeff} \DeclareMathOperator{\Inexp}{In_exp} \DeclareMathOperator{\ld}{Ld} \newcommand{\qq}{\mathbb{Q}} \newcommand{\kk}{\mathbb{K}} \DeclareMathOperator{\ord}{ord} \newcommand{\preceqeq}{\preceq_{\equiv}} \newcommand{\rnpos}{\mathbb{R}^n_{> 0}} \newcommand{\rnzero}{\mathbb{R}^n_{\geq 0}} \newcommand{\rr}{\mathbb{R}} \newcommand{\scrB}{\mathcal{B}} \newcommand{\scrK}{\mathcal{K}} \newcommand{\scrM}{\mathcal{M}} \DeclareMathOperator{\supp}{Supp} \newcommand{\znplusonezero}{\mathbb{Z}^{n+1}_{\geq 0}} \newcommand{\znplusonepos}{\mathbb{Z}^{n+1}_{> 0}} \newcommand{\znpos}{\mathbb{Z}^n_{> 0}} \newcommand{\znzero}{\mathbb{Z}^n_{\geq 0}} \newcommand{\zz}{\mathbb{Z}}$
Today we discuss division of polynomials over valued fields, i.e. fields equipped with a (real) valuation, and consider orderings on monomials which incorporate that valuation. The goal is to arrive at the “universal basis theorem” which states that for every ideal there is a finite set of polynomials which determine the initial ideal with respect to all linear orders. As we have seen, the universal basis theorem for linear orders is harder to establish than that for monomial orders. The trend continues, and the case of linear orders over valued fields is even harder.

Example 1 [Chan-Maclagan2013]. Consider the \(p\)-adic valuation \(\nu_p\) on \(\qq,\) where \(p\) is a prime number, and \(\nu_p\) is defined on \(\qq\) as: \[\nu_p(p^qa/b) := q\] where \(a, b\) are integers relatively prime to \(p\), and \(q\) is an arbitrary integer. This defines a \(\zz^3\)-grading on \(S := \qq[x,y]\) with homogeneous components \[S_{(q,\alpha, \beta)} := \{ax^\alpha y^\beta: a \in \qq,\ \nu_p(a) = q\}\]Take any group order on \(\zz^2\) and define an ordering \(\preceq’\) on monomial terms of \(S\) as follows: \(ax^\alpha y^\beta \preceq’ a’x^{\alpha’}y^{\beta’}\) if and only if

  • either \(\nu_p(a) < \nu_p(a'),\) or
  • \(\nu_p(a) = \nu_p(a')\) and \((\alpha, \beta) \preceq (\alpha', \beta')\)

Note that \(\preceq'\) is not a total order on the set of monomial terms of \(S,\) since there are monomial terms \(T \neq T'\) such that \(T \preceq' T'\) and \(T' \preceq' T,\) e.g. take \(T := x\) and \(T' := cx\) where \(c \neq 1\) is any integer relatively prime to \(p.\) Nonetheless, we can try to follow the naive initial-term-cancelling division from the preceding post on \(S\) with respect to \(\preceq'.\) So take \(g_1 := x - py,\) \(g_2 := y - px\) and try dividing \(f := x\) by \(g_1, g_2\) following the naive initial-term-cancelling division strategy. The initial terms of \(g_1, g_2\) with respect to \(\preceq'\) are respectively \(x\) and \(y.\) It follows that

  • after the first step one has: \(f = g_1 + py,\)
  • after the second step one has: \(f = g_1 + pg_2 + p^2x,\)
  • after the third step one has: \(f = (1+p^2)g_1 + pg_2 + p^3y,\)
  • after the fourth step one has: \(f = (1+p^2)g_1 + (p+p^3)g_2 + p^4x,\)

and so on. It is clear that the process does not terminate. On the other hand, if we recognize the remainder \(p^2x\) in the second step as \(p^2f,\) then rearranging the terms in the second step one has: \[f = \frac{1}{1-p^2}g_1 + \frac{p}{1-p^2}g_2\]which seems to be a reasonable outcome for a division process. Chan and Maclagan [Chan-Maclagan2013] showed that this always works.
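The rearranged expression at the end of Example 1 is a polynomial identity in $p$, and is easy to verify for any concrete value. A minimal Python check (the dictionary representation of polynomials is my choice):

```python
from fractions import Fraction

def add(f, g):
    """Sum of two polynomials represented as {(i, j): coefficient of x^i y^j}."""
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, Fraction(0)) + c
        if h[m] == 0:
            del h[m]
    return h

def scale(c, f):
    """Scalar multiple c * f."""
    return {m: c * a for m, a in f.items()}

p = 7   # any value with p^2 != 1 works: the identity is rational in p
g1 = {(1, 0): Fraction(1), (0, 1): Fraction(-p)}   # g1 = x - p*y
g2 = {(0, 1): Fraction(1), (1, 0): Fraction(-p)}   # g2 = y - p*x
# the claimed outcome of the division:  f = g1/(1-p^2) + p*g2/(1-p^2)
combo = add(scale(Fraction(1, 1 - p ** 2), g1), scale(Fraction(p, 1 - p ** 2), g2))
```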

Almost linear orders

An almost linear order on the set \(\scrM := \{ax^\alpha: a \in \kk \setminus \{0\},\ \alpha := (\alpha_1, \ldots, \alpha_n) \in \znzero\}\) of monomial terms is a reflexive and transitive binary relation on \(\scrM\) which is a linear order on monomial terms with pairwise distinct exponents, i.e. every finite set of monomial terms with pairwise distinct exponents has a unique minimal element. Every linear order on \(\znzero\) trivially lifts to an almost linear order on monomial terms which simply ignores the coefficients and applies the original linear order to the exponents. The relation \(\preceq'\) from Example 1 is an example of a nontrivial almost linear order.

The notions of initial and leading monomial terms make sense for an almost linear order \(\preceq\) on monomial terms, and therefore every step of the “initial term cancelling division” also makes sense. Recall that the initial term cancelling division of a polynomial \(f\) by a finite ordered collection \(g_1, \ldots, g_N\) of polynomials proceeds as follows:

  1. Set \(h_0 := f\)
  2. If \(h_j = 0\) for some \(j \geq 0,\) then stop.
  3. Otherwise if there is \(i\) such that \(\In_\preceq(g_i)\) divides \(\In_\preceq(h_j)\), then pick the smallest such \(i\), and set \(h_{j+1} := h_j - qg_i\), where \(q := \In_\preceq(h_j)/\In_\preceq(g_i)\).
  4. Otherwise \(\In_\preceq(h_j)\) is not divisible by \(\In_\preceq(g_i)\) for any \(i\); in this case set \(h_{j+1} := h_j - \In_\preceq(h_j).\)
  5. Repeat steps 2-4 with \(j + 1\) in place of \(j.\)
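The non-terminating run of Example 1 can be reproduced by implementing the steps above directly. In the sketch below (code mine), ties in the valuation are broken by lex order on the exponents, which is one choice of the group order $\preceq$; the successive initial terms of the $h_j$ cycle through $x, py, p^2x, p^3y, \ldots$ exactly as in the bullet list:

```python
from fractions import Fraction

def nu(p, c):
    """p-adic valuation of a nonzero rational number."""
    v, num, den = 0, c.numerator, c.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def initial(p, f):
    """Initial term of f = {exponent: coefficient} w.r.t. the order of
    Example 1: minimize the valuation of the coefficient, breaking ties by
    lex order on exponents (one choice of the group order on the exponents)."""
    m = min(f, key=lambda e: (nu(p, f[e]), e))
    return m, f[m]

def divides(e1, e2):
    return all(a <= b for a, b in zip(e1, e2))

def naive_division_steps(p, f, gs, steps):
    """Run `steps` iterations of initial-term-cancelling division of f by gs,
    returning the successive initial terms of the unexplored remainders h_j."""
    h = dict(f)
    seen = []
    for _ in range(steps):
        m, c = initial(p, h)
        seen.append((m, c))
        for g in gs:
            mg, cg = initial(p, g)
            if divides(mg, m):
                q = c / cg
                e = tuple(a - b for a, b in zip(m, mg))
                for mm, cc in g.items():   # h <- h - q * x^e * g
                    key = tuple(a + b for a, b in zip(mm, e))
                    h[key] = h.get(key, Fraction(0)) - q * cc
                    if h[key] == 0:
                        del h[key]
                break
        else:
            del h[m]   # no initial term divides: move the term to the remainder
    return seen

p = 5
g1 = {(1, 0): Fraction(1), (0, 1): Fraction(-p)}   # x - p*y
g2 = {(0, 1): Fraction(1), (1, 0): Fraction(-p)}   # y - p*x
seen = naive_division_steps(p, {(1, 0): Fraction(1)}, [g1, g2], 5)
```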

Example 1 shows that for a general almost linear order \(\preceq\) the above algorithm may not terminate even if the divisors are homogeneous with respect to the standard monomial grading. However, it also suggests a way forward, namely to use remainders from earlier steps of the division. To fix notation, write \(r_j\) for the remainder and \(q_{j,i}\) for the multiplier of \(g_i\) after \(j\) iterations of the algorithm. After the \(j\)-th iteration one has \[f = \sum_i q_{j,i}g_i + r_j = \sum_i q_{j,i}g_i + r_{j,0} + r_{j,1}\]where

  • \(r_{j,1}\) is the part of the remainder that has already been computed; in particular, none of the monomials in the support of \(r_{j,1}\) is divisible by \(\In_\preceq(g_i)\) for any \(i = 1, \ldots, N.\)
  • \(r_{j,0}\) is the part of the remainder “yet to be explored,” i.e. it is the \(h_j\) from the above algorithm.

Let \(a_j \in \kk\) and \(\alpha_j \in \znzero\) be respectively the coefficient and the exponent of \(\In_\preceq(r_{j,0}),\) i.e. \[\In_\preceq(r_{j,0}) = a_j x^{\alpha_j}\](in the sequel we use \(\Incoeff(r_{j,0})\) and \(\Inexp(r_{j,0})\) to denote respectively \(a_j\) and \(\alpha_j.\)) Let \(r'_{j,0} := r_{j,0} - \In_\preceq(r_{j,0}).\) For the \((j+1)\)-th iteration, Example 1 suggests that it is useful to look out for \(k < j\) such that \(\alpha_k = \alpha_j.\) Indeed, if such \(k\) exists, then writing \(c_j := a_j/a_k,\) the above equation for \(f\) can be rearranged as
\[\begin{align*}
f &= \sum_i q_{j,i}g_i + a_jx^{\alpha_j} + r'_{j,0} + r_{j,1}
= \sum_i q_{j,i}g_i + c_j\In_\preceq(r_{k,0}) + r'_{j,0} + r_{j,1} \\
&= \sum_i q_{j,i}g_i + c_j(r_{k,0} - r'_{k,0}) + r'_{j,0} + r_{j,1}
= \sum_i q_{j,i}g_i + c_j(f - \sum_i q_{k,i}g_i - r_{k,1} - r'_{k,0}) + r'_{j,0} + r_{j,1} \\
&= c_jf + \sum_i (q_{j,i} - c_j q_{k,i})g_i + (r'_{j,0} - c_jr'_{k,0}) + (r_{j,1} - c_jr_{k,1})
\end{align*}\]
If \(c_j \neq 1,\) then it follows that
\[\begin{align}
f
&= \sum_i \frac{q_{j,i} - c_jq_{k,i}}{1-c_j}g_i + \frac{r'_{j,0} - c_jr'_{k,0}}{1-c_j} + \frac{r_{j,1} - c_jr_{k,1}}{1-c_j}
\end{align}\]

Almost linear monomial orders

We say that \(\preceq\) is an almost linear group order or an almost linear monomial order if it is an almost linear order on the set of monomial terms which is compatible with products and sums of monomial terms in the following sense:

  1. if \(ax^\alpha \preceq bx^\beta,\) then \(acx^{\alpha+\gamma} \preceq bcx^{\beta+\gamma}\) for all \(c \in \kk\setminus\{0\}\) and \(\gamma \in \znzero,\)
  2. \(1 \preceq -1\) and \(-1 \preceq 1,\)
  3. for all \(a, b \in \kk,\) either \(a \preceq (a+b)\) or \(b \preceq (a+b).\)
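For the order $\preceq'$ of Example 1 (restricted to coefficients: $a \preceq b$ iff $\nu_p(a) \leq \nu_p(b)$), Property 3 amounts to the ultrametric inequality $\nu_p(a+b) \geq \min(\nu_p(a), \nu_p(b))$. A quick numerical sanity check in Python (code mine):

```python
from fractions import Fraction
from itertools import product

def nu(p, c):
    """p-adic valuation of a nonzero rational number."""
    v, num, den = 0, c.numerator, c.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# Property 3 for the coefficient order of Example 1 says: either a <= a+b or
# b <= a+b in the order, i.e. nu_p(a+b) >= min(nu_p(a), nu_p(b)).
p = 3
samples = [Fraction(n, d) for n in range(-12, 13) if n != 0 for d in range(1, 8)]
ok = all(nu(p, a + b) >= min(nu(p, a), nu(p, b))
         for a, b in product(samples, repeat=2) if a + b != 0)
```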

At this point it is useful to introduce some notation: we will use \(\prec\) to denote “strict ordering” with respect to \(\preceq,\) and write \(\preceqeq\) to denote “equivalence” with respect to \(\preceq,\) i.e.

  • \(a \prec b\) if and only if \(a \preceq b\) and \(b \not\preceq a\),
  • \(a \preceqeq b\) if and only if \(a \preceq b\) and \(b \preceq a.\)

Property 2 of almost linear group orders then can be rewritten as

  • \(-1 \preceqeq 1.\)

Note that Property 1 implies the following seemingly stronger statement:

  • if \(ax^\alpha \preceq bx^\beta\) and \(cx^\gamma \preceq dx^\delta,\) then \(acx^{\alpha+\gamma} \preceq bdx^{\beta+\delta}.\)

Properties 1 and 2 then imply that

  • \(ax^\alpha \preceq bx^\beta\) if and only if \(-ax^\alpha \preceq bx^\beta\) if and only if \(ax^\alpha \preceq -bx^\beta\) if and only if \(-ax^\alpha \preceq -bx^\beta.\)

Also, due to Properties 2 and 3, \(\preceq\) can be consistently extended to \(\scrM \cup \{0\}\) by declaring

  • \(ax^\alpha \preceq 0\) for all \(a \in \kk\setminus\{0\}\) and \(\alpha \in \znzero.\)

In presence of Properties 1 and 2, Property 3 implies that

  • \(\preceq\) restricts to a total preorder on \(\kk,\) i.e. for all \(a, b \in \kk,\) either \(a \preceq b\) or \(b \preceq a\) (or both). Indeed, assume without loss of generality that \(a\preceq a+b.\) Since \(b = (a+b) - a,\) Property 3 implies that \(a+b \preceq b\) or \(-a \preceq b.\) Both of these possibilities imply that \(a \preceq b.\)
  • If \(a \prec b,\) then \(a \preceqeq a+b.\) Indeed, Property 3 immediately implies that \(a \preceq a+b.\) On the other hand, since \(a = (a+b) - b,\) and \(b \not\preceq a,\) we must have that \(a+b \preceq a.\)

If \(\preceq\) is indeed an almost linear group order, then at each step of the initial-term-cancelling division algorithm, the initial term of \(h_j\) strictly increases with respect to \(\preceq\), i.e.
\[\In_\preceq(h_j) \prec \In_\preceq(h_{j+1})\]
It follows that in the last displayed equation for \(f\) above \(c_j\) cannot be \(1,\) so that the equation is well defined. In fact, it is straightforward to check that \(1 \prec c_j\). The defining properties of almost linear group orders then imply that
\[1 - c_j \preceqeq 1\]
which in turn implies that
\[
\In_\preceq(r_{j,0})
\prec \In_\preceq(r'_{j,0} - c_jr'_{k,0})
\preceqeq \In_\preceq\left(\frac{r'_{j,0} - c_jr'_{k,0}}{1-c_j}\right)
\]
Consequently, if we set
\[r_{j+1,0} := \frac{r'_{j,0} - c_jr'_{k,0}}{1-c_j}\]
then
\[\In_\preceq(r_{j,0}) \prec \In_\preceq(r_{j+1,0})\]
We now rewrite the initial-term-cancelling-division algorithm for almost linear group orders to incorporate these observations. Recall that the aim of this algorithm is to find an expression\[f = \sum_i q_ig_i + r\]such that

  • either \(r = 0\) or no monomial term in \(r\) is divisible by \(\In_\preceq(g_i)\) for any \(i\), and
  • there is a reordering \(i_1, \ldots, i_s\) of \(\{i: q_i \neq 0\}\) such that \[\In_\preceq(\sum_j q_jg_j) = \In_\preceq(q_{i_1}g_{i_1}) \prec \In_\preceq(q_{i_2}g_{i_2}) \prec \cdots \prec \In_\preceq(q_{i_s}g_{i_s})\]

Each step of the algorithm below expresses \(f\) as\[f = \sum_i q_{j,i}g_i + r_{j,0} + r_{j,1}\]where \(j\) is the step, \(r_{j,0}\) is the “part” of \(f\) “still to be explored” and \(r_{j,1}\) is the part of the remainder that has been computed. In particular, either \(r_{j,1} = 0\) or no monomial term in \(r_{j,1}\) is divisible by \(\In_\preceq(g_i)\) for any \(i\). The algorithm stops when \(r_{j,0} = 0.\)

Initial term cancelling division, version 2: Given a polynomial \(f\) (the dividend), a finite ordered collection \(g_1, \ldots, g_N\) of polynomials (the divisors) and an almost linear monomial order \(\preceq,\) start with \(j = 0,\) set \(q_{0,i} := 0\) for all \(i,\) \(r_{0, 0} := f\) and \(r_{0, 1} := 0,\) and proceed as follows:

  1. If \(r_{j,0} = 0,\) then stop.
  2. Otherwise check if there is \(k < j\) such that
    \[\Inexp(r_{k,0}) = \Inexp(r_{j,0})\]
    If such \(k\) exists, then necessarily one has
    \[c_j := \Incoeff(r_{j,0})/\Incoeff(r_{k,0}) \succ 1\]
    Then define
    \[\begin{align*}
    q_{j+1,i} &:= \frac{q_{j,i} - c_jq_{k,i}}{1-c_j},\ i = 1, \ldots, N, \\
    r_{j+1,0} &:= \frac{r'_{j, 0} - c_jr'_{k,0}}{1-c_j}, \\
    r_{j+1,1} &:= \frac{r_{j, 1} - c_jr_{k,1}}{1-c_j}
    \end{align*}\]
    where
    \[r'_{j,0} := r_{j,0} - \In_\preceq(r_{j,0})\]
    and \(r'_{k,0}\) is defined similarly. Note that there is an ambiguity at this step if there is more than one choice of \(k\). Below we will define a criterion for choosing \(k\).
  3. Otherwise if there is \(i\) such that \(\In_\preceq(g_i)\) divides \(\In_\preceq(r_{j,0})\), then pick the smallest such \(i\), and with \(q := \In_\preceq(r_{j,0})/\In_\preceq(g_i),\) define
    \[\begin{align*}
    r_{j+1,0} &:= r_{j,0} - qg_i, \\
    r_{j+1,1} &:= r_{j,1}, \\
    q_{j+1,k} &:=
    \begin{cases}
    q_{j,k} + q &\text{if}\ k = i,\\
    q_{j,k} & \text{otherwise.}
    \end{cases}
    \end{align*}\]
  4. Otherwise set
    \[\begin{align*}
    r_{j+1,0} &:= r_{j,0} - \In_\preceq(r_{j,0}), \\
    r_{j+1,1} &:= r_{j,1} + \In_\preceq(r_{j,0}), \\
    q_{j+1,i} &:= q_{j,i},\ i = 1, \ldots, N.
    \end{align*}\]
  5. Repeat steps 2-4 with \(j + 1\) in place of \(j.\)
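To make the steps concrete, here is a minimal sketch (our own illustration, not code from [Chan-Maclagan2013]) of the loop over \(\mathbb{Q}[x, y],\) taking \(\preceq\) to be the order that compares exponents lexicographically (with coefficients compared trivially), so that the initial term is the \(\preceq\)-smallest monomial term. Since this \(\preceq\) is linear on exponents, the initial exponent of \(r_{j,0}\) strictly increases and the condition in step 2 never holds, so only steps 1, 3 and 4 appear below; the divisor is homogeneous with respect to the (shallow) grading by total degree:

```python
from fractions import Fraction

# A polynomial is a dict mapping exponent tuples to nonzero Fractions.
def in_term(f):
    """Initial term: the lex-smallest exponent in the support."""
    e = min(f)
    return e, f[e]

def add_term(f, e, c):
    c = f.get(e, Fraction(0)) + c
    if c:
        f[e] = c
    elif e in f:
        del f[e]

def divides(e, d):
    return all(ei <= di for ei, di in zip(e, d))

def divide(f, gs):
    """Initial-term-cancelling division of f by the list gs.
    Returns (quotients, remainder) with f == sum(q_i g_i) + r."""
    r0 = dict(f)                  # the part of f still to be explored
    r1 = {}                       # the remainder computed so far
    qs = [{} for _ in gs]
    while r0:                     # step 1: stop when r0 == 0
        e, c = in_term(r0)
        for i, g in enumerate(gs):
            eg, cg = in_term(g)
            if divides(eg, e):    # step 3: cancel In(r0) against In(g_i)
                qe = tuple(a - b for a, b in zip(e, eg))
                qc = c / cg
                add_term(qs[i], qe, qc)
                for eh, ch in g.items():
                    add_term(r0, tuple(a + b for a, b in zip(eh, qe)), -qc * ch)
                break
        else:                     # step 4: move In(r0) into the remainder
            del r0[e]
            add_term(r1, e, c)
    return qs, r1

# Divide f = x^2 + y^2 by g = x + y (homogeneous in total degree).
f = {(2, 0): Fraction(1), (0, 2): Fraction(1)}
g = {(1, 0): Fraction(1), (0, 1): Fraction(1)}
qs, r = divide(f, [g])
assert r == {(2, 0): Fraction(2)}                            # remainder 2x^2
assert qs[0] == {(0, 1): Fraction(1), (1, 0): Fraction(-1)}  # quotient y - x
```

Indeed \((y - x)(x + y) + 2x^2 = y^2 - x^2 + 2x^2 = x^2 + y^2.\) For a genuinely "almost" linear order (e.g. one built from a valuation on the coefficients) step 2 and the criterion below become essential.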

Earlier discussions on division via initial terms suggest that it might be prudent to consider the special case that the divisors \(g_1, \ldots, g_N\) are homogeneous with respect to a shallow monomial grading \(\Sigma.\) In that case, if the condition in step 2 is not satisfied at any iteration, then we claim that the algorithm must stop in finitely many steps. Indeed, if the condition in step 2 is never satisfied, then each time \(r_{j,0}\) gets updated in step 3, a monomial with a new exponent is added to the support of \(r_{j,0}.\) Since the \(g_i\) are homogeneous with respect to \(\Sigma,\) the \(\Sigma\)-supports of all \(r_{j,k}\) are contained in the \(\Sigma\)-support of \(f,\) and therefore, due to the shallowness, the number of monomials appearing in the \(r_{j,0}\) cannot grow indefinitely. Consequently, step 3 can be applied only finitely many times. But then step 4 cannot repeat infinitely many times either, since that would result in the number of monomials in \(r_{j,1}\) getting indefinitely larger. Therefore, if the algorithm is to continue without stopping, the condition in step 2 must be satisfied at infinitely many iterations. Now comes a crucial observation of Chan and Maclagan [Chan-Maclagan2013]:

Criterion to pick \(k\) in Step 2: In Step 2, if there is more than one choice of \(k < j\) such that \(\Inexp(r_{k,0}) = \Inexp(r_{j,0}),\) then choose any \(k\) such that the number of monomials in \(r_{k,0}\) which are not in \(r_{j,0}\) is the smallest possible. In fact, the only property needed to ensure termination in finitely many steps is: if it is possible, then \(k\) should be chosen such that \(\supp(r_{k,0}) \subseteq \supp(r_{j,0}).\)

Theorem 1. If \(g_1, \ldots, g_N\) are homogeneous with respect to a shallow monomial grading \(\Sigma\) and the above criterion is used to pick \(k\) in Step 2 of the initial term cancelling division by \(g_1, \ldots, g_N\) with respect to an almost linear monomial order \(\preceq,\) then the division algorithm stops in finitely many steps. The outcome of the algorithm satisfies the “aim” of the algorithm (described in the paragraphs preceding the version 2 of the division algorithm).

Proof. Since the \(g_i\) are homogeneous, \(\supp_\Sigma(r_{j,0}) \subseteq \supp_\Sigma(f)\) for each \(j.\) Since the grading is shallow, there are finitely many possible choices for the usual support \(\supp(r_{j,0}) \subseteq \znzero\) of \(r_{j,0}\) consisting of the exponents of the monomials appearing in \(r_{j,0}.\) Therefore there must be \(j_0\) such that for each \(j \geq j_0,\) if \(r_{j,0} \neq 0,\) then there is already \(k < j\) such that \(\supp(r_{k,0}) = \supp(r_{j,0}).\) But then for each \(j \geq j_0,\) if \(r_{j,0} \neq 0,\) the Step 2 condition is satisfied at each step, and the above criterion for picking \(k\) ensures that \(\supp(r_{k,0}) \subseteq \supp(r_{j,0})\) so that the size of the support of \(r_{j+1,0}\) decreases compared to that of \(r_{j,0}.\) Since the support of a polynomial is finite, \(r_{j,0}\) must become zero in finitely many steps beyond \(j_0.\) To see that the “aim” is satisfied, note that the condition regarding the remainder is clear. For the condition regarding the quotients, it suffices to show that for each \(i, j,\) either \(q_{j,i} = 0\) or \(\In_\preceq(q_{j+1,i}) = \In_\preceq(q_{j,i}).\) This is clear in Steps 3 and 4. For Step 2, this follows from the observation that \(c_j \succ 1\) and, either \(q_{k,i} = 0\) or \(\In_\preceq(q_{j,i}) = \In_\preceq(q_{k,i}).\) This completes the proof of Theorem 1.

Almost linearity vs valuation on $\kk$

Let $\preceq$ be an almost linear monomial order on $\scrM := \{ax^\alpha: a \in \kk \setminus \{0\},$ $\alpha := (\alpha_1, \ldots, \alpha_n)$ $\in \znzero\}.$ We have seen that $\preceq$ restricts to a total binary relation on $\kk,$ and it is straightforward to check that $\preceqeq$ is an equivalence relation on $\kk$. Let $[\kk]$ be the set of equivalence classes of $\preceqeq$.

Proposition 1. $[\kk \setminus \{0\}]$ is a totally ordered abelian group with identity $[1]$ and addition given by $[a] + [b] := [ab]$. Moreover, the map $a \mapsto [a]$ is a valuation.

Proof. Indeed, it is easy to check that the addition is well defined, that $-[a] = [a^{-1}]$, and that this makes $[\kk \setminus \{0\}]$ an abelian group with identity $[1]$. Now $\preceq$ induces an order on $[\kk \setminus \{0\}]$ defined as follows: $[a] \preceq [b]$ if and only if $a \preceq b$. It is then clear that $\preceq$ is well defined, and it is a total binary relation which is reflexive, transitive, and anti-symmetric; in other words, $\preceq$ induces a total order on $[\kk \setminus \{0\}]$. This proves the first assertion. That $[\cdot]$ is a valuation on $\kk \setminus \{0\}$ follows from the definition of the addition on $[\kk \setminus \{0\}]$ and Property 3 of almost linear monomial orders.

Linear monomial preorders

A linear monomial preorder $\leq$ on $\scrM := \{ax^\alpha: a \in \kk \setminus \{0\},\ \alpha := (\alpha_1, \ldots, \alpha_n) \in \znzero\}$ is a binary relation on $\scrM$ which satisfies all properties of an almost linear monomial order except for the following: it is allowed to be a linear preorder (instead of a linear order) on all monomials with different exponents. More precisely, it satisfies the following properties:

  1. $\leq$ is reflexive and transitive,
  2. $\leq$ is linear, i.e. for any two elements $a_1x^{\alpha_1}, a_2x^{\alpha_2} \in \scrM$, either $a_1x^{\alpha_1} \leq a_2x^{\alpha_2}$, or $a_2x^{\alpha_2} \leq a_1x^{\alpha_1}$, or $a_1x^{\alpha_1} \leq a_2x^{\alpha_2} \leq a_1x^{\alpha_1}$.
  3. if $ax^\alpha \leq bx^\beta,$ then $acx^{\alpha+\gamma} \leq bcx^{\beta+\gamma}$ for all $c \in \kk\setminus\{0\}$ and $\gamma \in \znzero,$
  4. $\leq$ restricts to a valuation on $\kk \setminus \{0\}$.

The strict variant $\lt$ of $\leq$ is defined as follows: $a_1x^{\alpha_1} \lt a_2x^{\alpha_2}$ if and only if $a_1x^{\alpha_1} \leq a_2x^{\alpha_2}$ and $a_2x^{\alpha_2} \not\leq a_1x^{\alpha_1}.$

Example 2. Property 3, i.e. compatibility with multiplication by monomial terms, may not hold for the strict variant $\lt$. Indeed, in the case that $n = 1$, consider the binary relation $\leq$ on $\scrM$ such that $\leq$ is trivial on $\kk$, and $1 \lt x \leq x^2$, and $x^2 \leq x$ (which implies that $\leq$ restricts to an equivalence relation on the $x^k$, $k \geq 1$). Then $\leq$ is a linear monomial preorder, and $1 \lt x$, but $x \not\lt x^2$.

The notions of initial and leading monomial forms make sense for a linear monomial preorder $\leq$: for $f = \sum_\alpha a_\alpha x^\alpha$, $\In_\leq(f)$ is simply the sum of all $a_\alpha x^{\alpha}$ which are minimal with respect to $\leq$ among the monomial terms of $f$. The initial or leading forms corresponding to linear monomial preorders generalize the initial or leading forms corresponding to totally ordered gradings introduced in the previous post.
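For instance (our own illustration), if $\leq$ compares monomial terms by a weighted degree $w \cdot \alpha$ only, then $\In_\leq(f)$ is the sum of all terms of minimal weight; a minimal sketch:

```python
def initial_form(f, w):
    """In(f) for the preorder comparing exponents by the weight w . alpha:
    the sum of all terms of minimal weighted degree.
    f is a dict mapping exponent tuples to coefficients."""
    weight = lambda e: sum(wi * ei for wi, ei in zip(w, e))
    m = min(weight(e) for e in f)
    return {e: c for e, c in f.items() if weight(e) == m}

# f = x + y + x^2 with weights w = (1, 1): In(f) = x + y (two minimal terms)
f = {(1, 0): 1, (0, 1): 1, (2, 0): 1}
assert initial_form(f, (1, 1)) == {(1, 0): 1, (0, 1): 1}
```

In contrast to a (total) monomial order, the initial form here can have several terms, since distinct exponents of equal weight are $\leq$-equivalent.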

Finally, note that if $\leq$ is a linear monomial preorder and $\preceq$ is an almost linear monomial order, then one can “refine” $\leq$ by $\preceq$ to obtain an almost linear monomial order $\preceq'$ defined as follows: $ax^\alpha \preceq' bx^\beta$ if and only if

  • either $ax^\alpha \leq bx^\beta$ and $bx^\beta \not\leq ax^\alpha$,
  • or $ax^\alpha \leq bx^\beta$ and $bx^\beta \leq ax^\alpha$, and $ax^\alpha \preceq bx^\beta$.

It is straightforward to check that $\preceq'$ is an almost linear monomial order on $\scrM$. Note that it is indeed possible that $\preceq'$ is “almost linear” instead of “linear”: this is the case if there are $a \neq b \in \kk\setminus \{0\}$ and $\alpha \in \znzero$ such that $ax^\alpha \leq bx^\alpha$, $bx^\alpha \leq ax^\alpha$, but neither $ax^\alpha \preceq bx^\alpha$ nor $bx^\alpha \preceq ax^\alpha$; in that case $ax^\alpha$ and $bx^\alpha$ are not related via $\preceq'$.

Theorem 1*. Let $I$ be an ideal of $S := \kk[x_1, \ldots, x_n]$ which is homogeneous with respect to a shallow monomial grading on $S$ by a group $\Sigma.$ Given a linear monomial preorder $\leq$ and an almost linear monomial order $\preceq,$ let $\preceq'$ be the refinement of $\leq$ by $\preceq.$ If $g_1, \ldots, g_N$ are $\Sigma$-homogeneous elements of $I$ such that $\In_{\preceq'}(g_1), \ldots, \In_{\preceq'}(g_N)$ generate $\In_{\preceq'}(I),$ then $\In_{\leq}(g_1), \ldots, \In_{\leq}(g_N)$ generate $\In_{\leq}(I).$

Proof. First note that $\In_{\leq}(I)$ is also $\Sigma$-homogeneous. Indeed, if $f \in I$, then each $\Sigma$-homogeneous component $f_i$ of $f$ is also in $I$. Since the supports of the $f_i$ are mutually disjoint, $\In_{\leq}(f)$ must be the sum of some of the $\In_{\leq}(f_i)$, each of which is $\Sigma$-homogeneous. This shows that $\In_{\leq}(I)$ is $\Sigma$-homogeneous. Now pick a $\Sigma$-homogeneous $f \in I.$ Since $\In_{\preceq'}(g_1), \ldots, \In_{\preceq'}(g_N)$ generate $\In_{\preceq'}(I),$ due to Theorem 1, after reordering the $g_i$ if necessary, there is an expression
\[f = \sum_{i=1}^s q_ig_i \]
such that
\[\In_{\preceq'}(f) = \In_{\preceq'}(q_1g_1) \prec' \In_{\preceq'}(q_2g_2) \prec' \cdots \prec' \In_{\preceq'}(q_sg_s)\]
This implies that
\[\In_{\preceq'}(q_1g_1) \leq \In_{\preceq'}(q_2g_2) \leq \cdots \leq \In_{\preceq'}(q_sg_s)\]
Pick the smallest $i$ such that
\[\In_{\preceq'}(q_ig_i) \lt \In_{\preceq'}(q_{i+1}g_{i+1})\]
(and set $i := s$ if no such $i$ exists). We claim that
\[\In_{\leq}(f) = \sum_{i'=1}^i \In_{\preceq'}(q_{i'}g_{i'})\]

Proof. Pick $f \in I.$ Since $\In_{\preceq'}(g_1), \ldots, \In_{\preceq'}(g_N)$ generate $\In_{\preceq'}(I),$ there is $i_1$ such that $\In_{\preceq'}(g_{i_1})$ divides $\In_{\preceq'}(f),$ i.e. there is $a_{i_1} \in \kk$ and $\alpha_{i_1} \in \znzero$ such that
\[\In_{\preceq'}(f) = a_{i_1}x^{\alpha_{i_1}}\In_{\preceq'}(g_{i_1})\]
Let
\[h_1 := a_{i_1}x^{\alpha_{i_1}}g_{i_1},\quad f_1 := f - h_1 = (f - \In_{\preceq'}(f)) - (h_1 - \In_{\preceq'}(f))\]
Since $\preceq'$ is an almost linear order, and $\In_{\preceq'}(f) = \In_{\preceq'}(h_1)$, it follows that

  • Every monomial term $ax^\alpha$ of $f - \In_{\preceq'}(f)$ satisfies: $ax^\alpha \succ' \In_{\preceq'}(f)$.
  • Every monomial term $ax^\alpha$ of $h_1 - \In_{\preceq'}(f)$ satisfies: $ax^\alpha \succ' \In_{\preceq'}(f)$.
  • Since $\preceq'$ restricts to a valuation on $\kk \setminus \{0\}$, and is compatible with multiplication by monomials, it follows that
    \[\In_{\preceq'}(f) \prec' \In_{\preceq'}(f_1)\]
  • In addition, (the monomial in) $\In_{\preceq'}(f)$ is not in the support of $f_1$.

Since the \(\Omega\)-grading is monomial compatible, it follows that \[\ord_\Omega(h_1) = \ord_\Omega(x^{\alpha_{i_1}}) + \ord_\Omega(g_{i_1}) = \ord_\Omega(x^{\alpha_{i_1}}) + \ord_\Omega(\In_{\preceq'}(g_{i_1})) = \ord_\Omega(\In_{\preceq'}(f)) = \ord_\Omega(f)\]
Note that

  • The monomial compatibility of the \(\Omega\)-grading was used in the second and the last of the above equalities.
  • we used “\(+\)” to denote the group operation on \(\Omega,\) since \(S\) is commutative and therefore every grading on \(S\) can also be induced by a grading by an abelian group.

It follows from the above display that
\[\ord_\Omega(f) \preceq_\Omega \ord_\Omega(f_1)\]
where “\(\preceq_\Omega\)” is the ordering on \(\Omega.\) Now, since the \(\Sigma\)-grading is monomial compatible, and since \(g_{i_1}\) is \(\Sigma\)-homogeneous, it follows that
\[\supp_\Sigma(f_1) \subseteq \supp_\Sigma(f)\]
The shallowness of the \(\Sigma\)-grading then implies that if we keep repeating this process to construct \(f_2\) from \(f_1\) and \(f_3\) from \(f_2\) and so on, then from some step onward \(f_0 := f, f_1, f_2, \ldots\) would be linearly dependent over \(\kk.\) On the other hand, since
\[\nu_{\preceq'}(f_0) \prec' \nu_{\preceq'}(f_1) \prec' \cdots,\]
Lemma 1 implies that as long as \(f_k\) is nonzero, it would be linearly independent of \(f_0, \ldots, f_{k-1}.\) Consequently, the process must stop, and there is \(k\) such that \(f_k = 0.\) In particular, there is \(m \geq 0\) such that \[\ord_\Omega(f_0) = \cdots = \ord_\Omega(f_m) \prec_\Omega \ord_\Omega(f_{m+1})\]where \(\prec_\Omega\) denotes “strictly smaller” with respect to \(\preceq_\Omega.\) This means \[\ord_\Omega(f) = \ord_\Omega(h_1) = \cdots = \ord_\Omega(h_{m+1})\]and\[\ord_\Omega(f - h_1 - \cdots - h_{m+1}) \succ_\Omega \ord_\Omega(f)\]It follows that \[\In_\Omega(f) = \In_\Omega(h_1 + \cdots + h_{m+1}) = \In_\Omega(h_1) + \cdots + \In_\Omega(h_{m+1})\] Since \(\In_\Omega(h_1) + \cdots + \In_\Omega(h_{m+1})\) is in the ideal generated by \(\In_\Omega(g_{i_1}), \ldots, \In_\Omega(g_{i_{m+1}})\), we are done!

We also have analogues of the corollaries to Theorem 1 from the preceding post. Corollary 1 below fails if we allow \(J\) to be non-homogeneous, as evidenced by taking \(I := \langle x \rangle\) and \(J := \langle x + x^2 \rangle\): every element of \(J\) is of the form \(g \cdot (x + x^2)\) with \(\In_\preceq(g \cdot (x + x^2)) = \In_\preceq(g) \cdot x,\) so that \(\In_\preceq(J) = \langle x \rangle = \In_\preceq(I)\) even though \(J \subsetneqq I.\)

Corollary 1. Let \(I \supseteq J\) be ideals of \(S := \kk[x_1, \ldots, x_n]\) such that \(J\) is homogeneous with respect to a shallow monomial grading on \(S.\) If \(I \supsetneqq J\), then \(\In_\preceq(I) \supsetneqq \In_\preceq(J)\) for every almost linear monomial order \(\preceq.\)

Proof. Assume to the contrary that \(\In_\preceq(I) = \In_\preceq(J).\) Pick homogeneous \(g_1, \ldots, g_N \in J\) such that \(\In_\preceq(g_i)\), \(i = 1, \ldots, N\), generate \(\In_\preceq(J) = \In_\preceq(I)\). Then Theorem 1 implies that \(g_1, \ldots, g_N\) generate \(I\), so that \(I = J,\) contradicting the hypothesis.

Corollary 2. Let \(I\) be an ideal of \(S := \kk[x_1, \ldots, x_n]\) which is homogeneous with respect to a shallow monomial grading on \(S.\) There cannot exist almost linear monomial orders \(\preceq_1, \preceq_2\) such that \(\In_{\preceq_1}(I) \subsetneqq \In_{\preceq_2}(I)\).

Proof. Write \(J_k := \In_{\preceq_k}(I)\), \(k = 1, 2\). Assume to the contrary that \(J_1 \subsetneqq J_2\). Then we can choose homogeneous \(g_1, \ldots, g_N \in I\) such that \(\In_{\preceq_2}(g_i)\), \(i = 1, \ldots, N\), generate \(J_1\). Choose \(f \in I\) such that \(\In_{\preceq_2}(f) \not\in J_1.\) Let \(r\) be the remainder of the Chan-Maclagan division of \(f\) by \(g_1, \ldots, g_N\) with respect to \(\preceq_2\). Then \(r\) is a nonzero element of \(I\) such that \(\supp(r) \cap \supp(J_1) = \emptyset\). On the other hand, if \(h_1, \ldots, h_M\) are homogeneous elements in \(I\) such that \(\In_{\preceq_1}(h_1), \ldots, \In_{\preceq_1}(h_M)\) generate \(J_1,\) then the Chan-Maclagan division of \(r\) by \(h_1, \ldots, h_M\) with respect to \(\preceq_1\) implies that \(r \not\in I,\) which is a contradiction. This completes the proof.

The notion of reduced initial basis described in the earlier post on division of power series also applies to an almost linear monomial order: a finite ordered collection of elements \(g_1, \ldots, g_N\) from an ideal \(I\) of \(S\) is called a reduced initial basis with respect to an almost linear monomial order \(\preceq\) if the following hold:

  1. the initial terms of the \(g_i\) generate \(\In_\preceq(I),\)
  2. no monomial term of any \(g_i\) is divisible by the initial term of any \(g_j\) for \(j \neq i\), and
  3. the initial coefficient, i.e. the coefficient of the initial term, of each \(g_i\) is \(1.\)

If \(I\) is homogeneous with respect to a shallow monomial \(\Sigma\)-grading on \(S\), then a reduced initial basis for \(I\) can be produced by taking a minimal set of \(\Sigma\)-homogeneous elements of \(I\) whose initial terms generate \(\In_\preceq(I)\) and then replacing each element by the remainder of the Chan-Maclagan division by other elements.

Corollary 3. Let \(I\) be an ideal of \(S := \kk[x_1, \ldots, x_n]\) which is homogeneous with respect to a shallow monomial \(\Sigma\)-grading on \(S\), and \(\preceq, \preceq'\) be almost linear monomial orders such that
\[\In_\preceq(I) = \In_{\preceq'}(I)\]
Then any reduced \(\Sigma\)-homogeneous initial basis of \(I\) with respect to one of \(\preceq, \preceq'\) is also a reduced initial basis with respect to the other one. Moreover, the number of elements in a reduced $\Sigma$-homogeneous initial basis of $I$ is also uniquely determined by $I$.

Proof. Indeed, pick a reduced initial basis \(g_1, \ldots, g_N\) of \(I\) with respect to \(\preceq\). Let \(\alpha_i := \nu_{\preceq}(g_i),\) \(i = 1, \ldots, N.\) Fix \(i.\) Since \(\In_{\preceq'}(I)\) is also generated by \(x^{\alpha_1}, \ldots, x^{\alpha_N}\), it follows that \(\In_{\preceq'}(g_i)\) is divisible by some \(x^{\alpha_j}.\) Since no monomial in the support of \(g_i\) is divisible by \(x^{\alpha_j}\) for \(j \neq i,\) it must be divisible by \(x^{\alpha_i}.\) Since \(\alpha_i\) is also in the support of \(g_i,\) and \(g_i\) is \(\Sigma\)-homogeneous and the grading by \(\Sigma\) is shallow, Remark 1 in the first post on polynomial divisions via initial terms implies that the only monomial in \(g_i\) divisible by \(x^{\alpha_i}\) is \(x^{\alpha_i}\) itself, so that \(\nu_{\preceq'}(g_i) = \alpha_i.\) It follows that \(\In_{\preceq'}(g_i),\) \(i = 1, \ldots, N,\) form a reduced initial basis of \(I\) with respect to \(\preceq'\) as well. The proof of the last assertion is similar to that in the proof of Corollary 3 in the preceding post.

Theorem 2 (Universal Bases – version 1). Let \(I\) be an ideal of \(S := \kk[x_1, \ldots, x_n]\) which is homogeneous with respect to a shallow monomial grading on \(S\) by a group \(\Sigma.\) There are finitely many distinct initial ideals of \(I\) with respect to almost linear monomial orders. In particular, there is a finite subset of \(\Sigma\)-homogeneous elements of \(I\) such that for every almost linear monomial order, the initial terms of polynomials in that subset generate the initial ideal of \(I.\)

Proof. Regarding the first assertion, Logar’s argument for the leading ideal case from the proof of Theorem 2 in the earlier post on polynomial division applies almost immediately to the present situation – one only needs to choose homogeneous elements in every case and replace “\(\ld\)” with “\(\In\)”. The second assertion then follows from Corollary 3 to Theorem 1.

References

Polynomial division via initial terms – Part I (Homogenization)

\(\DeclareMathOperator{\In}{In} \DeclareMathOperator{\ld}{Ld} \DeclareMathOperator{\kk}{\mathbb{K}} \DeclareMathOperator{\ord}{ord} \DeclareMathOperator{\rnpos}{\mathbb{R}^n_{> 0}} \DeclareMathOperator{\rnzero}{\mathbb{R}^n_{\geq 0}} \DeclareMathOperator{\rr}{\mathbb{R}} \DeclareMathOperator{\scrB}{\mathcal{B}} \DeclareMathOperator{\scrI}{\mathcal{I}} \DeclareMathOperator{\scrJ}{\mathcal{J}} \DeclareMathOperator{\supp}{Supp} \DeclareMathOperator{\znplusonezero}{\mathbb{Z}^{n+1}_{\geq 0}} \DeclareMathOperator{\znplusonepos}{\mathbb{Z}^{n+1}_{> 0}} \DeclareMathOperator{\znpos}{\mathbb{Z}^n_{> 0}} \DeclareMathOperator{\znzero}{\mathbb{Z}^n_{\geq 0}} \DeclareMathOperator{\zz}{\mathbb{Z}}\)This is a continuation of an earlier post on polynomial division where we looked at division of polynomials via cancelling the leading term with respect to a monomial order. Here we look at “initial terms” of polynomials with respect to monomial orders, and show that, similar to the case of leading ideals, there can be only finitely many “initial ideals” for a given ideal of a polynomial ring. The problem with division via cancelling the initial term, as opposed to the cancellation of the leading term in (Step 1 of) the division algorithm, is that the initial exponent can indefinitely increase (whereas the termination of the original division algorithm is guaranteed by the fact that the leading exponent can not indefinitely decrease). This is the reason the initial-term-cancelling-division works for power series, as we have seen in the preceding post, but not for polynomials. There are (at least) two natural ways to proceed:

  1. To treat the polynomials as power series, and apply the power series division algorithm even when the divisors and the dividend are polynomials. In this case the quotient and the remainder may not be polynomials, but it can be shown that the initial ideal of a polynomial ideal \(I\) agrees with the initial ideal of the ideal of the ring of power series generated by \(I\), and from this it follows that every ideal can have at most finitely many initial ideals, and every initial ideal comes from a weight.
  2. To homogenize the original ideal with respect to a new variable, and consider initial-term-cancelling-division only by homogeneous polynomials. In this case for each homogeneous term, say of degree \(d\), of the dividend, the corresponding term of the remainder can not escape the finite dimensional vector space of homogeneous polynomials of degree \(d\), which ensures that the algorithm terminates in finitely many steps.

The first one will be pursued in a subsequent post. Here we take the second approach. The shortcoming of this approach is that, at least in the form described above, it applies only to homogeneous ideals. However, this division algorithm actually works whenever the divisors are homogeneous with respect to any grading on the polynomial ring which is “compatible” with monomials and has finite dimensional graded components. Moreover, the division can be adapted to any linear (but not necessarily well-) order on \(\znpos\) which is compatible with the addition on \(\znpos.\)

Shallow monomial gradings on the polynomial ring

A grading on a ring \(R\) by a group \(\Sigma\) (or in short, a \(\Sigma\)-grading) is a family \((R_\sigma)_{\sigma \in \Sigma}\) of additive subgroups of \(R\) indexed by \(\Sigma\) such that

  • \(R = \oplus_{\sigma \in \Sigma} R_\sigma,\) and
  • for each \(\sigma, \tau \in \Sigma,\) if \(f \in R_\sigma\) and \(g \in R_\tau,\) then \(fg \in R_{\sigma\tau}.\)

We will work with rings containing a field \(\kk\), and require our gradings to satisfy the additional property that

  • each \(R_\sigma\) is a vector space over \(\kk\), or equivalently, \(\kk \subseteq R_\epsilon,\) where \(\epsilon\) is the unit of \(\Sigma.\)

The elements of \(R_\sigma,\) \(\sigma \in \Sigma,\) are called homogeneous of degree \(\sigma,\) and given \(f \in R,\) its homogeneous components are homogeneous elements \(f_\sigma\) such that \[f = \sum_{\sigma \in \Sigma} f_\sigma\]We write \(\supp_\Sigma(f)\) for the set of all elements in \(\Sigma\) that “support” \(f,\) i.e. \[\supp_\Sigma(f) := \{\sigma \in \Sigma: f_\sigma \neq 0\}\]A homogeneous ideal of \(R\) is an ideal generated by homogeneous elements; equivalently, an ideal is homogeneous if and only if it contains all homogeneous components of each of its elements.

A grading on \(S := \kk[x_1, \ldots, x_n]\) is monomial compatible, or simply a monomial grading, if every monomial in \((x_1, \ldots, x_n)\) is homogeneous, and it is shallow if for all \(\sigma\) the dimension of \(S_\sigma\) (as a vector space over \(\kk\)) is finite. Given any \(n\)-tuple of weights \((w_1, \ldots, w_n)\in \rr^n,\) the corresponding weighted degree defines a monomial compatible \(\rr\)-grading on \(S\) which is shallow if every \(w_i\) is positive. More generally, any linear map \(\pi: \rr^n \to \rr^m\) induces a monomial compatible \(\rr^m\)-grading on \(S\) which is shallow if and only if the restriction of \(\pi\) to \(\zz^n\) is injective. In particular, every monomial order on \(S\) induces a shallow monomial compatible \(\zz^n\)-grading on \(S\) such that \(\dim_\kk(S_\sigma) = 1\) for each \(\sigma \in \zz^n.\)
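A quick sanity check (our own illustration) of the weighted case: for positive weights each graded piece is spanned by only finitely many monomials, e.g. with \(w = (1, 2)\):

```python
def monomials_of_weight(w, d):
    """All exponent pairs (a, b) with w[0]*a + w[1]*b == d,
    for positive integer weights w; this spans the graded piece S_d."""
    return [(a, b)
            for a in range(d // w[0] + 1)
            for b in range(d // w[1] + 1)
            if w[0] * a + w[1] * b == d]

# With w = (1, 2) the graded piece of degree 6 is spanned by
# y^3, x^2 y^2, x^4 y, x^6 -- a 4-dimensional vector space over k.
assert monomials_of_weight((1, 2), 6) == [(0, 3), (2, 2), (4, 1), (6, 0)]
```

With a weight equal to \(0\) (or of mixed signs in the kernel of the grading map) the corresponding fibers would contain infinitely many lattice points and shallowness would fail.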

Remark 1. If a grading on \(S := \kk[x_1, \ldots, x_n]\) by a group \(\Sigma\) is shallow, then the graded component of \(S\) corresponding to the unit in \(\Sigma\) must consist only of \(\kk.\)

Initial-term-cancelling division by homogeneous elements with respect to shallow monomial gradings

Let \(\preceq\) be a group order (i.e. an order which is compatible with addition) on \(\znpos\) which is a linear (but not necessarily well-) order. The notions of initial exponent/term/ideal with respect to \(\preceq\) still make sense for polynomials. The following observation, made in the previous post for monomial orders, also holds for linear orders with the same proof:

Lemma 1. If \(f_1, \ldots, f_N\) are polynomials with mutually distinct values of \(\nu_\preceq\), then the \(f_i\) are linearly independent over \(\kk\).

Fix an arbitrary shallow monomial compatible \(\Sigma\)-grading on \(S := \kk[x_1, \ldots, x_n].\) The initial-term-cancelling-division algorithm in \(S\) is the procedure for reducing a polynomial \(f\) by a finite ordered collection of homogeneous (with respect to \(\Sigma\)) polynomials \(g_1, \ldots, g_N\) as follows:

  1. Set \(h := f\)
  2. If there is \(i\) such that \(\In_\preceq(g_i)\) divides \(\In_\preceq(h)\), then pick the smallest such \(i\), and replace \(h\) by \(h – qg_i\), where \(q := \In_\preceq(h)/\In_\preceq(g_i)\).
  3. Otherwise \(\In_\preceq(h)\) is not divisible by \(\In_\preceq(g_i)\) for any \(i\); in this case replace \(h\) by \(h - \In_\preceq(h).\)
  4. Repeat steps 2-3 until \(h\) reduces to the zero polynomial.
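A small worked run of these steps (our own illustration): take \(n = 2,\) \(\preceq\) the lexicographic order (so that the initial term is the \(\preceq\)-smallest monomial term), the grading by total degree, and divide \(f = xy\) by the homogeneous divisor \(g_1 = x + y,\) for which \(\In_\preceq(g_1) = y:\)
\[h = xy \;\xrightarrow{\ \text{step 2},\ q = x\ }\; xy - x(x+y) = -x^2 \;\xrightarrow{\ \text{step 3}\ }\; 0,\]
which yields \(f = x g_1 + r\) with \(r = -x^2:\) no monomial of \(r\) is divisible by \(\In_\preceq(g_1) = y,\) and \(\In_\preceq(xg_1) = xy = \In_\preceq(f),\) as promised by Theorem 1 below.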

Note that the grading and the linear order are not related – they are chosen completely independently of one another. Nevertheless, the division algorithm terminates:

Theorem 1. The above algorithm (of initial-term-cancelling-division by polynomials which are homogeneous with respect to shallow monomial gradings) stops in finitely many steps, and after the final step one has: \[f = \sum_i q_ig_i + r\]where

  • either \(r = 0\) or no monomial term in \(r\) is divisible by \(\In_\preceq(g_i)\) for any \(i\), and
  • \(\In_\preceq(\sum_i q_ig_i) \in \langle \In_\preceq(g_i): i = 1, \ldots, N \rangle\); more precisely, there is a reordering \(i_1, \ldots, i_s\) of \(\{i: q_i \neq 0\}\) such that \[\In_\preceq(\sum_j q_jg_j) = \In_\preceq(q_{i_1}g_{i_1}) \prec \In_\preceq(q_{i_2}g_{i_2}) \prec \cdots \prec \In_\preceq(q_{i_s}g_{i_s})\]where all the \(\prec\) orders on the right hand side are strict.

If \(g_1, \ldots, g_N\) are elements of an ideal \(I\) of \(\kk[x_1, \ldots, x_n]\) such that \(\In_\preceq(g_1), \ldots, \In_\preceq(g_N)\) generate \(\In_\preceq(I)\), then the following are equivalent:

  1. \(f \in I\),
  2. \(r\) is the zero polynomial.

Proof. Let \(\sigma_1, \ldots, \sigma_s\) be the elements of \(\supp_\Sigma(f).\) Since the grading is monomial compatible, \(\In_\preceq(f) \in S_{\sigma_j}\) for some \(j,\) and if \(\In_\preceq(g_i)\) divides \(\In_\preceq(f),\) then since \(g_i\) is homogeneous with respect to \(\Sigma,\) \(qg_i\) is also in \(S_{\sigma_j},\) where \(q := \In_\preceq(f)/\In_\preceq(g_i),\) so that \(\supp_\Sigma(f - qg_i) \subseteq \supp_\Sigma(f).\) It follows that after every repetition of steps 2 and 3,

  • each nonzero homogeneous component of \(h\) has degree \(\sigma_k\) for some \(k = 1, \ldots, s,\) i.e. \[\supp_\Sigma(h) \subseteq \supp_\Sigma(f)\]
  • there is at least one \(j\) such that \(\nu_\preceq(h_{\sigma_j})\) increases from the previous iteration.

Since the grading is shallow, each \(S_{\sigma_j}\) is finite dimensional over \(\kk\), and therefore Lemma 1 implies that the algorithm stops in finitely many steps. The other assertions of Theorem 1 are immediate.

A totally ordered grading on \(S := \kk[x_1, \ldots, x_n]\) is a grading by a totally ordered group \(\Omega\) (which in particular means that the ordering is compatible with the group law, i.e. if \(a \preceq b\) then \(ac \preceq bc\) and \(ca \preceq cb\) for all \(c \in \Omega\)). If a grading is totally ordered, then every polynomial has a well defined initial/leading form which is simply the homogeneous component of respectively the lowest/highest degree. For an ideal of \(S,\) its initial/leading ideal is the ideal generated by respectively the initial/leading forms of its elements. Every total group order \(\preceq\) on \(\znpos\) induces a “refinement” of the grading by \(\Omega\) in the following way: define \(\alpha \preceq' \beta\) if and only if

  • either \(\ord_\Omega(x^\alpha)\) is strictly smaller than \(\ord_\Omega(x^\beta)\) with respect to the ordering on \(\Omega\) (where \(\ord_\Omega(f)\) is the smallest element (with respect to the ordering on \(\Omega\)) of \(\supp_\Omega(f)\)), or
  • \(\ord_\Omega(x^\alpha) = \ord_\Omega(x^\beta)\) and \(\alpha \preceq \beta.\)

It is straightforward to check that \(\preceq’\) is also a total group order on \(\znpos.\) It is a “weak refinement” of the grading by \(\Omega\) in the sense that if \(\ord_\Omega(x^\alpha)\) is less than \(\ord_\Omega(x^\beta)\) with respect to the ordering on \(\Omega\) then \(\alpha \prec’ \beta.\) Note that if the grading by \(\Omega\) is not monomial compatible, then there may exist polynomials \(f, g\) such that \(\ord_\Omega(f)\) is less than or equal to \(\ord_\Omega(g)\) but \(\In_{\preceq’}(f)\) is greater than \(\In_{\preceq’}(g)\) with respect to \(\preceq’.\)
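When \(\Omega = \zz\) and the grading is by a weighted degree, the refined order \(\preceq’\) amounts to comparing a pair of keys: the \(\Omega\)-degree first, then the base order. A minimal Python sketch (the weights \(2, 3\) and the lex base order are arbitrary choices for illustration, not from the text):

```python
# Monomials are exponent tuples; a total order is given by a key
# function: alpha precedes beta iff key(alpha) < key(beta).

def lex_key(alpha):
    """A base total group order on exponent vectors: lexicographic."""
    return tuple(alpha)

def make_refined_key(omega_deg, base_key):
    """The refinement described above: compare Omega-degrees of
    monomials first, and break ties with the base order."""
    def key(alpha):
        return (omega_deg(alpha), base_key(alpha))
    return key

# Hypothetical example: Omega = Z, grading by the weighted degree
# deg(x^a y^b) = 2a + 3b, refined by lex.
weighted_deg = lambda alpha: 2 * alpha[0] + 3 * alpha[1]
refined = make_refined_key(weighted_deg, lex_key)
```

Here \(x^3\) and \(y^2\) both have weighted degree \(6,\) and the tie is broken by the base order.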

Theorem 1*. Let \(I\) be an ideal of \(S\) which is homogeneous with respect to a shallow monomial grading on \(S\) by a group \(\Sigma.\) Given a monomial grading of \(S\) by a totally ordered group \(\Omega\) and a total group order \(\preceq\) on \(\znpos,\) let \(\preceq’\) be the above refinement of the \(\Omega\)-grading by \(\preceq.\) If \(g_1, \ldots, g_N\) are \(\Sigma\)-homogeneous elements of \(I\) such that \(\In_{\preceq’}(g_1), \ldots, \In_{\preceq’}(g_N)\) generate \(\In_{\preceq’}(I),\) then \(\In_{\Omega}(g_1), \ldots, \In_{\Omega}(g_N)\) generate \(\In_{\Omega}(I).\)

Proof. Pick \(f \in I.\) Since \(\In_{\preceq’}(g_1), \ldots, \In_{\preceq’}(g_N)\) generate \(\In_{\preceq’}(I),\) there is \(i_1\) such that \(\In_{\preceq’}(g_{i_1})\) divides \(\In_{\preceq’}(f),\) i.e. there is \(a_{i_1} \in \kk\) and \(\alpha_{i_1} \in \znzero\) such that \[\In_{\preceq’}(f) = a_{i_1}x^{\alpha_{i_1}}\In_{\preceq’}(g_{i_1})\]
Let
\[h_1 := a_{i_1}x^{\alpha_{i_1}}g_{i_1},\quad f_1 := f - h_1\]
Since the \(\Omega\)-grading is monomial compatible, it follows that \[\ord_\Omega(h_1) = \ord_\Omega(x^{\alpha_{i_1}}) + \ord_\Omega(g_{i_1}) = \ord_\Omega(x^{\alpha_{i_1}}) + \ord_\Omega(\In_{\preceq’}(g_{i_1})) = \ord_\Omega(\In_{\preceq’}(f)) = \ord_\Omega(f)\]
Note that

  • The monomial compatibility of the \(\Omega\)-grading was used in the second and the last of the above equalities.
  • We used “\(+\)” to denote the group operation on \(\Omega,\) since \(S\) is commutative and therefore every grading on \(S\) can also be induced by a grading by an abelian group.

It follows from the above display that
\[\ord_\Omega(f) \preceq_\Omega \ord_\Omega(f_1)\]
where “\(\preceq_\Omega\)” is the ordering on \(\Omega.\) Now, since the \(\Sigma\)-grading is monomial compatible, and since \(g_{i_1}\) is \(\Sigma\)-homogeneous, it follows that
\[\supp_\Sigma(f_1) \subseteq \supp_\Sigma(f)\]
The shallowness of the \(\Sigma\)-grading then implies that if we keep repeating this process to construct \(f_2\) from \(f_1\) and \(f_3\) from \(f_2\) and so on, then all the \(f_k\) lie in the finite dimensional \(\kk\)-vector space \(\bigoplus_{\sigma \in \supp_\Sigma(f)} S_\sigma,\) so that from a step onward \(f_0 := f, f_1, f_2, \ldots,\) would be linearly dependent over \(\kk.\) On the other hand, since
\[\nu_{\preceq’}(f_0) \prec’ \nu_{\preceq’}(f_1) \prec’ \cdots,\]
Lemma 1 implies that as long as \(f_k\) is nonzero, it would be linearly independent of \(f_0, \ldots, f_{k-1}.\) Consequently, the process must stop, and there is \(k\) such that \(f_k = 0.\) In particular, since the \(\ord_\Omega(f_k)\) are nondecreasing, there is \(m \geq 0\) such that \[\ord_\Omega(f_0) = \cdots = \ord_\Omega(f_m) \prec_\Omega \ord_\Omega(f_{m+1})\]where \(\prec_\Omega\) denotes “strictly smaller” with respect to \(\preceq_\Omega\) and we set \(\ord_\Omega(0) := \infty.\) This means \[\ord_\Omega(f) = \ord_\Omega(h_1) = \cdots = \ord_\Omega(h_{m+1})\]and\[\ord_\Omega(f - h_1 - \cdots - h_{m+1}) \succ_\Omega \ord_\Omega(f)\]It follows that \[\In_\Omega(f) = \In_\Omega(h_1 + \cdots + h_{m+1}) = \In_\Omega(h_1) + \cdots + \In_\Omega(h_{m+1})\] Since \(\In_\Omega(h_1) + \cdots + \In_\Omega(h_{m+1})\) is in the ideal generated by \(\In_\Omega(g_{i_1}), \ldots, \In_\Omega(g_{i_{m+1}})\), we are done!

We also have analogues of the corollaries to Theorem 1 from the preceding post. Corollary 1 below fails if we allow \(J\) to be non-homogeneous, as evidenced by taking \(n = 1,\) \(I := \langle x \rangle\) and \(J := \langle x + x^2 \rangle\): then \(\In_\preceq(I) = \In_\preceq(J) = \langle x \rangle\) although \(J \subsetneqq I\). I suspect Corollary 2 remains true for non-homogeneous ideals, but need to think harder.

Corollary 1. Let \(I \supseteq J\) be ideals of \(S := \kk[x_1, \ldots, x_n]\) such that \(J\) is homogeneous with respect to a shallow monomial grading on \(S.\) If \(I \supsetneqq J\), then \(\In_\preceq(I) \supsetneqq \In_\preceq(J)\) for every linear group order \(\preceq\) on \(\znpos.\)

Proof. Assume \(\In_\preceq(I) = \In_\preceq(J)\). Pick homogeneous \(g_1, \ldots, g_N \in J\) such that \(\In_\preceq(g_i)\), \(i = 1, \ldots, N\), generate \(\In_\preceq(J) = \In_\preceq(I)\). Then Theorem 1 implies that \(g_1, \ldots, g_N\) generate \(I\), so that \(I = J\), proving the contrapositive.

Corollary 2. Let \(I\) be an ideal of \(S := \kk[x_1, \ldots, x_n]\) which is homogeneous with respect to a shallow monomial grading on \(S.\) There can not exist linear group orders \(\preceq_1, \preceq_2\) on \(\znpos\) such that \(\In_{\preceq_1}(I) \subsetneqq \In_{\preceq_2}(I)\).

Proof. Write \(J_k := \In_{\preceq_k}(I)\), \(k = 1, 2\). Assume to the contrary that \(J_1 \subsetneqq J_2\). Then we can choose homogeneous \(g_1, \ldots, g_N \in I\) such that \(\In_{\preceq_2}(g_i)\), \(i = 1, \ldots, N\), generate \(J_1\). Since \(J_1 \subsetneqq J_2,\) we can choose \(f \in I\) such that \(\In_{\preceq_2}(f) \not\in J_1.\) Let \(r\) be the remainder of the initial-term-cancelling-division of \(f\) by \(g_1, \ldots, g_N\) with respect to \(\preceq_2\). Then \(r\) is an element of \(I\) such that \(\supp(r) \cap \supp(J_1) = \emptyset\); moreover \(r\) is nonzero, since otherwise \(\In_{\preceq_2}(f)\) would belong to \(\langle \In_{\preceq_2}(g_i): i = 1, \ldots, N \rangle = J_1.\) On the other hand, if \(h_1, \ldots, h_M\) are homogeneous elements in \(I\) such that \(\In_{\preceq_1}(h_1), \ldots, \In_{\preceq_1}(h_M)\) generate \(J_1 = \In_{\preceq_1}(I),\) then no monomial of \(r\) is divisible by any \(\In_{\preceq_1}(h_j),\) so the remainder of the initial-term-cancelling-division of \(r\) by \(h_1, \ldots, h_M\) with respect to \(\preceq_1\) is \(r\) itself; since \(r \neq 0,\) Theorem 1 implies that \(r \not\in I,\) which is a contradiction. This completes the proof.

The notion of reduced initial basis described in the preceding post on division of power series also applies to an arbitrary linear group order on \(\znpos\): a finite ordered collection of elements \(g_1, \ldots, g_N\) from an ideal \(I\) of \(S\) is called a reduced initial basis with respect to a linear group order \(\preceq\) if the following hold:

  1. the initial terms of the \(g_i\) generate \(\In_\preceq(I),\)
  2. no monomial term of any \(g_i\) is divisible by the initial term of any \(g_j\) for \(j \neq i\), and
  3. the initial coefficient, i.e. the coefficient of the initial term, of each \(g_i\) is \(1.\)

If \(I\) is homogeneous with respect to a shallow monomial \(\Sigma\)-grading on \(S\), then a reduced initial basis for \(I\) can be produced by taking a minimal set of \(\Sigma\)-homogeneous elements of \(I\) whose initial terms generate \(\In_\preceq(I),\) then replacing each element by the remainder of the initial-term-cancelling-division by the other elements, and finally scaling each element so that its initial coefficient is \(1.\)
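Conditions 2 and 3 of a reduced initial basis are easy to test mechanically. A small Python sketch (polynomials encoded as dictionaries from exponent tuples to coefficients; the encoding is my own convention, not from the text):

```python
def divides(a, b):
    """Whether x^a divides x^b, i.e. a <= b componentwise."""
    return all(ai <= bi for ai, bi in zip(a, b))

def initial_exponent(f, key):
    """nu(f): the key-minimal exponent in the support of f."""
    return min(f, key=key)

def is_reduced_initial_basis(gs, key):
    """Check conditions 2 and 3 above for polynomials gs, given as
    dicts mapping exponent tuples to coefficients.  (Condition 1,
    that the initial terms generate In(I), depends on the ambient
    ideal I and is not checked here.)"""
    nus = [initial_exponent(g, key) for g in gs]
    for i, g in enumerate(gs):
        if g[nus[i]] != 1:                      # condition 3
            return False
        for j, nu_j in enumerate(nus):
            if j != i and any(divides(nu_j, alpha) for alpha in g):
                return False                    # condition 2
    return True
```

For instance, with the graded lexicographic key, the pair \(g_1 = x + 3y^2,\) \(g_2 = y\) fails condition 2, since the initial term \(y\) of \(g_2\) divides the monomial \(y^2\) of \(g_1.\)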

Corollary 3. Let \(I\) be an ideal of \(S := \kk[x_1, \ldots, x_n]\) which is homogeneous with respect to a shallow monomial \(\Sigma\)-grading on \(S\), and \(\preceq, \preceq’\) be linear group orders on \(\znpos\) such that
\[\In_\preceq(I) = \In_{\preceq’}(I)\]
Then any reduced \(\Sigma\)-homogeneous initial basis of \(I\) with respect to one of \(\preceq, \preceq’\) is also a reduced initial basis with respect to the other one. Moreover, the number of elements in a reduced \(\Sigma\)-homogeneous initial basis of \(I\) is uniquely determined by \(I.\)

Proof. Indeed, pick a reduced initial basis \(g_1, \ldots, g_N\) of \(I\) with respect to \(\preceq\). Let \(\alpha_i := \nu_{\preceq}(g_i),\) \(i = 1, \ldots, N.\) Fix \(i.\) Since \(\In_{\preceq’}(I)\) is also generated by \(x^{\alpha_1}, \ldots, x^{\alpha_N}\), it follows that \(\In_{\preceq’}(g_i)\) is divisible by some \(x^{\alpha_j}.\) Since no monomial in the support of \(g_i\) is divisible by \(x^{\alpha_j}\) for \(j \neq i,\) it must be divisible by \(x^{\alpha_i}.\) Since \(\alpha_i\) is also in the support of \(g_i,\) and \(g_i\) is \(\Sigma\)-homogeneous and the grading by \(\Sigma\) is shallow, Remark 1 above implies that the only monomial in \(g_i\) divisible by \(x^{\alpha_i}\) is \(x^{\alpha_i}\) itself, so that \(\nu_{\preceq’}(g_i) = \alpha_i.\) It follows that \(g_i,\) \(i = 1, \ldots, N,\) form a reduced initial basis of \(I\) with respect to \(\preceq’\) as well. The last assertion follows via the same arguments as in the beginning of the proof of Corollary 3 of the earlier post on polynomial division.

Universal Initial Bases

Theorem 2. Let \(I\) be an ideal of \(S := \kk[x_1, \ldots, x_n]\) which is homogeneous with respect to a shallow monomial grading on \(S\) by a group \(\Sigma.\) There are finitely many distinct initial ideals of \(I\) with respect to linear group orders. In particular, there is a finite subset of \(\Sigma\)-homogeneous elements of \(I\) such that for every linear group order on \(\znpos,\) the initial terms of polynomials in that subset generate the initial ideal of \(I.\)

Proof. Regarding the first assertion, Logar’s argument for the leading ideal case from the proof of Theorem 2 in the previous post on polynomial division applies almost immediately to the present situation – one only needs to choose homogeneous elements in every case and replace “\(\ld\)” with “\(\In\)”. The second assertion then follows from Corollary 3 to Theorem 1.

We now present two successively stronger versions of Theorem 2.

Theorem 2*. Theorem 2 continues to hold if “linear group orders on \(\znpos\)” are substituted by “totally ordered monomial gradings on \(S := \kk[x_1, \ldots, x_n].\)” In other words, if \(I\) is a homogeneous ideal of \(S\) with respect to a shallow monomial grading on \(S,\) then there are finitely many distinct initial ideals of \(I\) with respect to totally ordered monomial gradings on \(S.\) In particular, there is a finite subset of \(\Sigma\)-homogeneous elements of \(I\) such that for every totally ordered monomial grading on \(S,\) the initial forms of polynomials in that subset generate the initial ideal of \(I.\)

Proof. Theorem 2 implies that there is a finite subset of \(\Sigma\)-homogeneous elements of \(I\) such that for every linear group order on \(\znpos,\) the initial terms of polynomials in that subset generate the initial ideal of \(I.\) Now apply Theorem 1*.

Theorem 2**. Theorem 2* holds for all (i.e. not necessarily homogeneous) ideals of \(S := \kk[x_1, \ldots, x_n].\) In other words, if \(I\) is an ideal of \(S,\) then there are finitely many distinct initial ideals of \(I\) with respect to totally ordered monomial gradings on \(S.\) In particular, there is a finite subset of elements of \(I\) such that for every totally ordered monomial grading on \(S,\) the initial forms of polynomials in that subset generate the initial ideal of \(I.\)

Proof. Indeed, given an ideal \(I\) of \(S,\) take another variable \(x_0,\) and consider the homogenization (in the usual sense) \(I^h\) of \(I\) with respect to \(x_0, \ldots, x_n,\) i.e. \(I^h\) is the ideal of \(\kk[x_0, \ldots, x_n]\) generated by \[f^h := x_0^{\deg(f)}f(x_1/x_0, \ldots, x_n/x_0)\]for all \(f \in I\). Given a monomial grading on \(S\) by a totally ordered group \(\Omega\), consider the induced \(\Omega\)-grading on \(S’ := \kk[x_0, \ldots, x_n]\) defined by: \[S’_\omega := \{f \in S’: f|_{x_0 = 1} \in S_\omega\}\]Since the \(\Omega\)-grading on \(S\) is monomial compatible, so is the grading on \(S’.\) For each \(f \in I,\) it is straightforward to check that \[(\In_\Omega(f^h))|_{x_0 = 1} = \In_\Omega(f)\] On the other hand, if \(f\) is a homogeneous element of \(I^h,\) then the monomials in the support of \(f\) are in one-to-one correspondence with the monomials in the support of \(f|_{x_0 = 1}\) and \[(\In_\Omega(f))|_{x_0 = 1} = \In_\Omega(f|_{x_0 = 1})\]By Theorem 2* there are homogeneous \(f_1, \ldots, f_N \in I^h\) independent of \(\Omega\) such that \(\In_\Omega(I^h)\) is generated by \(\In_\Omega(f_i),\) \(i = 1, \ldots, N.\) It then follows from the above discussion that \(\In_\Omega(I)\) is generated by \(\In_\Omega(f_i|_{x_0 = 1}),\) \(i = 1, \ldots, N.\) This proves Theorem 2**.
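The homogenization and the substitution \(x_0 = 1\) used in the proof can be sketched concretely (a minimal Python illustration, with polynomials as dictionaries from exponent tuples to coefficients; the encoding is my own convention):

```python
def homogenize(f):
    """Usual homogenization: prepend an exponent for x_0 so that every
    monomial reaches the total degree of f."""
    d = max(sum(alpha) for alpha in f)
    return {(d - sum(alpha),) + alpha: c for alpha, c in f.items()}

def set_x0_to_1(F):
    """The substitution x_0 = 1: drop the first exponent."""
    out = {}
    for alpha, c in F.items():
        beta = alpha[1:]
        out[beta] = out.get(beta, 0) + c
    return out
```

For example, \(f = x^2 - 3y + 5\) homogenizes to \(x^2 - 3x_0y + 5x_0^2,\) and setting \(x_0 = 1\) recovers \(f.\)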

Every initial ideal comes from a weight

In this section we consider weighted orders as opposed to weighted degrees from the previous post. Every \(\omega \in \rr^n\) defines a weighted order, and for \(f \in \kk[x_1, \ldots, x_n]\) we write \(\In_\omega(f)\) for the corresponding initial term of \(f\). Given an ideal \(I\) of \(\kk[x_1, \ldots, x_n]\) we similarly write \(\In_\omega(I)\) for its initial ideal generated by \(\In_\omega(f)\) for all \(f \in I\).

Theorem 3. Let \(I\) be an ideal of \(S := \kk[x_1, \ldots, x_n]\) which is homogeneous with respect to a shallow monomial grading on \(S.\) For every linear group order \(\preceq\) on \(\znpos,\) there is \(\omega = (\omega_1, \ldots, \omega_n) \in \zz^n\) such that \(\In_\omega(I) = \In_\preceq(I)\) (in particular, this means that \(\In_\omega(I)\) is a monomial ideal). Moreover, one can ensure that

  1. \(\omega_i \neq 0\) for all \(i,\)
  2. \(\omega_i > 0\) if and only if \(e_i \succ 0\),
  3. \(\omega_i < 0\) if and only if \(e_i \prec 0,\)

where \(e_i\) is the standard \(i\)-th basis vector of \(\zz^n.\)

Proof. Fix \(g_1, \ldots, g_N \in I\) such that \(\In_\preceq(g_i)\), \(i = 1, \ldots, N\), generate \(\In_\preceq(I)\). Write \[g_i = c_{i0} x^{\alpha_{i0}} + \cdots + c_{iN_i}x^{\alpha_{iN_i}}\]with \(\alpha_{i0} = \nu_\preceq(g_i)\). Let \(C\) be the set of all \(\omega \in \rr^n\) such that

  1. \(\langle \omega, \alpha_{i0} - \alpha_{ij} \rangle \leq 0,\) \( i = 1, \ldots, N,\) \(j = 1, \ldots, N_i,\)
  2. \(\omega_i \geq 0\) for all \(i\) such that \(e_i \succ 0\),
  3. \(\omega_i \leq 0\) for all \(i\) such that \(e_i \prec 0.\)

Note that \(C\) is a convex polyhedral cone. We claim that \(\dim(C) = n\). Indeed, \(C\) is dual to the cone \(C’\) generated by \(\alpha_{ij} - \alpha_{i0},\ i = 1, \ldots, N,\ j = 1, \ldots, N_i\) and \(\epsilon_i e_i\), \(i = 1, \ldots, n,\) where \(\epsilon_i = \pm 1\) depending on whether \(e_i \succ 0\) or \(e_i \prec 0.\) (Notice the switch from \(\alpha_{i0} - \alpha_{ij}\) from the first condition defining \(C\) to \(\alpha_{ij} - \alpha_{i0}\) in the definition of \(C’.\)) If \(\dim(C) < n\), it would follow that \(C’\) is not strongly convex. Since \(C’\) is also rational, there would be nonnegative integers \(\lambda_{ij}\), not all zero, such that \[\sum_{i,j}\lambda_{ij}(\alpha_{ij} - \alpha_{i0}) = -\sum_k r_k\epsilon_ke_k\]for some nonnegative integers \(r_1, \ldots, r_n\). Since \(\preceq\) is compatible with addition on \(\znpos,\) it would follow that \[\sum_{i,j}\lambda_{ij} \alpha_{ij} \preceq \sum_{i,j}\lambda_{ij} \alpha_{i0}\]On the other hand, by construction \(\alpha_{ij} \succ \alpha_{i0}\) for each \(i,j\), so that \[\sum_{i,j}\lambda_{ij} \alpha_{ij} \succ \sum_{i,j}\lambda_{ij} \alpha_{i0}\]This contradiction shows that \(\dim(C) = n\), as claimed. In particular this implies (see e.g. [Mon21, Proposition V.15]) that the topological interior \(C^0\) of \(C\) has a nonempty intersection with \(\zz^n\), and if \(\omega \in C^0 \cap \zz^n\), then \(\In_\omega(g_i) = c_{i0}x^{\alpha_{i0}}\). Since \(\In_\preceq(I)\) is generated by \(c_{i0}x^{\alpha_{i0}}\), \(i = 1, \ldots, N,\) it follows that \(\In_\omega(I) \supseteq \In_\preceq(I)\).
If this containment were proper, then since \(\In_\preceq(I)\) is a monomial (and therefore, homogeneous in the usual sense) ideal, Corollary 1 to Theorem 1 would imply that \[\In_\preceq(\In_\omega(I)) \supsetneqq \In_\preceq(\In_\preceq(I)) = \In_\preceq(I)\]On the other hand, it is straightforward to check that \[\In_\preceq(\In_\omega(I)) = \In_{\preceq_\omega}(I)\] where \(\preceq_\omega\) is the binary relation on \(\znzero\) defined as follows: \(\alpha \preceq_\omega \beta\) if and only if either \(\langle \omega, \alpha \rangle < \langle \omega, \beta \rangle\), or \(\langle \omega, \alpha \rangle = \langle \omega, \beta \rangle\) and \(\alpha \preceq \beta\). It is straightforward to check that \(\preceq_\omega\) is a linear group order on \(\znpos,\) and consequently, the two preceding displays would contradict Corollary 2 to Theorem 1. It follows that \(\In_\omega(I) = \In_\preceq(I),\) as required.

Remark. I think there are stronger versions of Theorem 3 analogous to the stronger versions of Theorem 2 above. In particular, I think the following is true, although I am not able to prove it.

Theorem 3** (I don’t know whether it is true). Theorem 3 continues to hold even if \(I\) is not homogeneous and “linear group orders on \(\znpos\)” are substituted by “totally ordered monomial gradings on \(S := \kk[x_1, \ldots, x_n].\)” In other words, if \(I\) is any ideal of \(S,\) then for every monomial grading on \(S\) by a totally ordered group \(\Omega,\) there is \(\omega = (\omega_1, \ldots, \omega_n) \in \zz^n\) such that \(\In_\omega(I) = \In_\Omega(I).\) Moreover, one can ensure that

  1. \(\omega_i = 0\) if and only if \(\deg_\Omega(x_i) = 0_\Omega,\) where \(0_\Omega\) is the identity element of \(\Omega,\)
  2. \(\omega_i > 0\) if and only if \(\deg_\Omega(x_i) \succ_\Omega 0_\Omega\),
  3. \(\omega_i < 0\) if and only if \(\deg_\Omega(x_i) \prec_\Omega 0_\Omega,\)


Division with power series

\(\DeclareMathOperator{\lcm}{lcm} \DeclareMathOperator{\In}{In} \DeclareMathOperator{\ld}{Ld} \DeclareMathOperator{\kk}{\mathbb{K}} \DeclareMathOperator{\nd}{ND} \DeclareMathOperator{\rnpos}{\mathbb{R}^n_{> 0}} \DeclareMathOperator{\rnzero}{\mathbb{R}^n_{\geq 0}} \DeclareMathOperator{\rr}{\mathbb{R}} \DeclareMathOperator{\scrB}{\mathcal{B}} \DeclareMathOperator{\scrI}{\mathcal{I}} \DeclareMathOperator{\scrJ}{\mathcal{J}} \DeclareMathOperator{\supp}{Supp} \DeclareMathOperator{\znplusonezero}{\mathbb{Z}^{n+1}_{\geq 0}} \DeclareMathOperator{\znplusonepos}{\mathbb{Z}^{n+1}_{> 0}} \DeclareMathOperator{\znpos}{\mathbb{Z}^n_{> 0}} \DeclareMathOperator{\znzero}{\mathbb{Z}^n_{\geq 0}} \DeclareMathOperator{\zz}{\mathbb{Z}}\)In this post we look at division in the ring \(\hat R := \kk[[x_1, \ldots, x_n]]\) of formal power series over a field \(\kk\). In contrast to division in polynomial rings discussed in the preceding post, while trying to divide by power series, one is forced to consider initial terms with respect to monomial orders.

The initial exponent \(\nu_\preceq(f)\) of \(f \in \hat R\) with respect to a monomial order \(\preceq\) is the minimal element of \(\supp(f)\) with respect to \(\preceq\) (note: the minimal element is guaranteed to exist due to the well ordering property of a monomial order). The initial term \(\In_\preceq(f)\) of \(f\) is the monomial term \(cx^\alpha\) where \(\alpha = \nu_\preceq(f)\) and \(c\) is the coefficient of \(x^\alpha\) in \(f\). Given an ideal \(I\) of \(\hat R\), its initial ideal \(\In_\preceq(I)\) is the ideal generated by \(\{\In_\preceq(f): f \in I\}\).

Lemma 1. If \(f_1, \ldots, f_N \in \hat R\) have mutually distinct values of \(\nu_\preceq\), then the \(f_i\) are linearly independent over \(\kk\).

Proof. Indeed, pick the unique \(i\) such that \(\nu_\preceq(f_i) = \min_\preceq\{\nu_\preceq(f_j): j = 1, \ldots, N\}\). Then \(\nu_\preceq(f_i)\) does not appear in the support of \(f_j\) for any \(j \neq i\), so it can not be cancelled by any \(\kk\)-linear combination of the \((f_j)_{j \neq i}\).

The division algorithm in \(\hat R\) is completely analogous to that for polynomials, the only twist being that the process may not stop in finitely many steps; in that case the quotients and the remainder are defined as the (well-defined) limits of the quotients and remainders computed in the finite steps. The division of \(f \in \hat R\) by an ordered collection \(g_1, \ldots, g_N\) of elements in \(\hat R\) goes as follows:

  1. Initially set \(f_0 := f\), and define \(r_0, q_{0,1}, \ldots, q_{0,N}\) to be zero. (\(r_i\) and the \(q_{i,j}\) would be the finite step approximations of respectively the remainder and the quotients, and they will be updated in every step.)
  2. Assuming \(f_i, r_i, q_{i,1}, \ldots, q_{i,N}\) have been computed, proceed to the next step as follows:
    1. If there is \(j\) such that \(\In_\preceq(g_j)\) divides \(\In_\preceq(f_i)\), then pick the smallest such \(j\), compute \(q’_{i,j} := \In_\preceq(f_i)/\In_\preceq(g_j)\), and set\[f_{i+1} := f_i - q’_{i,j}g_j,\]\[q_{i+1,j} := q_{i,j} + q’_{i,j},\]\[q_{i+1,k} := q_{i,k}\ \text{if}\ k \neq j,\] \[r_{i+1} := r_i\]
    2. Otherwise \(\In_\preceq(f_i)\) is not divisible by \(\In_\preceq(g_j)\) for any \(j\); in this case define\[f_{i+1} := f_i - \In_\preceq(f_i),\]\[q_{i+1,j} := q_{i,j},\ j = 1, \ldots, N,\]\[r_{i+1} := r_i + \In_\preceq(f_i)\]
  3. Set \(r := \lim_{i \to \infty}r_i\) and \(q_j := \lim_{i \to \infty}q_{i,j}\), \(j = 1, \ldots, N.\)

Theorem 1. \(r\) and the \(q_j\) are well defined elements in \(\hat R\) such that \[f = \sum_j q_jg_j + r\]where

  • either \(r = 0\) or no monomial term in \(r\) is divisible by \(\In_\preceq(g_j)\) for any \(j\);
  • \(\In_\preceq(\sum_j q_jg_j) \in \langle \In_\preceq(g_j): j = 1, \ldots, N \rangle\); more precisely, there is a reordering \(i_1, \ldots, i_s\) of \(\{i: q_i \neq 0\}\) such that \[\In_\preceq(\sum_j q_jg_j) = \In_\preceq(q_{i_1}g_{i_1}) \prec \In_\preceq(q_{i_2}g_{i_2}) \prec \cdots \prec \In_\preceq(q_{i_s}g_{i_s})\]where all the \(\prec\) orders on the right hand side are strict.

In addition, if \(g_1, \ldots, g_N\) are elements of an ideal \(I\) of \(\hat R\) such that \(\In_\preceq(g_1), \ldots, \In_\preceq(g_N)\) generate \(\In_\preceq(I)\), then the following are equivalent:

  1. \(f \in I\),
  2. \(r = 0\).

In particular, \(g_1, \ldots, g_N\) generate \(I\) as well.

Proof. By construction at every step \(i\), for each of \(r_i, q_{i,1}, \ldots, q_{i,N}\), either it remains unchanged from the previous step or its initial exponent increases from the previous step. Lemma 1 then implies that for each degree \(d\), if \(i\) is sufficiently large, then all terms (if any) added to \(r_i\) or \(q_{i,j}\) have degree greater than \(d\). This proves that \(r\) and the \(q_j\) are well defined power series, and similarly, it follows that \(\lim_{i \to \infty} f_i = 0.\) All the remaining assertions of Theorem 1 are then immediate.

All the results of polynomial division discussed in the previous post have corresponding analogues for power series:

Corollary 1. Let \(I \supseteq J\) be ideals of \(\hat R\). If \(I \supsetneqq J\), then \(\In_\preceq(I) \supsetneqq \In_\preceq(J)\) for every monomial order \(\preceq\).

Corollary 2. Let \(I\) be an ideal of \(\hat R\). There can not exist monomial orders \(\preceq_1, \preceq_2\) such that \(\In_{\preceq_1}(I) \subsetneqq \In_{\preceq_2}(I)\).

The proofs of Corollaries 1 and 2 are identical to that from the polynomial case, one only needs to replace “leading” (terms, ideals, etc) by the corresponding “initial” versions. We also have the analogue for a “reduced” Gröbner basis: a finite ordered collection of elements \(g_1, \ldots, g_N\) from an ideal \(I\) is called a reduced initial basis with respect to a monomial order \(\preceq\) if the following hold:

  1. the initial terms of the \(g_i\) generate \(\In_\preceq(I),\)
  2. no monomial term of any \(g_i\) is divisible by the initial term of any \(g_j\) for \(j \neq i\), and
  3. the initial coefficient, i.e. the coefficient of the initial term, of each \(g_i\) is \(1.\)

A reduced initial basis can be produced by taking a minimal set of elements of \(I\) whose initial terms generate \(\In_\preceq(I)\) and then replacing each element by the remainder of the division by the other elements. Corollary 3 of the polynomial division is not completely true for power series, since even an ideal \(I\) generated by a single power series \(\phi\) with initial coefficient \(1\) has infinitely many distinct reduced initial bases: \((1 + \psi)\phi\) is a reduced initial basis of \(I\) for any power series $\psi$ with zero constant term. However, some parts of the result still remain valid:

Corollary 3. For an ideal $I$ of $\hat R$, its initial ideal (with respect to a monomial order) uniquely determines the number of the elements and the initial terms of a reduced initial basis. Moreover, if $I$ has the same initial ideal for two monomial orders, then any reduced initial basis of $I$ with respect to one of the monomial orders is a reduced initial basis with respect to the other monomial order as well.

Proof. Indeed, let $g_1, \ldots, g_N$ be a reduced initial basis of $I$ with respect to a monomial order $\preceq$. If $h_1, \ldots, h_M$ is also a reduced initial basis of $I$ with respect to $\preceq$, then the same arguments as in the proof of Corollary 3 of the preceding post show that $M = N$, and after a reordering of the $h_i$ if necessary one has $\In_\preceq(h_i) = \In_\preceq(g_i)$, $i = 1, \ldots, N$. This proves the first assertion. For the second assertion, pick a monomial order $\preceq’$ such that $\In_{\preceq’}(I) = \In_\preceq(I).$ Let $\alpha_i := \nu_{\preceq}(g_i),$ $i = 1, \ldots, N.$ Fix $i.$ Since $\In_{\preceq’}(I)$ is generated by $x^{\alpha_1}, \ldots, x^{\alpha_N}$, it follows that $\In_{\preceq’}(g_i)$ is divisible by some $x^{\alpha_j}.$ Since no monomial in the support of $g_i$ is divisible by $x^{\alpha_j}$ for $j \neq i$, $\In_{\preceq’}(g_i)$ must be divisible by $x^{\alpha_i}.$ Since $\alpha_i$ is also in the support of $g_i,$ we must have that $\nu_{\preceq’}(g_i) = \alpha_i.$ It follows that $g_i,$ $i = 1, \ldots, N,$ form a reduced initial basis of $I$ with respect to $\preceq’$ as well.

The proof of the Universal Basis Theorem (i.e. Theorem 2 in the previous post) used the fact that the support of a polynomial is finite. To adapt this proof for power series we seek the help of Newton diagrams. Given a power series \(f\), it follows from basic convex geometry that the convex hull of \(\supp(f) + \rnzero\) is a convex rational polyhedron (see e.g. [Mon21, Corollary B.48 and Proposition V.29]). The Newton diagram \(\nd(f)\) of \(f\) is the union of compact faces of this polyhedron. We write \(f_{\nd}\) for the “part” of \(f\) supported at its Newton diagram; more precisely, if \(f = \sum_\alpha c_\alpha x^\alpha\), then \[f_{\nd} := \sum_{\alpha \in \nd(f)} c_\alpha x^\alpha\]Note in particular that \(\supp(f_{\nd})\) is finite.
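In two variables the Newton diagram can be computed from the support by one convex hull pass: the compact faces of \(\conv(\supp(f) + \rnzero)\) form the strictly decreasing part of the lower convex hull of the support. A hedged Python sketch for \(n = 2\) only (it returns just the vertices of \(\nd(f)\); support points lying in the interior of a compact face are omitted for brevity):

```python
def newton_polygon_vertices(support):
    """Vertices of the Newton diagram of a power series in TWO
    variables, given (a sufficiently large finite part of) its support
    as a list of exponent pairs."""
    pts = sorted(set(support))
    hull = []
    for p in pts:                       # Andrew's monotone chain, lower hull
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    # keep only the part where the second coordinate strictly decreases;
    # the remaining faces are unbounded in the polyhedron supp + R^2_{>=0}
    out = [hull[0]]
    for p in hull[1:]:
        if p[1] < out[-1][1]:
            out.append(p)
        else:
            break
    return out
```

For \(f = y^3 + xy + x^2y + x^2y^2 + x^4 + x^5,\) the sketch returns the vertices \((0,3), (1,1), (4,0),\) so \(\nd(f)\) consists of the two segments joining them.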

Lemma 2. For any monomial order \(\preceq\) and power series \(f\), \(\In_\preceq(f) = \In_\preceq(f_{\nd}).\)

Proof. Every \(\alpha \in \supp(f) \setminus \nd(f)\) is a convex rational combination \[\alpha = \sum_{i=1}^N r_i(\alpha_i + \beta_i)\]such that

  • \(\alpha_i \in \nd(f) \), \(\beta_i \in \rnzero\),
  • each \(\alpha_i\) is a vertex of \(\nd(f)\),
  • the coordinates of all \(\beta_i\) are rational, and
  • not all \(\beta_i\) are zero.

Then clearing out the denominators of all \(r_i\) and \(\beta_i\) leads to an identity of the form \[k\alpha = \alpha_{i_1} + \cdots + \alpha_{i_k} + \beta\]where \(k\) is a positive integer, \(\alpha_{i_j}\) are vertices of \(\nd(f)\), and \(\beta\) is a nonzero element of \(\znzero\). It is then clear from the defining properties of a monomial order that there is at least one \(j\) such that \(\alpha \succ \alpha_{i_j}\), concluding the proof.

Theorem 2 (Universal Basis Theorem). Every ideal \(I\) of \(\hat R\) has finitely many distinct initial ideals. In particular, there is a finite collection \(\scrB\) of elements in \(I\) such that for every monomial order \(\preceq\) on \(\znzero\), the initial ideal of \(I\) with respect to \(\preceq\) is generated by the initial terms of the elements in \(\scrB\).

Proof. Assume to the contrary that there is an infinite collection \(\scrJ_0\) of distinct initial ideals of \(I\). Pick an arbitrary nonzero element \(f_1 \in I\). Due to Lemma 2, out of the finitely many monomial terms of \(f_{1, \nd}\), there must be one, say \(m_1\), such that \(\scrJ_1 := \{J \in \scrJ_0: \langle m_1 \rangle \subsetneqq J\}\) is infinite. Pick \(J_1 \in \scrJ_1\) and a monomial order \(\preceq_1\) such that \(J_1 = \In_{\preceq_1}(I)\). Since \(m_1 \in J_1\), there is \(g_1 \in I\) such that \(\In_{\preceq_1}(g_1) = m_1\), and since \(\langle m_1 \rangle \neq J_1\), there is \(h_1 \in I\) such that \(\In_{\preceq_1}(h_1) \not\in \langle m_1 \rangle.\) Let \(f_2\) be the remainder of \(h_1\) modulo division by \(g_1\). Then \(f_2\) is nonzero, and no monomial term in \(f_2\) belongs to \(\langle m_1 \rangle\). Since \(f_{2, \nd}\) has finitely many monomial terms, due to Lemma 2 there must be a monomial term \(m_2\) of \(f_{2, \nd}\) such that \(\scrJ_2 := \{J \in \scrJ_1: \langle m_1, m_2 \rangle \subsetneqq J\}\) is infinite. Now choose \(J_2 \in \scrJ_2\) and a monomial order \(\preceq_2\) such that \(J_2 = \In_{\preceq_2}(I)\). As in the preceding case, there are \(g_{21}, g_{22}, h_2 \in I\) such that \(\In_{\preceq_2}(g_{21}) = m_1\), \(\In_{\preceq_2}(g_{22}) = m_2\), and \(\In_{\preceq_2}(h_2) \not\in \langle m_1, m_2 \rangle.\) Define \(f_3\) to be the remainder of \(h_2\) modulo division by \(g_{21}, g_{22}\). Then \(f_3\) is nonzero, no monomial term in \(f_3\) belongs to \(\langle m_1, m_2 \rangle,\) and we can proceed to define \(\scrJ_3\) as above. In this way we get an infinite strictly increasing chain of monomial ideals: \[ \langle m_1 \rangle \subsetneqq \langle m_1, m_2 \rangle \subsetneqq \langle m_1, m_2, m_3 \rangle \subsetneqq \cdots \] which violates the Noetherianity of \(\kk[[x_1, \ldots, x_n]]\) and completes the proof of the first assertion. The second assertion then follows from Corollary 3 to Theorem 1.

Weighted orders in division

Every \(\omega \in \rr^n\) defines a weighted order on \(\hat R\), and for \(f \in \hat R\) we write (by an abuse of notation) \(\omega(f)\) and \(\In_\omega(f)\) respectively for the corresponding order and initial term of \(f\). Note that

  • There can be \(\omega\) and \(f\) such that \(\omega(f) = -\infty\), in which case \(\In_\omega(f)\) is not defined!
  • If \(\omega \in \rnzero,\) then \(\omega(f) \geq 0\) and \(\In_\omega(f)\) is defined for all \(f \in \hat R.\)

Given an ideal \(I\) of \(\hat R\) we write \(\In_\omega(I)\) for its initial ideal generated by \(\In_\omega(f)\) for all \(f \in I\) such that \(\In_\omega(f)\) is defined.
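For a polynomial, or a truncation of a power series, both \(\omega(f)\) and the \(\omega\)-initial part are immediate to compute. A minimal Python sketch (polynomials as dictionaries from exponent tuples to coefficients; note that \(\In_\omega(f)\) may consist of several terms, which is why \(\In_\omega(I)\) need not be a monomial ideal in general):

```python
def weighted_initial(f, omega):
    """omega(f) and In_omega(f) for a finitely supported f: the minimal
    <omega, .>-value on the support, and the sum of the terms of f
    attaining that value."""
    w = lambda alpha: sum(wi * ai for wi, ai in zip(omega, alpha))
    order = min(w(alpha) for alpha in f)
    initial = {alpha: c for alpha, c in f.items() if w(alpha) == order}
    return order, initial
```

For example, for \(f = x^2 + xy + y^3\) and \(\omega = (1,1),\) one gets \(\omega(f) = 2\) and \(\In_\omega(f) = x^2 + xy.\)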

Theorem 3. Let \(\preceq\) be a monomial order and \(g_1, \ldots, g_N \in \hat R.\) Then there is \(\omega \in \znpos\) such that \[\In_\omega(g_i) = \In_\preceq(g_i),\ i = 1, \ldots, N.\]If in addition the \(g_i\) are elements of an ideal \(I\) such that \(\In_\preceq(g_i),\) \(i = 1, \ldots, N,\) generate \(\In_\preceq(I),\) then\[\In_\omega(I) = \langle \In_\omega(g_1), \ldots, \In_\omega(g_N)\rangle = \langle \In_\preceq(g_1), \ldots, \In_\preceq(g_N)\rangle = \In_\preceq(I)\]In particular \(\In_\omega(I)\) is a monomial ideal. Moreover, every \(f \in I\) has an expression of the form \(f = \sum_i f_ig_i\) such that

  1. \(\nu_\preceq(f) = \min_\preceq\{\nu_\preceq(f_ig_i)\},\) and
  2. \(\omega(f) = \min\{\omega(f_ig_i)\}.\)

Proof. Write \[g_{i,\nd} = c_{i0} x^{\alpha_{i0}} + \cdots + c_{iN_i}x^{\alpha_{iN_i}}\]with \(\alpha_{i0} = \nu_\preceq(g_i)\). Define \[C := \{\omega \in \rnzero: \langle \omega, \alpha_{i0} – \alpha_{ij} \rangle \leq 0,\ i = 1, \ldots, N,\ j = 1, \ldots, N_i\}\]Note that \(C\) is a convex polyhedral cone. We claim that \(\dim(C) = n\). Indeed, \(C\) is dual to the cone \(C’\) generated by \(\alpha_{ij} – \alpha_{i0},\ i = 1, \ldots, N,\ j = 1, \ldots, N_i\) (notice the switch from \(\alpha_{i0} – \alpha_{ij}\) in the formula for \(C\) to \(\alpha_{ij} – \alpha_{i0}\) in the formula for \(C’\)) and the standard basis vectors \(e_1, \ldots, e_n\) of \(\rr^n\). If \(\dim(C) < n\), it would follow that \(C’\) is not strongly convex. Since \(C’\) is also rational, there would be nonnegative integers \(\lambda_{ij}\), not all zero, such that \[\sum_{i,j}\lambda_{ij}(\alpha_{ij} – \alpha_{i0}) = -\sum_k r_ke_k\]for some nonnegative integers \(r_1, \ldots, r_n\). Due to the second and third properties of a monomial order, it would follow that \[\sum_{i,j}\lambda_{ij} \alpha_{ij} \preceq \sum_{i,j}\lambda_{ij} \alpha_{i0}\]On the other hand, by construction \(\alpha_{ij} \succ \alpha_{i0}\) for each \(i,j\), so that \[\sum_{i,j}\lambda_{ij} \alpha_{ij} \succ \sum_{i,j}\lambda_{ij} \alpha_{i0}\]This contradiction shows that \(\dim(C) = n\), as claimed. In particular this implies (see e.g. [Mon21, Proposition V.15]) that the topological interior \(C^0\) of \(C\) has a nonempty intersection with \(\znpos\), and if \(\omega \in C^0 \cap \znpos\), then \(\In_\omega(g_i) = c_{i0}x^{\alpha_{i0}}\). This proves the first assertion. The remaining part of the theorem follows immediately from Lemma 3 below.
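For small examples the weight \(\omega\) produced by the cone argument can be found by brute force instead of solving the defining inequalities of \(C\) exactly. A hedged Python sketch (it merely searches a box of positive integer vectors; the exponent data in the usage below are hypothetical):

```python
from itertools import product

def find_weight(exps, nu, bound=5):
    """Search a box of positive integer vectors for omega such that, for
    each i, the designated initial exponent nu[i] has strictly smaller
    <omega, .>-value than every other exponent in exps[i].  (The cone
    argument above guarantees such omega exists when the nu[i] come from
    a monomial order; this brute force is only for illustration.)"""
    n = len(nu[0])
    for omega in product(range(1, bound + 1), repeat=n):
        dot = lambda a: sum(wi * ai for wi, ai in zip(omega, a))
        if all(dot(a) > dot(a0)
               for exps_i, a0 in zip(exps, nu)
               for a in exps_i if a != a0):
            return omega
    return None
```

For instance, for \(g_1\) with exponents \(\{(1,0), (0,2)\}\) and initial exponent \((1,0),\) and \(g_2\) with exponents \(\{(0,1), (2,0)\}\) and initial exponent \((0,1),\) the search returns \(\omega = (1,1).\)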

Remark. Given \(\omega \in \znpos\) and elements \(g_1, \ldots, g_N\) of an ideal \(I\) such that \(\In_\omega(g_i) = \In_\preceq(g_i)\) for each \(i\) and \(\In_\preceq(g_i)\), \(i = 1, \ldots, N\), generate \(\In_\preceq(I)\), if one only wanted to prove that \(\In_\omega(I) = \In_\preceq(I),\) then one can give a much less involved argument than Lemma 3 following [Stu96, Chapter 1]. Indeed, since \(\In_\omega(g_i) = \In_\preceq(g_i)\) for each \(i\), it follows that \(\In_\omega(I) \supseteq \In_\preceq(I)\). If this containment were proper, then Corollary 1 to Theorem 1 would imply that \[\In_\preceq(\In_\omega(I)) \supsetneqq \In_\preceq(\In_\preceq(I)) = \In_\preceq(I)\]On the other hand, it is straightforward to check that \[\In_\preceq(\In_\omega(I)) = \In_{\preceq_\omega}(I)\] where \(\preceq_\omega\) is the binary relation on \(\znzero\) defined as follows: \(\alpha \preceq_\omega \beta\) if and only if either \(\langle \omega, \alpha \rangle < \langle \omega, \beta \rangle\), or \(\langle \omega, \alpha \rangle = \langle \omega, \beta \rangle\) and \(\alpha \preceq \beta\). Since \(\omega \in \znzero\), it follows that \(\preceq_\omega\) is a monomial order, so that the last two displays would contradict Corollary 2 to Theorem 1. It follows that \(\In_\omega(I) = \In_\preceq(I),\) as required.

Lemma 3. In the situation of Theorem 1 assume there is \(\omega\) such that \(\omega(\In_\preceq(g_j)) = \omega(g_j)\) (i.e. \(\omega(g_j)\) is finite and \(\In_\preceq(g_j)\) is a monomial with \(\omega\)-value equal to \(\omega(g_j)\)) for each \(j\). Then

  1. \(\omega(q_jg_j) \geq \omega(f)\) for each \(j\).
  2. \(\omega(r) \geq \omega(f)\).
  3. Assume in addition \(\omega \in \rnpos\), \(\In_\preceq(g_j),\) \(j = 1, \ldots, N,\) generate \(\In_\preceq(I)\), where \(I\) is the ideal generated by \(g_1, \ldots, g_N,\) and \(I\) contains \(f.\) Then \(\In_\omega(f)\) is in the ideal generated by \(\In_\omega(g_j),\) \(j = 1, \ldots, N.\) (Note that the non-negativity of the coordinates of \(\omega\) ensures that \(\In_\omega(f)\) is well defined.) Moreover, \(f\) has an expression of the form \(f = \sum_i f_ig_i\) such that
    1. \(\nu_\preceq(f) = \min_\preceq\{\nu_\preceq(f_ig_i)\},\) and
    2. \(\omega(f) = \min\{\omega(f_ig_i)\}.\)

Proof. It is straightforward to check that the first two assertions are true for the first step of the division algorithm, and \(\omega(f_1) \geq \omega(f)\). Now replace \(f\) by \(f_1\) and repeat to see that the first two assertions hold in general. It remains to prove the third assertion. Since \(f \in I,\) and the initial forms of the \(g_i\) generate the initial ideal of \(I\), by the division algorithm \(f\) has an expression of the form \[f = \sum_i f_ig_i\]such that \(\nu_\preceq(f) = \min_\preceq\{\nu_\preceq(f_ig_i)\},\) i.e. property 3.1 of Theorem 2 is satisfied. Assume property 3.2 is not satisfied. Then there must be a cancellation of the form \[\sum_j \In_\omega(f_{i_j})\In_\omega(g_{i_j}) = 0\] where the sum is over all \(j\) such that \(\omega(f_{i_j}) + \omega(g_{i_j})\) is the smallest. We claim that \[\nu_\preceq(f) \prec \nu_\preceq(f_{i_j}g_{i_j})\]for each \(j\) (where “\(\prec\)” denotes “strictly smaller with respect to \(\preceq\)”). Indeed, by Theorem 1, there is only one \(i\) such that \(f_ig_i\) has a monomial term with exponent \(\nu_\preceq(f)\), so that this term can not cancel with any other monomial term in the support of \(f_kg_k\) for \(k \neq i,\) proving the claim. Now write \[f = \sum_{i \neq i_j}f_ig_i + \sum_j (f_{i_j} - \In_\omega(f_{i_j}))g_{i_j} + f'\]where \(f' := \sum_j \In_\omega (f_{i_j})g_{i_j}.\) Since the initial terms of the \(g_{i_j}\) with respect to \(\omega\) and \(\preceq\) are “compatible”, and since \(\sum_j \In_\omega(f_{i_j})\In_\omega(g_{i_j}) = 0\), it follows that \[\nu_\preceq(f') \succ \nu_\preceq (f_{i_j} g_{i_j})\]for each \(j\). Note in addition that \[\nu_\preceq(f') \succ \nu_\preceq(f)\] due to the above claim. Since \(f'\) is in the ideal generated by the \(g_i\), we can write \(f' = \sum_i f'_ig_i,\) where the \(f'_i\) are the quotients produced by the division algorithm.
It follows that \[f = \sum_{i \neq i_j}f_ig_i + \sum_j (f_{i_j} - \In_\omega(f_{i_j}))g_{i_j} + \sum_i f'_ig_i\]This suggests a double induction to track the change in this expression for \(f\) in comparison to the original expression \(f = \sum_i f_ig_i\) as follows: given \(h = (h_1, \ldots, h_N)\) such that \(f = \sum_i h_ig_i\), define \[m(h) := \min_i\{\omega(h_ig_i)\}\]and\[\alpha(h) := \min_\preceq\{\nu_\preceq(\In_\omega(h_ig_i)): \omega(h_ig_i) = m(h)\}\]Our discussion above implies that if property 3.2 is not satisfied, then starting from an expression of \(f\) satisfying property 3.1, the value of the pair \((m, \alpha)\) can be increased (where the ordering on the pairs is first by \(m\) and then by \(\preceq\) on \(\alpha\)) while ensuring that property 3.1 remains true. Since each coordinate of \(\omega\) is positive, for each value of \(m\), the space of \(\omega\)-homogeneous elements of \(\omega\)-order (or degree) \(m\) is finite dimensional, and therefore due to Lemma 1, it is not possible for only \(\alpha\) to increase while keeping \(m\) unchanged. Consequently at some point we must have an expression \(f = \sum_i h_ig_i\) such that \(m(h) = \omega(f),\) as required to complete the proof.

(An adaptation of) Buchberger’s algorithm

Given two nonzero monomial terms \(ax^\alpha, bx^\beta\), we define their lowest common multiple as \[\lcm(ax^\alpha, bx^\beta) := x^{\max(\alpha, \beta)}\]where the maximum is taken componentwise. Note that the coefficient of the lowest common multiple is by definition \(1\), regardless of the coefficients of the original monomials. Buchberger's algorithm for computing Gröbner bases of polynomial ideals adapts easily to the computation of initial ideals in the ring of power series (note that this “algorithm” is not really finite, since the division algorithm is defined only as the limit of an infinite process. For truly finite algorithms one needs something like Mora's algorithm):

Input: a monomial order \(\preceq\) and a finite ordered collection \(g_1, \ldots, g_s\) of nonzero elements in \(\hat R\). Denote the ideal generated by \(g_1, \ldots, g_s\) in \(\hat R\) by \(I\).

Output: a finite ordered collection \(\scrB\) of nonzero elements in \(I\) whose initial terms generate \(\In_\preceq(I)\).

Procedure:

  1. Set \(\scrB_0 := (g_1, \ldots, g_s)\).
  2. Assuming \(\scrB_k = (h_1, \ldots, h_{s_k})\) has been computed, compute \(\scrB_{k+1}\) as follows: for each \(i, j\), \(1 \leq i < j \leq s_k\),
    • Define \[S(h_i, h_j) := \frac{\lcm(\In_\preceq(h_i), \In_\preceq(h_j))}{\In_\preceq(h_i)}h_i – \frac{\lcm(\In_\preceq(h_i), \In_\preceq(h_j))}{\In_\preceq(h_j)}h_j\]
    • Divide each \(S(h_i, h_j)\) by \(\scrB_k\). Define \(\scrB_{k+1}\) to be the union of \(\scrB_k\) and the set of all nonzero remainders of \(S(h_i, h_j)\) by \(\scrB_k\).
  3. Repeat until you get \(\scrB_{k+1} = \scrB_k\). Then \(\scrB := \scrB_k\) is the desired output.
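For polynomial ideals, where the division algorithm genuinely terminates, the procedure above can be sketched in a few lines. The following is an illustrative toy in Python, not the power series version discussed in this post: polynomials are dictionaries mapping exponent tuples to rational coefficients, graded lexicographic order plays the role of \(\preceq\) (so the “initial” term here is the largest one), and all standard optimizations (e.g. Buchberger's criteria for discarding pairs) are omitted.

```python
from fractions import Fraction
from itertools import combinations

# A polynomial is a dict {exponent tuple: nonzero Fraction};
# the monomial order is graded lex (total degree, then lex).
key = lambda e: (sum(e), e)

def lt(f):
    e = max(f, key=key)
    return e, f[e]

def times(c, e, g):
    """c * x^e * g"""
    return {tuple(a + b for a, b in zip(e, eg)): c * cg for eg, cg in g.items()}

def sub(f, g):
    """f - g, dropping zero coefficients."""
    out = dict(f)
    for e, c in g.items():
        out[e] = out.get(e, Fraction(0)) - c
        if out[e] == 0:
            del out[e]
    return out

def reduce_poly(f, gs):
    """Remainder of f upon division by the list gs."""
    f, r = dict(f), {}
    while f:
        ef, cf = lt(f)
        for g in gs:
            eg, cg = lt(g)
            if all(a >= b for a, b in zip(ef, eg)):   # lt(g) divides lt(f)
                f = sub(f, times(cf / cg, tuple(a - b for a, b in zip(ef, eg)), g))
                break
        else:                                         # no lt(g) divides lt(f)
            r[ef] = cf
            del f[ef]
    return r

def s_poly(g, h):
    """S(g, h) built from the lcm of the leading monomials."""
    eg, cg = lt(g)
    eh, ch = lt(h)
    m = tuple(max(a, b) for a, b in zip(eg, eh))      # lcm exponent
    return sub(times(1 / cg, tuple(a - b for a, b in zip(m, eg)), g),
               times(1 / ch, tuple(a - b for a, b in zip(m, eh)), h))

def buchberger(gs):
    basis = [dict(g) for g in gs]
    while True:
        # nonzero remainders of all S-polynomials, without duplicates
        new = [r for g, h in combinations(basis, 2)
               if (r := reduce_poly(s_poly(g, h), basis))]
        new = [r for i, r in enumerate(new) if r not in basis and r not in new[:i]]
        if not new:
            return basis
        basis += new

# Gröbner basis of <x^2 - 1, xy - 1>: the S-polynomial reductions
# contribute x - y and y^2 - 1.
one = Fraction(1)
B = buchberger([{(2, 0): one, (0, 0): -one}, {(1, 1): one, (0, 0): -one}])
print(sorted(lt(g)[0] for g in B))   # → [(0, 2), (1, 0), (1, 1), (2, 0)]
```

After the loop stops, every S-polynomial reduces to zero against the output, which is exactly the stopping condition \(\scrB_{k+1} = \scrB_k\) above.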

Theorem 4. The above algorithm stops in finitely many steps and the output is a finite collection of elements of \(I\) whose initial terms generate \(\In_\preceq(I)\).

Proof. Since we start with elements in \(I\), it is clear that all new elements added to the original list are also from \(I\). The process also stops in finitely many steps, since whenever a new element is added to the list, the ideal generated by their initial terms gets larger (due to Theorem 1), and the Noetherianity of \(\hat R\) precludes infinite strictly increasing chains of ideals. It remains to show that \((\In_\preceq(g): g \in \scrB)\) generate \(\In_\preceq(I)\). Enumerate the elements in \(\scrB\) by \(g_1, \ldots, g_N.\) We may assume without loss of generality that the coefficient of \(\In_\preceq(g_i)\) is \(1\), i.e. \(\In_\preceq(g_i)\) is of the form \(x^{\alpha_i}\). Pick \(f \in I.\) Since the \(g_i\) generate \(I\), there is an expression \(f = \sum_i f_ig_i \) with \(f_i \in \hat R.\) Due to Theorem 3 there is \(\omega \in \znpos\) such that \(\In_\omega(f) = \In_\preceq(f)\) and \(\In_\omega(g_i) = \In_\preceq(g_i)\) for each \(i\). Define the pair \((m, \alpha)\) as in the proof of Lemma 3, i.e. \[m(f_1, \ldots, f_N) := \min_i\{\omega(f_ig_i)\}\]and\[\alpha(f_1, \ldots, f_N) := \min_\preceq\{\nu_\preceq(\In_\omega(f_ig_i)): \omega(f_ig_i) = m(f_1, \ldots, f_N)\}\]If \(m(f_1, \ldots, f_N) = \omega(f)\), then it would follow that \(\In_\preceq(f) = \In_\omega(f)\) is a sum of the form \(\sum_j \In_\omega(f_{i_j})\In_\omega(g_{i_j}) = \sum_j \In_\omega(f_{i_j})\In_\preceq(g_{i_j})\) and therefore would be in the ideal generated by the \(\In_\preceq(g_i)\). 
So assume \[m(f_1, \ldots, f_N) < \omega(f)\]Then there must be a cancellation of the form \[\sum_j \In_\omega(f_{i_j})\In_\omega(g_{i_j}) = \sum_j \In_\omega(f_{i_j})\In_\preceq(g_{i_j}) = 0\] where the sum is over all \(j\) such that \(\omega(f_{i_j}) + \omega(g_{i_j}) = m(f_1, \ldots, f_N).\) It follows that \[\sum_k \In_\preceq(\In_\omega(f_{i_{j,k}})\In_\preceq (g_{i_{j,k}})) = 0\] where the sum is over all \(k\) such that \(\nu_\preceq(f_{i_{j,k}}g_{i_{j,k}}) = \alpha(f_1, \ldots, f_N) =: \alpha.\) This implies that \(\In_\preceq(\In_\omega(f_{i_{j,k}})) = b_{i_{j,k}}x^{\alpha - \alpha_{i_{j,k}}}\) with \(\sum_k b_{i_{j,k}} = 0.\) Write \[f = \sum_{i \neq i_j}f_ig_i + \sum_j (f_{i_j} - \In_\omega(f_{i_j}))g_{i_j} + f'_1 + f'_2\]where \[f'_1 := \sum_{i_j \neq i_{j,k}} \In_\omega(f_{i_j}) g_{i_j} + \sum_k (\In_\omega(f_{i_{j,k}}) - \In_\preceq(\In_\omega(f_{i_{j,k}})))g_{i_{j,k}}\]and \[f'_2 := \sum_k \In_\preceq(\In_\omega (f_{i_{j,k}})) g_{i_{j,k}}\]Since \(b_{i_{j,1}} = -\sum_{k > 1} b_{i_{j,k}},\) it follows that \[f'_2 = \sum_k b_{i_{j,k}}x^{\alpha - \alpha_{i_{j,k}}}g_{i_{j,k}} = \sum_{k> 1} b_{i_{j,k}} (x^{\alpha - \alpha_{i_{j,k}}}g_{i_{j,k}} - x^{\alpha - \alpha_{i_{j,1}}}g_{i_{j,1}}) = \sum_{k > 1} b_{i_{j,k}}x^{\alpha - \beta_{i_{j,k}}}S(g_{i_{j,k}}, g_{i_{j,1}})\]where \(x^{\beta_{i_{j,k}}}:= \lcm(x^{\alpha_{i_{j,k}}}, x^{\alpha_{i_{j,1}}}).\) Due to Buchberger's algorithm and Theorem 1, whenever \(S(g_{i_{j,k}}, g_{i_{j,1}})\) is nonzero, it can be expressed as \(\sum_i h_ig_i\) such that \[\min_i(\nu_\preceq(h_ig_i)) = \nu_\preceq(S(g_{i_{j,k}}, g_{i_{j,1}}))\]It is clear by construction of \(S(\cdot, \cdot)\) that \[\nu_\preceq (x^{\alpha - \beta_{i_{j,k}}}S(g_{i_{j,k}}, g_{i_{j,1}})) \succ \alpha\]It follows that \(f\) can be expressed as \(\sum_i \tilde f_i g_i\) such that

  • either \(m(\tilde f_1, \ldots, \tilde f_N) > m(f_1, \ldots, f_N)\),
  • or \(m(\tilde f_1, \ldots, \tilde f_N) = m(f_1, \ldots, f_N)\) and \(\alpha(\tilde f_1, \ldots, \tilde f_N) \succ \alpha(f_1, \ldots, f_N).\)

To summarize: as long as \(m(f_1, \ldots, f_N) < \omega(f)\), it is possible to find another expression \(f = \sum_i \tilde f_i g_i\) such that the value for the pair \((m, \alpha)\) increases (where the ordering on the pair is initially by \(m\) and then by \(\preceq\) on \(\alpha\)). Since each coordinate of \(\omega\) is positive, for each value of \(m\), the space of \(\omega\)-homogeneous elements of \(\omega\)-order (or degree) \(m\) is finite dimensional, and therefore due to Lemma 1, it is not possible for only \(\alpha\) to increase while keeping \(m\) unchanged. Consequently at some point we must arrive at an expression \(f = \sum_i h_ig_i\) such that \(m(h_1, \ldots, h_N) = \omega(f),\) which implies that \(\In_\preceq(f) = \In_\omega(f)\) is in the ideal generated by the \(\In_\omega(g_i) = \In_\preceq(g_i),\) \(i = 1, \ldots, N,\) and completes the proof.

References

Polynomial division and Universal bases

Polynomial division

\(\DeclareMathOperator{\In}{In} \DeclareMathOperator{\ld}{Ld} \DeclareMathOperator{\kk}{\mathbb{K}} \DeclareMathOperator{\rnpos}{\mathbb{R}^n_{> 0}} \DeclareMathOperator{\rnzero}{\mathbb{R}^n_{\geq 0}} \DeclareMathOperator{\rr}{\mathbb{R}} \DeclareMathOperator{\scrB}{\mathcal{B}} \DeclareMathOperator{\scrI}{\mathcal{I}} \DeclareMathOperator{\scrJ}{\mathcal{J}} \DeclareMathOperator{\supp}{Supp} \DeclareMathOperator{\znpos}{\mathbb{Z}^n_{> 0}} \DeclareMathOperator{\znzero}{\mathbb{Z}^n_{\geq 0}} \DeclareMathOperator{\zz}{\mathbb{Z}}\)In this post we talk about division with respect to polynomials in more than one variable, which is a pretty cute algorithm that changed the face of a big part of mathematics, via e.g. Gröbner bases which we will also talk about. Dividing by a polynomial in a single variable is straightforward: to divide \(f\) by \(g\), you pick a monomial \(m\) such that the highest degree terms of \(f\) and \(mg\) are equal, then repeat the same process with \(f - mg\) in place of \(f\). Since the degree, as a nonnegative integer, can not drop infinitely many times, this algorithm stops after finitely many steps, and you end up with polynomials \(q\) (the quotient) and \(r\) (the remainder) such that \(f = qg + r\) and \(\deg(r) < \deg(g)\). To divide by polynomials in more than one variable, you need an analogue of the “highest degree monomial term” of the single-variable case. A natural resolution comes via monomial orders.
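As a warm-up, the single-variable division just described fits in a few lines of code. A minimal Python sketch, assuming polynomials are encoded as lists of rational coefficients with the constant term first, and the divisor has a nonzero leading coefficient:

```python
from fractions import Fraction

def divmod_poly(f, g):
    """Divide f by g in Q[x]; polynomials are coefficient lists, lowest
    degree first.  Returns (q, r) with f = q*g + r and deg(r) < deg(g)."""
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = [Fraction(c) for c in f]
    while len(r) >= len(g) and any(r):
        # strip trailing zeros so that len(r) - 1 is the true degree
        while r and r[-1] == 0:
            r.pop()
        if len(r) < len(g):
            break
        # monomial c*x^shift makes the leading terms of f and m*g equal
        shift = len(r) - len(g)
        c = r[-1] / Fraction(g[-1])
        q[shift] = c
        for i, gc in enumerate(g):
            r[i + shift] -= c * gc
    return q, r

# (x^3 - 1) divided by (x - 1): quotient x^2 + x + 1, remainder 0
q, r = divmod_poly([-1, 0, 0, 1], [-1, 1])
print(q, r)
```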

A monomial order is a binary relation \(\preceq\) on \(\znzero\) such that

  1. \(\preceq\) is a total order,
  2. \(\preceq\) is compatible with the addition on \(\znzero\), i.e. \(\alpha \preceq \beta\) implies \(\alpha + \gamma \preceq \beta + \gamma\) for all \(\gamma \in \znzero\), and
  3. \(0 \preceq \alpha\) for all \(\alpha \in \znzero\).

It turns out that in the presence of the first two conditions, the third one is equivalent to the following condition:

  • \(\preceq\) is a well order on \(\znzero\)

(For one direction: if \(\alpha \preceq 0\) for some \(\alpha \in \znzero \setminus \{0\}\), then \(\{\alpha, 2\alpha, 3\alpha, \ldots \}\) is an infinite set with no minimal element with respect to \(\preceq\). See [Mon21, Corollary B.47] for an elementary proof for the opposite direction.)
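For concreteness, here is a small Python sanity check of the three axioms for a standard example of a monomial order, the graded lexicographic order (compare total degrees first, break ties lexicographically); Python's built-in tuple comparison is itself lexicographic, which makes the comparator a one-liner.

```python
from itertools import product

def grlex_less_eq(a, b):
    """Graded lex order on exponent vectors: total degree first,
    ties broken lexicographically (tuple comparison is lex)."""
    return (sum(a), a) <= (sum(b), b)

# spot-check the three monomial-order axioms on small exponent vectors
vecs = list(product(range(3), repeat=2))
for a, b in product(vecs, repeat=2):
    # 1. totality
    assert grlex_less_eq(a, b) or grlex_less_eq(b, a)
    # 2. compatibility with addition
    if grlex_less_eq(a, b):
        for g in vecs:
            ag = tuple(x + y for x, y in zip(a, g))
            bg = tuple(x + y for x, y in zip(b, g))
            assert grlex_less_eq(ag, bg)
    # 3. the zero vector is the minimum
    assert grlex_less_eq((0, 0), a)
print("axioms hold on the sample")
```

Note that graded lex genuinely differs from plain lex: \((1,1) \preceq (0,3)\) in graded lex because the total degree is smaller, while lexicographically \((1,1)\) is the larger vector.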

The support \(\supp(f)\) of a polynomial \(f \in \kk[x_1, \ldots, x_n]\), where \(\kk\) is a field, is the set of all \(\alpha \in \znzero\) such that \(x^\alpha\) appears in \(f\) with a nonzero coefficient. We write \(\delta_\preceq(f)\) for the maximal element of \(\supp(f)\) with respect to \(\preceq\). The leading term \(\ld_\preceq(f)\) of \(f\) with respect to \(\preceq\) is the monomial term \(cx^\alpha\) where \(\alpha = \delta_\preceq(f)\) and \(c\) is the coefficient of \(x^\alpha\) in \(f\). The division algorithm in \(\kk[x_1, \ldots, x_n]\) is the procedure for reducing a polynomial \(f\) by a finite ordered collection of polynomials \(g_1, \ldots, g_N\) as follows:

  1. If there is \(i\) such that \(\ld_\preceq(g_i)\) divides \(\ld_\preceq(f)\), then pick the smallest such \(i\), and replace \(f\) by \(f – hg_i\), where \(h := \ld_\preceq(f)/\ld_\preceq(g_i)\).
  2. Otherwise \(\ld_\preceq(f)\) is not divisible by \(\ld_\preceq(g_i)\) for any \(i\); in this case replace \(f\) by \(f – \ld_\preceq(f)\).
  3. Repeat.
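The steps above can be sketched directly in Python. This is only an illustrative toy: polynomials are dictionaries mapping exponent tuples to rational coefficients, and graded lexicographic order stands in for \(\preceq\).

```python
from fractions import Fraction

# A polynomial is a dict {exponent tuple: nonzero Fraction coefficient};
# graded lex (total degree first, then lex) serves as the monomial order.
def grlex_key(e):
    return (sum(e), e)

def leading(f):
    """(exponent, coefficient) of Ld(f) with respect to graded lex."""
    e = max(f, key=grlex_key)
    return e, f[e]

def sub_multiple(f, c, e, g):
    """Return f - c*x^e*g, dropping zero coefficients."""
    out = dict(f)
    for eg, cg in g.items():
        m = tuple(a + b for a, b in zip(e, eg))
        out[m] = out.get(m, Fraction(0)) - c * cg
        if out[m] == 0:
            del out[m]
    return out

def divide(f, gs):
    """Division of f by the ordered list gs: returns (quotients, remainder)
    with f = sum(q_i g_i) + r and no term of r divisible by any Ld(g_i)."""
    f = dict(f)
    qs = [dict() for _ in gs]
    r = {}
    while f:
        ef, cf = leading(f)
        for q, g in zip(qs, gs):
            eg, cg = leading(g)
            if all(a >= b for a, b in zip(ef, eg)):   # Ld(g) divides Ld(f)
                e = tuple(a - b for a, b in zip(ef, eg))
                q[e] = q.get(e, Fraction(0)) + cf / cg
                f = sub_multiple(f, cf / cg, e, g)
                break
        else:                    # no Ld(g_i) divides Ld(f): move it to r
            r[ef] = cf
            del f[ef]
    return qs, r

# divide x^2 y + x y^2 + y^2 by the ordered pair (x y - 1, y^2 - 1)
one = Fraction(1)
f = {(2, 1): one, (1, 2): one, (0, 2): one}
g1 = {(1, 1): one, (0, 0): -one}
g2 = {(0, 2): one, (0, 0): -one}
qs, r = divide(f, [g1, g2])
print(qs, r)   # quotients x + y and 1, remainder x + y + 1
```

Since the leading exponent of \(f\) strictly decreases at every step and graded lex is a well order, the `while` loop terminates, exactly as in the argument below.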

Since \(\delta_\preceq(f)\) decreases at every step of the division algorithm, the well ordering property of \(\preceq\) implies that the algorithm stops after finitely many steps. It is clear that after the algorithm stops, one has \[f = \sum_i h_i g_i + r\] where either \(r = 0\), or for every \(i\), \(\ld_\preceq(g_i)\) does not divide any term of \(r\); we say that \(r\) is the remainder of \(f\) modulo division by the \(g_i\). This immediately implies the following result:

Theorem 1. Let \(g_1, \ldots, g_N\) be elements of an ideal \(I\) of \(\kk[x_1, \ldots, x_n]\) such that \(\ld_\preceq(g_1), \ldots, \ld_\preceq(g_N)\) generate the leading ideal \(\ld_\preceq(I)\), i.e. the ideal generated by \(\{\ld_\preceq(f): f \in I\}\). Then for all \(f \in \kk[x_1, \ldots, x_n],\) the following are equivalent:

  • \(f \in I\),
  • the remainder of \(f\) modulo \(g_1, \ldots, g_N\) is zero.

A Gröbner basis of \(I\) with respect to a monomial ordering \(\preceq\) is simply a finite set of elements \(g_1, \ldots, g_N \in I\) such that \(\ld_\preceq(g_i),\) \(1 \leq i \leq N,\) generate \(\ld_\preceq(I)\).

For later use we record a few corollaries of the division algorithm.

Corollary 1. Let \(I \supseteq J\) be ideals of \(\kk[x_1, \ldots, x_n]\). If \(I \supsetneqq J\), then \(\ld_\preceq(I) \supsetneqq \ld_\preceq(J)\) for every monomial order \(\preceq\).

Proof. Suppose to the contrary that \(\ld_\preceq(I) = \ld_\preceq(J)\) for some monomial order \(\preceq\). Then there are \(g_1, \ldots, g_N \in J\) such that \(\ld_\preceq(g_i)\), \(i = 1, \ldots, N\), generate \(\ld_\preceq(I)\), and Theorem 1 implies that every \(f \in I\) has remainder zero modulo \(g_1, \ldots, g_N\), i.e. \(f \in \langle g_1, \ldots, g_N \rangle \subseteq J\). Thus \(I = J\), a contradiction.

Corollary 2. Let \(I\) be an ideal of \(\kk[x_1, \ldots, x_n]\). There can not exist monomial orders \(\preceq_1, \preceq_2\) such that \(\ld_{\preceq_1}(I) \subsetneqq \ld_{\preceq_2}(I)\).

Proof. Write \(J_k := \ld_{\preceq_k}(I)\), \(k = 1, 2\). Assume to the contrary that \(J_1 \subsetneqq J_2\). Choose \(g_1, \ldots, g_N \in I\) such that \(\ld_{\preceq_2}(g_i)\), \(i = 1, \ldots, N\), generate \(J_1\). Choose \(f \in I\) such that \(\ld_{\preceq_2}(f) \not\in J_1\). Let \(r\) be the remainder of the division of \(f\) by \(g_1, \ldots, g_N\) with respect to \(\preceq_2\). Then \(r\) is a nonzero element of \(I\) such that \(\supp(r) \cap \supp(J_1) = \emptyset\). On the other hand, if \(h_1, \ldots, h_M \in I\) are such that \(\ld_{\preceq_1}(h_1), \ldots, \ld_{\preceq_1}(h_M)\) generate \(J_1\), then division of \(r\) by \(h_1, \ldots, h_M\) with respect to \(\preceq_1\) implies that \(r \not\in I\), which is a contradiction.

Starting from a Gröbner basis of an ideal, one can discard “redundant” elements (whose leading terms are divisible by the leading term of some other basis element) and then replace each remaining element by the remainder of the division by the others to obtain a reduced Gröbner basis, i.e. a Gröbner basis such that

  1. no monomial term of any basis element is divisible by the leading term of any other basis element, and
  2. the leading coefficient, i.e. the coefficient of the leading term, of each basis element is \(1.\)

Corollary 3. The reduced Gröbner basis is uniquely determined up to a permutation by the leading ideal. In particular,

  1. up to a permutation of the basis elements there is a unique reduced Gröbner basis for every monomial order, and
  2. if an ideal has the same leading ideal for two monomial orders, then it also has the same reduced Gröbner basis (up to permutations) for those monomial orders.

Proof. Indeed, fix two reduced Gröbner bases \(g_1, \ldots, g_N\) and \(h_1, \ldots, h_M\) of an ideal \(I\) with respect to a monomial order \(\preceq.\) For each \(h_j\), its leading term is divisible by the leading term of at least one \(g_i,\) but there can not be more than one such \(i,\) since then

  • either the leading terms of both such \(g_i\) would be divisible by the leading term of \(h_j,\)
  • or there would be \(j' \neq j\) such that the leading term of \(h_{j'}\) divides the leading term of one of the \(g_i\), and therefore in turn divides the leading term of \(h_j\)

Each of these possibilities contradicts the reducedness of one of the above Gröbner bases. It follows that for each \(j \in \{1, \ldots, M\},\) there is a unique \(i \in \{1, \ldots, N\}\) such that \(\ld_\preceq(g_i)\) divides \(\ld_\preceq(h_j).\) Similarly for each \(i \in \{1, \ldots, N\},\) there is a unique \(j \in \{1, \ldots, M\}\) such that \(\ld_\preceq(h_j)\) divides \(\ld_\preceq(g_i).\) It follows that \(M = N\) and after reordering the \(g_i\) and the \(h_j\) if necessary, one can ensure that \(\ld_\preceq(g_i) = \ld_\preceq(h_i),\) \(i = 1, \ldots, N.\) Now fix \(i\) and divide \(h_i\) by \(g_1, \ldots, g_N.\) Since for \(j \neq i\) the leading term of \(g_j\) does not divide any monomial term of \(h_i,\) after the first step the remainder is \(h_i - g_i,\) and if \(h_i - g_i \neq 0,\) then at the second step the leading term of \(h_i - g_i\) must be divisible by the leading term of \(g_i.\) But this is impossible, since the exponent of each monomial term of \(h_i - g_i\) is smaller than \(\delta_\preceq(g_i) = \delta_\preceq(h_i).\) It follows that \(h_i = g_i,\) which proves the first assertion. For the second assertion, pick a monomial order \(\preceq'\) such that \(\ld_{\preceq'}(I) = \ld_\preceq(I).\) Let \(\alpha_i := \delta_{\preceq}(g_i),\) \(i = 1, \ldots, N.\) Fix \(i.\) Since \(\ld_{\preceq'}(I)\) is generated by \(x^{\alpha_1}, \ldots, x^{\alpha_N}\), it follows that \(\ld_{\preceq'}(g_i)\) is divisible by some \(x^{\alpha_j}.\) Since no monomial in the support of \(g_i\) is divisible by \(x^{\alpha_j}\) for \(j \neq i,\) it must be divisible by \(x^{\alpha_i}.\) Since \(\alpha_i\) is also in the support of \(g_i,\) we must have that \(\delta_{\preceq'}(g_i) = \alpha_i.\) It follows that \(g_1, \ldots, g_N\) form a Gröbner basis of \(I\) with respect to \(\preceq'\) as well. It is clear that this is a reduced Gröbner basis, as required.

Universal bases for Leading ideals

In this section we prove that as \(\preceq\) varies over all possible monomial orders, there can be only finitely many leading ideals for a given ideal of a polynomial ring. The argument is elementary and very cute. I learned it from [Stu96, Chapter 1]. Mora and Robbiano [MoRobbiano88, Page 193] write that it is due to Alessandro Logar.

Theorem 2. Every ideal \(I\) of \(\kk[x_1, \ldots, x_n]\) has finitely many distinct leading ideals. In particular, there is a finite subset of \(I\) such that for every monomial order, the leading terms of polynomials in that subset generate the leading ideal of \(I.\)

Proof. Assume to the contrary that there is an infinite collection \(\scrJ_0\) of distinct leading ideals of \(I\). Pick an arbitrary nonzero element \(f_1 \in I\). Out of the finitely many monomial terms of \(f_1\), there must be one, say \(m_1\), such that \(\scrJ_1 := \{J \in \scrJ_0: \langle m_1 \rangle \subsetneqq J\}\) is infinite. Pick \(J_1 \in \scrJ_1\) and a monomial order \(\preceq_1\) such that \(J_1 = \ld_{\preceq_1}(I)\). Since \(m_1 \in J_1\), there is \(g_1 \in I\) such that \(\ld_{\preceq_1}(g_1) = m_1\), and since \(\langle m_1 \rangle \neq J_1\), there is \(h_1 \in I\) such that \(\ld_{\preceq_1}(h_1) \not\in \langle m_1 \rangle.\) Let \(f_2\) be the remainder of \(h_1\) modulo division by \(g_1\). Then \(f_2\) is nonzero, and no monomial term of \(f_2\) belongs to \(\langle m_1 \rangle\). Since \(f_2\) has finitely many monomial terms, there must be a monomial term \(m_2\) of \(f_2\) such that \(\scrJ_2 := \{J \in \scrJ_1: \langle m_1, m_2 \rangle \subsetneqq J\}\) is infinite. Now choose \(J_2 \in \scrJ_2\) and a monomial order \(\preceq_2\) such that \(J_2 = \ld_{\preceq_2}(I)\). As in the preceding case, there are \(g_{21}, g_{22}, h_2 \in I\) such that \(\ld_{\preceq_2}(g_{21}) = m_1\), \(\ld_{\preceq_2}(g_{22}) = m_2\), and \(\ld_{\preceq_2}(h_2) \not\in \langle m_1, m_2 \rangle.\) Define \(f_3\) to be the remainder of \(h_2\) modulo division by \(g_{21}, g_{22}\). Then \(f_3\) is nonzero, no monomial term of \(f_3\) belongs to \(\langle m_1, m_2 \rangle,\) and we can proceed to define \(\scrJ_3\) as above. In this way we get an infinite strictly increasing chain of monomial ideals: \[ \langle m_1 \rangle \subsetneqq \langle m_1, m_2 \rangle \subsetneqq \langle m_1, m_2, m_3 \rangle \subsetneqq \cdots \] which violates the Noetherianity of \(\kk[x_1, \ldots, x_n]\) and completes the proof of the first assertion. The second assertion then follows from Corollary 3.

A universal (leading) basis for an ideal \(I\) of \(\kk[x_1, \ldots, x_n]\) is an ordered collection \(\scrB\) of elements in \(I\) such that for every monomial order \(\preceq\) on \(\znzero\), the leading ideal of \(I\) with respect to \(\preceq\) is generated by the leading terms of the elements in \(\scrB\). The above theorem implies that every ideal has a finite universal basis.
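If one wants to experiment, computer algebra systems compute Gröbner bases for the standard monomial orders directly. A sketch in Python, assuming the sympy library is available; `groebner` returns a reduced Gröbner basis with respect to the chosen order, and `LT` extracts leading terms:

```python
# Assumes sympy is installed: watch the leading ideal change with the order.
import sympy as sp

x, y = sp.symbols('x y')
ideal = [x**2 + y**2 - 1, x*y - 1]
for order in ('lex', 'grlex', 'grevlex'):
    G = sp.groebner(ideal, x, y, order=order)
    print(order, [sp.LT(g, x, y, order=order) for g in G])
```

For this ideal the lex and grevlex leading ideals differ (\(\langle x, y^4 \rangle\) versus \(\langle x^2, xy, y^3 \rangle\)); Theorem 2 says that varying over all monomial orders nevertheless produces only finitely many leading ideals.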

Every leading ideal comes from a positive weighted degree

Weighted degrees are natural generalizations of the usual degree. Every \(\omega \in \rr^n\) defines a weighted degree by assigning to \(x^\alpha\) the weight \(\langle \omega, \alpha \rangle\), and for \(f \in \kk[x_1, \ldots, x_n]\) we write \(\ld_\omega(f)\) for the corresponding leading term of \(f\), i.e. the sum of all terms of \(f\) of maximal weighted degree. Given an ideal \(I\) of \(\kk[x_1, \ldots, x_n]\) we similarly write \(\ld_\omega(I)\) for its leading ideal generated by \(\ld_\omega(f)\) for all \(f \in I\). In this section, also following [Stu96, Chapter 1], we prove the following result:

Theorem 3. For every ideal \(I\) of \(\kk[x_1, \ldots, x_n]\) and every monomial order \(\preceq\), there is \(\omega \in \znpos\) such that \(\ld_\omega(I) = \ld_\preceq(I)\) (in particular, this means that \(\ld_\omega(I)\) is a monomial ideal).

Remark. With a bit more work more can be established (the proof is completely analogous to the proof of Theorem 3 from the next post): given a monomial order \(\preceq\) and polynomials \(g_1, \ldots, g_N,\) there is \(\omega \in \znpos\) such that \[\ld_\omega(g_i) = \ld_\preceq(g_i),\ i = 1, \ldots, N.\]If in addition the \(g_i\) are elements of an ideal \(I\) such that \(\ld_\preceq(g_i),\) \(i = 1, \ldots, N,\) generate \(\ld_\preceq(I),\) then\[\ld_\omega(I) = \langle \ld_\omega(g_1), \ldots, \ld_\omega(g_N)\rangle = \langle \ld_\preceq(g_1), \ldots, \ld_\preceq(g_N)\rangle = \ld_\preceq(I)\]Moreover, every \(f \in I\) has an expression of the form \(f = \sum_i f_ig_i\) such that

  1. \(\delta_\preceq(f) = \max_\preceq\{\delta_\preceq(f_ig_i)\},\) and
  2. \(\omega(f) = \max\{\omega(f_ig_i)\}.\)

Proof of Theorem 3. Fix a Gröbner basis \(g_1, \ldots, g_N\) of \(I\) with respect to \(\preceq\). Write \[g_i = c_{i0} x^{\alpha_{i0}} + \cdots + c_{iN_i}x^{\alpha_{iN_i}}\]with \(\alpha_{i0} = \delta_\preceq(g_i)\). Define \[C := \{\omega \in \rnzero: \langle \omega, \alpha_{i0} - \alpha_{ij} \rangle \geq 0,\ i = 1, \ldots, N,\ j = 1, \ldots, N_i\}\]Note that \(C\) is a convex polyhedral cone. We claim that \(\dim(C) = n\). Indeed, \(C\) is dual to the cone \(C'\) generated by \(\alpha_{i0} - \alpha_{ij},\ i = 1, \ldots, N,\ j = 1, \ldots, N_i\), and the standard basis vectors \(e_1, \ldots, e_n\) of \(\rr^n\). If \(\dim(C) < n\), it would follow that \(C'\) is not strongly convex. Since \(C'\) is also rational, there would be nonnegative integers \(\lambda_{ij}\), not all zero, such that \[\sum_{i,j}\lambda_{ij}(\alpha_{i0} - \alpha_{ij}) = -\sum_k r_ke_k\]for some nonnegative integers \(r_1, \ldots, r_n\). Due to the second and third properties of a monomial order, it would follow that \[\sum_{i,j}\lambda_{ij} \alpha_{i0} \preceq \sum_{i,j}\lambda_{ij} \alpha_{ij}\]On the other hand, by construction \(\alpha_{i0} \succ \alpha_{ij}\) for each \(i,j\), so that \[\sum_{i,j}\lambda_{ij} \alpha_{i0} \succ \sum_{i,j}\lambda_{ij} \alpha_{ij}\]This contradiction shows that \(\dim(C) = n\), as claimed. In particular this implies (see e.g. [Mon21, Proposition V.15]) that the topological interior \(C^0\) of \(C\) has a nonempty intersection with \(\znpos\), and if \(\omega \in C^0 \cap \znpos\), then \(\ld_\omega(g_i) = c_{i0}x^{\alpha_{i0}}\). Since \(\ld_\preceq(I)\) is generated by \(c_{i0}x^{\alpha_{i0}}\), \(i = 1, \ldots, N,\) it follows that \(\ld_\omega(I) \supseteq \ld_\preceq(I)\).
If this containment were proper, then Corollary 1 to Theorem 1 would imply that \[\ld_\preceq(\ld_\omega(I)) \supsetneqq \ld_\preceq(\ld_\preceq(I)) = \ld_\preceq(I)\]On the other hand, it is straightforward to check that \[\ld_\preceq(\ld_\omega(I)) = \ld_{\preceq_\omega}(I)\] where \(\preceq_\omega\) is the binary relation on \(\znzero\) defined as follows: \(\alpha \preceq_\omega \beta\) if and only if either \(\langle \omega, \alpha \rangle < \langle \omega, \beta \rangle\), or \(\langle \omega, \alpha \rangle = \langle \omega, \beta \rangle\) and \(\alpha \preceq \beta\). Since \(\omega \in \znzero\), it follows that \(\preceq_\omega\) is a monomial order, and taken together, the last two displays above contradict Corollary 2 to Theorem 1. Consequently, \(\ld_\omega(I) = \ld_\preceq(I)\), as required.
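The proof is effective: given the exponents of a Gröbner basis, any integer point in the interior of \(C\) works as \(\omega\), and for small examples one can find one by brute force. A toy Python search; the basis in the example, \(x + y^3 - y\) and \(y^4 - y^2 + 1\) with lex \(x \succ y\), is just for illustration:

```python
from itertools import product

def find_weight(gens, bound=10):
    """Brute-force search for w in Z^n_{>0} with <w, a0 - aj> > 0 for every
    generator, where a0 is the leading exponent and aj any other exponent;
    such a w lies in the interior of the cone C from the proof."""
    diffs = [tuple(a - b for a, b in zip(a0, aj))
             for a0, rest in gens for aj in rest]
    n = len(gens[0][0])
    for w in product(range(1, bound + 1), repeat=n):
        if all(sum(wi * di for wi, di in zip(w, d)) > 0 for d in diffs):
            return w
    return None

# g1 = x + y^3 - y and g2 = y^4 - y^2 + 1 with lex (x > y) leading
# exponents (1,0) and (0,4); each entry: (leading exponent, other exponents)
gens = [((1, 0), [(0, 3), (0, 1)]),
        ((0, 4), [(0, 2), (0, 0)])]
print(find_weight(gens))   # → (4, 1): need w1 > 3*w2 and w2 > 0
```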

References

Lüroth’s theorem (a “constructive” proof)

Lüroth’s theorem (Lüroth 1876 for \(k = \mathbb{C}\), Steinitz 1910 in general). If \(k \subseteq K\) are fields such that \(k \subseteq K \subseteq k(x)\), where \(x\) is an indeterminate over \(k\), then \(K = k(g)\) for some rational function \(g\) of \(x\) over \(k\).

I am going to present a “constructive” proof of Lüroth’s theorem due to Netto (1895) that I learned from Schinzel’s Selected Topics on Polynomials (and give some applications to criteria for proper polynomial parametrizations). The proof uses the following result which I am not going to prove here:

Proposition (with the set up of Lüroth’s theorem). \(K\) is finitely generated over \(k\), i.e. there are finitely many rational functions \(g_1, \ldots, g_s \in k(x)\) such that \(K = k(g_1, \ldots, g_s)\).

The proof is constructive in the following sense: given \(g_1, \ldots, g_s\) as in the proposition, it gives an algorithm to determine \(g\) such that \(K = k(g)\). We use the following notation in the proof: given a rational function \(h \in k(x)\), if \(h = h_1/h_2\) with polynomials \(h_1, h_2 \in k[x]\) with \(\gcd(h_1, h_2) = 1\), then we define \(\deg_\max(h) := \max\{\deg(h_1), \deg(h_2)\}\).

Proof of Lüroth’s theorem

It suffices to consider the case that \(K \neq k\). Pick \(g_1, \ldots, g_s\) as in the proposition. Write \(g_i = F_i/G_i\), where

  • \(\gcd(F_i, G_i) = 1\) (Property 1).

Without loss of generality (i.e. discarding \(g_i \in k\) or replacing \(g_i\) by \(1/(g_i + a_i)\) for appropriate \(a_i \in k\) if necessary) we can also ensure that

  • \(\deg(F_i) > 0\) and \(\deg(F_i) > \deg(G_i)\) (Property 2).

Consider the polynomials \[H_i := F_i(t) - g_iG_i(t) \in K[t] \subset k(x)[t], i = 1, \ldots, s,\] where \(t\) is a new indeterminate. Let \(H\) be the greatest common divisor of \(H_1, \ldots, H_s\) in \(k(x)[t]\) which is also monic in \(t\). Since the Euclidean algorithm for computing \(\gcd\) respects the field of definition, it follows that:

  • \(H\) is also the greatest common divisor of \(H_1, \ldots, H_s\) in \(K[t]\), which means, if \(H = \sum_j h_j t^j\), then each \(h_j \in K\) (Property 3).

Let \(H^* \in k[x,t]\) be the polynomial obtained by “clearing the denominator” of \(H\); in other words, \(H = H^*/h(x)\) for some polynomial \(h \in k[x]\) and \(H^*\) is primitive as a polynomial in \(t\) (i.e. the greatest common divisor in \(k[x]\) of the coefficients in \(H^*\) of powers of \(t\) is 1). By Gauss's lemma, \(H^*\) divides \(H^*_i := F_i(t)G_i(x) - F_i(x)G_i(t)\) in \(k[x,t]\), i.e. there is \(Q_i \in k[x,t]\) such that \(H^*_i = H^* Q_i\).

Claim 1. If \(\deg_t(H^*) < \deg_t(H^*_i)\), then \(\deg_x(Q_i) > 0\).

Proof of Claim 1. Assume \(\deg_t(H^*) < \deg_t(H^*_i)\). Then \(\deg_t(Q_i) \geq 1\). If in addition \(\deg_x(Q_i) = 0\), then we can write \(Q_i(t)\) for \(Q_i\). Let \(F_i(t) \equiv \tilde F_i(t) \mod Q_i(t)\) and \(G_i(t) \equiv \tilde G_i(t) \mod Q_i(t)\) with \(\deg(\tilde F_i) < \deg(Q_i)\) and \(\deg(\tilde G_i) < \deg(Q_i)\). Then \(\tilde F_i(t)G_i(x) - F_i(x) \tilde G_i(t) \equiv 0 \mod Q_i(t)\). Comparing degrees in \(t\), we have \(\tilde F_i(t)G_i(x) = F_i(x) \tilde G_i(t)\). It is straightforward to check that this contradicts Properties 1 and 2 above, and completes the proof of Claim 1.

Let \(m := \min\{\deg_\max(g_i): i = 1, \ldots, s\}\), and pick \(i\) such that \(\deg_\max(g_i) = m\). Property 2 above implies that \(\deg_t(H^*_i) = \deg_x(H^*_i) = m\). If \(\deg_t(H^*) < m\), then Claim 1 implies that \(\deg_x(H^*) < \deg_x(H^*_i) = m\). If the \(h_j\) are as in Property 3 above, it follows that \(\deg_\max(h_j) < m\) for each \(j\). Since \(H^* \not\in k[t]\) (e.g. since \(t-x\) divides each \(H_i\)), there must be at least one \(h_j \not \in k\). Since adding that \(h_j\) to the list of the \(g_i\) decreases the value of \(m\), it follows that the following algorithm must stop:

Algorithm

  • Step 1: Pick \(g_i := F_i/G_i\), \(i = 1, \ldots, s\), satisfying properties 1 and 2 above.
  • Step 2: Compute the monic (with respect to \(t\)) \(\gcd\) of \(F_i(t) - g_i G_i(t)\), \(i = 1, \ldots, s\), in \(k(x)[t]\); call it \(H\).
  • Step 3: Write \(H = \sum_j h_j(x) t^j\). Then each \(h_j \in k(g_1, \ldots, g_s)\). If \(\deg_t(H) < \min\{\deg_\max(g_i): i = 1, \ldots, s\}\), then adjoin all (or at least one) of the \(h_j\) such that \(h_j \not\in k\) to the list of the \(g_i\) (possibly after an appropriate transformation to ensure Property 2), and repeat.

After the last step of the algorithm, \(H\) must be one of the \(H_i\); in other words, there is \(\nu\) such that \[\gcd(F_i(t) - g_i G_i(t): i = 1, \ldots, s) = F_{\nu}(t) - g_{\nu}G_{\nu}(t).\]
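For a concrete run of the algorithm, take the illustrative pair \(g_1 = x^2\), \(g_2 = x^3\) (so \(m = 2\)); a sympy sketch:

```python
from sympy import symbols, Poly, gcd

x, t = symbols('x t')

# One round of the algorithm for g1 = x**2, g2 = x**3 (so m = 2).
H1 = Poly(t**2 - x**2, t, domain='QQ(x)')
H2 = Poly(t**3 - x**3, t, domain='QQ(x)')
H = gcd(H1, H2)              # monic gcd in QQ(x)[t]
print(H.as_expr())           # t - x: deg_t(H) = 1 < m, and the coefficient -x is not in QQ

# Step 3 adjoins g3 := x; then H = F3(t) - g3*G3(t) with F3 = t, G3 = 1,
# so the algorithm stops with nu = 3 and K = QQ(x**2, x**3) = QQ(x).
H3 = Poly(t - x, t, domain='QQ(x)')
print(gcd(gcd(H1, H2), H3).as_expr())   # t - x
```

After adjoining \(g_3 = x\), the gcd equals \(F_3(t) - g_3G_3(t) = t - x\), so here \(\nu = 3\) and \(K = \mathbb{Q}(x)\).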

Claim 2. \(K = k(g_{\nu})\).

Proof of Claim 2 (and last step of the proof of Lüroth’s theorem). For a given \(i\), polynomial division in \(k(g_\nu)[t]\) gives \(P, Q \in k(g_\nu)[t]\) such that \[F_i(t) = (F_{\nu}(t) - g_{\nu}G_{\nu}(t))P + Q,\] where \(\deg_t(Q) < \deg_t(F_{\nu}(t) - g_{\nu}G_{\nu}(t))\). If \(Q = 0\), then \(F_i(t) = (F_{\nu}(t) - g_{\nu}G_{\nu}(t))P\), and clearing out the denominator (with respect to \(k[g_\nu]\)) of \(P\) gives an identity of the form \(F_i(t)p(g_\nu) = (F_{\nu}(t) - g_{\nu}G_{\nu}(t))P^* \in k[g_\nu, t]\), which is impossible: \(F_{\nu}(t) - g_{\nu}G_{\nu}(t)\) is irreducible in \(k[g_\nu, t]\) (being linear in \(g_\nu\) with coprime coefficients \(F_\nu, G_\nu\)) and, having positive degree in both \(t\) and \(g_\nu\), divides neither \(F_i(t)\) nor \(p(g_\nu)\). Therefore \(Q \neq 0\). Similarly, \[G_i(t) = (F_{\nu}(t) - g_{\nu}G_{\nu}(t))R + S,\] where \(R, S \in k(g_\nu)[t]\), \(S \neq 0\), and \(\deg_t(S) < \deg_t(F_{\nu}(t) - g_{\nu}G_{\nu}(t))\). It follows that \[F_i(t) - g_iG_i(t) = (F_{\nu}(t) - g_{\nu}G_{\nu}(t))(P - g_iR) + Q - g_iS.\] Since \(F_{\nu}(t) - g_{\nu}G_{\nu}(t)\) divides \(F_{i}(t) - g_{i}G_{i}(t)\) in \(k(x)[t]\) and since \(\deg_t(Q - g_iS) < \deg_t(F_{\nu}(t) - g_{\nu}G_{\nu}(t))\), it follows that \(Q = g_iS\). Taking the leading coefficients (with respect to \(t\)) \(q_0, s_0 \in k(g_\nu)\) of \(Q\) and \(S\) gives that \(g_i = q_0/s_0 \in k(g_\nu)\), as required to complete the proof.
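The division step of Claim 2 can be checked with sympy in a toy example (my choice: \(g_\nu = x\), so \(F_\nu(t) - g_\nu G_\nu(t) = t - x\), and \(g_i = x^2\), so \(F_i = t^2\), \(G_i = 1\)):

```python
from sympy import symbols, div

x, t = symbols('x t')

# Division step of Claim 2 in a toy example: g_nu = x, so
# F_nu(t) - g_nu*G_nu(t) = t - x, and g_i = x**2 (F_i = t**2, G_i = 1).
P, Q = div(t**2, t - x, t)   # F_i(t) = (t - x)*P + Q
R, S = div(1, t - x, t)      # G_i(t) = (t - x)*R + S
print(P, Q)                  # t + x, x**2
print(R, S)                  # 0, 1
# Q = g_i * S, and g_i is recovered as the ratio of the leading
# coefficients of the remainders: x**2 / 1.
```

The remainders satisfy \(Q = g_iS\), and \(g_i = q_0/s_0 = x^2\), as in the proof.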

Applications

The following question seems to be interesting (geometrically, it asks when a given polynomial parametrization of a rational affine plane curve is proper).

Question 1. Let \(k\) be a field and \(x\) be an indeterminate over \(k\) and \(g_1, g_2 \in k[x] \). When is \(k(g_1, g_2) = k(x)\)?

We now give a sufficient condition for the equality in Question 1. Note that the proof is elementary: it does not use Lüroth’s theorem, but only follows the steps of the above proof in a special case.

Corollary 1. In the set up of Question 1, let \(d_i := \deg(g_i)\), \(i = 1, 2\). If the \(\gcd\) of \(x^{d_1} - 1, x^{d_2} - 1\) in \(k[x]\) is \(x - 1\), then \(k(g_1, g_2) = k(x)\). In particular, if \(d_1, d_2\) are relatively prime and the characteristic of \(k\) is either zero or greater than both \(d_1, d_2\), then \(k(g_1, g_2) = k(x)\).

Remark. Corollary 1 is true without the restriction on the characteristic, i.e. the following holds: “if \(d_1, d_2\) are relatively prime, then \(k(g_1, g_2) = k(x)\).” François Brunault (in a comment to one of my questions on MathOverflow) provided the following simple one-line proof: \([k(x): k(g_1, g_2)]\) divides both \([k(x): k(g_i)] = d_i\), and therefore must be \(1\).

My original proof of Corollary 1. Following the algorithm from the above proof of Lüroth’s theorem, let \(H_i := g_i(t) - g_i(x)\), \(i = 1, 2\), and \(H \in k(x)[t]\) be the monic (with respect to \(t\)) greatest common divisor of \(H_1, H_2\).

Claim 1.1. \(H = t - x\).

Proof. It is clear that \(t-x\) divides \(H\) in \(k(x)[t]\), so that \(H(x,t) = (t-x)h_1(x,t)/h_2(x)\) for some \(h_1(x,t) \in k[x,t]\) and \(h_2(x) \in k[x]\). It follows that there are \(Q_i(x,t) \in k[x,t]\) and \(P_i(x) \in k[x]\) such that \(H_i(x,t)P_i(x)h_2(x) = (t-x)h_1(x,t)Q_i(x,t)\). Since \(h_2(x)\) and \((t-x)h_1(x,t)\) have no common factor, it follows that \(h_2(x)\) divides \(Q_i(x,t)\), and after cancelling \(h_2(x)\) from both sides, one can write \[H_i(x,t)P_i(x) = (t-x)h_1(x,t)Q'_i(x,t),\ i = 1, 2.\] Taking the leading form of both sides with respect to the usual degree on \(k[x,t]\), we have that \[(t^{d_i} - x^{d_i})x^{p_i} = a_i(t-x)\mathrm{ld}(h_1)\mathrm{ld}(Q'_i),\] where \(a_i \in k \setminus \{0\}\) and \(\mathrm{ld}(\cdot)\) denotes the leading form with respect to the usual degree on \(k[x,t]\). Since \(\gcd(x^{d_1} - 1, x^{d_2} - 1) = x - 1\), it follows that \(\mathrm{ld}(h_1)\) does not have any factor common with \(t^{d_i} - x^{d_i}\), and consequently, \(t^{d_i} - x^{d_i}\) divides \((t-x)\mathrm{ld}(Q'_i)\). In particular, \(\deg_t(Q'_i) = d_i - 1\). But then \(\deg_t(h_1) = 0\). Since \(H = (t-x)h_1(x)/h_2(x)\) is monic in \(t\), it follows that \(H = t - x\), which proves Claim 1.1.

Since both \(H_i\) are elements of \(k(g_1, g_2)[t]\), and since the Euclidean algorithm to compute \(\gcd\) of polynomials (in a single variable over a field) preserves the field of definition, it follows that \(H \in k(g_1, g_2)[t]\) as well (this is precisely the observation of Property 3 from the above proof of Lüroth’s theorem). Consequently \(x \in k(g_1, g_2)\), as required to prove Corollary 1.
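As a sanity check of Corollary 1, take an illustrative pair of my choosing, \(g_1 = x^2 + x\) and \(g_2 = x^3\), of coprime degrees \(2\) and \(3\) over \(\mathbb{Q}\); sympy confirms that the monic gcd of \(g_i(t) - g_i(x)\) in \(\mathbb{Q}(x)[t]\) is \(t - x\):

```python
from sympy import symbols, Poly, gcd

x, t = symbols('x t')

# Illustrative pair for Corollary 1: g1 = x**2 + x, g2 = x**3,
# of coprime degrees 2 and 3.
g1, g2 = x**2 + x, x**3
H1 = Poly(g1.subs(x, t) - g1, t, domain='QQ(x)')
H2 = Poly(g2.subs(x, t) - g2, t, domain='QQ(x)')
H = gcd(H1, H2)              # monic gcd in QQ(x)[t]
print(H.as_expr())           # t - x, so QQ(g1, g2) = QQ(x)
```

Since \(H = t - x\) has its coefficients in \(k(g_1, g_2)\), this recovers \(x \in \mathbb{Q}(g_1, g_2)\), i.e. the parametrization by \((g_1, g_2)\) is proper.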
