Let
P = {(x_1, y_1), … , (x_n, y_n)} ,  n = m^2 + 1 ,  m ≥ 1 ,
V = ( v_(α,β),j ) ∈ ℝ^(m^2 × n) ,  v_(α,β),j = x_j^β y_j^α ,  0 ≤ α, β ≤ m−1.
(So the rows are indexed by the m^2 monomials x^β y^α with 0 ≤ α, β ≤ m−1.)
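As a concrete illustration, here is a minimal sketch (the helper name `vandermonde2d` and the sample points are illustrative, not from the text) of building V for m = 2, n = 5:

```python
# Sketch: build the m^2 x n bivariate Vandermonde matrix V for points P.
# Row (alpha, beta) holds x_j^beta * y_j^alpha; names here are illustrative.
import numpy as np

def vandermonde2d(points, m):
    """Rows indexed by (alpha, beta), 0 <= alpha, beta <= m-1, in lexicographic order."""
    return np.array([[x**beta * y**alpha for (x, y) in points]
                     for alpha in range(m) for beta in range(m)])

# n = m^2 + 1 = 5 sample points for m = 2
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
V = vandermonde2d(P, 2)
print(V.shape)  # (4, 5): rows are the monomials 1, x, y, xy evaluated at P
```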
Goal
Describe Ker V and decide when it is one–dimensional.
1 Row rank and the geometric condition
———————————————————————————
Let
Π_m = { bivariate polynomials p(x,y) = Σ_(α,β < m) c_αβ x^β y^α } .
dim Π_m = m^2 .
Writing the coefficients of p as the column vector c_p = (c_00, c_01, … , c_(m−1,m−1))ᵀ,
evaluation at P can be written as
Ev_P : Π_m → ℝ^n ,  p ↦ ( p(P_1), … , p(P_n) ) ,  Ev_P(p) = Vᵀ c_p . (1)
Hence
rank V = rank Ev_P ≤ m^2.
The equality rank V = m^2 (and therefore dim Ker V = n−m^2 = 1) is equivalent to
(G) No non–zero polynomial p ∈ Πm vanishes on all points of P.
(G) is the usual “unisolvence” condition in two-variable interpolation: the (m×m)-rectangle of monomials separates the n = m^2 + 1 points. For a generic choice of distinct points it is satisfied; it fails only when all the points lie on the zero locus of a nonzero polynomial of bidegree ≤ (m−1, m−1). No additional ad-hoc inequalities (such as x_i^α y_i^β ≠ x_j^γ y_j^δ) are needed.
Consequently
Ker V is one–dimensional iff (G) holds. (2)
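Criterion (2) is easy to check numerically. A minimal sketch (helper names assumed, not from the text): compute dim Ker V = n − rank V from the singular values, once for points in general position and once for points that all lie on the line x = 0, where p(x,y) = x ∈ Π_2 vanishes and (G) fails:

```python
# Sketch of criterion (2): dim Ker V = n - rank V, estimated via singular values.
# Helper names (kernel_dimension, vandermonde2d) are illustrative.
import numpy as np

def kernel_dimension(V, tol=1e-10):
    s = np.linalg.svd(V, compute_uv=False)
    rank = int(np.sum(s > tol * s[0]))
    return V.shape[1] - rank

def vandermonde2d(points, m):
    return np.array([[x**beta * y**alpha for (x, y) in points]
                     for alpha in range(m) for beta in range(m)])

good = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]  # (G) holds
bad  = [(0.0, float(t)) for t in range(5)]  # all on x = 0: p = x kills them, (G) fails
print(kernel_dimension(vandermonde2d(good, 2)))  # 1
print(kernel_dimension(vandermonde2d(bad, 2)))   # 3 (rank drops to 2)
```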
2 An explicit basis vector for the kernel
———————————————————————————
Fix a subset J ⊂ {1,…,n} with |J| = m^2 for which the square submatrix V_J (formed by the columns of V with indices in J) is nonsingular; such a J exists exactly when (G) holds.
For each j denote by
V(j) the m^2 × m^2 matrix obtained from V by deleting the j-th column.
Define
w := ( w_1 , … , w_n )ᵀ ,  w_j = (−1)^(j+1) det V(j). (3)
A Laplace expansion shows V w = 0, so w ∈ Ker V; and w ≠ 0 because the minor det V(j) for the single index j ∉ J equals det V_J ≠ 0. As Ker V is one-dimensional (by (2)), the vector (3) spans it.
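The cofactor vector (3) can be computed directly; the sketch below (helper name `kernel_cofactor` assumed) uses the m = 2 example matrix with rows 1, x, y, xy and checks V w = 0:

```python
# Sketch of (3): w_j = (-1)^(j+1) det V(j), with V(j) = V minus its j-th column.
# With 0-based column index j the sign becomes (-1)^j.
import numpy as np

def kernel_cofactor(V):
    n = V.shape[1]
    return np.array([(-1)**j * np.linalg.det(np.delete(V, j, axis=1))
                     for j in range(n)])

V = np.array([[1., 1., 1., 1., 1.],    # monomial 1
              [0., 1., 0., 1., 2.],    # monomial x
              [0., 0., 1., 1., 3.],    # monomial y
              [0., 0., 0., 1., 6.]])   # monomial xy
w = kernel_cofactor(V)
print(np.allclose(V @ w, 0))  # True: w spans Ker V
```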
Formula (3) is the exact analogue of the classical expression w_j = 1/∏_(k≠j) (x_j − x_k) — the barycentric weights, up to a common scalar — which spans the kernel of the ordinary (one-variable) rectangular Vandermonde matrix; in the multivariate situation the cofactor determinant plays that role. In general this determinant does not factor into simple pairwise terms, and there is no shorter closed product formula.
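The one-variable statement can be checked numerically in a few lines (node values chosen arbitrarily for illustration): the barycentric weights annihilate all monomials of degree ≤ n − 2, i.e. they span the kernel of the (n−1) × n Vandermonde matrix.

```python
# Sketch: barycentric weights w_j = 1/prod_{k!=j}(x_j - x_k) span the kernel
# of the one-variable (n-1) x n Vandermonde matrix (rows 1, x, ..., x^(n-2)).
import numpy as np

x = np.array([0.0, 1.0, 2.5, 4.0])                 # n = 4 distinct nodes (illustrative)
V1 = np.vander(x, N=3, increasing=True).T          # 3 x 4 matrix, rows 1, x, x^2
w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(len(x))])
print(np.allclose(V1 @ w, 0))                      # True
```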
3 A Lagrange-interpolation description
———————————————————————————
Choose a subset J for which V_J is nonsingular; after relabeling the points we may assume J = {1,…,m^2}.
For k ∈ J let L_k(x,y) ∈ Π_m be the Lagrange polynomials characterised by
L_k(P_i) = δ_ik ,  i, k ∈ J. (4)
(They exist and are unique precisely because V_J is nonsingular.)
Put
w_k = L_k(P_n) (k ∈ J) ,  w_n = −1. (5)
Because the L_k (k ∈ J) form a basis of Π_m, every p ∈ Π_m satisfies p = Σ_(k∈J) p(P_k) L_k; evaluating at P_n shows Σ_j w_j p(P_j) = 0 for every p ∈ Π_m, i.e. V w = 0. Again w ≠ 0 (since w_n = −1), and (2) tells us that it spans Ker V.
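In coordinates, w_k = L_k(P_n) for k ∈ J is exactly the k-th entry of V_J⁻¹ v_n, where v_n is the last column of V. A minimal sketch (helper name `kernel_lagrange` assumed; it takes the first m^2 columns as the nonsingular block V_J, as arranged above):

```python
# Sketch of (5): assumes the first m^2 columns of V form the nonsingular block V_J.
# Since L_k has coefficient vector V_J^{-T} e_k, we get L_k(P_n) = (V_J^{-1} v_n)_k.
import numpy as np

def kernel_lagrange(V):
    msq = V.shape[0]
    VJ, vn = V[:, :msq], V[:, msq]
    wJ = np.linalg.solve(VJ, vn)          # stacked values L_k(P_n), k in J
    return np.concatenate([wJ, [-1.0]])   # w_n = -1

V = np.array([[1., 1., 1., 1., 1.],
              [0., 1., 0., 1., 2.],
              [0., 0., 1., 1., 3.],
              [0., 0., 0., 1., 6.]])
w = kernel_lagrange(V)
print(np.allclose(V @ w, 0))  # True: w spans Ker V
```

Since Ker V is one-dimensional here, this w is necessarily a scalar multiple of the cofactor vector (3).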
4 Summary
———————————————————————————
• The matrix V has maximal row rank m^2, hence
dim Ker V = 1, precisely when the n=m^2+1 points do not all lie on
the zero set of a polynomial of bidegree ≤(m−1,m−1).
That is the only constraint that is needed.
• When this holds, the kernel is generated for example by the cofactor vector (3) or, equivalently, by the Lagrange-based vector (5).
• Unlike the one-variable case, no canonical product formula depending only on pairwise differences exists; determinants of suitable bivariate Vandermonde minors are the natural “analytic” expression.