Think of your matrix as the evaluation matrix of the bivariate polynomial space Π = span{x^b y^a : 0 ≤ a ≤ m−1, 0 ≤ b ≤ m−1}, which has dimension m^2. Your matrix V has one column per point (x_i, y_i) and one row per monomial in Π, so V ∈ R^{m^2 × (m^2+1)}.
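As a quick sanity check, here is a minimal sketch of building that evaluation matrix at m^2 + 1 random points (the helper name `build_V` and the random nodes are illustrative, not part of the answer):

```python
import numpy as np

def build_V(xs, ys, m):
    """Evaluation matrix of the basis {x^b y^a : 0 <= a, b <= m-1}.

    One row per monomial, one column per point (x_i, y_i)."""
    return np.array([xs**b * ys**a for a in range(m) for b in range(m)])

rng = np.random.default_rng(0)
m = 3
n = m * m + 1                 # one more point than dim(Pi) = m^2
xs = rng.standard_normal(n)
ys = rng.standard_normal(n)
V = build_V(xs, ys, m)
print(V.shape)                # (9, 10): m^2 rows, m^2 + 1 columns
```

For generic (e.g. random) nodes, V has full row rank m^2, so its kernel is one-dimensional, which is what the formulas below compute.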
Cofactor/minor formula (works for any k × (k+1) matrix of full row rank): Let V_i be the m^2 × m^2 submatrix obtained by deleting the i-th column of V. Then a nonzero vector in ker(V) is c = (c_1, …, c_{m^2+1}) with c_i = (−1)^{i−1} det(V_i). This is the direct analogue of the univariate Vandermonde case. In 1D the minors factor into products of differences, yielding the familiar closed form; in several variables there is generally no such simple factorization unless the nodes have special structure.
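A numerical sketch of the cofactor formula, assuming the evaluation matrix described above (rebuilt here so the snippet is self-contained; the helper name `build_V` is illustrative):

```python
import numpy as np

def build_V(xs, ys, m):
    # Rows: monomials x^b y^a, 0 <= a, b <= m-1. Columns: points.
    return np.array([xs**b * ys**a for a in range(m) for b in range(m)])

rng = np.random.default_rng(1)
m = 2
n = m * m + 1
xs = rng.standard_normal(n)
ys = rng.standard_normal(n)
V = build_V(xs, ys, m)        # shape (4, 5)

# c_i = (-1)^(i-1) det(V_i), where V_i deletes column i (here 0-indexed,
# so the sign is (-1)^i).
c = np.array([(-1) ** i * np.linalg.det(np.delete(V, i, axis=1))
              for i in range(n)])

print(np.max(np.abs(V @ c)))  # ~ 0: c lies in ker(V)
```

Computing n determinants is O(n · (m^2)^3), so this is for illustration; for large m one would instead take the kernel from an SVD or QR factorization of V.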
Lagrange-formula via unisolvent subset: Choose any index s such that the square submatrix formed by the other m^2 columns is invertible (this exists generically). On those m^2 points, there are unique Lagrange fundamental polynomials L_i(x, y) ∈ Π, i ≠ s, satisfying L_i(x_j, y_j) = δ_{ij} for j ≠ s. Then the s-th column is a linear combination of the others: [values at (x_s, y_s)] = Σ_{i≠s} L_i(x_s, y_s) [values at (x_i, y_i)]. Hence a kernel vector is given by c_s = 1, and c_i = −L_i(x_s, y_s) for i ≠ s. This is the natural multivariate analogue of the univariate barycentric description (but now in terms of multivariate Lagrange polynomials, not simple products).
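The Lagrange route can be sketched by solving one linear system: with s chosen so the remaining m^2 columns form an invertible square matrix A, the values L_i(x_s, y_s) are the coefficients expressing column s in the others (again the helper `build_V` and the random nodes are illustrative assumptions):

```python
import numpy as np

def build_V(xs, ys, m):
    # Rows: monomials x^b y^a, 0 <= a, b <= m-1. Columns: points.
    return np.array([xs**b * ys**a for a in range(m) for b in range(m)])

rng = np.random.default_rng(2)
m = 2
n = m * m + 1
xs = rng.standard_normal(n)
ys = rng.standard_normal(n)
V = build_V(xs, ys, m)

s = n - 1                        # generically any column works
A = np.delete(V, s, axis=1)      # square m^2 x m^2, invertible for generic nodes

# lam[i] = L_i(x_s, y_s): solve A @ lam = (values of the basis at point s).
lam = np.linalg.solve(A, V[:, s])

c = np.empty(n)
c[:s] = -lam                     # c_i = -L_i(x_s, y_s) for i != s
c[s] = 1.0                       # c_s = 1

print(np.max(np.abs(V @ c)))     # ~ 0: c lies in ker(V)
```

By Cramer's rule, lam_i = det(V_i-with-column-s-replaced) / det(A), so this vector agrees with the cofactor formula up to the overall scaling by det(V_s).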
Summary: the kernel vector can be written either through signed maximal minors, c_i = (−1)^{i−1} det(V_i), or, after choosing a unisolvent subset of m^2 points, through evaluations of their Lagrange fundamental polynomials at the remaining point; the two descriptions agree up to scaling, but neither collapses to the simple product formula of the univariate Vandermonde case unless the nodes have special structure.