Based on your question, you're trying to implement a custom step-size control in Mathematica's optimization routines to prevent triangle inversions in your mesh optimization problem. Unfortunately, Mathematica's built-in optimization functions like FindMinimum don't provide a direct way to dynamically control the step size based on problem-specific constraints during the line search.
Here are several approaches to solve this problem:
Option 1: Custom conjugate gradient. Since Mathematica doesn't expose dynamic step-size control through FindMinimum, you can implement your own optimization routine:
customConjugateGradient[f_, grad_, x0_, maxIter_: 1000, tol_: 10^-6] :=
  Module[{x = x0, g, d, alpha, beta, gOld, iter = 0},
    g = grad[x];
    d = -g;
    While[iter < maxIter && Norm[g] > tol,
      (* Custom line search with maximum step size constraint *)
      alpha = customLineSearch[f, grad, x, d];
      x = x + alpha*d;
      gOld = g;
      g = grad[x];
      (* Polak-Ribiere formula for beta; Max[0, ...] restarts with steepest descent *)
      beta = Max[0, (g.(g - gOld))/(gOld.gOld)];
      d = -g + beta*d;
      iter++
    ];
    {f[x], x}
  ];
customLineSearch[f_, grad_, x_, d_] :=
  Module[{alphaMax, alpha, fx = f[x], slope = grad[x].d},
    (* Maximum allowable step size before any triangle inverts;
       X0 and triangles are the globals from your problem setup *)
    alphaMax = CalculateMaxRelativeStepSize[{X0, triangles}, x, d];
    (* Backtracking line search, starting at the smaller of 1 and the cap *)
    alpha = Min[1.0, alphaMax];
    While[f[x + alpha*d] > fx + 0.0001*alpha*slope,
      alpha *= 0.5;
      If[alpha < 10^-10, Break[]]
    ];
    alpha
  ];
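The capped backtracking above translates almost line-for-line into any language. A Python sketch for concreteness (the function names, the fixed `alpha_max`, and the quadratic test problem are illustrative stand-ins, not part of your problem):

```python
import numpy as np

def backtracking_with_cap(f, grad, x, d, alpha_max, c1=1e-4, shrink=0.5, alpha_min=1e-10):
    """Armijo backtracking line search whose first trial step is capped.

    alpha_max plays the role of CalculateMaxRelativeStepSize: the largest
    step along d that keeps every triangle positively oriented.
    """
    fx = f(x)
    slope = grad(x) @ d          # directional derivative; should be negative
    alpha = min(1.0, alpha_max)  # never exceed the geometric cap
    while f(x + alpha * d) > fx + c1 * alpha * slope:
        alpha *= shrink
        if alpha < alpha_min:
            break
    return alpha

# Illustrative use on the quadratic f(x) = |x|^2
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([1.0, 1.0])
d = -grad(x)                     # steepest-descent direction
alpha = backtracking_with_cap(f, grad, x, d, alpha_max=0.1)
# alpha never exceeds the cap, and the accepted step decreases f
```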
Option 2: Logarithmic barrier method. Transform your problem with a logarithmic barrier so that triangles cannot invert:
potentialWithBarrier[{X0_List, triangles_List}, Xideal_List, X_List, mu_] :=
  Module[{base, barrier},
    base = potentialFn[{X0, triangles}, Xideal, X];
    (* Logarithmic barrier on the signed triangle orientations;
       diverges to +Infinity as any triangle approaches inversion *)
    barrier = -mu*Total[Log[TriangleOrientations[X, triangles]]];
    base + barrier
  ];
(* Use with decreasing mu; v must be a fresh, undefined symbol so that
   FindMinimum sees a symbolic variable rather than its current value *)
optimizeWithBarrier[...] :=
  Module[{mu = 1.0, result, Xcur = X0},
    While[mu > 10^-8,
      result = FindMinimum[
        potentialWithBarrier[{X0, triangles}, Xideal, v, mu],
        {v, Xcur},  (* symbolic variable, numeric starting point *)
        Gradient :> potentialGradientWithBarrier[{X0, triangles}, Xideal, v, mu],
        Method -> "ConjugateGradient"
      ];
      Xcur = v /. result[[2]];  (* more robust than positional Part indexing *)
      mu *= 0.1
    ];
    result
  ];
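The continuation loop above (solve, shrink mu, warm-start the next solve from the previous minimizer) is the standard barrier-method pattern. A minimal, self-contained Python sketch on a toy problem, minimize f(x) = x subject to x > 0, whose barrier subproblem x - mu*log(x) has the closed-form minimizer x = mu, shows the iterates tracking the constrained optimum at 0 (all names and the toy problem are mine, chosen only to make the pattern runnable):

```python
def solve_barrier_subproblem(mu, x_start, iters=200):
    """Minimize x - mu*log(x) by gradient descent; the exact optimum is x = mu."""
    x = x_start
    for _ in range(iters):
        x -= mu * (1.0 - mu / x)  # step size scaled with mu for stability
        x = max(x, 1e-12)         # stay inside the barrier's domain x > 0
    return x

def optimize_with_barrier(x0=1.0, mu0=1.0, shrink=0.1, mu_min=1e-8):
    """Barrier continuation: solve, shrink mu, warm-start from the last minimizer."""
    mu, x = mu0, x0
    while mu > mu_min:
        x = solve_barrier_subproblem(mu, x)
        mu *= shrink
    return x

x = optimize_with_barrier()
# x approaches the constrained optimum 0 from inside the feasible region x > 0
```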
Option 3: Trust region method. Implement a trust-region approach, in which you control the maximum step size directly:
trustRegionOptimization[f_, grad_, x0_, rMax_, maxIter_: 1000, tol_: 10^-6] :=
  Module[{x = x0, r = rMax, g, p, stepMax, iter = 0},
    While[iter < maxIter && Norm[g = grad[x]] > tol,
      (* Solve the trust region subproblem (e.g. by dogleg or Steihaug-CG)
         for a step with Norm[p] <= r *)
      p = solveTrustRegionSubproblem[g, r];
      (* Shrink the step further if it would invert a triangle *)
      stepMax = CalculateMaxRelativeStepSize[{X0, triangles}, x, p];
      If[Norm[p] > stepMax,
        p = p*(stepMax/Norm[p])
      ];
      (* Update r based on actual vs predicted reduction *)
      (* ... trust region update logic ... *)
      x += p;
      iter++
    ];
    {f[x], x}
  ];
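The elided radius-update logic is the standard ratio test: compare the actual reduction in f against the reduction predicted by the local model, then shrink the radius when the model is poor and grow it when a boundary step works well. A minimal Python sketch, using an identity-Hessian model and Cauchy-point steps for the subproblem (both are simplifying assumptions of this sketch, not part of the original code):

```python
import numpy as np

def trust_region_minimize(f, grad, x0, r_max=1.0, max_iter=100, tol=1e-8,
                          eta=0.1, shrink=0.25, grow=2.0):
    """Trust-region descent with an identity-Hessian model and Cauchy-point steps.

    The model m(p) = f(x) + g.p + 0.5*|p|^2 is an assumption of this sketch;
    a production solver would use (approximate) curvature information.
    """
    x = np.asarray(x0, dtype=float)
    r = r_max
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy point: minimize the model along -g within the radius
        t = min(1.0, r / gnorm)
        p = -t * g
        predicted = -(g @ p + 0.5 * p @ p)    # m(0) - m(p), positive for descent
        actual = f(x) - f(x + p)
        rho = actual / predicted if predicted > 0 else -1.0
        if rho < 0.25:
            r *= shrink                        # model was poor: shrink the radius
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), r):
            r = min(grow * r, r_max)           # good model, boundary hit: grow
        if rho > eta:
            x = x + p                          # accept only sufficiently good steps
    return f(x), x

# Illustrative use on f(x) = |x|^2
fval, xmin = trust_region_minimize(lambda x: x @ x, lambda x: 2 * x, [2.0, -1.0])
```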
Option 4: Projected gradient descent. If you can reformulate your constraint as a projection operation:
projectedGradientDescent[f_, grad_, x0_, project_, maxIter_: 1000, tol_: 10^-6] :=
  Module[{x = x0, g, alpha, iter = 0},
    While[iter < maxIter && Norm[g = grad[x]] > tol,
      (* Armijo backtracking for the step size (as in customLineSearch, minus the cap) *)
      alpha = armijoLineSearch[f, x, -g];
      (* Take the gradient step and project back onto the feasible region *)
      x = project[x - alpha*g];
      iter++
    ];
    {f[x], x}
  ];
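The pattern is easiest to see with a concrete projection. A Python sketch using the nonnegative orthant (projection = clip at zero) as a stand-in for a mesh-specific feasibility projection; the fixed step size and the toy problem are assumptions of this sketch:

```python
import numpy as np

def projected_gradient_descent(f, grad, x0, project, alpha=0.1,
                               max_iter=500, tol=1e-8):
    """Fixed-step projected gradient descent (a line search could replace alpha)."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        g = grad(x)
        x_new = project(x - alpha * g)   # step, then snap back to feasibility
        if np.linalg.norm(x_new - x) < tol:
            return f(x_new), x_new
        x = x_new
    return f(x), x

# Toy problem: minimize |x - c|^2 over x >= 0; the solution is clip(c, 0, inf)
c = np.array([1.0, -2.0])
f = lambda x: np.sum((x - c) ** 2)
grad = lambda x: 2 * (x - c)
project = lambda x: np.maximum(x, 0.0)
fval, xmin = projected_gradient_descent(f, grad, [5.0, 5.0], project)
# xmin approaches [1, 0], the projection of c onto the feasible set
```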
Given your specific constraints and the need for efficiency with ~25,000 vertices, I recommend:
1. Implement a custom conjugate gradient method (Option 1) with your step-size constraint built into the line search. This gives you full control while keeping the efficiency of conjugate gradients.
2. Use the barrier method (Option 2) as a fallback or for comparison. It is more robust near the feasibility boundary, but typically slower.
3. Consider NMinimize, which accepts constraints directly alongside the objective, if global optimization is more important than speed.
The key insight is that Mathematica's built-in optimizers are designed for general use and don't expose the level of control you need. For specialized problems like yours, custom implementation often provides better performance and constraint handling.
version: claude-opus-4-20250514