
Introduction

Dynamic constraints are a natural way of describing many interesting physical systems. There are two general categories of dynamic constraints discussed in the computer graphics literature: reduced coordinate formulations and Lagrange multiplier dynamic constraints.

Reduced coordinate formulations involve re-parameterizing an unconstrained system with n degrees of freedom as a constrained system with i < n degrees of freedom. This can be quite difficult for an arbitrary system of constraints, since there is often no obvious way of re-parameterizing systems involving constraint loops [5,3].

A more flexible technique is to use Lagrange multiplier dynamic constraints. Lagrange multiplier approaches maintain constraints by introducing constraint forces into the original system of n degrees of freedom. No re-parameterization is involved, and the constraints are described in a modular way that makes the construction of arbitrary systems of constraints straightforward. However, Lagrange multiplier techniques require solving for a vector of Lagrange multipliers $\mathbf{\lambda}$ each time the constraint forces are evaluated, which means solving a system of linear equations. For a system of n constraints, this generally requires solving an equation of the form

\begin{displaymath}
A \mathbf{\lambda} = \mathbf{b},
\end{displaymath}

where A is a square, symmetric, positive definite matrix of dimension n, $\mathbf{\lambda}$ is the vector of n Lagrange multipliers, and $\mathbf{b}$ is a vector of dimension n. Very often A is sparse, in which case conjugate gradient (or other iterative) methods that exploit sparseness may be used. In theory such techniques reduce the cost to approximately $O(n^2)$ [8,3], but since the sparseness of A is not guaranteed, solving for $\mathbf{\lambda}$ is in general an $O(n^3)$ operation [3].
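For intuition, the core linear solve can be sketched as a standard conjugate gradient iteration. The small matrix below is a hypothetical stand-in for the constraint matrix A, not one derived from an actual constrained system; each iteration costs one matrix-vector product, which is where sparseness in A pays off.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A.

    When A is sparse, the A @ p product (the dominant cost per
    iteration) is proportional to the number of nonzeros in A
    rather than n^2.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Hypothetical small SPD system standing in for A lambda = b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
lam = conjugate_gradient(A, b)
```

In exact arithmetic conjugate gradient terminates in at most n iterations, which is the source of the approximately $O(n^2)$ bound quoted above for sparse A.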

Recently, Baraff [3] has described an O(n) Lagrange multiplier solution for acyclic constraint problems. Central to this technique are an always-sparse formulation and a relatively simple O(n) method for solving the resulting linear system.
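The following is not Baraff's algorithm, but it illustrates why acyclic structure permits a linear-time solve: for the simplest acyclic case, a chain of constraints, A is tridiagonal, and one elimination sweep plus one back-substitution sweep (the Thomas algorithm) solves the system in O(n).

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, b):
    """Solve a tridiagonal system in O(n) via the Thomas algorithm.

    diag has n entries; lower and upper have n-1 entries each
    (the sub- and super-diagonals). One forward elimination sweep
    and one back-substitution sweep, each touching every unknown
    exactly once.
    """
    n = len(diag)
    c = np.zeros(n)   # modified super-diagonal
    d = np.zeros(n)   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        d[i] = (b[i] - lower[i - 1] * d[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Hypothetical chain-of-constraints system (tridiagonal SPD A).
lower = np.array([1.0, 1.0])
diag  = np.array([4.0, 4.0, 4.0])
upper = np.array([1.0, 1.0])
b     = np.array([1.0, 2.0, 3.0])
lam = solve_tridiagonal(lower, diag, upper, b)
```

A tree of constraints generalizes this idea: eliminating unknowns from the leaves inward keeps every elimination step O(1), which is the structural fact an acyclic formulation exploits.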

Over the last year and a half, I have developed an iterative algorithm for solving arbitrary (including cyclic) Lagrange multiplier constraint problems. This algorithm, which is central to my as-yet unpublished M.S. Visualization Science thesis, appears to be O(n) in the number of constraints. However, I do not yet fully understand the convergence properties of this algorithm, which I believe is essentially a relaxation process. Gaining a better understanding of this process would greatly benefit my research.
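The thesis algorithm itself is unpublished, but the flavor of a relaxation process can be conveyed with classical Gauss-Seidel iteration: each sweep updates one unknown at a time using the latest values of the others. This is a generic sketch, not the thesis algorithm; it converges for symmetric positive definite A, at a rate governed by the spectrum of A rather than by a fixed operation count.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10000):
    """Generic Gauss-Seidel relaxation for A x = b.

    Each sweep is O(number of nonzeros in A); the open question for
    any relaxation scheme is how many sweeps are needed, which is
    what determines whether the overall method is effectively O(n).
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Sum of off-diagonal terms using the newest values.
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# Hypothetical small SPD system, for which Gauss-Seidel converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```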


Richard W. DeVaul
11/12/1997