3x3 symmetric matrix eigen system calculation

I'm trying to find the eigenvalues and eigenvectors of a real symmetric 3x3 matrix for a best-fit plane calculation (principal component analysis), but I can't decide which algorithm best suits my needs and I'm running out of time :)

My requirements are pretty simple:
* Numerically stable
* Efficient
* Preferably, relatively "quick" to implement (but I prefer the first two)

For calculating the eigenvalues and eigenvectors I've briefly looked at:
* Simple root finding on the characteristic polynomial (see the Wikipedia article "Eigenvalue algorithm")
* The QL algorithm with implicit shifts (from Numerical Recipes)

But with the root-finding approach you also have to decide which cubic root-finding algorithm to use (see the Wikipedia article "Cubic equation"). Does anyone with experience in this have some advice for me?
Quote: I'm trying to find the eigenvalues & eigenvectors of a real symmetric 3x3 matrix for a best-fit plane calculation (principal component analysis), but I can't decide which algorithm best suits my needs and I'm running out of time :)


Refer to the section "The general problem" here:
http://en.wikipedia.org/wiki/Linear_least_squares

X B = y is your overdetermined system. First you form

(X^T X) B = X^T y

which is a uniquely determined linear system for the best-fit plane. Here, "B" is the solution. This equation can be condensed and rewritten in the more familiar form, where 'x' is the solution:

A x = b

(note A = (X^T X) and b = X^T y)

This equation can easily be solved by the 3x3 inverse of A,

x = A^{-1} b

And a 3x3 inverse is very easy to calculate. If your problem were larger, I would recommend using Cholesky decomposition with back substitution to solve it, but for a 3x3...I would just use the inverse because that is the most direct way.
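To make that concrete, here is a minimal sketch (mine, not from the post) of fitting the plane z = B0*x + B1*y + B2 by forming the normal equations and solving with the explicit 3x3 inverse via the adjugate. Vec3 and the function name are made-up placeholders:

#include <cmath>

struct Vec3 { double x, y, z; };

// Build A = X^T X and b = X^T y from the points, then solve A B = b with
// the explicit 3x3 inverse. Returns false if A is (nearly) singular.
bool SolveNormalEquations(const Vec3* pts, int n, double B[3])
{
    double A[3][3] = {};
    double b[3] = {};
    for (int i = 0; i < n; ++i) {
        const double r[3] = { pts[i].x, pts[i].y, 1.0 };  // one row of X
        for (int j = 0; j < 3; ++j) {
            b[j] += r[j] * pts[i].z;                      // X^T y
            for (int k = 0; k < 3; ++k)
                A[j][k] += r[j] * r[k];                   // X^T X
        }
    }
    // Cofactors and determinant (expansion along the first row).
    const double c00 = A[1][1]*A[2][2] - A[1][2]*A[2][1];
    const double c01 = A[1][2]*A[2][0] - A[1][0]*A[2][2];
    const double c02 = A[1][0]*A[2][1] - A[1][1]*A[2][0];
    const double det = A[0][0]*c00 + A[0][1]*c01 + A[0][2]*c02;
    if (std::fabs(det) < 1e-12) return false;             // degenerate point set
    const double c10 = A[0][2]*A[2][1] - A[0][1]*A[2][2];
    const double c11 = A[0][0]*A[2][2] - A[0][2]*A[2][0];
    const double c12 = A[0][1]*A[2][0] - A[0][0]*A[2][1];
    const double c20 = A[0][1]*A[1][2] - A[0][2]*A[1][1];
    const double c21 = A[0][2]*A[1][0] - A[0][0]*A[1][2];
    const double c22 = A[0][0]*A[1][1] - A[0][1]*A[1][0];
    // x = A^{-1} b, with A^{-1} = adj(A) / det (adjugate = transposed cofactors).
    B[0] = (c00*b[0] + c10*b[1] + c20*b[2]) / det;
    B[1] = (c01*b[0] + c11*b[1] + c21*b[2]) / det;
    B[2] = (c02*b[0] + c12*b[1] + c22*b[2]) / det;
    return true;
}

One caveat with this parameterization: z = f(x, y) cannot represent vertical planes, which is one reason the discussion below moves to the eigenvector formulation.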

Anyway, there is no reason to find the eigenvalues or eigenvectors if all you want is the least-squares solution. Finding the eigenvalues is much more difficult. Note that if you did want to use the eigenvalue method, you would only need to find the principal eigenvalue, which can be found very easily using the power method. The QR or QZ algorithms are analogues of the power method for when you need ALL the eigenvectors, but it would be pointless to use implicit shifts because those are just optimizations for large matrices.
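For reference, a bare-bones power-iteration sketch (my own illustration, not from the post) looks like this. It assumes a nonzero starting vector v and a dominant eigenvalue that is well separated from the others:

#include <cmath>

// Repeatedly apply A and renormalize; v converges to the eigenvector of
// the largest-magnitude eigenvalue of the symmetric matrix A.
void PowerMethod(const double A[3][3], double v[3], int iters = 50)
{
    for (int it = 0; it < iters; ++it) {
        double w[3] = {0.0, 0.0, 0.0};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                w[i] += A[i][j] * v[j];   // w = A v
        const double len = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
        for (int i = 0; i < 3; ++i)
            v[i] = w[i] / len;            // normalize to avoid overflow
    }
}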
Thank you for the concise answer! I'm afraid the problem looked like a nail to me, so I wanted to use my hammer :)

I'm having a little bit of trouble trying to figure out how to construct my overdetermined system.

I'm not quite sure what X, B and y represent (sorry, my brain is a bit slow tonight. I might be a bit sharper tomorrow...)

The way I had pictured least-squares:
Suppose I want a plane normal and position. From what I understand the plane position is simply the mean of all the points.
So suppose:
* p is the plane position
* n is the plane normal
* N is the number of points
* x[i] is the point with index i, where i is in the range 0..N-1

p = sum(x[0..N-1]) / N

And I want to minimize the squared distance between the plane and each of the points:
sum(((x[i] - p) . n)^2) for i in 0..N-1

Does this still count as "linear" least squares? Can I find an n to minimize this equation?
That equation is not linear, but it can be reduced to an equivalent linear set of equations by recognizing that the derivative of that function will be zero at the solution. Thus, you can solve for where the derivative equals 0.

This makes it a homogeneous equation which you can't actually solve for the non-trivial solution by doing a matrix inverse, so I actually steered you in the wrong direction with my previous post--sorry about that.

So...you can solve that by taking the eigenvector corresponding to the smallest eigenvalue, but you can't find that with the power method (which gives you the largest eigenvalue). You can find the spectrum of eigenvalues by solving the cubic characteristic equation (easier) or by using the QR algorithm (which is a simple recursive loop calling the QR decomposition).

Once you have the smallest eigenvalue you can subtract it along the diagonal from your homogeneous linear system and solve for the eigenvector using Gaussian elimination. Note that this isn't a unique solution, because any scalar multiple of the eigenvector will work.
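As an illustration of that step, here is a sketch for the 3x3 case. Instead of full Gaussian elimination it uses a common 3x3 shortcut (my substitution, not what the post describes): every row of (A - lambda*I) is orthogonal to the eigenvector, so the cross product of two independent rows points along it. It assumes lambda is a simple (non-repeated) eigenvalue:

#include <cmath>

// Given a symmetric A and one of its eigenvalues lambda, return a unit
// eigenvector in v. Picks the largest of the three row cross products
// of M = A - lambda*I for numerical robustness.
void Eigenvector(const double A[3][3], double lambda, double v[3])
{
    double M[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            M[i][j] = A[i][j] - (i == j ? lambda : 0.0);

    const int p[3] = {0, 1, 2};
    const int q[3] = {1, 2, 0};
    double c[3][3];                 // cross products of row pairs
    int best = 0;
    double bestLen = -1.0;
    for (int k = 0; k < 3; ++k) {
        const double* a = M[p[k]];
        const double* b = M[q[k]];
        c[k][0] = a[1]*b[2] - a[2]*b[1];
        c[k][1] = a[2]*b[0] - a[0]*b[2];
        c[k][2] = a[0]*b[1] - a[1]*b[0];
        const double len2 = c[k][0]*c[k][0] + c[k][1]*c[k][1] + c[k][2]*c[k][2];
        if (len2 > bestLen) { bestLen = len2; best = k; }
    }
    const double len = std::sqrt(bestLen);
    for (int i = 0; i < 3; ++i)
        v[i] = c[best][i] / len;    // any scalar multiple is also valid
}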

If you have access to a linear algebra package you should just use that, although you could optimize by solving the cubic roots directly, etc., because the package would probably use a modification of the QR algorithm, which is slower in the 3x3 case.
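For concreteness, here is a sketch (placeholder names, my own illustration) of assembling the 3x3 covariance matrix whose eigenvector for the smallest eigenvalue is the best-fit plane normal; the centroid p is the plane position:

struct Vec3 { double x, y, z; };

// C = sum_i (x_i - p)(x_i - p)^T, the symmetric matrix to eigen-decompose.
void CovarianceMatrix(const Vec3* pts, int n, double C[3][3], Vec3& p)
{
    p = {0.0, 0.0, 0.0};
    for (int i = 0; i < n; ++i) {
        p.x += pts[i].x; p.y += pts[i].y; p.z += pts[i].z;
    }
    p.x /= n; p.y /= n; p.z /= n;                // centroid = plane position

    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            C[r][c] = 0.0;
    for (int i = 0; i < n; ++i) {
        const double d[3] = { pts[i].x - p.x, pts[i].y - p.y, pts[i].z - p.z };
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                C[r][c] += d[r] * d[c];          // accumulate outer product
    }
}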
Thank you! Well, I think I might try to solve using the cubic roots because there's a good chance that this function will go into our library and be re-used.

To give some context: I previously used a function (not written by me) that simply appears to eliminate the upper-triangular part of the matrix, i.e. entries (1,2), (1,3), (2,3). Unfortunately this function has not been stable enough for me and used a lot of square roots. This appears to be a QL decomposition, but it's quite possible that it is not very well written. (I think the problem comes in when those off-diagonal entries are very small.) From what I understand, it is more efficient to first convert the matrix into tridiagonal form??? I found some public domain code at http://barnesc.blogspot.com/2007/02/eigenvectors-of-3x3-symmetric-matrix.html that appears to do this.

But suppose I try to find the cubic roots: should I use Cardano's method? Looking at the Wikipedia reference it appears that you first have to find the discriminant D = q^2/4 + p^3/27 to determine how many roots the cubic equation has, and then evaluate the three possible calculations of t. But those imaginary numbers worry me...
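On the imaginary-numbers worry: for a real symmetric matrix the characteristic cubic always has three real roots, so you always land in the trigonometric (acos) branch of the cubic solution and never need complex arithmetic. Here is a sketch of that standard closed-form algorithm (variable names are mine):

#include <cmath>
#include <algorithm>

// Eigenvalues of a real symmetric 3x3 matrix A, returned in descending
// order, using the trigonometric solution of the characteristic cubic.
void Eigenvalues(const double A[3][3], double eig[3])
{
    const double kPi = 3.14159265358979323846;
    const double p1 = A[0][1]*A[0][1] + A[0][2]*A[0][2] + A[1][2]*A[1][2];
    if (p1 == 0.0) {                 // A is already diagonal
        eig[0] = A[0][0]; eig[1] = A[1][1]; eig[2] = A[2][2];
        return;
    }
    const double q = (A[0][0] + A[1][1] + A[2][2]) / 3.0;   // trace / 3
    const double p2 = (A[0][0]-q)*(A[0][0]-q) + (A[1][1]-q)*(A[1][1]-q)
                    + (A[2][2]-q)*(A[2][2]-q) + 2.0*p1;
    const double p = std::sqrt(p2 / 6.0);

    // B = (A - q*I) / p; r = det(B)/2 lies in [-1,1] up to rounding error.
    double B[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            B[i][j] = (A[i][j] - (i == j ? q : 0.0)) / p;
    const double detB = B[0][0]*(B[1][1]*B[2][2] - B[1][2]*B[2][1])
                      - B[0][1]*(B[1][0]*B[2][2] - B[1][2]*B[2][0])
                      + B[0][2]*(B[1][0]*B[2][1] - B[1][1]*B[2][0]);
    const double r = std::max(-1.0, std::min(1.0, detB / 2.0));

    const double phi = std::acos(r) / 3.0;
    eig[0] = q + 2.0*p*std::cos(phi);                 // largest
    eig[2] = q + 2.0*p*std::cos(phi + 2.0*kPi/3.0);   // smallest
    eig[1] = 3.0*q - eig[0] - eig[2];                 // middle (via the trace)
}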
A 3x3 matrix is a simple case. I used it for fitting an oriented bounding box with the simplest solution (see the sketch after this list):
1. Find one root x0 using Newton iteration.
2. Divide the original polynomial by (x - x0).
3. Use the good old quadratic formula to find the other two roots.
4. Profit.
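A rough sketch of that recipe (my own implementation of the idea), assuming the monic cubic x^3 + b*x^2 + c*x + d, which for a symmetric matrix is guaranteed to have three real roots:

#include <cmath>
#include <algorithm>

// Roots of x^3 + b x^2 + c x + d, assuming all three are real.
void CubicRoots(double b, double c, double d, double r[3])
{
    // 1. Newton iteration for one root (crude starting guess of 0; a
    //    trace-based guess would converge faster in practice).
    double x = 0.0;
    for (int it = 0; it < 50; ++it) {
        const double f  = ((x + b)*x + c)*x + d;      // f(x), Horner form
        const double fp = (3.0*x + 2.0*b)*x + c;      // f'(x)
        if (fp == 0.0) { x += 1.0; continue; }        // nudge off a flat spot
        const double step = f / fp;
        x -= step;
        if (std::fabs(step) < 1e-12) break;
    }
    r[0] = x;

    // 2. Deflate: x^3 + b x^2 + c x + d = (x - x0)(x^2 + B x + C).
    const double B = b + x;
    const double C = c + B*x;

    // 3. Quadratic formula; the discriminant should be >= 0 for a
    //    symmetric matrix but may dip below zero from rounding, so clamp.
    const double disc = std::max(0.0, B*B - 4.0*C);
    const double s = std::sqrt(disc);
    r[1] = 0.5*(-B + s);
    r[2] = 0.5*(-B - s);
}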
Quote:Original post by errantkid
To give some context: I previously used a function (not written by me) that simply appears to eliminate the upper-triangular part of the matrix, i.e. entries (1,2), (1,3), (2,3). Unfortunately this function has not been stable enough for me and used a lot of square roots. This appears to be a QL decomposition, but it's quite possible that it is not very well written.


The function you are describing is probably the LU-decomposition, which is known to be unstable if the matrix contains values close to 0 (it can be perfectly well written; it's just a weakness of the method). A more stable variant is the LUP-decomposition, which uses a permutation matrix to achieve pivoting. The QR-decomposition (analogous to, but more standard than, "QL") is similar to the LU-decomposition but more stable.

biki_'s method of finding the cubic roots is commonly used. I think that Cardano's method is faster, but it is difficult to implement and I'm not sure it works in all cases.

[Edited by - yahastu on February 13, 2009 10:09:50 AM]
Thank you both! The Newton iteration is nice and simple to implement (I like it!). ++yahastu and ++biki_

Just a note for any one else looking for this stuff: After much searching I found that http://www.geometrictools.com/LibFoundation/NumericalAnalysis/NumericalAnalysis.html has documents and code for QR decomposition and root finding.

When my deadlines are a little less tight I'd like to pit both techniques against each other.

EDIT:
According to the docs on aforementioned website:
"The noniterative approach is comparable in accuracy and has a speed-up of about 4.5, which is a significant savings in computation time." (compared to QR decomposition for a 3x3 matrix). They appear to use Cardano's method and it is indeed quite complicated, but has a nice payoff if you're careful to avoid numerical problems...


[Edited by - errantkid on February 14, 2009 12:57:05 AM]

This topic is closed to new replies.
