Impulse Solvers and Guaranteed Collision-Free Frames

11 comments, last by raigan 2 years, 6 months ago

D.V.D said:
So the physics engines I see on github are sorta “distributing” this computation across frames, and hoping that over time, we won't have new collisions created as we resolve contact points since things will stabilize.

Yeah, and there are some arguments why this should do fine:
Timestep is small, so error is small too.
Chaotic scene (explosion, many collisions, high velocities) might not need an accurate solution - it's just chaos.
Resting scene (stack of boxes) needs an accurate solution and is the main problem for us to solve. For that we get a complete graph of all contacts anyway.

D.V.D said:
Add n^2 penetration constraints for each object vs all others such that each object doesn't overlap (we need the equation/SDF here to specify the constraint)

Assuming this algorithm aims for a more complete collision graph than the above, we could add pairs of potentially colliding bodies so we can keep checking them within each solver iteration.
We could implement it with a speculative broad phase with extended bounds of the bodies. This way we don't have to run the complete collision detection for each solver iteration.
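Something like this, sketched in Python (1D AABBs to keep it short; the names and the margin value are made up, and a real engine would do this in 2D/3D with a spatial structure):

```python
# Sketch of a speculative broad phase: grow each body's AABB along its
# expected motion this frame plus a safety margin, then pair-test the
# expanded boxes. Pairs that pass are kept around so the solver can keep
# checking them every iteration without redoing full collision detection.
from itertools import combinations

def expanded_aabb(pos, half, vel, dt, margin=0.05):
    """1D AABB (min, max) grown along the velocity and by a margin."""
    lo = pos - half - margin + min(vel * dt, 0.0)
    hi = pos + half + margin + max(vel * dt, 0.0)
    return lo, hi

def speculative_pairs(bodies, dt):
    """bodies: list of (pos, half_extent, vel). Returns index pairs whose
    expanded AABBs overlap -> candidates to keep checking in the solver."""
    boxes = [expanded_aabb(p, h, v, dt) for (p, h, v) in bodies]
    return [(i, j) for i, j in combinations(range(len(boxes)), 2)
            if boxes[i][0] <= boxes[j][1] and boxes[j][0] <= boxes[i][1]]
```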
Result should be more accurate, but also more work because there are more potentially colliding pairs than actually colliding pairs, and we need to check for penetrations constantly.
The penetration check remains expensive even if you use SDFs, because you have to intersect two SDFs for each pair: it's not a single lookup but a search for the deepest penetration out of many candidates, or some weighted average of all penetrations.
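To illustrate why it's a search rather than a lookup, here is a toy sketch (circle SDF and the sampling scheme are made up for illustration):

```python
# Sketch: find the deepest penetration of body B into body A by sampling
# candidate points of B in A's SDF and keeping the most negative value.
# The whole point set must be scanned - no single query answers it.
import math

def circle_sdf(cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside."""
    return lambda x, y: math.hypot(x - cx, y - cy) - r

def deepest_penetration(sdf_a, samples_b):
    """samples_b: candidate surface points of body B. Returns
    (depth, point); depth < 0 means that point penetrates body A."""
    return min((sdf_a(x, y), (x, y)) for x, y in samples_b)
```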
And finally, there is still no guarantee we detect all collisions as the system changes. Things could go out of our extended bounds, which was just a guess we did for the broad phase.

I assume it's no win in most cases: performance suffers and errors still remain. Due to errors you need good-enough results from an incomplete collision graph anyway, so I would implement the faster algorithm first. Maybe it's good enough, and you'll see it's not necessary to handle each and every contact which might happen within one frame. (But that's just my personal gut feeling, assuming generic scenes.)


@D.V.D I would recommend looking up Erin Catto's talks about how the Box2D solver works (see https://box2d.org/publications/ ), and/or you could read the paper “Position-Based Dynamics” (or these slides http://mmacklin.com/EG2015PBD.pdf ), to get some good references for why and how most people solve game physics using iterative Gauss-Seidel methods (which amount to solving pairwise constraints and hoping the error propagates/diffuses and things eventually converge on a solution).

There are other more direct solvers, eg IIRC Doom 3 solved the big-matrix problem directly, and Cel Damage used a Conjugate Gradient-based solver.

About “the resolved positions might still collide with objects they didn't previously collide when we setup the matrix”: in an impulse-based solver this is a problem for next frame (ie those collisions will be found next frame), because during the current frame nothing changes position, only velocities are changed. (This is what lets impulse-based solvers be so fast: the constraint gradients (which are usually a function of position) are fixed, so they can be computed once and re-used over multiple solver iterations during the same frame.) Position errors are typically corrected by feeding some of the error into the velocities, ie generating extra forces which push the positions out of error, and/or you can directly solve the position constraints (see below).
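To make that concrete, here's a toy sketch of one frame of such a velocity-only solver (1D point masses; the names, setup and beta value are made up, with a Baumgarte-style bias feeding position error into the velocities):

```python
# Sketch of one frame of a sequential-impulse velocity solver for 1D
# point masses: constraint data (effective mass, bias) is built once
# from positions, then only velocities change across solver iterations.

def solve_contacts(masses, vels, contacts, dt, iters=8, beta=0.2):
    """contacts: (a, b, penetration); b sits to the right of a, so the
    relative normal velocity is vels[b] - vels[a]."""
    inv_m = [1.0 / m for m in masses]
    rows = []
    for a, b, pen in contacts:
        eff_mass = 1.0 / (inv_m[a] + inv_m[b])   # fixed for the whole frame
        bias = beta * max(pen, 0.0) / dt         # position error -> velocity
        rows.append([a, b, eff_mass, bias, 0.0]) # last slot: accumulated lambda
    for _ in range(iters):
        for row in rows:
            a, b, eff_mass, bias, acc = row
            v_rel = vels[b] - vels[a]            # negative = approaching
            lam = eff_mass * (bias - v_rel)      # drive v_rel up to bias
            new_acc = max(acc + lam, 0.0)        # contacts only push apart
            lam = new_acc - acc
            row[4] = new_acc
            vels[a] -= inv_m[a] * lam
            vels[b] += inv_m[b] * lam
    return vels
```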

In a position-based simulator, it's indeed a problem that the solver can move things into collision when there weren't any collisions initially. The typical solution for this is “speculative contacts”: you generate a collision constraint between anything that might collide this frame (ie generate a constraint if the shapes are within some distance apart, not only if they currently overlap), and then when solving these constraints you skip them if they're not currently overlapping. This moves collision detection into the solver loop a bit, which is more expensive. Also you can get errors depending on how much you approximate the collision constraint (ie typically they're point-to-plane constraints, while the actual shapes that generated them are finite (not infinite planes), which means you can get ghost collisions). This paper has lots of useful references for PBD simulation: http://mmacklin.com/smallsteps.pdf
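A toy version of a single speculative contact row might look like this (1D point masses again; names and setup are made up):

```python
# Sketch of a speculative contact (1D): the constraint exists while the
# shapes are still `dist` apart, and the velocity target lets them close
# exactly that gap this step but no more (preventing tunnelling).

def speculative_impulse(inv_ma, inv_mb, v_rel, dist, dt):
    """v_rel = v_b - v_a (negative = approaching); dist >= 0 is the gap.
    Returns the impulse magnitude to apply (0 if none needed)."""
    target = -dist / dt                   # allowed approach speed this step
    if v_rel >= target:                   # won't reach the surface -> skip
        return 0.0
    eff_mass = 1.0 / (inv_ma + inv_mb)
    return eff_mass * (target - v_rel)    # brings v_rel up to the target
```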

Anyway

@joej @raigan Thanks for the helpful posts, they really cleared up a lot of stuff for me. I understand why we don't do these kinds of perfect solvers with each potential collision due to the computation cost, but it helps to see what the “perfect” solution would look like, and where we choose approximations.

I'm looking to implement a PGS solver right now, but I'm running into the following issue. I'm looking at the code here as a reference: https://github.com/granj2020/Cirobb-Engine/blob/master/cirobb/Manifold.cpp ← from my understanding, this is a simplification of Box2DLite. Sequential Impulses and PGS should be the same thing, and so I went through the math of deriving how we solve the below matrix:

J M^-1 J^T lambda = -(J q2 + b)

where q2 is the integrated velocities for position/angle that are not yet corrected, and b is a bias. My understanding is that we want to throw all our constraints into J since we can have multiple constraints operating on a single body, so we want the matrix to capture that interaction. Going through the algebra, I got a pretty complicated equation that solves for lambda using Gauss-Seidel (decompose the matrix on the left into an upper and lower triangular part, then propagate the results backwards to solve for lambda). The equation is pretty complicated, but the main thing that comes out of it is that we have gradients of different constraints dotted with each other. In practice, the gradients of constraints are sparse since they affect only one or two bodies, but this is still an interaction that needs to be captured.
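Writing out the generic Gauss-Seidel update makes those coupling terms explicit (illustrative Python, not from the linked engine):

```python
# Textbook Gauss-Seidel for A lambda = rhs, where A = J M^-1 J^T.
# The off-diagonal A[i][j] are exactly the "gradients dotted with each
# other" terms for constraints i and j that share a body; they are zero
# when the two constraints touch disjoint bodies.

def gauss_seidel(A, rhs, iters=50):
    n = len(rhs)
    lam = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * lam[j] for j in range(n) if j != i)
            lam[i] = (rhs[i] - s) / A[i][i]   # uses freshest lam[j<i]
    return lam
```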

When I look at the source for https://github.com/granj2020/Cirobb-Engine/blob/master/cirobb/Manifold.cpp however, it seems to build a separate J matrix for each constraint and deal with all of them separately (even though non-penetration constraints that deal with the same body should be dotted with each other). Now I understand that PGS is all about handling one constraint at a time irrespective of the others, but doesn't that make it not equivalent to the Gauss-Seidel method? Or is there a reason why we build J separately for each constraint instead of throwing it all into one big constraint matrix? I ask because the Box2D GDC presentations appear to handle constraints this way, but the article here mentions that you want to throw the constraints all together: https://www.toptal.com/game/video-game-physics-part-iii-constrained-rigid-body-simulation and I'm not really sure which is correct now.

The equation for each x term (or in our case lambda term) in Gauss-Seidel is given here: https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method and the A matrix that we use is J M^-1 J^T, which does dot products between the gradients of constraints against each other. So if we have more than one constraint, we will have these gradient dot products in our Gauss-Seidel solver, but those don't seem to be present in the source code. Not sure if I'm just messing up my math somewhere or not getting something that's simple here.

I honestly don't know the math well enough to answer you, aside from confirming that you're correct that Box2D (and, AFAIK, every other physics engine) only solves constraints pair-wise. I think “relaxation” is the general term. (Recently Roblox added a direct solver to their simulation, and there are some other direct solvers (eg IIRC PhysX has a Featherstone solver for articulated trees of joints), but in general things are solved row-by-row.) The “small steps position-based dynamics” paper I linked above should explain this and has some references to papers where they expand on the details.

If I'm reading it correctly, Wikipedia suggests that solving each row of the constraint matrix is how these methods are meant to work. See the section containing the text “can be computed sequentially using forward substitution” on this page: https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method
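For what it's worth, you can check the equivalence numerically: applying each impulse to the body velocities immediately is what carries the off-diagonal A[i][j] = J_i M^-1 J_j^T coupling, so the explicit dot products never need to appear in the code. A small illustrative check (three 1D point masses, two contacts sharing the middle body; no clamping, names made up):

```python
# Both routines solve A lambda = -J v0 with A = J M^-1 J^T for the same
# tiny scene; one builds the big matrix, the other never does.
J = [[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]]  # contact 0: bodies 0,1; contact 1: bodies 1,2

def big_matrix_gs(inv_m, v0, iters):
    A = [[sum(J[i][k] * inv_m[k] * J[j][k] for k in range(3))
          for j in range(2)] for i in range(2)]
    rhs = [-sum(J[i][k] * v0[k] for k in range(3)) for i in range(2)]
    lam = [0.0, 0.0]
    for _ in range(iters):
        for i in range(2):
            s = sum(A[i][j] * lam[j] for j in range(2) if j != i)
            lam[i] = (rhs[i] - s) / A[i][i]
    return lam

def sequential_impulses(inv_m, v0, iters):
    v = list(v0)
    lam = [0.0, 0.0]
    for _ in range(iters):
        for i in range(2):
            jv = sum(J[i][k] * v[k] for k in range(3))          # current residual
            k_eff = sum(J[i][k] * inv_m[k] * J[i][k] for k in range(3))
            dlam = -jv / k_eff
            lam[i] += dlam
            for k in range(3):                                   # immediate update
                v[k] += inv_m[k] * J[i][k] * dlam                # carries coupling
    return lam
```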

Or, if instead of changing the state when solving each row, you instead accumulate and average the changes, you get this method: https://en.wikipedia.org/wiki/Jacobi_method

(Jacobi converges more slowly, but behaves better in the presence of contradictory/impossible pairs of constraints, since it produces a midpoint instead of ping-ponging between the solutions.)
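The Jacobi variant of the same row update looks like this (illustrative sketch; note every row reads only last iteration's lambdas, which is also what makes it easy to parallelise):

```python
# Jacobi iteration for A lambda = rhs: all rows are updated from the
# previous iterate, unlike Gauss-Seidel which uses fresh values.

def jacobi(A, rhs, iters=100):
    n = len(rhs)
    lam = [0.0] * n
    for _ in range(iters):
        lam = [(rhs[i] - sum(A[i][j] * lam[j] for j in range(n) if j != i))
               / A[i][i] for i in range(n)]   # old lam used throughout
    return lam
```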

I would recommend reading through the archives of the Bullet forum, there are a bunch of threads which cover physics simulation (in around 2009 when I was learning, I posted a lot of questions and some kind people helped walk me through the answers): https://pybullet.org/Bullet/phpBB3/viewforum.php?f=4

