
Scaling physics simulations

Started by February 14, 2013 02:45 PM
3 comments, last by Emergent 12 years ago

Hi, I'm having trouble using an integrated version of ODE; basically I'm modelling a golf ball. If I use the correct dimensions, mass, friction, etc., the collision detection fails unless I raise the iteration rate very high; however, this is intended for the Android market, so that's a no-no.

I'm attempting to scale everything manually.

If I remember correctly, scaling by 2 multiplies the volume by 8; in that case, should I multiply the mass by 8 or by 2?

That aside, in tests the angular velocity seems totally off. Initially (before scaling) I apply an angular impulse of 500 radians per second; after scaling this looks very slow, so I doubled it, and then multiplied it by 8, but it still appears wrong. Should I be amending the friction, damping, bounce, etc.?

Thanks for any help...

(I've not prefixed this with ODE since I believe it's applicable to all engines, really.)

Well, for the second question: yes, doubling the dimensions increases the volume by 2^3 = 8 (since you're doubling in every dimension). And since density = mass / volume, we have mass = density * volume; density is supposed to stay constant, so if you multiply the volume by X you must also multiply the mass by X to keep the density constant.
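To make that concrete, here's a minimal sketch (the function name and the ball mass are my own illustration, not from ODE):

```python
def scaled_mass(mass, scale):
    """Mass after multiplying all linear dimensions by `scale`,
    keeping density constant: volume goes as scale**3, so mass must too."""
    return mass * scale**3

# A regulation golf ball is about 0.0459 kg; doubling its size
# multiplies its volume, and therefore its mass, by 2**3 = 8.
doubled = scaled_mass(0.0459, 2.0)
```

So for a scale factor of 2 you multiply the mass by 8, not by 2.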



I don't have a specific answer, only a thought, or possible approach. Your problem made me think of astrodynamics work I've done (computation of the orbits of satellites, spacecraft, debris, asteroids, etc). That field has a similar problem. If you try to compute orbits the "obvious way", namely just sum the gravitational forces of relevant objects (usually sun and planets), to get any sort of precision you must reduce the time step to absurdly tiny increments, and the computations take forever (far longer than is practical).

In effect, at each time step you compute the position and velocity, compute the next position, then repeat. In other words, the computation assumes the object moves in a straight line between steps. While that may seem reasonable given the huge distances involved, the difference between the computed and the actual path is fatal to medium- and long-term precision.
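The "obvious way" described above is just explicit Euler integration; a minimal sketch (names are mine for illustration):

```python
def euler_step(pos, vel, accel, dt):
    """One explicit Euler step: the object is assumed to travel in a
    straight line at its current velocity for the whole timestep,
    which is exactly the approximation that kills long-term precision."""
    new_pos = tuple(p + v * dt for p, v in zip(pos, vel))
    new_vel = tuple(v + a * dt for v, a in zip(vel, accel))
    return new_pos, new_vel
```

Each step the curvature of the true path within the timestep is thrown away, so the error accumulates step after step.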

The usual solution is to go ahead and compute the forces of massive objects at each timestep, and adjust the direction and velocity accordingly. But then what you do is take that linear position and velocity, and compute the conic (ellipse, parabola, hyperbola) orbit that position and velocity corresponds with relative to each massive body (sun, planet, moon), and sum the changes in position and velocity those influences generate (to the conic orbits). Therefore, the changes between time steps are not linear, but correspond to conic orbital paths, which are correct to vastly greater precision.

This is, of course, annoying, since at every step we must convert from linear position, velocity, acceleration to [multiple] conic orbits, then perform computations, then convert back to linear again. However, that's what it takes to assure the spacecraft or asteroid passes near earth (or moon, etc) rather than impacts into it, or vice versa, or gets lost over time, etc.
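The first part of that conversion, deciding which conic a given position and speed correspond to, falls out of the specific orbital energy. A hedged sketch (the function name and tolerance are my own, and a full conversion would also need the eccentricity vector and orientation):

```python
import math

def conic_type(r, v, mu):
    """Classify the instantaneous conic from distance r (m), speed v (m/s)
    and gravitational parameter mu = G*M (m^3/s^2), using the specific
    orbital energy eps = v^2/2 - mu/r:
    negative -> ellipse, ~zero -> parabola, positive -> hyperbola."""
    eps = v * v / 2.0 - mu / r
    if abs(eps) < 1e-9 * mu / r:   # tolerance relative to the potential term
        return "parabola"
    return "ellipse" if eps < 0.0 else "hyperbola"
```

For an ellipse the semi-major axis then follows from eps = -mu / (2 * a), which is the starting point for propagating along the conic instead of along a straight line.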

You likely need to find something similar... some way to represent and compute motion through fluids over longer time frames, then convert back to linear form again at each conveniently long time step. I don't know what those longer-term path equations are, but you probably need to find out. This approach will help (if you can find the appropriate equations), but your "longer time step" still can't be too long. I'm guessing maybe 0.100 to 1.000 seconds may be practical.

That's my best guess off the top of my head.

You can get the scaling working, but it won't help you at all - all you're doing is changing units. If you make all your lengths bigger by a factor of 10, your masses need to increase by 1000. However, if you don't want everything to look like it's in slow motion, gravity will also need to increase by 10, as will your velocities, so you'll end up with exactly the same penetration problems.
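The claim that rescaling is a no-op is easy to check: what determines tunnelling is the distance travelled per step relative to the ball's size, and that ratio is invariant under the rescaling. A sketch (names are mine):

```python
def penetration_ratio(speed, diameter, dt):
    """Distance travelled in one fixed timestep, as a fraction of the
    ball's diameter -- the quantity that actually determines tunnelling."""
    return (speed * dt) / diameter

k = 10.0  # scale all lengths (and hence speeds, at unchanged timescale) by 10
base   = penetration_ratio(90.0,     0.045,     1.0 / 60.0)
scaled = penetration_ratio(90.0 * k, 0.045 * k, 1.0 / 60.0)
# base and scaled are identical: the rescaled world tunnels just as badly.
```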

If you've only got one object to simulate then you probably can afford to have a pretty high simulation rate (I use about 300Hz in my flight sim, PicaSim), but I suspect it's not going to be good enough. The max golf ball speed is 200mph, so 90m/s. For a diameter of 4.5cm that means it travels its own length in 0.045/90 seconds, which means you need a physics update rate of at least 2000 per second, if you don't want the ball to penetrate the ground significantly.
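That back-of-the-envelope rate is just speed divided by ball diameter (one diameter travelled per step); a sketch with the numbers above:

```python
def min_update_rate(speed, diameter):
    """Update rate (Hz) needed so the ball moves at most one diameter
    per physics step at the given speed."""
    return speed / diameter

# 200 mph is about 90 m/s; a golf ball is about 4.5 cm across.
rate = min_update_rate(90.0, 0.045)   # about 2000 steps per second
```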

If ODE supports continuous collision detection (CCD), that should help, and you can run at more sensible timesteps. Bullet (which you can use on Android and iOS) does. PhysX (free on Android) does too; it's available on iOS, but I'm not sure whether it's free there.

However, CCD will prevent penetration, but you might end up with weird angular velocities when the ball lands, and that will likely be really difficult to tune and adjust - which will matter to you.

My recommendation would be to ditch the physics engine and roll your own. It's not that difficult since a sphere will only ever have one point of contact, and that with a static object. Physics engines are only complicated because they have to handle multiple contacts between dynamic objects etc. This way you have complete control over the ball movement and can easily tune it to provide the results you want.
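The core of a roll-your-own single-sphere response is only a few lines: detect penetration at the one contact point, project the ball out, and reflect the normal velocity. A minimal sketch for flat ground (names, tuple layout, and the restitution model are my own simplifications, not from ODE):

```python
def resolve_ground_contact(pos, vel, radius, ground_z, restitution):
    """Single-contact response for a sphere against flat static ground:
    push the sphere out of penetration and reflect (and damp) the
    vertical velocity. pos and vel are (x, y, z) tuples."""
    x, y, z = pos
    vx, vy, vz = vel
    penetration = (ground_z + radius) - z
    if penetration > 0.0 and vz < 0.0:
        z = ground_z + radius          # project out of the ground
        vz = -vz * restitution         # bounce with energy loss
    return (x, y, z), (vx, vy, vz)
```

Spin, rolling friction, and sloped terrain bolt on to the same single-contact structure, and every coefficient is yours to tune directly.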

What is the golf ball colliding with? A heightmap? Anything else?

I ask because the problem of detecting collisions between a sphere and a heightmap is a lot simpler than the general collision detection problem. There is a lot of structure to the problem that a general-purpose physics engine won't exploit. Specifically, you always know whether the ball is below or above ground by comparing its altitude to the terrain altitude -- regardless of whether you have "noticed" an intersection between the sphere and the terrain's infinitely-thin surface.

If you were to approximate the ball by a point, then this would be trivial:

[source]
if (ball.z > terrain_height(ball.x, ball.y))
{
    // Above ground
}
else
{
    // Below ground
}
[/source]

Since your ball has a nonzero radius, this gets very slightly trickier, but not by much. Formally, the thing you'd want is the Minkowski sum of a sphere with your heightmap. I'm assuming you actually have a very special case -- that your sphere's radius is smaller than 1/2 the vertex spacing in your heightmap -- which will make things simpler. I haven't worked out the details myself, but I assume other people have and you can find them with some googling.

It's going to boil down to a few cases: when you're near a concave vertex, your dilated heightmap will return the z-coordinate of a sphere of radius 'r' centered at that vertex, and when you're away from a vertex, it will return the height of the corresponding plane pushed out by a distance 'r' along its normal (where 'r' is the radius of the golf ball). I've illustrated this in 1D below: the original heightmap is in blue, and the "dilated" heightmap is in red.

[attachment=13653:heightmap-sphere-minkowski-sum.png]
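For the away-from-vertex case, "pushing the plane out by a distance 'r' along its normal" is equivalent to testing the signed distance from the ball's centre against 'r'. A minimal sketch (the function name is mine, not from ODE):

```python
import math

def sphere_touches_plane(center, plane_point, plane_normal, radius):
    """True when the sphere intersects the plane dilated by `radius`
    along its normal -- i.e. when the signed centre-to-plane distance
    is at most the ball's radius."""
    n = plane_normal
    norm = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    n = (n[0] / norm, n[1] / norm, n[2] / norm)
    d = sum((c - p) * ni for c, p, ni in zip(center, plane_point, n))
    return d <= radius
```

The near-vertex case is the same idea with a point-to-centre distance test against the vertex instead.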

[EDIT: In fact, other people have had similar problems with ODE and heightmaps.]

