
Star System N-Body Simulation Over Long Time Periods

Started by March 14, 2015 08:29 PM
6 comments, last by h4tt3n 9 years, 8 months ago

I've been working on a rather complex game that requires procedural generation of stars, planets, and macroscopic organisms, along with realistically simulating these systems over time spans ranging from extremely short to very long. To simulate the evolution of star systems (just one at the moment) I implemented an n-body physics simulator that also handles collisions (the colliding objects simply combine masses, and the new velocity vector is recalculated). It uses a time step of seconds when the game is running in real time. Here N maxes out at 200 (including stars, planets, planetoids, and moons), so the simulation isn't very expensive to run on the CPU.

However, my game has a feature where the user can "time-warp" anywhere from days to millions of years into the future, so the n-body sim has to use larger time steps. Scaling up to days or even one year isn't too much of a problem, but one million years is a different story. Most n-body simulations that run over millions of years model galaxies or star clusters and use a time step on the order of thousands of years, which is reasonable considering the massive distances between stars. For a system as small (in comparison) as a star system, I speculate that the largest time step that retains enough accuracy would be a few days, maybe up to a week. Considering the number of days in one million years (roughly 365 million), simply scaling up the simulation isn't a viable option. So I'm wondering if there is any way of simulating 200 bodies over the course of a million years without sacrificing too much accuracy. Another big constraint is that, for gameplay purposes, the simulation can't take much longer than a minute.
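For reference, the merge-on-collision step described above (combining masses and recomputing the velocity from conservation of momentum) could look roughly like this. The `Vec3`/`Body` types and the `Merge` function are illustrative assumptions, not the actual game code:

```cpp
// Illustrative types -- the actual game code almost certainly differs.
struct Vec3 { double x, y, z; };
struct Body {
    double mass;      // kg
    Vec3   position;  // m
    Vec3   velocity;  // m/s
};

// Perfectly inelastic merge: mass is conserved and the new velocity follows
// from conservation of linear momentum. The merged position is the
// mass-weighted average (the barycenter) of the two bodies.
Body Merge(const Body& a, const Body& b)
{
    Body out;
    out.mass = a.mass + b.mass;
    out.position = {
        (a.mass * a.position.x + b.mass * b.position.x) / out.mass,
        (a.mass * a.position.y + b.mass * b.position.y) / out.mass,
        (a.mass * a.position.z + b.mass * b.position.z) / out.mass };
    out.velocity = {
        (a.mass * a.velocity.x + b.mass * b.velocity.x) / out.mass,
        (a.mass * a.velocity.y + b.mass * b.velocity.y) / out.mass,
        (a.mass * a.velocity.z + b.mass * b.velocity.z) / out.mass };
    return out;
}
```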

Wikipedia has an article on some of the different methods that can be used, so you might want to start there. In particular, all the particles representing one star system (or a planet and its moons) can probably be abstracted into a single point mass located at the barycenter: if the other objects are far enough away the effect is practically identical. Doing this dynamically with a tree simulation might be your best bet if you're trying to simulate a solar system in detail.

Getting a deterministic result with a variable timestep may be tricky. Depending on how your simulation works, can you abstract away the smaller particles when fast-forwarding?
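As a rough illustration of the barycenter idea, here is a sketch that collapses a group of bodies into a single point mass; the types and names are assumptions, not anyone's actual code:

```cpp
#include <vector>

// Illustrative types, matching the collision sketch above.
struct Vec3 { double x, y, z; };
struct Body { double mass; Vec3 position, velocity; };

// Collapse a group of bodies (e.g. a planet and its moons) into a single
// point mass at the group's barycenter, moving with the momentum-averaged
// velocity. Sufficiently distant bodies see practically the same field.
Body CollapseToPointMass(const std::vector<Body>& group)
{
    Body out{0.0, {0.0, 0.0, 0.0}, {0.0, 0.0, 0.0}};
    for (const Body& b : group)
        out.mass += b.mass;
    for (const Body& b : group) {
        const double w = b.mass / out.mass;   // mass weight of this body
        out.position.x += w * b.position.x;
        out.position.y += w * b.position.y;
        out.position.z += w * b.position.z;
        out.velocity.x += w * b.velocity.x;
        out.velocity.y += w * b.velocity.y;
        out.velocity.z += w * b.velocity.z;
    }
    return out;
}
```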


Doing this dynamically with a tree simulation might be your best bet if you're trying to simulate a solar system in detail.

Getting a deterministic result with a variable timestep may be tricky. Depending on how your simulation works, can you abstract away the smaller particles when fast-forwarding?

Well, I assumed those methods were more useful when N is really large, as in most simulations. If I use a tree method, the subtrees representing smaller systems would still require smaller time steps, yet they would still have to be simulated for a million years with hour-to-day-long steps, which is where the bottleneck is. If I understand correctly, N could be 5 (in the case of a small leaf) but we would still have to use a small time step and would still have problems. It would be an improvement, but still not a viable solution.

EDIT: That being said, I'm still definitely going to implement some sort of octree system, since I could get the complexity down to O(n log n) as opposed to O(n^2), but I'm not sure that entirely solves the problem.
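For what it's worth, the heart of a Barnes-Hut style traversal is just the opening-angle test; a minimal sketch follows, with the node layout and the `theta` threshold being illustrative assumptions:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// One octree cell: a leaf holds a single body, an internal node summarises
// its contents as a total mass at the contents' barycenter.
struct OctreeNode {
    double mass;                        // total mass inside the cell
    Vec3   barycenter;                  // mass-weighted center of the cell's bodies
    double size;                        // side length of the cubic cell
    std::vector<OctreeNode*> children;  // empty for leaves
};

const double G = 6.674e-11; // gravitational constant, SI units

// Accumulate the acceleration at `pos` due to the tree rooted at `node`.
// If the cell looks small from `pos` (size/dist < theta) it is treated as a
// single point mass; otherwise we recurse. theta ~= 0.5 is a common
// accuracy/speed trade-off.
void Accumulate(const OctreeNode* node, const Vec3& pos, double theta, Vec3& accel)
{
    const Vec3 d{ node->barycenter.x - pos.x,
                  node->barycenter.y - pos.y,
                  node->barycenter.z - pos.z };
    const double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (dist == 0.0)
        return; // the cell contains only the body we are evaluating

    if (node->children.empty() || node->size / dist < theta) {
        const double a = G * node->mass / (dist * dist * dist);
        accel.x += a * d.x;
        accel.y += a * d.y;
        accel.z += a * d.z;
    } else {
        for (const OctreeNode* child : node->children)
            Accumulate(child, pos, theta, accel);
    }
}
```

Building and refitting the tree each step is the other half of the work, but the traversal above is what brings the force evaluation down to roughly O(n log n).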

You can increase your timestep substantially if you use a more accurate numerical integration method like RK4, since the local (per-step) error for such a method is on the order of O(dt^5), whereas it is O(dt^2) for first-order methods like explicit Euler. You may be able to increase your timestep by a few orders of magnitude and still get reasonable results, at roughly 4x the cost per time step (RK4 needs four force evaluations per step instead of one).
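As an illustration of what that looks like for an n-body system, here is a minimal classic RK4 step over the whole state (positions and velocities). The types and helper names are assumptions, not code from the thread:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal vector type with only the operators the sketch needs (C++14).
struct Vec3 {
    double x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
};
struct Body { double mass; Vec3 position, velocity; };

const double G = 6.674e-11; // gravitational constant, SI units

// Brute-force O(n^2) pairwise gravitational accelerations.
std::vector<Vec3> Accelerations(const std::vector<Body>& bodies)
{
    std::vector<Vec3> acc(bodies.size());
    for (std::size_t i = 0; i < bodies.size(); ++i)
        for (std::size_t j = 0; j < bodies.size(); ++j) {
            if (i == j) continue;
            const Vec3 d = bodies[j].position - bodies[i].position;
            const double r = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            acc[i] = acc[i] + d * (G * bodies[j].mass / (r * r * r));
        }
    return acc;
}

// One derivative sample: dpos = velocities, dvel = accelerations.
struct Derivative { std::vector<Vec3> dpos, dvel; };

// Derivative of the state "bodies advanced by h along d" (d may be empty).
Derivative Evaluate(const std::vector<Body>& bodies, const Derivative& d, double h)
{
    std::vector<Body> s = bodies;
    if (!d.dpos.empty())
        for (std::size_t i = 0; i < s.size(); ++i) {
            s[i].position = s[i].position + d.dpos[i] * h;
            s[i].velocity = s[i].velocity + d.dvel[i] * h;
        }
    Derivative out;
    for (const Body& b : s) out.dpos.push_back(b.velocity);
    out.dvel = Accelerations(s);
    return out;
}

// Classic fourth-order Runge-Kutta step for the whole system.
void RK4Step(std::vector<Body>& bodies, double dt)
{
    const Derivative k1 = Evaluate(bodies, {}, 0.0);
    const Derivative k2 = Evaluate(bodies, k1, dt * 0.5);
    const Derivative k3 = Evaluate(bodies, k2, dt * 0.5);
    const Derivative k4 = Evaluate(bodies, k3, dt);
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        bodies[i].position = bodies[i].position +
            (k1.dpos[i] + k2.dpos[i] * 2.0 + k3.dpos[i] * 2.0 + k4.dpos[i]) * (dt / 6.0);
        bodies[i].velocity = bodies[i].velocity +
            (k1.dvel[i] + k2.dvel[i] * 2.0 + k3.dvel[i] * 2.0 + k4.dvel[i]) * (dt / 6.0);
    }
}
```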

As long as the simulation is stable, do you really need high accuracy anyway, or is good-enough ok?


As long as the simulation is stable, do you really need high accuracy anyway, or is good-enough ok?

"Good-enough" is sufficient, using RK4 integration along with the Barnes-Hut method (for adaptive time steps and less complexity), together allowing me to increase time steps might do the trick. The system tends to become more stable over time and is initially rather chaotic with objects colliding frequently which is where more accuracy would be necessary. Anyway, I will see how that goes and report back later.

To be honest, over very long periods most n-body systems are chaotic and strongly sensitive to their starting state, so it's not really clear what it means to "simulate" a star system over millions of years; whatever state you end up with will depend mostly on the precision you carry through the simulation, i.e. it will be effectively arbitrary.

Can't you switch the smaller n-body systems like star systems to a local, on-rails orbital model during timewarps with long timesteps? That would make the most sense to a player IMHO, and might work with your gameplay (unless it's a multiplayer game, where the timewarp mechanisms can cause problems...)




Can't you switch the smaller n-body systems like star systems to a local, on-rails orbital model during timewarps with long timesteps?

I would use an on-rails system except that, with the way the star systems are generated, objects in the initial state inevitably collide (Theia and Earth, for example), moons may get sent onto escape trajectories and recaptured elsewhere, and so on, and there isn't really a way to account for this with an on-rails system. In addition, there are no closed-form solutions for most 3-body configurations, and none at all for the rare but possible cases with n > 3 interacting bodies. Of course, I could restrict the procedural generator so that certain configurations never appear, but that would be less faithful to the way star systems actually form (the opposite of what I'm going for) than an n-body sim, and it would result in less varied, unique, and interesting solar systems (a major problem with most games that use proc-gen).


As long as the simulation is stable, do you really need high accuracy anyway, or is good-enough ok?

"Good-enough" is sufficient, using RK4 integration along with the Barnes-Hut method (for adaptive time steps and less complexity), together allowing me to increase time steps might do the trick. The system tends to become more stable over time and is initially rather chaotic with objects colliding frequently which is where more accuracy would be necessary. Anyway, I will see how that goes and report back later.

RK4 integration is *not* a good choice for n-body simulations, as it is not symplectic and loses energy over time. Instead I would recommend David Whysong's symplectic integrators, which can be found here:

http://www.projectpluto.com/symp.cpp
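The linked file is a complete package; for the basic idea, here is a minimal kick-drift-kick (leapfrog) sketch, the simplest second-order symplectic scheme. It is not the linked code, and the types are illustrative:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 {
    double x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
};
struct Body { double mass; Vec3 position, velocity; };

const double G = 6.674e-11; // gravitational constant, SI units

// Brute-force pairwise gravitational accelerations.
std::vector<Vec3> Accelerations(const std::vector<Body>& bodies)
{
    std::vector<Vec3> acc(bodies.size());
    for (std::size_t i = 0; i < bodies.size(); ++i)
        for (std::size_t j = 0; j < bodies.size(); ++j) {
            if (i == j) continue;
            const Vec3 d = bodies[j].position - bodies[i].position;
            const double r = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            acc[i] = acc[i] + d * (G * bodies[j].mass / (r * r * r));
        }
    return acc;
}

// Kick-drift-kick leapfrog: second order and symplectic, so with a fixed
// time step the energy error stays bounded instead of drifting the way it
// does with Euler or RK4.
void LeapfrogStep(std::vector<Body>& bodies, double dt)
{
    std::vector<Vec3> acc = Accelerations(bodies);
    for (std::size_t i = 0; i < bodies.size(); ++i)   // half kick
        bodies[i].velocity = bodies[i].velocity + acc[i] * (dt * 0.5);
    for (std::size_t i = 0; i < bodies.size(); ++i)   // full drift
        bodies[i].position = bodies[i].position + bodies[i].velocity * dt;
    acc = Accelerations(bodies);                      // forces at new positions
    for (std::size_t i = 0; i < bodies.size(); ++i)   // half kick
        bodies[i].velocity = bodies[i].velocity + acc[i] * (dt * 0.5);
}
```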

If you replace the time-stepping method with Kepler's equations of planetary motion, you can accurately compute the position of any celestial body at any conceivable point in time (see the sketch below the links). The drawback is that the equations are limited to two-body interaction.

http://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion

http://www.slideshare.net/IngesAerospace/orbital-mechanic-4-position-and-velocity-as-a-function-of-time
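For what that boils down to in code, here is a minimal sketch (elliptical two-body orbits only; the names, units, and lack of orbital-plane orientation are simplifying assumptions):

```cpp
#include <cmath>

// Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E by
// Newton's method. Valid for elliptical orbits (0 <= e < 1).
double SolveKepler(double meanAnomaly, double eccentricity)
{
    double E = meanAnomaly; // reasonable starting guess for moderate e
    for (int i = 0; i < 20; ++i) {
        const double f  = E - eccentricity * std::sin(E) - meanAnomaly;
        const double fp = 1.0 - eccentricity * std::cos(E);
        const double dE = f / fp;
        E -= dE;
        if (std::fabs(dE) < 1e-12) break;
    }
    return E;
}

// In-plane position of an orbiting body at time t (seconds since periapsis),
// given semi-major axis a (m), eccentricity e, and gravitational parameter
// mu = G * (m1 + m2). The focus (the central body) sits at the origin.
void PositionAtTime(double a, double e, double mu, double t, double& x, double& y)
{
    const double n = std::sqrt(mu / (a * a * a));      // mean motion, rad/s
    const double M = n * t;                            // mean anomaly
    const double E = SolveKepler(M, e);                // eccentric anomaly
    x = a * (std::cos(E) - e);
    y = a * std::sqrt(1.0 - e * e) * std::sin(E);
}
```

Because the position comes straight from the elapsed time, a million-year time warp costs exactly one evaluation per body instead of millions of integration steps.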

Cheers,

Mike

