Distant Star Rendering

Started by Aressera. 9 comments, last by Aressera 6 months, 2 weeks ago

I'm making a planet/space simulator and I am wondering what would be the best way to render a realistic star background. My requirements are:

  • Is based on the procedurally-generated stars surrounding the system the player is in.
  • Can be dynamically updated periodically as player moves a significant distance to show motion parallax.
  • Is HDR (high dynamic range). I want the stars to have realistic brightness compared to the sun, which is also calibrated in realistic units (W/m²). This means they will probably not be visible unless looking away from bright objects (sun, planets, etc.). Eye adaptation should make fainter stars slowly become more visible over time. (See the sketch after this list.)
  • Is high-resolution, i.e. has enough resolution for the largest of today's monitors (e.g. 2K vertical), and doesn't show any aliasing at lower resolutions.
  • Is efficient in both memory and GPU time.
  • I'm not planning on doing any cartoon-y colorful nebula backgrounds like in No Man's Sky. That isn't very realistic; real nebulae don't look like that up close. It's just the blackness of space and many stars, each with a different brightness and color.
  • The number of stars that are visible is around 10,000. I only need objects that would be visible to the naked eye.
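To make the brightness requirement concrete, here is a minimal sketch of converting apparent magnitude to irradiance, assuming the usual solar constant of ~1361 W/m² and a solar apparent magnitude of about -26.74:

```cpp
#include <cmath>
#include <cstdio>

// Rough sketch: convert a star's apparent magnitude to irradiance in W/m^2,
// using the Sun as the calibration point (solar constant ~1361 W/m^2,
// apparent magnitude ~ -26.74). Each step of 5 magnitudes is a factor of 100.
double magnitudeToIrradiance( double apparentMagnitude )
{
    const double sunIrradiance = 1361.0;   // W/m^2 at 1 AU
    const double sunMagnitude = -26.74;
    return sunIrradiance * std::pow( 10.0, -0.4*(apparentMagnitude - sunMagnitude) );
}

int main()
{
    // The naked-eye limit (~magnitude 6) comes out around 1e-10 W/m^2,
    // roughly 13 orders of magnitude below the Sun - hence the HDR requirement.
    std::printf( "mag 0: %.3e W/m^2\n", magnitudeToIrradiance( 0.0 ) );
    std::printf( "mag 6: %.3e W/m^2\n", magnitudeToIrradiance( 6.0 ) );
    return 0;
}
```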

The options seem to be:

  • float16 RGB cube map - This would use an uncompressed 16F texture to draw the stars like a normal skybox. The texture can be regenerated occasionally as the player moves. This would be fast but not very memory efficient. It would need to be at least 2048px in size to have sufficient resolution for stars to occupy 1 pixel. That means 192 MB with mip maps - ouch! (See the rough memory math sketched below.) Considering that most of the texture would be black, it's hard to justify this.
  • Sprites/Particles - Here I would treat each star as a tiny quad (4px or so in size) at infinity with a texture map from an atlas containing stars of various types. I can give each star a different color and brightness using a custom vertex attribute. This can also draw galaxies using different parts of the texture atlas. Since the number of stars is only around 10,000 it's not much more expensive than rendering an average-sized mesh. It's probably pretty memory efficient and reasonably fast, and scales well to any resolution. Less memory bandwidth due to smaller textures, and faster to regenerate. However it won't scale well to very large numbers of stars.
  • ????

So far I'm leaning towards the sprites method. I could also possibly combine the ideas - use sprites for stars drawn on top of a lower resolution cube map for really far away stuff. Does anyone have any feedback on these ideas or any pointers?
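For reference, the back-of-the-envelope memory math for the two options (a minimal sketch; the StarInstance layout below is just an illustrative assumption, not a final vertex format):

```cpp
#include <cstdio>
#include <cstdint>

int main()
{
    // Option 1: float16 RGB cube map, 2048^2 per face, full mip chain (~4/3 overhead).
    const double faceTexels = 2048.0 * 2048.0;
    const double bytesPerTexel = 3 * 2;                 // RGB16F (RGBA padding would add more)
    const double cubeMapBytes = faceTexels * 6 * bytesPerTexel * (4.0 / 3.0);
    std::printf( "cube map: %.0f MB\n", cubeMapBytes / (1024.0*1024.0) );  // ~192 MB

    // Option 2: one instanced sprite per star.
    struct StarInstance
    {
        float direction[3];      // unit vector toward the star (rendered at infinity)
        std::uint16_t color[4];  // half-float RGB radiance + packed size/atlas index
    };
    const double starCount = 10000.0;
    const double spriteBytes = starCount * sizeof(StarInstance);
    std::printf( "sprites: %.1f KB\n", spriteBytes / 1024.0 );             // ~200 KB
    return 0;
}
```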


I always thought about a way to render distant landscape and atmosphere background to an environment cube map. We would only update a small square of the map per frame, so expensive rendering is possible.
Not sure about the transition to the foreground, which won't match due to the infinite distance of the env. map. Perhaps using multiple, nested env. maps at multiple distances would work.

Regarding sprites, I'm not sure they're good enough to avoid aliasing. E.g. if you fly through space at high speed and stars start to move, some kind of pixel crawling might become noticeable.
You may eventually need something that avoids any quantization, like the recent paper on ‘Spherical Gaussians’ has shown. Really an inspiring paper. They use depth-sorted transparent ellipsoids as the rendering primitive, but spheres with some weighted OIT trick might be good enough for stars.

Aressera said:

I'm making a planet/space simulator and I am wondering …

Does anyone have any feedback on these ideas or any pointers?

A space engine like this has already been created and is on the Internet; it can be found on YouTube and through a search engine, and apparently there are even some descriptions of how it was done.

In fact, even though that engine was created a long time ago, it did not bring anyone any profit. Besides, it is a space-viewing simulator with planet generation. Without the participation of its creator it is probably not usable in the games industry, and perhaps not even with it, since the engine is cumbersome and written in OpenGL. So now we are waiting for a new approach.

___________

If it were a module that could be plugged into Unity and other engines, then it would make sense as a building block for a space game.

If you look out from within a star system, rather than flying between stars, there will be a certain number of stars within a certain radius: a stable set.

In terms of range, depending on the viewing radius, you can render the stars in a certain viewing sector: each frame, render a "window of stars" out to a certain depth.

That result can then be transferred to a skybox, and so on. By the way, I'm developing a space game myself, a large-scale space combat strategy, but so far nobody has even wanted to crowdfund it, which would significantly speed up the process, even though it could be worth a lot of money in the future.

___________

Create a sample by stellar radius from that stable set of stars, with an index.
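A minimal sketch of that radius-based selection, assuming the procedural generator can return star positions relative to the current system (the Star struct and its fields are hypothetical):

```cpp
#include <vector>

// Hypothetical star record from the procedural generator (not an existing API).
struct Star
{
    double position[3];       // parsecs, relative to the player's current system
    float absoluteMagnitude;
};

// Select the "stable set" of stars within a viewing radius of the player.
// This only needs to be re-run when the player has moved a significant distance.
std::vector<Star> selectVisibleStars( const std::vector<Star>& allStars, double radiusParsecs )
{
    std::vector<Star> result;
    const double radius2 = radiusParsecs * radiusParsecs;
    for ( const Star& s : allStars )
    {
        const double d2 = s.position[0]*s.position[0] +
                          s.position[1]*s.position[1] +
                          s.position[2]*s.position[2];
        if ( d2 <= radius2 )
            result.push_back( s );
    }
    return result;
}
```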

Alice Corp Ltd

Cosmic gamedev: here is a video from my channel, to make it clear what I'm doing: my matrix assembly, Cosmo Visor ("Black Visor").

___________


Alice Corp Ltd

JoeJ said:
I always thought about a way to render distant landscape and atmosphere background to an environment cube map. We would only update a small square of the map per frame, so expensive rendering is possible. Not sure about the transition to the foreground, which won't match due to the infinite distance of the env. map. Perhaps using multiple, nested env. maps at multiple distances would work.

That's kind of similar to how I render distant terrain (e.g. 10s to 1000s of km away), though I render everything each frame. The objects in the infinite camera frustum are sorted in front-to-back order and then split into depth slices. Objects that overlap multiple slices are drawn in all slices they overlap. I render the slices in back-to-front order, each time clearing the depth buffer. 3 slices are sufficient for solar-system scale views with 24-bit depth buffer, roughly 0.1m to 500m, 500m to 2,500,000m, and 2,500,000m to 12,500,000,000m. There are a few artifacts around the first transition, mostly with water refraction and reflection.

I still need to switch to a reversed float depth buffer. That should allow for much larger slices (hopefully at least 10,000 km), and at least push the transitions far enough away that they shouldn't be as noticeable. This scheme also doesn't work that well unless you use a forward renderer. I'm considering implementing a deferred renderer that uses an array texture for the depth buffer, which would allow storing and sampling the depth for all slices. Then in the lighting shader I can sample the first depth layer that has a value closer than the far plane for the slice, and use that to reconstruct position, similar to how you choose the layer for cascaded shadow maps.
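For reference, the slice ranges mentioned above follow a constant far/near ratio of 5000 per slice; here's a minimal sketch of computing such geometric slices (how large a ratio you can tolerate depends on the depth format):

```cpp
#include <vector>
#include <cstdio>
#include <algorithm>

// Compute geometric depth-slice boundaries where each slice keeps a constant
// far/near ratio, which is what bounds the precision loss of a fixed-point depth buffer.
std::vector<double> computeDepthSlices( double nearPlane, double farPlane, double maxRatioPerSlice )
{
    std::vector<double> boundaries;
    boundaries.push_back( nearPlane );
    double z = nearPlane;
    while ( z < farPlane )
    {
        z *= maxRatioPerSlice;
        boundaries.push_back( std::min( z, farPlane ) );
    }
    return boundaries;
}

int main()
{
    // A ratio of 5000 reproduces the ranges above:
    // 0.1 -> 500 -> 2,500,000 -> 12,500,000,000 meters.
    for ( double b : computeDepthSlices( 0.1, 1.25e10, 5000.0 ) )
        std::printf( "%.1f m\n", b );
    return 0;
}
```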

Given that there may be objects between the stars and the observer camera, it is clearly best to display the stars on the skybox, as a real moving background during a space battle, and to separately create a navigation map with a planar slice.

If it were possible to transition from the navigation map to an in-system area or to interstellar space by scrolling the mouse wheel, that would be cool, but that may require some third, unifying technology.

Alice Corp Ltd

It feels to me like there are two categories of star you might care about.

“Near” stars, which you may need to render with greater detail to achieve something resembling a realistic effect. Under realistic conditions, you are unlikely to have many of these active at any given time, and something like your sprite-based solution becomes a good answer. Your procedural generation could also guarantee that “realistic conditions” assumption.

“Far” stars, which are mostly just small dots speckled throughout the sky. These are the ones that are many and small, and I suspect produce the majority of the problem you're concerned with. If you're concerned that a traditional cubemap is prohibitively expensive from a memory perspective and you're concerned about large numbers of tiny triangles from the sprite approach, have you considered using Compute to solve this problem? Maybe pass in something like…

  • an RWTexture2D wrapping the render surface
  • a constant buffer containing the view-projection matrix
  • a structured buffer containing locations, sizes, colors, etc… of your various procedurally generated stars

If you're concerned about fill rate and want it to be the last thing that renders, you'd also need to pass in a Texture2D representing your depth buffer. But if you're looking at a mostly black background with small dots spotted around, there's a good chance it would be faster to just render this “skybox” first with no depth output, and let everything else render on top of it automatically, than it would be to actually depth-test inside your shader.

Dispatch enough threads that each thread processes one element from your structured buffer, manually rasterizes its position and size, and writes to a few output pixels at the appropriate location if necessary. If multiple stars happen to land in the same location from your current viewpoint, one of them probably just gets eaten… but if they are mostly tiny it's probably not going to have a significant negative effect on user experience. If which star gets eaten ends up being random per-frame, maybe you even get a reasonable-looking twinkle. Thanks to the nature of Compute, you don't have to deal with the tiny-triangle problem - just write your pixel(s).
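A rough CPU-side sketch of what each thread would do, just to illustrate the idea (the matrix convention, star layout, and buffer types here are placeholder assumptions, not a specific API):

```cpp
#include <vector>
#include <cstddef>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };                        // row-major

// Star data as it might appear in the structured buffer (hypothetical layout).
struct GpuStar { float direction[3]; float radiance[3]; };

Vec4 transform( const Mat4& vp, const Vec4& v )
{
    Vec4 r;
    r.x = vp.m[0][0]*v.x + vp.m[0][1]*v.y + vp.m[0][2]*v.z + vp.m[0][3]*v.w;
    r.y = vp.m[1][0]*v.x + vp.m[1][1]*v.y + vp.m[1][2]*v.z + vp.m[1][3]*v.w;
    r.z = vp.m[2][0]*v.x + vp.m[2][1]*v.y + vp.m[2][2]*v.z + vp.m[2][3]*v.w;
    r.w = vp.m[3][0]*v.x + vp.m[3][1]*v.y + vp.m[3][2]*v.z + vp.m[3][3]*v.w;
    return r;
}

// One "thread" per star: project the direction (w = 0, i.e. at infinity) and
// write its radiance into the HDR target if it lands on screen.
void rasterizeStar( const GpuStar& star, const Mat4& viewProj,
                    std::vector<float>& hdrRGB, int width, int height )
{
    Vec4 clip = transform( viewProj,
        { star.direction[0], star.direction[1], star.direction[2], 0.0f } );
    if ( clip.w <= 0.0f )
        return;                                        // behind the camera
    const float ndcX = clip.x / clip.w;
    const float ndcY = clip.y / clip.w;
    const int px = int( (ndcX * 0.5f + 0.5f) * width );
    const int py = int( (ndcY * 0.5f + 0.5f) * height );
    if ( px < 0 || px >= width || py < 0 || py >= height )
        return;
    // Here overlapping stars just sum; with the unsynchronized GPU writes
    // described above, one of them would simply win instead.
    const std::size_t i = ( std::size_t(py) * width + px ) * 3;
    hdrRGB[i+0] += star.radiance[0];
    hdrRGB[i+1] += star.radiance[1];
    hdrRGB[i+2] += star.radiance[2];
}
```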

Just a thought, anyways.

Aressera said:
That's kind of similar to how I render distant terrain (e.g. 10s to 1000s of km away), though I render everything each frame. The objects in the infinite camera frustum are sorted in front-to-back order and then split into depth slices. Objects that overlap multiple slices are drawn in all slices they overlap. I render the slices in back-to-front order, each time clearing the depth buffer. 3 slices are sufficient for solar-system scale views with 24-bit depth buffer, roughly 0.1m to 500m, 500m to 2,500,000m, and 2,500,000m to 12,500,000,000m. There are a few artifacts around the first transition, mostly with water refraction and reflection.

So you composite a frame from 3 renders. What's the cost of that?
I may have to do something similar because of LOD. I would need to render two (or maybe three) framebuffers, one with higher and one with lower detail. Then blend them to hide the transitions and avoid popping.
Sounds like a high cost, but if I can cache lighting in texture space, it might be acceptable because the actual rendering is cheap.

Having thought more about rendering stars to a cube map, I tend to conclude it's not worth it. Rendering some dots should be cheap, so there's no need to cache them and do only stochastic updates per frame.
If you render stars individually, you can also make some of them flicker. Looking up at the night sky, flickering is the only dynamic thing some stars do, and with games so far not showing such an effect, it seems like an opportunity.
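A minimal sketch of such per-star flicker, assuming a per-star index and a time value as inputs (the hash constants and the 10% flicker depth are arbitrary choices):

```cpp
#include <cstdint>
#include <cmath>

// Cheap per-star hash so each star twinkles with its own phase and speed.
float hash01( std::uint32_t n )
{
    n = (n << 13u) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return float( n & 0x00FFFFFFu ) / float( 0x01000000 );
}

// Brightness multiplier for a star: mostly steady, with a small sinusoidal
// flicker whose speed and phase differ per star.
float starFlicker( std::uint32_t starIndex, float timeSeconds )
{
    const float phase = hash01( starIndex ) * 6.2831853f;
    const float speed = 2.0f + 6.0f * hash01( starIndex * 747796405u );
    return 1.0f + 0.1f * std::sin( timeSeconds * speed + phase );
}
```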

I also remembered how impressed I was by the way CP 2077 handles distant cars. They render only the lights, with some procedural trajectories, but it looks really awesome. I was surprised that such rendering improvements are possible for something as simple as small bright dots.
If you have not seen this, it's worth a look. Sadly I never came across a talk or paper on how they did those traffic lights, but maybe there is something to find. (If only they had spent such love on the actual gameplay too.)

Thinias said:

“Near” stars, which you may need to render with greater detail to achieve something resembling a realistic effect. Under realistic conditions, you are unlikely to have many of these active at any given time, and something like your sprite-based solution becomes a good answer. Your procedural generation could also guarantee that “realistic conditions” assumption.

Realistically, all stars will be the size of a single pixel unless you are inside their star system. But realism might not be what you're going for, because real space is very large and very empty.
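A quick sanity check of that claim, assuming a Sun-sized star at the distance of the nearest star system and a 60 degree vertical FOV on a 2160-pixel-tall display:

```cpp
#include <cmath>
#include <cstdio>

// Angular diameter of a Sun-like star at ~4.2 light-years, versus the angular
// size of one pixel. The star ends up roughly 14,000 times smaller than a pixel.
int main()
{
    const double pi = 3.141592653589793;
    const double starRadiusKm = 6.96e5;                  // roughly the Sun's radius
    const double distanceKm = 4.2 * 9.461e12;            // ~4.2 light-years in km
    const double starAngle = 2.0 * std::atan( starRadiusKm / distanceKm );

    const double fovRadians = 60.0 * pi / 180.0;
    const double pixelAngle = fovRadians / 2160.0;

    std::printf( "star:  %.2e rad\n", starAngle );                   // ~3.5e-8 rad
    std::printf( "pixel: %.2e rad\n", pixelAngle );                  // ~4.9e-4 rad
    std::printf( "pixel / star: %.0f\n", pixelAngle / starAngle );   // ~14,000x
    return 0;
}
```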

JoeJ said:
So you composite a frame from 3 renders. What's the cost of that?

It's not that bad; the only additional cost is clearing the depth buffer twice more, which should be pretty cheap due to hardware optimizations. The color buffer is shared between the passes (nearer passes are drawn on top of farther ones). The main expense is that I currently do water reflection/refraction and atmosphere haze separately in each depth pass (e.g. pass 0 opaque, transparent, refraction, haze, then pass 1 opaque, transparent, refraction, haze, etc.). These require making a copy of the color and depth buffers so that they can be read as textures (I do this 6 times in total). I'd like to deinterleave the passes (do all opaque first, then all transparent, etc.), but this will require using an array texture for the depth buffer so that depth for all slices is available. It will fix some of the transition artifacts for water, so I'll probably end up doing that.

