Realistic Emissive Color Rendering for Stars

7 comments, last by Aressera 1 month ago

I have a concern about how to accurately render the emissive color of stars in my planet renderer. It seems difficult to set the emissive intensity for the star sphere to a value that produces acceptable levels of bloom and doesn't exceed 16-bit float precision.

Currently I calculate lighting for a star using a point/directional light with a realistic intensity (3.0e25 W/m^2 for the sun), and this all works well. I can place a sun-like star at 1 AU distance from a planet and render the planet with realistic lighting intensity (around 1000 W/m^2).

However, there is a question about how to handle the emissive lighting for the star, which is necessary to produce a bloom effect when looking at the star. For a sun-like star at a distance of 1 AU, I can set the emissive intensity of the star sphere to be 1000 and the bloom effect looks good. The HDR pixel values of the star sphere simply become 1000*starColor, which then spread out into nearby pixels when the bloom is applied.

When I try a much smaller red dwarf star with 0.1 solar masses and emissive intensity 10,000 times less (3.0e21), at a distance of 0.01 AU, I get something like this:

Here the apparent size of the star is very large because it is very close relative to its radius (also calculated realistically). However, the emissive intensity of 1000 suddenly seems to be too much, even though the star distance is chosen to make the incident intensity at the planet close to 1000 W/m^2.

So the question is: how should I handle the emissive color intensity to produce realistic results for stars of any reasonable luminosity? If I look directly at the sun, should the pixel values be 1000 or 3.0e25? (3.0e25 totally explodes the bloom and is unusable.) According to the inverse square law, the light is 1000 W/m^2 at the planet, so the value of an individual pixel should be… what? I guess it would depend on the image sensor size and the camera's optical setup, neither of which I have a concept of in my renderer. I'm not sure at all how to handle this to get the most realistic results. Should I apply the inverse square law to the emissive intensity in the shader? (This seems wrong, because the inverse square law is already in effect: a more distant sphere covers fewer pixels.)

I feel like, to answer this accurately, I need to write a simple path tracer that calculates the lighting for a star with a camera of known image sensor size and resolution, then somehow translate those results to the real-time renderer.

Whatever approach I use needs to work well for stars in the same system, as well as stars light years away, so that the apparent brightnesses are preserved.
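For reference, the far-field radiometry involved here can be written down directly. This is my own sketch (the helper names are mine, not the engine's), using the same star power and radius that appear later in the thread:

```cpp
#include <cmath>

// Far-field radiometry of a Lambertian (blackbody-like) sphere, SI units.
const double kPi = 3.14159265358979323846;

// Irradiance at distance d from a star of total power P (inverse square law).
double irradiance( double power, double distance )
{
    return power / (4.0 * kPi * distance * distance);
}

// Projected solid angle of the star's disk as seen from distance d (d >> R).
double projectedSolidAngle( double radius, double distance )
{
    return kPi * radius * radius / (distance * distance);
}

// Surface radiance of the star, L = P / (4*pi^2*R^2). This is what a pixel
// looking straight at the surface should see, and it does not depend on the
// camera distance (radiance invariance).
double surfaceRadiance( double power, double radius )
{
    return power / (4.0 * kPi * kPi * radius * radius);
}
```

The identity E = L·Ω ties the two views together: with the sun-like values used later in the thread (P = 3.916e26 W, R ≈ 7.022e8 m, d = 1 AU) this gives E ≈ 1392 W/m^2, consistent with the ~1393 W total reported later for a 1 m^2 sensor.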


I can only give an artistic proposal.

The star is much too bright. At this size, it should even have some visible texture.
I tried with Paint, but it lacks any transparent painting or blur, so the result sucks. It's still too bright, but better:

Regarding the visible texture, I'm thinking of very big cells and structures, not small realistic ones.
You know, like this:

:D

And I think you need to model scattering through the atmosphere somehow. Bloom alone feels too flat and bolted on, imo.

Landscape looks great.

Aressera said:
However, the emissive intensity of 1000 suddenly seems to be too much, even though the star distance is chosen to make the incident intensity at the planet close to 1000 W/m^2.

I just noticed the unit ‘W/m^2’ relates to area, probably of the emitter.
This would mean that if we move closer, the emitter area per pixel becomes smaller, and the sun would become less bright.
So maybe you could use the solid angle of the star as seen from the camera to calculate an approximate but uniform scaling factor for the whole image.

Calculating the accurate emitter area per pixel would need some work.
But it confuses me that doing it correctly would cause a brighter contour of the sun.
This feels wrong, so I guess the effect is compensated by the surface normal becoming tangent to the ray towards the receiver.

If those two effects cancel each other out and cause uniform emission toward the receiver, as I would expect, then maybe a uniform value for the whole image is not an approximation but actually correct.
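This cancellation argument can be checked numerically. Here is my own sketch (not thread code): sample the sphere's surface uniformly and integrate the Lambertian transfer term. If the limb cosine and the foreshortened area really cancel, the result must land on the inverse square value P/(4πd^2).

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Monte Carlo estimate of the irradiance at a receiver facing a Lambertian
// sphere of total power P, by uniform sampling of the sphere's surface.
double sphereIrradianceMC( double power, double radius, double distance,
                           int numSamples, unsigned seed )
{
    const double pi = 3.14159265358979323846;
    const double radiance = power / (4.0 * pi * pi * radius * radius); // L
    const double area = 4.0 * pi * radius * radius; // total surface area
    std::mt19937 rng( seed );
    std::uniform_real_distribution<double> uniform( 0.0, 1.0 );
    double sum = 0.0;
    for ( int k = 0; k < numSamples; k++ )
    {
        // Uniform point on the unit sphere.
        const double z = 1.0 - 2.0*uniform( rng );
        const double phi = 2.0*pi*uniform( rng );
        const double s = std::sqrt( std::max( 0.0, 1.0 - z*z ) );
        const double nx = s*std::cos( phi ), ny = s*std::sin( phi ), nz = z;
        // Surface point; sphere center at (0,0,-distance), receiver at origin.
        const double px = radius*nx, py = radius*ny, pz = -distance + radius*nz;
        const double r2 = px*px + py*py + pz*pz;
        const double r = std::sqrt( r2 );
        // Emission cosine at the surface (toward the receiver) and
        // reception cosine at the receiver (facing the sphere center).
        const double cosE = -(nx*px + ny*py + nz*pz) / r;
        const double cosR = -pz / r;
        if ( cosE <= 0.0 )
            continue; // back side of the star contributes nothing
        sum += radiance * cosE * cosR / r2;
    }
    return sum * area / numSamples;
}
```

With the thread's sun values (P = 3.916e26 W, R ≈ 7.022e8 m, d = 1 AU) this converges to ~1392 W/m^2, i.e. P/(4πd^2), so treating the disk as uniformly bright is exact for a Lambertian emitter, not an approximation.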

Aressera said:
I have a concern about how to accurately render the emissive color of stars in my planet renderer. It seems difficult to set the emissive intensity for the star sphere to a value that produces acceptable levels of bloom and doesn't exceed 16-bit float precision.

Your concern is absolutely valid, and to answer your question: this problem isn't solved even in astrophotography, so you (likely) won't be able to solve it in a fully physically-based manner.

Let me paste here a shot (not sure who the author is):

Proxima Centauri

There is a marked star; to the top-left of it there is another one. Let's zoom in on that one (this time not with the same telescope, but with Hubble; note that the data behind them is cleaned up, as there would otherwise be additional things visible in the background):

Hubble's Best Image of Alpha Centauri A and B

Oh… it's a binary star; one couldn't tell from the first shot, right?

I assume that you might know that the marked star is Proxima Centauri, and the one to the top left is Alpha Centauri A and Alpha Centauri B. All of these are quite “close” to us.

But nice photos of stars are not what I wanted to demonstrate. What I wanted to demonstrate is this: do you clearly see Proxima Centauri in the first shot? Hardly - the visible stars are so bright that it is almost impossible to see it with the naked eye. Now, let us zoom in on Proxima Centauri and look at the Hubble shot of it:

New shot of Proxima Centauri, our nearest neighbour | ESA/Hubble

Ohhhhh… it's quite bright now, right?

So, the main problem here is that stars (even ones as ‘dim’ as Proxima Centauri) are way too bright. Let's look at something closer - the closest star to us:

The Brightness of the Sun - NASA

This is a photo from the ISS; you can clearly see that every star is simply invisible, because the sun's brightness over-shines anything more distant. If you wanted to have (any) stars visible, the sun itself would already turn the whole screen pure white before you could see one.

The sun (under a solar filter, which removes the vast majority of the light) looks something like this:

Sun | Sun, single shot Nikon Coolpix P1000 + Solar filter Su… | Flickr

So, to conclude - your first image doesn't look entirely wrong (although the surface doesn't seem that bright to me - but that might just be how your albedo is set up, which is possible and likely due to the low angle of the star).

It isn't that far off from what you'd get setting up a similar scene in Cycles in Blender (I assume you don't have any participating media/atmosphere on that given planet).

You will have to decide though - whether you want your render to be accurate (which may not look that good in the end) or good looking (which will require some experimentation and “playing with factors”).

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

I went ahead and wrote a simple path tracer to see how things should behave with a more realistic renderer. The outcomes are:

  • For a star of fixed power, the maximum pixel value in the image varies with distance according to the inverse square law.
  • If two stars of different powers are placed at different distances, chosen via the inverse square law so that the incident intensity at the camera is the same, the maximum pixel value is roughly constant.

I tested this using a simple setup where the camera is looking directly at the star, and I can vary the star power and distance. For the camera, I had two different setups, one where the camera is just a dumb image sensor without any optics (and therefore can't resolve a proper image), and another where it uses ray tracing to simulate basic camera optics.

For the dumb image sensor (area 1 m^2), here are the results:

// sun @ 1 AU
Power: 3.916e26 W
Radius: 702191335.3752826 m
Distance: 149600000000 m
TOTAL: 1393.178402470257 W
MAX: 0.028401766410593 W

// 0.1 mSun @ 0.03 AU
Power: 3.67180264220761e23 W
Radius: 222052397.3021059 m
Distance: 4580890566.392502 m
TOTAL: 1394.02673275276 W
MAX: 0.028910514913586 W

You can see the path tracer produces the known correct value of ~1390 W for the total power received by the sensor of 1 m^2 for a sun-like star at a distance of 1 AU. If we change the star power to a bit less than 1/1000 sun power, and move the star to 0.03 AU to keep intensity the same, the total power is nearly identical. The maximum pixel value is also the same.

To see how the results vary with distance for a star of fixed power, I did another test:

// 0.1 mSun at 0.03 AU
Power: 3.67180264220761e23 W
Radius: 222052397.3021059 m
Distance: 4580890566.392502 m
TOTAL: 1393.17861864923 W
MAX: 0.001626196401358 W

// 0.1 mSun at 1 AU
Power: 3.67180264220761e23 W
Radius: 222052397.3021059 m
Distance: 149600000000 m
TOTAL: 1.305534996753706 W
MAX: 0.000001511845609 W

Here the total and maximum pixel values both decrease almost exactly according to the inverse square law. (multiplying 1393.178 by the distance ratio squared produces a value of 1.306).

This test case seems to behave as expected, but it cannot produce an actual image, since the sensor collects light from every direction, not just along a pixel ray. So I implemented a ray casting method instead of sampling random points on the sphere surface. Here are the results:

0.1 mSun at 0.03 AU, image is normalized to max pixel
// sun
Power: 3.916e26 W
Radius: 702191335.3752826 m
Distance: 149600000000 m
TOTAL: 0.020631446822417 W
MAX: 0.001609914635569 W

// 0.1 mSun
Power: 3.67180264220761e23 W
Radius: 222052397.3021059 m
Distance: 4580890566.392502 m
TOTAL: 2.356182928566321 W
MAX: 0.00186636636866 W

With this method it doesn't behave quite as nicely (maybe I have a bug?), but it's still within 15%. The total power and max power are much less than the first test case because now only pixels that look directly at the star produce a non-zero value.

Finally, here is how the ray casting method behaves with distance for a fixed star power:

// 0.1 mSun at 0.03 AU
Power: 3.67180264220761e23
Radius: 222052397.3021059
Distance: 4580890566.392502
TOTAL: 2.356166587835678
MAX: 0.0004667823226

// 0.1 mSun at 1 AU
Power: 3.67180264220761e23
Radius: 222052397.3021059
Distance: 149600000000
TOTAL: 0.000001927746617
MAX: 0.00000033287694

Once again, the distance dependence for the max pixel value is not quite inverse square, but close enough. However, the total pixel intensity follows the inverse 4th power of the distance. So, I guess there is an inverse square for the star-to-camera distance, then another inverse square factor caused by the size of the star's projection on the image plane.
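For comparison, here is what an ideal pinhole camera would predict for a resolved star (my derivation, not a result from the thread):

```latex
E_{\text{aperture}} = \frac{P}{4\pi d^2}, \qquad
\Phi_{\text{total}} = E_{\text{aperture}} A \;\propto\; d^{-2}, \qquad
N_{\text{covered pixels}} \propto \left(\tfrac{R}{d}\right)^{2},
\quad\Rightarrow\quad
\text{max pixel} \approx \frac{\Phi_{\text{total}}}{N_{\text{covered pixels}}} \;\propto\; d^{0}.
```

So an exact camera model should give a total received power falling as the inverse square, and a roughly constant max pixel value while the star stays resolved. The measurements above sit a factor of about d^{-2} below that prediction in both quantities, which would be consistent with one extra 1/r^2 applied along the camera ray; that might be a place to look for the bug.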

From this, it seems like the correct thing is to set the emissive intensity according to the inverse square law, which doesn't jibe with my expectations.

Here is the code used to produce these results. I'd appreciate it if anyone experienced with path tracing could spot a bug here.

const Size resolution = 1024;
const Size samplesPerPixel = 1024;
const Float64 nearPlaneSize = 1.0;//0.02;
const Float64 nearPlaneArea = math::square( nearPlaneSize );
const Float64 nearPlaneDistance = nearPlaneSize; // ~53 degree FOV (2*atan(0.5))
const Float64 inverseSamplesPerPixel = 1.0 / samplesPerPixel;
const Float64 inverseResolution = 1.0 / resolution;
const Vector3d cameraPosition( 0.0, 0.0, 0.0 );
const Vector3d cameraDirection( 0.0, 0.0, -1.0 );
const Vector3d cameraUp( 0.0, 1.0, 0.0 );
const Vector3d cameraRight = math::cross( cameraDirection, cameraUp );
const Vector3d nearPlaneCenter = cameraPosition + cameraDirection*nearPlaneDistance;
const Float64 sphereSolarMass = 0.1;
const Float64 sphereTemperature = solarMassToTemperature( sphereSolarMass );
const Float64 spherePower = solarMassToLuminosityW( sphereSolarMass );
const Float64 sphereRadius = blackBodyRadius( sphereTemperature, spherePower );
const Vector3d spherePosition = Vector3d( 0.0, 0.0, -1.496e11 );// * math::sqrt( spherePower / solarMassToLuminosityW( 1.0 ) );
const Float64 sphereBRDFNormalize = 1.0 / math::pi<Float64>();

Random64 random( 123456789 );
Float64 maxPixelEnergy = 0.0;
Float64 totalPixelEnergy = 0.0;
ImageBuffer imageBuffer( PixelFormat::GRAY_8, resolution, resolution );
imageBuffer.allocate();

for ( Index i = 0; i < resolution; i++ )
{
    const Float64 i01 = Float64(i)/resolution - 0.5;
    for ( Index j = 0; j < resolution; j++ )
    {
        const Float64 j01 = Float64(j)/resolution - 0.5;
        Float64 pixelEnergy = 0.0;
        
        for ( Index k = 0; k < samplesPerPixel; k++ )
        {
            const Vector2d sampleOffset( random.sample01<Float64>(), random.sample01<Float64>() );
            const Vector3d nearPoint = nearPlaneCenter +
                    (i01 + sampleOffset.y*inverseResolution)*cameraUp +
                    (j01 + sampleOffset.x*inverseResolution)*cameraRight;
            const Vector3d nearPointDirection = (nearPoint - cameraPosition).normalize();
            
        #if 0
            // with optics
            Ray3d ray( nearPoint, nearPointDirection );
            Float64 spherePointDistance;
            if ( !ray.intersectsSphere( spherePosition, sphereRadius, spherePointDistance ) )
                continue;
            
            const Vector3d spherePoint = nearPoint + spherePointDistance*nearPointDirection;
            const Vector3d sphereNormal = (spherePoint - spherePosition).normalize();
            // Points facing away from camera have 0 contribution.
            const Float64 sphereNormalDot = math::dot( nearPointDirection, sphereNormal );
            if ( sphereNormalDot > 0.0 )
                continue;
            
            const Float64 spherePDF = 1.0 / (4.0*math::pi<Float64>());
            Float64 sampleEnergy = spherePower * spherePDF;
        #else
            // no optics
            const Vector3d sphereNormal = sampleSphereUniform( random.sample11<Float64>(), random.sample01<Float64>() );
            const Vector3d spherePoint = spherePosition + sphereRadius*sphereNormal;
            Vector3d spherePointDirection = spherePoint - nearPoint;
            const Float64 spherePointDistance = spherePointDirection.getMagnitude();
            spherePointDirection /= spherePointDistance;
            
            // Points facing away from camera have 0 contribution.
            const Float64 sphereNormalDot = math::dot( spherePointDirection, sphereNormal );
            if ( sphereNormalDot > 0.0 )
                continue;
            
            Float64 sampleEnergy = spherePower;
        #endif
            
            const Float64 distanceAttenuation = 1.0 / math::square( spherePointDistance );
            const Float64 sphereBRDF = sphereBRDFNormalize * (-sphereNormalDot);
            
            sampleEnergy *= sphereBRDF * distanceAttenuation;
            pixelEnergy += sampleEnergy;
        }
        
        pixelEnergy *= inverseSamplesPerPixel * (nearPlaneArea / (resolution*resolution));
        maxPixelEnergy = math::max( maxPixelEnergy, pixelEnergy );
        totalPixelEnergy += pixelEnergy;
        *imageBuffer( j, i ) = pixelEnergy;
    }
}

math::multiply( imageBuffer.getPixels(), Float32(1.0/maxPixelEnergy), resolution*resolution );

Image image;
imageBuffer.getImage( image );

ImageConverter converter;
converter.encode( image, "/star.png" );

Console << "Power: " < spherePower;
Console << "Radius: " < sphereRadius;
Console << "Distance: " < spherePosition.getMagnitude();
Console << "TOTAL: " < totalPixelEnergy;
Console << "MAX: " < maxPixelEnergy;

JoeJ said:
And I think you need to model scattering through the atmosphere somehow. Bloom alone feels too flat and bolted on, imo.

There is actually realistic atmospheric scattering according to Epic's paper; it's just that the star's light is mostly red wavelengths, which don't scatter much. If you look closely near the horizon you can see a faint red glow. The planet has an earth-like atmosphere. With a sun-like star you get a blue sky.
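For scale (my numbers, not from the thread): Rayleigh scattering strength goes as λ⁻⁴, so red light around 660 nm scatters roughly five times less than blue light around 440 nm, which is why the red glow is so faint.

```cpp
#include <cmath>

// Rayleigh scattering strength is proportional to 1/lambda^4.
// Returns how strongly wavelength lambdaA scatters relative to lambdaB.
double rayleighRatio( double lambdaA, double lambdaB )
{
    return std::pow( lambdaB / lambdaA, 4.0 );
}
// rayleighRatio(660e-9, 440e-9) is about 0.198: red scatters ~5x less than blue.
```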

Well, I was trying your numbers, making an emitting disk of a given radius and distance and calculating the incoming light as done in radiosity methods. But the result completely differs from yours, so likely I got something wrong and can't confirm anything:

// sun values:
double Power = 3.916e26;
double Radius = 702191335.3752826;
double Distance = 149600000000;

double area = PI * Radius*Radius;
double d2 = Distance * Distance;

// form factor of a disk, facing the receiver
double formFactor = /*(cosR * cosE)*/ 1.0 / (d2 * (PI * d2 + area)) * area;

double incomingEnergy = Power * formFactor;

ImGui::Text("incomingEnergy %f", incomingEnergy); // 0.38549
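For what it's worth, the disk picture can reproduce the path tracer's number if the disk is given the sphere's surface radiance. This is my own formulation (not a fix of the snippet above, and the function name is mine): the projected solid angle of an on-axis disk is πR²/(d²+R²), and the radiance of a sphere radiating total power P is P/(4π²R²).

```cpp
#include <cmath>

// Irradiance at an on-axis receiver from a Lambertian sphere treated as a
// disk of equal radius carrying the sphere's surface radiance.
double sphereAsDiskIrradiance( double power, double radius, double distance )
{
    const double pi = 3.14159265358979323846;
    // Surface radiance of a sphere radiating total power P: L = P/(4*pi^2*R^2).
    const double radiance = power / (4.0 * pi * pi * radius * radius);
    // Projected solid angle of the disk (receiver facing it, on axis).
    const double projSolidAngle = pi * radius * radius
            / (distance * distance + radius * radius);
    return radiance * projSolidAngle;
}
// For the sun values above this gives ~1392 W/m^2.
```

That lands within a fraction of a percent of the path tracer's TOTAL of ~1393 W on the 1 m^2 sensor. Note that the form factor in the snippet above ends up with units of 1/m^2 rather than being dimensionless, which may be where the 0.38549 comes from.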

Looking around, it seems like my original image with the blown-out bloom may be pretty close to correct, according to some of the answers here.

One of them posts this image, which is not very different from mine (tone mapper differences could explain it).

So, if you are close enough to a low-mass star that the incident intensity is the same as for the sun at the earth, then the pixels will be about the same brightness, while the star covers a much wider FOV, and therefore the total brightness is higher.
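If that conclusion holds, one practical reading (my sketch, not code from the thread) is to drive the star sphere's emissive term with its surface radiance, which is distance-invariant, and leave apparent brightness to the star's pixel coverage plus a single global exposure factor:

```cpp
#include <cmath>

// Emissive value for a star's surface in an HDR pipeline. The surface
// radiance L = P/(4*pi^2*R^2) does not depend on the camera distance;
// apparent brightness changes only through the star's pixel coverage.
// Function names are mine, not the engine's.
double starEmissiveRadiance( double powerW, double radiusM )
{
    const double pi = 3.14159265358979323846;
    return powerW / (4.0 * pi * pi * radiusM * radiusM);
}

// A sun-like star gives L ~ 2.0e7 W/m^2/sr, which overflows 16-bit floats
// (max ~65504), so a global exposure scale is still needed before the
// frame is stored in fp16.
double exposedValue( double radiance, double exposure )
{
    return radiance * exposure;
}
```

The exposure value itself remains an artistic/tonemapping choice; the point is only that the emissive radiance then needs no per-star or per-distance tweaking.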

