Vulkan ray tracing shadows flaw


I am using Sascha Willems' code ‘raytracingshadows’:

https://github.com/SaschaWillems/Vulkan/tree/master/examples/raytracingshadows

https://github.com/SaschaWillems/Vulkan/tree/master/data/shaders/glsl/raytracingshadows

Using the reflectionscene.gltf file that I downloaded using download_assets.py, it becomes immediately obvious that, for self-shadowed triangles, the shadow detail depends on the geometry detail of the mesh. This is why you get big squares in this scene (the spheres and teapot are basically quad-based).

I found some answers on Reddit, and so I changed the closest hit shader to the following:

...

vec3 lightVector = normalize(ubo.lightPos.xyz);
float dot_product = max(dot(lightVector, normal), 0.0);
hitValue = v0.color.rgb * dot_product + vec3(0.2, 0.2, 0.2);

// Shadow casting
float tmin = 0.001;
float tmax = 10000.0;

vec3 origin = gl_WorldRayOriginEXT + gl_WorldRayDirectionEXT * gl_HitTEXT;
vec3 biased_origin = origin + normal * 0.01;

shadowed = true;  
// Trace shadow ray and offset indices to match shadow hit/miss shader group indices
traceRayEXT(topLevelAS, gl_RayFlagsTerminateOnFirstHitEXT | gl_RayFlagsOpaqueEXT | gl_RayFlagsSkipClosestHitShaderEXT, 0xFF, 0, 0, 1, biased_origin, tmin, lightVector, tmax, 2);
	
if (shadowed) 
{
	hitValue *= 0.3;
}

...

It looks moderately better, but not perfect:

Any ideas on how to really fix the problem? I was expecting pixel-perfect shadows like on the checkerboard, but no.


In the age of Nanite, does this even matter?

taby said:
Any ideas on how to really fix the problem?

This is not the problem; it is actually the accurate solution. The problem is that your lighting is wrong.

The backside of your teapot is grey, while it should really be black (the same as the color in the shadow) - this points to you having constant ambient lighting. This ambient lighting should be applied as a separate lighting pass, not during the pass where you're casting shadows. I also see that you use a bias on the shadows (because your shadows don't begin exactly at the spot where n·l = 0).

taby said:
In the age of Nanite, does this even matter?

Now, I think you're pointing at the artifacts on the teapot - and yes, it still does. The problem is how detailed the geometry you feed into a dynamic LOD technique like Nanite is (or whether you even use such a technology in your software). And even with detailed-enough geometry, there is going to be a threshold at which you will still see this.

Not to mention - I still don't know how well Nanite plays with ray tracing, or whether it is even possible to use the two together. Someone using UE5+ may shed a bit more light on this.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

All I know is that it's not my fault that it looks bad just because I can see the edge of the shadow. I did not experience this using shadow maps.

You can end up with the same problem with shadow maps. A typical example is:

Shadows applied after the whole lighting calculation is finished

Basically you multiply your total lighting (incl. ambient light) by the shadow term. That's incorrect. You can't apply shadows after the lighting calculation for ALL lights has finished. Keep in mind, ambient light should be applied as a separate lighting pass (much like GI).

What you want to do, is to apply ambient separately, to obtain result like:

Shadows applied correctly - only for the respective light that casts them

Where the light contribution is separated into two parts:

Contribution of point light with shadows
Contribution of ambient lighting

So, practically what I'm saying (I looked at the Vulkan example) is that you really should do this:

vec3 lightVector = normalize(ubo.lightPos.xyz);
float lighting = max(dot(lightVector, normal), 0.0);

// Shadow casting
float tmin = 0.001;
float tmax = 10000.0;

vec3 origin = gl_WorldRayOriginEXT + gl_WorldRayDirectionEXT * gl_HitTEXT;
vec3 biased_origin = origin + normal * 0.01;

shadowed = true;  
// Trace shadow ray and offset indices to match shadow hit/miss shader group indices
traceRayEXT(topLevelAS, gl_RayFlagsTerminateOnFirstHitEXT | gl_RayFlagsOpaqueEXT | gl_RayFlagsSkipClosestHitShaderEXT, 0xFF, 0, 0, 1, biased_origin, tmin, lightVector, tmax, 2);
	
if (shadowed) 
{
	lighting = 0.0;
}

float AMBIENT_LIGHT = 0.2;
// Modulate the surface color by the clamped (direct + ambient) term
hitValue = v0.color.rgb * min(1.0, lighting + AMBIENT_LIGHT);

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

And for the record, I am immensely grateful for the code and to its author.

Yes, this lighting is temporary; multiple lights are in the works. Below is a shot of multiple omnidirectional lights.

This is what shadows look like using a fractal as an example:

The demonstration code doesn't take accurate lighting into consideration - but I guess that's deliberate; it tries to demonstrate ray tracing itself, flaws and all. I absolutely like this approach, be it the Vulkan samples or the D3D12 ones - they are a great resource, as the reference material is often way too technical and harder to follow.

Side note: I've never dug into Vulkan ray tracing, so I'm just assuming how traceRayEXT works - I've mostly worked on in-house ray tracers - but it performs ray generation with some code (a ray gen shader?), then multi-level BVH traversal, and when a ray misses it executes some code (a miss shader?) and when it hits it executes some code (a hit shader?). Do I get this correctly? Won't this end up in occupancy problems though? How do they handle the case where one ray in a batch exits early while others are still performing traversal (speculative traversal? persistent threads?)? These might be questions for the hardware manufacturers though…

In the in-house ray tracers, what we mostly did was generate ray buffers in a separate compute program, then execute a traversal compute program, and at the end execute a compute program that processed the results. While you can call ‘traceRay’ inline in any other compute shader and get results immediately without a heavy binding architecture (it simply returns whether it hit, what it hit, the distance and the coordinates (barycentric for triangles) … or, in the more advanced version we use, it returns the sample of ‘G-Buffer’ data you'd expect) … it tends to be quite heavy, and mega-kernels were generally very bad at occupancy.
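As far as I know, Vulkan also exposes this kind of inline tracing through GL_EXT_ray_query, which lets a plain compute (or fragment) shader do the traversal itself, with no hit/miss shaders or shader binding table involved. A rough, untested sketch of how I understand it - the bindings and the hard-coded ray below are purely illustrative:

#version 460
#extension GL_EXT_ray_query : require

layout(local_size_x = 8, local_size_y = 8) in;

layout(binding = 0, set = 0) uniform accelerationStructureEXT topLevelAS;
layout(binding = 1, set = 0, rgba8) uniform image2D resultImage;

void main()
{
	// Illustrative only: one downward ray per invocation; hit = white, miss = black
	vec3 origin = vec3(vec2(gl_GlobalInvocationID.xy) * 0.01, 10.0);
	vec3 direction = vec3(0.0, 0.0, -1.0);

	rayQueryEXT rq;
	rayQueryInitializeEXT(rq, topLevelAS,
		gl_RayFlagsTerminateOnFirstHitEXT | gl_RayFlagsOpaqueEXT,
		0xFF, origin, 0.001, direction, 10000.0);

	// Traversal happens inline in this shader; no separate hit/miss stages are invoked
	while (rayQueryProceedEXT(rq)) { }

	bool hit = rayQueryGetIntersectionTypeEXT(rq, true) != gl_RayQueryCommittedIntersectionNoneEXT;

	imageStore(resultImage, ivec2(gl_GlobalInvocationID.xy), vec4(vec3(hit ? 1.0 : 0.0), 1.0));
}

Whether that helps or hurts occupancy compared to the full ray tracing pipeline is exactly the kind of question I'd like to hear a hardware vendor answer.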

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

OK, so I perfected the method.

I added the code:

if(dot(normal, lightVector) < 0.0)
	shadowed = true;

and so, the whole shader is:

#version 460
#extension GL_EXT_ray_tracing : require
#extension GL_EXT_nonuniform_qualifier : enable

layout(location = 0) rayPayloadInEXT vec3 hitValue;
layout(location = 2) rayPayloadEXT bool shadowed;
hitAttributeEXT vec2 attribs;

layout(binding = 0, set = 0) uniform accelerationStructureEXT topLevelAS;
layout(binding = 2, set = 0) uniform UBO 
{
	mat4 viewInverse;
	mat4 projInverse;
	vec4 lightPos;
	int vertexSize;
} ubo;
layout(binding = 3, set = 0) buffer Vertices { vec4 v[]; } vertices;
layout(binding = 4, set = 0) buffer Indices { uint i[]; } indices;

struct Vertex
{
  vec3 pos;
  vec3 normal;
  vec2 uv;
  vec4 color;
  vec4 _pad0;
  vec4 _pad1;
 };

Vertex unpack(uint index)
{
	// Unpack the vertices from the SSBO using the glTF vertex structure
	// The multiplier is the size of the vertex divided by four float components (=16 bytes)
	const int m = ubo.vertexSize / 16;

	vec4 d0 = vertices.v[m * index + 0];
	vec4 d1 = vertices.v[m * index + 1];
	vec4 d2 = vertices.v[m * index + 2];

	Vertex v;
	v.pos = d0.xyz;
	v.normal = vec3(d0.w, d1.x, d1.y);
	v.uv = vec2(d1.z, d1.w);
	v.color = vec4(d2.x, d2.y, d2.z, 1.0);

	return v;
}

void main()
{
	ivec3 index = ivec3(indices.i[3 * gl_PrimitiveID], indices.i[3 * gl_PrimitiveID + 1], indices.i[3 * gl_PrimitiveID + 2]);

	Vertex v0 = unpack(index.x);
	Vertex v1 = unpack(index.y);
	Vertex v2 = unpack(index.z);

	// Interpolate normal
	const vec3 barycentricCoords = vec3(1.0f - attribs.x - attribs.y, attribs.x, attribs.y);
	vec3 normal = normalize(v0.normal * barycentricCoords.x + v1.normal * barycentricCoords.y + v2.normal * barycentricCoords.z);


	// Basic lighting
	vec3 lightVector = normalize(ubo.lightPos.xyz);
	float dot_product = max(dot(lightVector, normal), 0.0);
	hitValue = v0.color.rgb * dot_product + vec3(0.4, 0.4, 0.4);

	// Shadow casting
	float tmin = 0.001;
	float tmax = 10000.0;

	vec3 origin = gl_WorldRayOriginEXT + gl_WorldRayDirectionEXT * gl_HitTEXT;
	vec3 biased_origin = origin + normal * 0.01;

	shadowed = true;
	// Trace shadow ray and offset indices to match shadow hit/miss shader group indices
	traceRayEXT(topLevelAS, gl_RayFlagsTerminateOnFirstHitEXT | gl_RayFlagsOpaqueEXT | gl_RayFlagsSkipClosestHitShaderEXT, 0xFF, 0, 0, 1, biased_origin, tmin, lightVector, tmax, 2);
	
	// Treat surfaces that face away from the light as shadowed as well,
	// so the terminator follows the interpolated normal rather than the triangle facets
	if(dot(normal, lightVector) < 0.0)
		shadowed = true;
	
	if (shadowed) {
		hitValue *= 0.3;
	}
}

You're basically correct. Here are the default shaders:

https://github.com/SaschaWillems/Vulkan/tree/master/data/shaders/glsl/raytracingshadows
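For context, this is roughly what the raygen shader driving those stages looks like (reconstructed from memory, so the binding indices and UBO name may not match the repo exactly):

#version 460
#extension GL_EXT_ray_tracing : require

layout(binding = 0, set = 0) uniform accelerationStructureEXT topLevelAS;
layout(binding = 1, set = 0, rgba8) uniform image2D image;
layout(binding = 2, set = 0) uniform CameraProperties
{
	mat4 viewInverse;
	mat4 projInverse;
	vec4 lightPos;
	int vertexSize;
} cam;

layout(location = 0) rayPayloadEXT vec3 hitValue;

void main()
{
	// One launch invocation per pixel; build a primary ray through the pixel centre
	const vec2 pixelCenter = vec2(gl_LaunchIDEXT.xy) + vec2(0.5);
	const vec2 inUV = pixelCenter / vec2(gl_LaunchSizeEXT.xy);
	vec2 d = inUV * 2.0 - 1.0;

	// Unproject from NDC back to a world-space origin and direction
	vec4 origin = cam.viewInverse * vec4(0.0, 0.0, 0.0, 1.0);
	vec4 target = cam.projInverse * vec4(d.x, d.y, 1.0, 1.0);
	vec4 direction = cam.viewInverse * vec4(normalize(target.xyz), 0.0);

	hitValue = vec3(0.0);

	// The implementation traverses the TLAS/BLAS; a hit runs the closest hit shader
	// (which in turn fires the shadow ray), a miss runs the miss shader - both write the payload
	traceRayEXT(topLevelAS, gl_RayFlagsOpaqueEXT, 0xff, 0, 0, 0, origin.xyz, 0.001, direction.xyz, 10000.0, 0);

	imageStore(image, ivec2(gl_LaunchIDEXT.xy), vec4(hitValue, 0.0));
}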

This topic is closed to new replies.
