Path tracing in Vulkan


I believe that the chromatic aberration is working correctly. It's not rocket surgery, even for me.

(Attached image: Rainbows)

I put the paper up online and linked to it here because I want it nitpicked. Thank you very much for your time and expertise!

Actually, yes, the chromatic aberration is not very well done.

There is a better algorithm: use the hue of the colour to obtain the inverse of the index of refraction.

taby said:

Actually, yes, the chromatic aberration is not very well done.

There is a better algorithm: use the hue of the colour to obtain the inverse of the index of refraction.

This is surprisingly difficult to solve for. Any takers?
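One way to read this: map the hue (or each channel's nominal wavelength) to a wavelength, then map that wavelength to an index of refraction using a simple dispersion model such as Cauchy's equation, n(λ) = A + B/λ². Under that model the inverse direction is just algebra. Below is a minimal C++ sketch; the hue-to-wavelength ramp and the coefficients A and B (roughly BK7-like glass) are assumptions for illustration, not something taken from the paper.

```cpp
// Sketch only: hue -> wavelength -> index of refraction via Cauchy's two-term
// equation n(lambda) = A + B / lambda^2. The hue-to-wavelength mapping and the
// coefficients A, B are illustrative assumptions (roughly BK7-like glass).
#include <cmath>

// Map a hue angle (red at 0 degrees, violet around 270) onto the visible
// range, roughly 700 nm (red) down to 400 nm (violet).
double hue_to_wavelength_nm(double hue_degrees)
{
    const double t = hue_degrees / 270.0;
    return 700.0 - t * (700.0 - 400.0);
}

// Cauchy's equation: n(lambda) = A + B / lambda^2, with lambda in micrometres.
double wavelength_to_ior(double lambda_nm, double A = 1.5046, double B = 0.00420)
{
    const double lambda_um = lambda_nm * 1e-3;
    return A + B / (lambda_um * lambda_um);
}

// The inverse direction is just algebra for this model:
// lambda = sqrt(B / (n - A)), valid for n > A.
double ior_to_wavelength_nm(double n, double A = 1.5046, double B = 0.00420)
{
    return std::sqrt(B / (n - A)) * 1e3;
}
```

In a per-channel path tracer this would run once per spectral sample, feeding the resulting n into the refraction of that ray.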

So, I've hit a roadblock with the paper.

The latest version is: https://github.com/sjhalayka/cornell_box_textured/blob/main/caustics/caustics.pdf

What should I add or expand upon?

Vilem Otte said:
I do remember seeing actual mathematical proof of ReSTIR being unbiased with both spatial and temporal filtering (keep in mind, you're storing and using reservoirs of sample points - discarding those that won't be temporally valid anymore … so technically it is possible that you may have to occasionally discard ALL of them).

I did read it too, but I did not understand it well enough. And I also don't know the precise definition of ‘bias’.
However, once you add samples from nearby but different locations or times, you always accept some minor error, don't you? This should be true even if all you take from those samples is a selected light out of many, but even more so if you take the radiance of an entire path.
I guess it's basically the same compromise as with denoising. We accept many nearby samples to estimate a local average. The error won't be large, but we lose at least high-frequency details. And sadly ReSTIR results are much more blurry than I would have hoped. But well, we can't have everything in realtime.

Vilem Otte said:
I do heavily dislike smear artifacts, disocclusion artifacts, etc.

Ah yes, I remember. But then I want to ask you: How do you feel about raytraced games in this regard?
Are the temporal issues a problem to you?
Does the advantage of better lighting affect your tolerance, so you would say ‘it's fluctuating, but I accept it now because it looks so good’?

That's not a technical question. It's really difficult to get an impression of how acceptable temporal issues are to various people. Every opinion counts.

My personal experience is like this:

SSAO / SSR disocclusion artifacts: Terrible. The effects were cool when introduced, but these days I often turn them off if I can, to get rid of the artifacts.
Upscaling artifacts: Not sure yet. I know only FSR, and it works pretty well, but there are always spots where it breaks down. I accept it if I need it to maintain a smooth framerate, but I'm not excited or happy about it.

Sadly I can't say anything about RT games, since I still lack an RT GPU.

But I do know people who absolutely hate those temporal issues. They would prefer games with inferior lighting but stable images.
We assume most people want photorealism. And we're willing to pay any price, e.g. putting $1000 GPUs on the recommended specs.
What if we are wrong? What if most people no longer expect improved visuals, but rather something that looks like a game and runs robustly on affordable HW?
This has given me lots of doubts recently. There should be statistics. The games industry constantly fails to estimate what people really want, it seems.

Vilem Otte said:
Unbiased progressive rendering algorithms have one property - at any point you can present them to the user and then continue the computation, which will converge towards the actual solution.

Maybe I got it wrong and the term really means just that, not involving dynamic changes at all. But then every form of RT would be progressive anyway, and we would not need the term?
But it does not matter, and I can't find a definition. From what you say I conclude that a simple standard method to trace dynamic scenes while still reusing older samples was never defined, and nowadays we need to dig into denoising sooner or later anyway.

taby said:

So, I've hit a roadblock with the paper.

The latest version is: https://github.com/sjhalayka/cornell_box_textured/blob/main/caustics/caustics.pdf

What should I add or expand upon?

You're quick with writing papers. :D

But I can't read it; at least in Chrome it only gives errors like ‘invalid PDF’.

JoeJ said:
But I can't read it; at least in Chrome it only gives errors like ‘invalid PDF’.

Each time I read ‘PDF’ I read it as Probability Density Function. We should do some fundraising to buy a majority of Adobe stock, then rename that format and change the suffix - and also start bashing everyone who still attempts to use pdf at that point with court cases. Right after, we should sell it fast (before it plummets due to our actions). It can't be worse than what happened to Twitter.

Jokes aside…

JoeJ said:
I did read it too, but I did not understand it well enough. And I also don't know the precise definition of ‘bias’.

The definition of bias is quite simple. Let's say we try to find a value θ, and we estimate it from a set of samples X. For the sample mean we know that:

mean(X_n) → θ as n → ∞

I.e. the sample mean converges in probability to the value as the sample size approaches infinity. We can express this relationship as:

E(mean(X_n)) - θ = 0

The expected value of the sample mean is equal to the actual value we seek. Now, as we work with estimators - we can rewrite that as:

E(δ_u(X)) - θ = 0

Which is an unbiased estimator. A biased estimator is therefore one where:

E(δ_b(X)) - θ = bias

At which point the question arises - what's the point of biased estimators, when their expected value differs from the parameter we try to estimate (i.e. an estimator whose result is clearly wrong!)? This is in general considered undesirable behavior, yet we have to take one thing into account - the rate of convergence. If δ_b converges much faster over the first 1000 samples, while δ_u converges much slower … and we add the constraint that we are never going to exceed, let's say, 128 samples (for performance/complexity/… reasons, for example) - then the estimator δ_b is superior in delivering a closer approximation to the actual result. Eventually this boils down to what we want and require.

For interactive or real-time rendering - consistency and low mean squared error are more important than unbiasedness of the estimator. Being unbiased is very important for generating ground-truth images (because eventually your MSE gets lower than the precision of the data type you use - at which point you have the actual result).
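To make the definition above concrete, here is a minimal C++ sketch (mine, not from the post): it estimates ∫₀¹ x² dx = 1/3 by Monte Carlo, once with the plain unbiased estimator and once with a deliberately biased one that clamps each sample first (the rendering analogue would be firefly clamping). The clamp value and sample count are arbitrary choices for illustration.

```cpp
// Biased vs unbiased Monte Carlo estimators for the integral of x^2 over
// [0, 1] (true value 1/3). delta_u averages raw samples; delta_b clamps each
// sample first, which reduces variance but shifts the expected value.
#include <algorithm>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    const int num_samples = 1 << 20;
    const double clamp_value = 0.5; // deliberately too low, to introduce bias

    double sum_unbiased = 0.0, sum_biased = 0.0;
    for (int i = 0; i < num_samples; ++i)
    {
        const double x = uniform(rng);
        const double f = x * x;                    // f(x) = x^2, pdf(x) = 1
        sum_unbiased += f;                         // delta_u: raw sample
        sum_biased += std::min(f, clamp_value);    // delta_b: clamped sample
    }

    const double theta = 1.0 / 3.0; // exact value of the integral
    std::printf("unbiased: %f (error %+f)\n",
                sum_unbiased / num_samples, sum_unbiased / num_samples - theta);
    std::printf("biased:   %f (error %+f)\n",
                sum_biased / num_samples, sum_biased / num_samples - theta);
    return 0;
}
```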

JoeJ said:
I guess it's basically the same compromise as with denoising. We accept many nearby samples to estimate a local average. The error won't be large, but we lose at least high-frequency details. And sadly ReSTIR results are much more blurry than I would have hoped. But well, we can't have everything in realtime.

ReSTIR is mathematically sound - it has an unbiased variant (the paper presents two variants, biased and unbiased - it is also a bit vague and confusing indeed - I've seen a full proof of the unbiasedness of ReSTIR done elsewhere), so the bias in that variant is 0. Most of the implementations I've seen, though, use the biased version, whose results are still a bit blurry compared to ground truth.
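For reference, the building block being discussed is the weighted reservoir update from resampled importance sampling (Bitterli et al. 2020). Here is a minimal C++ sketch of just that update; the struct and function names are mine, the target function p_hat (e.g. the unshadowed light contribution) is assumed to be evaluated by the caller, and none of the spatial/temporal reuse or bias-correction machinery is shown.

```cpp
// Minimal weighted reservoir update used by ReSTIR-style resampled importance
// sampling. Candidates are streamed in with weight w = p_hat(x) / pdf(x); the
// reservoir keeps one of them with probability proportional to its weight.
#include <random>

struct Reservoir
{
    int    sample_index = -1;  // index of the light sample currently kept
    double weight_sum   = 0.0; // running sum of resampling weights
    int    num_samples  = 0;   // M: number of candidates streamed in so far

    void update(int candidate_index, double w, std::mt19937 &rng)
    {
        weight_sum += w;
        ++num_samples;
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        if (weight_sum > 0.0 && uniform(rng) < w / weight_sum)
            sample_index = candidate_index;
    }

    // Unbiased contribution weight for the kept sample:
    // W = (1 / p_hat(y)) * (weight_sum / M).
    double contribution_weight(double p_hat_of_kept_sample) const
    {
        if (num_samples == 0 || p_hat_of_kept_sample <= 0.0)
            return 0.0;
        return (1.0 / p_hat_of_kept_sample) * (weight_sum / num_samples);
    }
};
```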

JoeJ said:
Ah yes, I remember. But then I want to ask you: How do you feel about raytraced games in this regard? Are the temporal issues a problem to you? Does the advantage of better lighting affect your tolerance, so you would say ‘it's fluctuating, but I accept it now because it looks so good’?

It's … well … problematic.

Ray tracing in real-time rendering works, but combining it with current pipelines is … very challenging, to say the least. Having deformed geometry, particles, etc. often introduces many problems. I don't want this thread to decay into my rant about what is wrong with DXR, but let's just state this - I've been working with ray tracers for a long time - from the hobbyist side, the academic side (when I was in university and even some time after) and the professional side (yes, I do use them in real-world simulation applications - not entirely for rendering). The way it cleverly hides large parts of the algorithms is fine for some uses (‘I want to add somewhat believable reflections into my game’), but an absolute nightmare for others (‘custom definitions of geometry, deformable instanced geometry, dynamic LOD geometry, multiple kinds of acceleration structures, working with participating media, etc.’).

Out of curiosity I'm currently adding a DXR path into my engine (and trying to properly map it onto the built-in ray tracing structures), mainly for performance comparisons in cases where it is applicable (it won't support all features). I've had RTX-capable hardware basically since day 1 of its release … yeah, I'm still a bit of a hardware nut, now eagerly waiting for the release of the RX 7800 to decide whether to upgrade from the RX 6800 or not. I wrote some test applications with it, but wasn't nearly as impressed as most of the press or other developers (which is the reason why I haven't added it into my engine so far).

Back to the topic though - how do I feel about raytraced games in general? Not good. Now let me show you one of my favorite games of all time:

(Screenshot: Gothic II (Windows) - exploring the only city on Khorinis. It's pretty big and has quite a few inhabitants who seem to be busy. Credits: MobyGames)

Yeah… not a technical miracle (but a great game nevertheless). The art direction was consistent (which you can't often say about today's games). It was just a few years before a trend in graphics came in - slapping Bloom on everything and considering it realistic graphics (TES IV: Oblivion is a prime example of that). A few years after that it was a yellow-ish filter on everything (Deus Ex: Human Revolution). And now it is RTX. It ruins games' art direction and style - in the coming years we will see games (and you can already see them) which overuse this effect to a ridiculous degree, just to state that “we've enabled as much RTX as possible”. More washed-out and smooth GI through over-blurring is going to be the new norm, together with heavy use of DLSS/FSR (which is always presented in papers on best-case scenarios, but in most cases it's like watching a heavily compressed 4K movie with blocks and blur everywhere).

Temporal issues are a massive problem for me, and I much prefer a slightly noisy but sharp image to a washed-out colorful mess. Better lighting only makes sense when it keeps up with a consistent art direction; otherwise one shouldn't use it.

This reminds me of the many remakes these days that slap “better graphics” onto old games. In most cases it ends up as an art-less colorful mess. It's a no from me on those, then.

JoeJ said:
What if we are wrong? What if most people no longer expect improved visuals, but rather something that looks like a game and runs robustly on affordable HW? This has given me lots of doubts recently. There should be statistics. The games industry constantly fails to estimate what people really want, it seems.

This goes hand-in-hand with my previous statement about Gothic 2. The AAA games industry has absolutely failed at estimating this in recent years - if you look at the past decade, the quality of games tends to be inferior to the previous decade (a subjective opinion of mine and of other people, both older and younger, but within my social circle!). I'd recommend looking up two of the most popular mods of the past years - Enderal: Forgotten Stories and The Chronicles of Myrtana: Archolos. These mods were released for relatively outdated games (The Elder Scrolls V: Skyrim in the first case, Gothic 2 in the other), and by reviews (from both critics and players - which is rarely consistent in today's AAA industry, as critics often have a vastly different view than players) they are by far superior to the AAA games released in the past years.

Another comparison could be The Witcher 3: Wild Hunt versus Cyberpunk 2077, the former being much more praised by players for multiple reasons.

These things in general point out that the games industry (and the critics) are getting further away from players. Eventually this will inevitably impact the industry in one way or another. It can even escalate into an absolutely deranged state of corporate ideology like Activision-Blizzard's (which is perfectly illustrated by “Do You Guys Not Have Phones?”).

JoeJ said:
From what you say I conclude that a simple standard method to trace dynamic scenes while still reusing older samples was never defined, and nowadays we need to dig into denoising sooner or later anyway.

For real time - it's either denoising, or basically doing what I did for the DOOM challenge here - that is, setting up scenes in a way that it won't be needed (which might be possible for a specific art direction, but not in a generic way). Eventually we might be able to get rid of denoising, but that may need a decade or more of graphics hardware development (and it still depends). I don't see any massive game changer that could improve ray tracing performance by an order of magnitude or more.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

OK, so back to the rainbow stuff that I'd like to implement…

Instead of generating 3 rays per main() function like I'm doing now (one per colour channel), I would make something like 10 or 20 channels. Now, how do I go about converting those 10 or 20 channels back down to 3 channels? Hmm.
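A common way to collapse N spectral channels back down to 3 is to accumulate CIE XYZ - weight each wavelength's radiance by the colour matching functions - and then apply the standard XYZ-to-linear-sRGB matrix. Here is a C++ sketch of that idea; the Gaussian ‘colour matching functions’ in it are very rough stand-ins just to keep the sketch self-contained (a real implementation would use the tabulated CIE 1931 curves or an analytic fit such as Wyman et al. 2013), and all names are hypothetical.

```cpp
// Collapse spectral samples to RGB: accumulate CIE XYZ using (approximate)
// colour matching functions, then convert XYZ to linear sRGB. The Gaussians
// below are rough stand-ins for the real CIE 1931 tables.
#include <cmath>
#include <cstddef>
#include <vector>

struct RGB { double r, g, b; };

static double gauss(double x, double mu, double sigma)
{
    const double d = (x - mu) / sigma;
    return std::exp(-0.5 * d * d);
}

// Very rough stand-ins for the CIE 1931 colour matching functions.
static double x_bar(double nm) { return 1.06 * gauss(nm, 600.0, 38.0) + 0.36 * gauss(nm, 442.0, 22.0); }
static double y_bar(double nm) { return gauss(nm, 555.0, 42.0); }
static double z_bar(double nm) { return 1.78 * gauss(nm, 445.0, 22.0); }

// radiance[i] is the path-traced result for wavelength_nm[i]
// (e.g. 10-20 wavelengths spread over roughly 400-700 nm).
RGB spectrum_to_rgb(const std::vector<double> &wavelength_nm,
                    const std::vector<double> &radiance)
{
    double X = 0.0, Y = 0.0, Z = 0.0;
    for (std::size_t i = 0; i < wavelength_nm.size(); ++i)
    {
        X += radiance[i] * x_bar(wavelength_nm[i]);
        Y += radiance[i] * y_bar(wavelength_nm[i]);
        Z += radiance[i] * z_bar(wavelength_nm[i]);
    }

    // Average over the samples; a proper spectral integral would also weight
    // by the wavelength step and normalise against the integral of y_bar.
    const double inv_n = wavelength_nm.empty() ? 0.0 : 1.0 / double(wavelength_nm.size());
    X *= inv_n; Y *= inv_n; Z *= inv_n;

    // Standard XYZ -> linear sRGB (D65) matrix.
    RGB out;
    out.r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
    out.g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
    out.b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;
    return out;
}
```

Negative results would still need to be clamped or gamut-mapped, and a transfer curve applied, before display.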

Vilem Otte said:

For real time - it's either denoising, or basically doing what I did for the DOOM challenge here - that is, setting up scenes in a way that it won't be needed (which might be possible for a specific art direction, but not in a generic way). Eventually we might be able to get rid of denoising, but that may need a decade or more of graphics hardware development (and it still depends). I don't see any massive game changer that could improve ray tracing performance by an order of magnitude or more.

Yes, these experiments with path tracing are GPU-intensive, to the point where they're not real time – a 7000x5000 screenshot at 1000 samples per pixel takes a minute or so.

