Modern rendering and data creation pipeline


After getting some good feedback on this forum so far, I'm back again. This time it's about preprocessing rendering data. Sorry for the long rant, but this is a complicated topic for me and I wanted to explain the issue well.

Currently, I'm rewriting my mishmash of 500 billion (ok, more like 15, but planned to grow) separate shader classes into a generic system. Creating a generic class to represent a set of drawing states (everything you need to set textures, constant buffers, VBs, IBs etc. and then call draw()) is taking shape, and having put some effort into it, I see both the benefit of this approach and the urgency! As more shaders get added… I already have a big codebase repeating the same old thing… map, copy, unmap, set this, set that… it's unmaintainable!
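
For context, this is roughly the shape I'm converging on - a minimal D3D11 sketch where the names (DrawItem, Submit) are made up by me, not taken from any engine or library:

```cpp
// Hypothetical "draw item" bundle: everything needed to issue one draw call.
#include <d3d11.h>
#include <vector>

struct DrawItem
{
    ID3D11VertexShader*                    vs = nullptr;
    ID3D11PixelShader*                     ps = nullptr;
    ID3D11InputLayout*                     inputLayout = nullptr;
    ID3D11Buffer*                          vertexBuffer = nullptr;
    ID3D11Buffer*                          indexBuffer = nullptr;
    UINT                                   stride = 0;
    UINT                                   indexCount = 0;
    std::vector<ID3D11ShaderResourceView*> textures;        // slot order = bind order
    std::vector<ID3D11Buffer*>             constantBuffers; // already updated elsewhere
    D3D11_PRIMITIVE_TOPOLOGY               topology = D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST;
};

// One generic submit path instead of N near-identical shader classes.
void Submit(ID3D11DeviceContext* ctx, const DrawItem& item)
{
    UINT offset = 0;
    ctx->IASetInputLayout(item.inputLayout);
    ctx->IASetPrimitiveTopology(item.topology);
    ctx->IASetVertexBuffers(0, 1, &item.vertexBuffer, &item.stride, &offset);
    ctx->IASetIndexBuffer(item.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
    ctx->VSSetShader(item.vs, nullptr, 0);
    ctx->PSSetShader(item.ps, nullptr, 0);
    if (!item.constantBuffers.empty())
        ctx->VSSetConstantBuffers(0, (UINT)item.constantBuffers.size(), item.constantBuffers.data());
    if (!item.textures.empty())
        ctx->PSSetShaderResources(0, (UINT)item.textures.size(), item.textures.data());
    ctx->DrawIndexed(item.indexCount, 0, 0);
}
```

The idea being that adding a "new shader" becomes filling out another DrawItem instead of writing yet another class full of map/copy/unmap boilerplate.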

However, a lot of replies here assume the existence of what is referred to as a "data compilation" tool that processes typical 3D data from software packages (.obj, .dae, .fbx) and turns it into a format with extra metadata, and possibly a binary representation for faster loading. Although this seems really useful, it is also a lot of work. Not only writing the tool, but importing, editing and exporting every asset to the new format, generating permutations, and writing material settings PER MESH (some models that seemed simple had 10+ meshes after all, reeeee!) seems like… hella work.
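
To make that concrete, this is roughly what I imagine the baked format storing. Everything here (the struct names, fields, magic value) is just my sketch of a possible layout, not any existing format:

```cpp
// Hypothetical layout for a baked asset file produced by the offline tool.
// In a real tool you would also pin down packing and endianness.
#include <cstdint>

struct BakedAssetHeader
{
    uint32_t magic;              // e.g. 'BAKD', sanity check on load
    uint32_t version;            // bump when the layout changes
    uint32_t meshCount;
    uint32_t materialCount;
    uint64_t vertexDataOffset;   // byte offsets into the blob
    uint64_t vertexDataSize;
    uint64_t indexDataOffset;
    uint64_t indexDataSize;
    uint64_t materialTableOffset;
};

// Per-material metadata that currently has to be patched in by hand in code.
struct BakedMaterial
{
    char     shaderName[64];     // which shader/technique to bind
    char     diffusePath[128];   // texture paths relative to an asset root
    char     normalPath[128];
    uint8_t  isOpaque;           // decided once, offline, instead of guessed at load
    uint8_t  castsShadows;
    uint8_t  pad[2];
    float    specularPower;
};

struct BakedMeshRecord
{
    uint32_t materialIndex;      // into the material table
    uint32_t indexOffset;        // first index of this mesh in the shared buffer
    uint32_t indexCount;
    uint32_t vertexOffset;
};
```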

If this seems disconnected, I will simply say that the drawing options need to be fully (and correctly) filled in for all renderable items if we want to really use this approach. We can't always magically know, without manually setting them in code, which materials want which shaders, or even whether they are opaque - some objects that are fully opaque still use a 4-channel texture, and the only ways to check are a) unreliable random sampling or b) brute-force searching for alpha < 1… unacceptable! (Sadly, I'm working alone with random online assets. I guess I could reformat these by hand, but that seems like a hacky approach and not a proper solution.)
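
For reference, option b) is nothing more than the loop below. It's fine as a one-time, offline bake step, but it's exactly the kind of work I don't want to repeat on every load. The sketch assumes the texture has already been decoded to RGBA8:

```cpp
#include <cstdint>
#include <cstddef>

// Brute-force opacity check over decoded RGBA8 pixels.
bool IsFullyOpaque(const uint8_t* rgba, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
        if (rgba[i * 4 + 3] < 255)   // alpha channel of pixel i
            return false;
    return true;
}
```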

So far, in favour of having this kind of tool:
- Faster loading times - skip all the processing that (in my case) Assimp has to do on each load and just read the baked data straight into RAM (see the loading sketch after this list)
- The possibility (yep, some things just aren't possible to automate without it as things stand, at least for me) to specify materials and the shaders/passes/techniques they use as model metadata, removing all the incredibly irksome code that keeps patching in material information I can't infer from the model loading process alone
- Freedom to create new materials without recompiling, possibly even hot reloading
- Obviously, letting non-programmers make content
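
On the loading point, the runtime side would then shrink to something like the sketch below: one read into memory, then buffer creation straight from the blob, with no Assimp at runtime. It reuses the hypothetical header layout sketched above; LoadBakedMesh and the (minimal) error handling are equally made up:

```cpp
#include <d3d11.h>
#include <cstdio>
#include <cstring>
#include <cstdint>
#include <vector>

struct BakedAssetHeader   // same illustrative layout as the sketch above
{
    uint32_t magic, version, meshCount, materialCount;
    uint64_t vertexDataOffset, vertexDataSize;
    uint64_t indexDataOffset,  indexDataSize;
    uint64_t materialTableOffset;
};

bool LoadBakedMesh(ID3D11Device* device, const char* path,
                   ID3D11Buffer** outVB, ID3D11Buffer** outIB)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;

    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    std::vector<uint8_t> blob((size_t)size);
    std::fread(blob.data(), 1, blob.size(), f);
    std::fclose(f);

    if (blob.size() < sizeof(BakedAssetHeader)) return false;
    BakedAssetHeader header = {};
    std::memcpy(&header, blob.data(), sizeof(header));

    // Vertex buffer straight from the blob, no per-load processing.
    D3D11_BUFFER_DESC vbDesc = {};
    vbDesc.ByteWidth = (UINT)header.vertexDataSize;
    vbDesc.Usage = D3D11_USAGE_IMMUTABLE;
    vbDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA vbData = { blob.data() + header.vertexDataOffset, 0, 0 };
    if (FAILED(device->CreateBuffer(&vbDesc, &vbData, outVB))) return false;

    // Same for the index buffer.
    D3D11_BUFFER_DESC ibDesc = {};
    ibDesc.ByteWidth = (UINT)header.indexDataSize;
    ibDesc.Usage = D3D11_USAGE_IMMUTABLE;
    ibDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
    D3D11_SUBRESOURCE_DATA ibData = { blob.data() + header.indexDataOffset, 0, 0 };
    return SUCCEEDED(device->CreateBuffer(&ibDesc, &ibData, outIB));
}
```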

1) This is all great, however it seems like a big task. For those of you who did it, does this way lie madness? How worthwhile is it (for a solo developer, sick and tired of manually specifying material properties in code)?


2) Do you reckon one could really infer all the required data from models without this sort of tool, i.e. just “guess” which shaders to use based on the textures and vertex data present in a model, or something like that? For some cases I can think of a way; for other, custom things, I don't see a way to do this whatsoever! Because as reluctant as I am to get into this, it just seems that every day I bump into something that makes me think I just can't go without it.
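
For reference, the “guessing” I mean is roughly the heuristic below, over whatever flags the importer reports. ImportedMaterial and PickShaderKey are hypothetical names; the point is that this covers the common diffuse/normal/specular cases and says nothing about anything custom:

```cpp
#include <string>

// Hypothetical material description filled in from whatever the importer reports.
struct ImportedMaterial
{
    bool hasDiffuseMap  = false;
    bool hasNormalMap   = false;
    bool hasSpecularMap = false;
    bool alphaInDiffuse = false;   // needs the alpha scan above, or real metadata
};

// Heuristic shader pick based purely on what textures the model ships with.
// Anything custom (water, skin, foliage...) leaves no trace in the file and
// still needs hand-written metadata somewhere.
std::string PickShaderKey(const ImportedMaterial& m)
{
    std::string key = "base";
    if (m.hasDiffuseMap)  key += "_diffuse";
    if (m.hasNormalMap)   key += "_normal";
    if (m.hasSpecularMap) key += "_specular";
    if (m.alphaInDiffuse) key += "_alphatest";
    return key;   // e.g. "base_diffuse_normal"
}
```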

I already have the tools to make importing work: my Assimp code is pretty thorough and I have ImGui on hand. It wouldn't be that hard to move that code over and build the GUI, but I'm afraid there are difficulties I can't anticipate, given my limited experience.

Whether we are talking about an in-engine editor like Unreal's or a separate tool is not the point of the discussion, but if there are any pitfalls or benefits to either, feel free to have at it.
