DirectX 12 resource management structures

4 comments, last by martin.gee 1 year ago

Hi!

I'm currently building my own 3D game in DirectX 12 and C++. I don't have much experience in either programming or game engine architecture, but I've used different sources to get some understanding. So far I've been able to build a renderer that can load OBJ files from Blender with vertex, UV and normal info. Besides that, I have a basic combined material and texture system with DDS support.

I use HLSL 5.1 and dynamic indexing, and I've implemented instanced drawing using structured buffers to handle instance data. Besides that, I have made a dynamic descriptor heap, a ring buffer, and basic classes for different resource types. Shared-memory buffers can be updated on the fly, for example after game update logic. Currently I have implemented basic input tracking and a camera.

The problem is that I've been having a hard time wrapping my head around how to structure the resource management for the rendering pipelines. I mean, I can create everything statically and build a scene with everything, but it seems a bit inflexible to hardcode all combinations of mesh data pointers, material and instance buffers, and object spawning/handling (and linking it all to a “render object” of sorts). So I've been thinking about a DirectX-side “resource manager”, but I would like to know how others write object management systems in terms of the renderer?

I'm not asking for actual code since I like diving into that myself - but I would love some structural advice?

Hope I haven't broken any rules, since I've been trying to find the answer here already.


I'm not really a DirectX 12 expert, but here's my system. A material in my system is very general. It holds two other classes: a “property set” and a “shader set”. The “property set” can contain any basic resources like a texture, constant buffers etc. The shader set is any combination of shaders. The property set stuff goes into descriptor tables. I also have a couple of generic constant buffers that are always there, which go right into the root signature. There is actually a manager for root signatures that I'll describe later.
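To make that layering concrete, here's a minimal CPU-side sketch of the split described above. All names are hypothetical - this is not the actual engine code, just the structure as described:

```cpp
#include <string>
#include <vector>

// Hypothetical names: a material is a "property set" (resources that end
// up in descriptor tables) plus a "shader set" (any combination of shaders).
struct PropertySet {
    std::vector<std::string> textures;         // e.g. texture resource names
    std::vector<std::string> constantBuffers;  // per-material constant buffers
};

struct ShaderSet {
    std::string vertexShader;  // any stage may be empty/unused
    std::string pixelShader;
};

struct Material {
    PropertySet properties;  // contents go into a descriptor table
    ShaderSet   shaders;     // selects the shaders for the pipeline state
};
```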

Since there can be several “in flight” frames (usually 2 or 3) I have one descriptor table for each possible frame, and I have a system to duplicate any changes before using that frame slot. I probably could have done this with one descriptor table since you can use part of a table while writing to another part, but conceptually it's easier for me this way. Each “material” uses a contiguous set of descriptors allocated with the allocator and that becomes my descriptor table. I use a power of 2 hash allocation system just to keep things sane. I also have an allocation system for root signatures. Since different descriptor tables require different root signatures, I have a big table of root signatures that I can call upon. I don't actually fill a slot in that table until I know I need that root signature, but once I create it, I cache it for later use.
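The lazy create-and-cache idea for root signatures can be sketched like this. `RootSignature` here is just a stand-in for `ID3D12RootSignature`, and the key would in practice be derived from the descriptor-table layout (all names hypothetical):

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>

// Stand-in for ID3D12RootSignature; real code would hold a COM pointer.
struct RootSignature {
    uint64_t layoutKey;  // which descriptor-table layout this was built for
};

// Lazily creates and caches one root signature per distinct table layout.
class RootSignatureCache {
public:
    RootSignature* Get(uint64_t layoutKey) {
        auto it = cache_.find(layoutKey);
        if (it != cache_.end())
            return it->second.get();  // cache hit: reuse the existing one
        // Cache miss: create on first use (real code would call
        // D3D12SerializeRootSignature / CreateRootSignature here).
        auto rs = std::make_unique<RootSignature>(RootSignature{layoutKey});
        RootSignature* raw = rs.get();
        cache_.emplace(layoutKey, std::move(rs));
        return raw;
    }
    size_t Size() const { return cache_.size(); }

private:
    std::unordered_map<uint64_t, std::unique_ptr<RootSignature>> cache_;
};
```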

The other thing I have to deal with is mesh destruction since I create and destroy many meshes on the fly in my LOD system. My meshes are split into two parts, “mesh” and “mesh data”. Mesh data is reference counted (as are most things in my engine). When I render a mesh, its mesh data goes into a list that saves it until the frame is done. There is one list for every “in flight” frame. So if my engine deletes a mesh before it's finished rendering, the mesh data (i.e. buffers) for it is not destroyed until the last frame that uses that mesh is done rendering. I have a lot of multi-threading in my engine, so this is necessary. I also have something called a “mesh cemetery”. This is a list of meshes to be deleted, but I do it in a separate thread so as not to slow down the rendering. This is an optimization really.
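That per-frame retirement scheme can be modeled without any D3D12 types: keep one list of strong references per in-flight frame, and clear a list only once that frame's fence has signaled. A simplified sketch with hypothetical names:

```cpp
#include <memory>
#include <vector>

constexpr int kFramesInFlight = 3;

struct MeshData {  // stand-in for the GPU vertex/index buffers
    int id;
};

// Keeps mesh data alive until the GPU has finished every frame that uses it.
class RetirementQueue {
public:
    // Called when a mesh is drawn in the given frame.
    void Reference(int frameIndex, std::shared_ptr<MeshData> data) {
        perFrame_[frameIndex % kFramesInFlight].push_back(std::move(data));
    }
    // Called once the fence signals that frameIndex has completed on the GPU:
    // dropping the shared_ptrs frees any mesh data nobody else references.
    void OnFrameComplete(int frameIndex) {
        perFrame_[frameIndex % kFramesInFlight].clear();
    }

private:
    std::vector<std::shared_ptr<MeshData>> perFrame_[kFramesInFlight];
};
```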

I'm not sure if any of this is useful for you. It's kind of a work in progress and most of this stuff is off the top of my head. I didn't really use any standard methodologies here. But it seems to work OK.

@undefined thanks for your input. I'm curious how you handle object/mesh creation in terms of data.

Do you have some sort of object manager class that makes sure that mesh data, material data and such get linked? And how are buffer structures allocated and linked?

For example, to create 10 boxes I have to do the following:

Load the OBJ data into a vector. Transfer the data to vertex and index buffers. All this is automated.

Next I assign materials, textures, and transformation matrices into a structured buffer. This is manual, and where I wonder how others structure their systems.

Last, I create a “batch” render object which holds a pointer to the mesh data, the number of instances and, lastly, a pointer to the instance data. This structure does the final draw call. In a more automated system this should also be handled. Right now it's more manual.
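The three steps above boil down to roughly this CPU-side shape (a minimal sketch with hypothetical names; the real instance data would live in a GPU structured buffer):

```cpp
#include <cstdint>
#include <vector>

struct MeshData {      // vertex/index buffers, shareable between batches
    uint32_t indexCount;
};

struct InstanceData {  // one entry per instance in the structured buffer
    float world[16];          // world transform matrix
    uint32_t materialIndex;   // dynamic-indexing slot for the material
};

// One draw call: shared mesh data plus the per-instance records.
struct RenderBatch {
    const MeshData* mesh;                 // pointer to the mesh data
    std::vector<InstanceData> instances;  // CPU copy, uploaded each update
    uint32_t InstanceCount() const {
        return static_cast<uint32_t>(instances.size());
    }
};
```

The open question in the thread is essentially who owns and wires up these three pieces, rather than what they contain.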

martin.gee said:

@undefined thanks for your input. I'm curious how you handle object/mesh creation in terms of data.

Do you have some sort of object manager class that makes sure that mesh data, material data and such get linked? And how are buffer structures allocated and linked?

Well my system is quite specific to my needs. A lot of my meshes are generated by voxels through a variant of marching cubes. Those are in double precision since I'm generating large worlds. What I have is a bunch of different CPU mesh formats with the ability to add more. Then I have GPU mesh formats, which can have different data depending on the object I'm rendering. I can also add more of those.

For instance, my CPU voxel meshes are a list of voxels. This way I don't have to copy the data out of the voxels and then do a second copy to the GPU. Also in some cases the data inside the voxel can change, but I still want to keep the same voxels in a given mesh. This is for supporting LOD. A more normal CPU mesh will simply be an array of vertices and one of indices. In any case I have a bunch of templates that can be used for doing easy conversion from a CPU mesh to a GPU mesh. Note that mesh generation and download to the GPU is done in its own thread so it doesn't slow down rendering. I think this much is pretty standard from what I've seen.

I have the concept of a pipe which lower down is really a command list. There are “view” pipes for rendering, and “copy” pipes. With copy pipes I open a pipe, add a bunch of meshes, and close it. I can either tell it to block at close time until it's done downloading, or I can tell it to continue and do the block later before I need to actually use the mesh. This way I avoid blocking at all much of the time.

Objects can store meshes in any form that makes sense. I'm not really going for ECS right now since it doesn't really seem to fit what I'm doing. I do have the concept of a chunk which is generally used for terrain. But I've generalized it to mean a set of meshes that is part of an object. An object can have several chunks. Most will have only one, but planets may have hundreds, and chunks can be added or deleted at any time (usually for LOD).

Chunks also have their own transformation matrix. These are sent down every frame for any chunk that's rendered. I have to do this to support large worlds. Unlike a lot of systems, I can never go to “world” coordinates on the GPU because I would lose precision since GPUs only support float (at least at a decent level of performance) and my planets would look like garbage. So I always go directly to view coordinates, which keeps anything that is close to the camera in high precision.
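The precision argument is easy to demonstrate: subtract the camera position in double precision *before* converting to float, rather than converting to float first. A self-contained sketch with hypothetical vector types:

```cpp
struct DVec3 { double x, y, z; };  // double precision, CPU side
struct FVec3 { float  x, y, z; };  // single precision, what the GPU sees

// Subtracting in double first keeps nearby geometry in high precision,
// even at planet-scale coordinates where float spacing is ~0.5 units.
FVec3 ToCameraRelative(const DVec3& worldPos, const DVec3& cameraPos) {
    return FVec3{
        static_cast<float>(worldPos.x - cameraPos.x),
        static_cast<float>(worldPos.y - cameraPos.y),
        static_cast<float>(worldPos.z - cameraPos.z),
    };
}
```

At an Earth-radius coordinate like 6,371,000 the float grid spacing is 0.5, so converting to float before subtracting can erase sub-meter offsets entirely, which is exactly the "garbage" artifact described above.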

Next I assign materials, textures, and transformation matrices into a structured buffer. This is manual, and where I wonder how others structure their systems.

In my case materials are kept with each mesh. However, “mesh data” can be shared by more than one mesh so I can apply different materials to the same model.

My objects themselves are in a tree, so objects can have sub-objects. From what I gather, what I call an object is what other engines call an actor. In between objects and sub-objects, I have object references. There are different kinds of references. A simple one is a static reference. But for instance, I also have an astral mechanics reference which does orbits and so forth. For character control I have a walk reference that connects to keyboard and mouse inputs. And for the camera (also an object), I have follow references, which can track any object under the same parent as themselves.

Physics is a whole different thing. Most of the stuff is procedurally generated at run time. This is so I can have earth-sized planets without actually storing most of the data. The problem is that since I don't have everything built at once, there is nothing to collide with. So each object has the option of having a sister physics object. The sister object builds meshes just around the player as you move, so it's kind of like “Just In Time” collision. The problem with using a graphics mesh for physics is there is no guarantee that a mesh will be built by the time a player arrives at a given location. If something lags, a player could fall through the geometry since LOD is done in its own time. However, I build the physics geometry in the main loop. If something lags, it lags, but at least the player won't ever fall through the earth, since this gets rid of race conditions. Since the physics mesh is very small, I haven't found lag to be a problem anyway.

But again, this is all very specific to what I'm doing so don't take this as any kind of template.

Edit: BTW if you are just trying to make a game, DirectX 12 is far from the simplest way to do it. I mean there are game engines that are mostly or fully free, which will save you thousands of hours. Most people starting out would never jump right into DX12. The fact that you have something working at all is pretty impressive for someone new. I'm not saying you definitely shouldn't work in DX12, just pointing out there are easier ways. If you want to learn lower-level development, or are doing something really special, it might be worth it.

@Gnollrunner

Thank you for sharing your overall structure. It helps to get a grasp of how others do their stuff.

Gnollrunner said:

Edit: BTW if you are just trying to make a game, DirectX 12 is far from the simplest way to do it. I mean there are game engines that are mostly or fully free, which will save you thousands of hours. Most people starting out would never jump right into DX12. The fact that you have something working at all is pretty impressive for someone new. I'm not saying you definitely shouldn't work in DX12, just pointing out there are easier ways. If you want to learn lower-level development, or are doing something really special, it might be worth it.

I find the challenge fun, and since everything is working so far, I'm not discouraged by the DirectX 12 complexity.

I've added a rough graph of what I'm wondering how to handle. This graph doesn't contain all of the engine elements, as they are listed in my previous post. But it illustrates the process that I want to figure out how to do. Below, black are function links, red are pointers (or IDs/other kinds of references), and green marks my current focus.

The problem isn't that I don't have any ideas. For example, I think I could just create a structured buffer allocator class and create the allocation for each frame.

But I don't think this is optimal. It means parsing data for all objects each frame. It means destruction of buffers between frames.

I guess I could reuse buffers, update only the changed ones, and just reallocate if the number of instances grows beyond the allocated space.
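That reuse-and-grow policy can be sketched independently of the API: track capacity, reuse the buffer when it suffices, and reallocate to the next power of two only when it doesn't. A hypothetical class (real code would create the new GPU buffer and retire the old one via the frame fence):

```cpp
#include <cstdint>

// Grow-on-demand instance buffer: the allocation is reused across frames
// and only replaced when the instance count exceeds current capacity.
class InstanceBuffer {
public:
    // Returns true if a (re)allocation happened on this call.
    bool EnsureCapacity(uint32_t instanceCount) {
        if (instanceCount <= capacity_)
            return false;  // existing buffer is big enough: reuse it
        uint32_t newCap = capacity_ ? capacity_ : 16;
        while (newCap < instanceCount)
            newCap *= 2;   // grow geometrically to avoid frequent reallocs
        capacity_ = newCap;
        ++reallocations_;  // real code: create new buffer, retire old one
        return true;
    }
    uint32_t Capacity() const { return capacity_; }
    uint32_t Reallocations() const { return reallocations_; }

private:
    uint32_t capacity_ = 0;
    uint32_t reallocations_ = 0;
};
```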

How would you manage DirectX 12 object data (structured buffer: instance data) creation/allocation, reuse, or destruction?

Thanks for all your time thus far :-)

This topic is closed to new replies.
