The 'correct' way to design a modern game engine


(I'm new to this site, please let me know if I've posted to the wrong place)

I've been creating games for 4 years, and 3D games for 2 years. I've only used Unreal Engine 4 and Unity for a few weeks, so the job of deciding on a design and patterns for my engine was largely up to me and my own experience. I've come up with what is, in my opinion, a very good and expandable design, but I need your help to improve it and make it better, because of course I can't know every little detail.

The engine is written in C++, and uses the OpenGL and Vulkan API to interact with the GPU.

Here's an overview of the system:

When the engine starts, it loads every asset. This won't stay this way, I'll eventually make it load in the background, but that's a task for later, as it requires working with threads and thread safety. It loads the whole scene from a so-called ‘world file’, which basically tells the engine where the models, textures, and sounds are, which ones to load, how to compose materials from them, and then how to build objects from those.
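For illustration, here is a minimal sketch of what that start-up loading step could look like; the WorldFile structure and the commented-out AssetManager/Scene calls are assumed names, not the engine's actual code or file format:

```cpp
#include <string>
#include <vector>

struct MaterialDesc { std::string albedo, normal; };            // texture paths making up a material
struct ObjectDesc   { std::string model; std::string material; };

// Hypothetical in-memory view of the ‘world file’ described above.
struct WorldFile
{
    std::vector<std::string>  models;     // which meshes to load
    std::vector<std::string>  textures;   // which textures to load
    std::vector<MaterialDesc> materials;  // how to compose materials
    std::vector<ObjectDesc>   objects;    // which objects to build from them
};

// Blocking load at start-up; the plan above is to move this to a worker thread later.
void LoadWorld(const WorldFile& world /*, AssetManager& assets, Scene& scene */)
{
    for (const auto& path : world.models)    { /* assets.LoadModel(path); */ }
    for (const auto& path : world.textures)  { /* assets.LoadTexture(path); */ }
    for (const auto& mat  : world.materials) { /* assets.CreateMaterial(mat); */ }
    for (const auto& obj  : world.objects)   { /* scene.Spawn(obj); */ }
}
```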

The engine uses an Entity-Component-System architecture. Everything in the game is a GameObject, which has a transform, a parent, a list of children, and a list of components. Every component adds some data to the object. The systems in the engine are called engines; there are multiple engines, like RenderingEngine, AudioEngine, PhysicsEngine, etc. The RenderingEngine has a list of Renderer components and Light components. When a GameObject with a Renderer component gets updated, it adds itself to the list of renderer components in the RenderingEngine. So basically, the components hold some data and register themselves with the corresponding engine, so they can get updated or used during rendering.
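A minimal sketch of how that registration could look, based only on the description above (the exact class layout here is an assumption, not the actual code):

```cpp
#include <vector>

class GameObject;                        // holds transform, parent, children, components

// Base class for the data-carrying components described above.
class Component
{
public:
    virtual ~Component() = default;
    virtual void Update(GameObject& owner, float delta) = 0;
};

// One of the "engines" (systems): it keeps lists of the components it cares about.
class RenderingEngine
{
public:
    void Register(class Renderer* r) { renderers.push_back(r); }
private:
    std::vector<class Renderer*> renderers;   // filled by components registering themselves
};

// When updated, a Renderer component adds itself to the RenderingEngine's list,
// so the engine knows what to draw this frame.
class Renderer : public Component
{
public:
    explicit Renderer(RenderingEngine& engine) : engine(engine) {}
    void Update(GameObject&, float) override { engine.Register(this); }
private:
    RenderingEngine& engine;
};
```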

What this means is that if you wish to add new functionality, you create a component for that purpose, and if none of the existing engines do what you need, you write a new one. This is not a very complicated process, and all of it is API independent, so you don't need to write everything twice because of the two APIs.

The rendering process is like this:

The smallest elements of the rendering are VBO, Shader, and Material. A VBO and a Shader make up a GraphicsPipeline (there are more pipeline types, ComputePipeline and RayTracingPipeline, but they're not relevant right now). A GraphicsPipeline and a Material make a Renderer component. What's important here is that we only create one VBO for every model, one Shader for every program, and one Material for every distinct material type. So if we render the same model twice, we still have only one VBO. They're raw pointers, so I took special care of that, namely collecting them into a static list and deleting them once during the cleanup process. So we do not duplicate anything: we assemble every single Renderer object from the same GraphicsPipelines and Materials. This is useful because 1) we don't duplicate anything, so memory consumption is lower, and 2) I can batch them together easily.
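A minimal sketch of that "one VBO per model" rule, assuming a registry keyed by model path (the map instead of a list, and the VBORegistry name, are illustrative assumptions):

```cpp
#include <string>
#include <unordered_map>

class VBO { /* vertex/index buffers for one model */ };

// Static registry: the same model path always yields the same raw pointer,
// and everything is freed exactly once during the cleanup process.
class VBORegistry
{
public:
    static VBO* Get(const std::string& modelPath)
    {
        auto it = cache.find(modelPath);
        if (it != cache.end())
            return it->second;              // reuse the existing VBO, no duplication
        VBO* vbo = new VBO(/* load modelPath */);
        cache[modelPath] = vbo;
        return vbo;
    }

    static void Cleanup()                   // called once at engine shutdown
    {
        for (auto& [path, vbo] : cache)
            delete vbo;
        cache.clear();
    }

private:
    static inline std::unordered_map<std::string, VBO*> cache;
};
```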

I use a batch renderer, so I bind the GraphicsPipeline (in Vulkan, the VkPipeline object and the VBO; in OpenGL, the VBO and the Shader) only once, and render every object that uses that VBO and shader pipeline. Remember how I mentioned that the components add themselves to lists? The RenderingEngine has a models map, which maps Pipelines to lists of Renderers, so we know which Renderers use which pipeline. This is also why we create only one instance of each: we can compare the already-added Pipelines by pointer, which is a cheap test.
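A sketch of that batching loop, assuming the models map described above (the Bind/Draw calls are placeholders for whatever the actual Pipeline and Renderer interfaces look like):

```cpp
#include <map>
#include <vector>

class Pipeline;    // GraphicsPipeline: VBO + Shader
class Renderer;    // Renderer component: Pipeline + Material

// Because identical pipelines are the same pointer, this map groups every
// Renderer that shares a pipeline under a single key.
std::map<Pipeline*, std::vector<Renderer*>> models;

void RenderAll()
{
    for (auto& [pipeline, renderers] : models)
    {
        // pipeline->Bind();                // bind the VBO + shader (or VkPipeline) once
        for (Renderer* renderer : renderers)
        {
            // renderer->BindMaterial();    // per-object material / uniforms only
            // renderer->Draw();
        }
        // pipeline->Unbind();
    }
}
```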

Every other engine works in a similar way. I hope my design makes sense, and please let me know if you find anything that could be optimised much better because I overlooked something, or if you didn't understand a part of my explanation.


The smallest elements of the rendering are VBO, Shader, and Material. A VBO and a Shader make up a GraphicsPipeline (there are more pipeline types, ComputePipeline and RayTracingPipeline, but they're not relevant right now). A GraphicsPipeline and a Material make a Renderer component. What's important here is that we only create one VBO for every model, one Shader for every program, and one Material for every distinct material type. So if we render the same model twice, we will still only have one VBO.

This is a good representation of individual objects, but not of everything you are going to draw each frame: how do you manage dependencies between objects and pipelines? For example, rendering a secondary scene to an offscreen buffer before using the buffer as a texture for a CCTV screen, compositing layers in a specific order (e.g. fullscreen flashes above a HUD above a 3D scene) but rendering them in parallel if possible, or finishing a compute shader that processes a particle system before using the results to draw the particles?
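One common way to express such ordering (just a sketch of the general idea, not something the engine above currently has) is to let each pass declare which other passes it depends on and to sort the passes before recording the frame:

```cpp
#include <string>
#include <vector>

// Each pass lists the passes whose output it consumes; the frame is then recorded
// in topologically sorted order (assumes the dependency graph has no cycles).
struct RenderPassNode
{
    std::string name;                        // e.g. "CCTV offscreen", "3D scene", "HUD"
    std::vector<RenderPassNode*> dependsOn;  // passes that must finish first
    bool visited = false;
};

void Visit(RenderPassNode* node, std::vector<RenderPassNode*>& order)
{
    if (node->visited) return;
    node->visited = true;
    for (RenderPassNode* dep : node->dependsOn)
        Visit(dep, order);                   // record dependencies before the node itself
    order.push_back(node);
}

std::vector<RenderPassNode*> BuildFrameOrder(std::vector<RenderPassNode*>& passes)
{
    std::vector<RenderPassNode*> order;
    for (RenderPassNode* pass : passes)
        Visit(pass, order);
    return order;                            // independent passes could also run in parallel
}
```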


@LorenzoGatti Wow, I actually hadn't thought of that, but I think I could easily extend the Renderer system with a flag that indicates whether it's needed multiple times, and then create multiple lists in the RenderingEngine, though I may have to do a bit of redesigning. I'm about to grab a pen and paper and work out where to go next and how to expand the system, because until now, when I wanted to do something, this is what happened: I would implement the feature straight away with little thinking, and it either worked or it didn't. Then I took a break, looked at things, worked out what could be done better and how I could fix any errors, and made the desired changes. That worked surprisingly well, I haven't come across any difficulties that I couldn't solve with some raw, top-of-my-head coding and some thinking afterwards. But of course this is not the way to design an efficient engine, so I'm going to design things first and maybe rewrite the parts that I think need changes. Specifically, since I'm using Vulkan, I'm going to try to add a ThreadPool system with some multithreading to load the scene from a different thread, schedule more and less important loads, use multiple threads for recording command buffers, etc., taking full advantage of the hardware.
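For reference, a minimal thread pool sketch of the kind of building block meant here, written against the standard library only; it is illustrative, not the engine's actual code:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Worker threads pull jobs (asset loads, command buffer recording, ...) from a shared queue.
class ThreadPool
{
public:
    explicit ThreadPool(size_t count)
    {
        for (size_t i = 0; i < count; ++i)
            workers.emplace_back([this] { WorkerLoop(); });
    }

    ~ThreadPool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex);
            stop = true;
        }
        condition.notify_all();
        for (auto& worker : workers)
            worker.join();
    }

    void Submit(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(mutex);
            jobs.push(std::move(job));
        }
        condition.notify_one();
    }

private:
    void WorkerLoop()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex);
                condition.wait(lock, [this] { return stop || !jobs.empty(); });
                if (stop && jobs.empty())
                    return;
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();    // e.g. load a texture, or record a secondary command buffer
        }
    }

    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex mutex;
    std::condition_variable condition;
    bool stop = false;
};
```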

