ECS integrating cameras/viewports with renderables/rendering


So my rudimentary ECS-based engine has different renderable components, e.g. mesh and text. It also has camera entities with components for transform and projection, e.g. for the world/player vs. the GUI. Now I need tips on how to integrate the two in a flexible way, i.e. deciding which camera renders what, preferably with support for off-screen rendering/viewports.

The only solutions I can think of:

  • Every entity having one or more components telling which camera/viewport it should appear in (see the sketch after this list).
  • Separate (entity) “worlds”, one for each camera, or world vs. GUI.
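A minimal sketch of the first option, assuming a component-per-entity ECS; the RenderTargets and CameraId names are hypothetical, not from any particular library:

```cpp
#include <cstdint>
#include <vector>

using CameraId = std::uint32_t; // hypothetical handle to a camera entity

// Attached to each renderable entity; lists every camera/viewport
// (including off-screen ones) the entity should appear in.
struct RenderTargets {
    std::vector<CameraId> cameras;
};
```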

If you really want to keep it so that different cameras decide how things are rendered (world vs. GUI), which IMHO is more of a hack than a good solution… then you have a few options.

Let me just quickly present the Unity way, which is assigning each Entity/GameObject a “layer” that it is on (Default/UI/Postprocessing), while each camera has a mask where you can set which layers it should render.
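In a custom ECS that could look roughly like this; just a sketch, where all names (Layer, Renderable, Camera, cullMask) are assumptions rather than Unity's actual API:

```cpp
#include <cstdint>

// One bit per layer; a 32-bit mask mirrors Unity's 32 layers.
enum Layer : std::uint32_t {
    LayerDefault        = 1u << 0,
    LayerUI             = 1u << 1,
    LayerPostprocessing = 1u << 2,
};

struct Renderable {
    std::uint32_t layers = LayerDefault; // which layer(s) this entity is on
};

struct Camera {
    std::uint32_t cullMask = LayerDefault; // which layers this camera renders
    // ... transform, projection, viewport, render target, etc.
};
```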

If you want me to go into detail about why I don't think that's a good design, I can do so (but I'm going to sleep now, and I'm not going to bore you with it if you've already decided that you want to do it exactly that way).

@Juliean I've made no decisions yet, just hoping someone has come up with a clever solution for associating a renderable with a camera during the render phase.

So the Unity way is to assign renderables to an (indirect) “layer” concept instead of directly to a camera, stored as a bitset, possibly in an existing component. I expect working with a bitset is easier and faster than lots of “camera→renderable mapping” components, but it seems less flexible. I haven't gotten to post-processing yet, but I guess that's where layers are preferable?
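For illustration, the render phase with the bitset approach then reduces to one bitwise AND per camera/renderable pair; a sketch reusing the hypothetical Camera and Renderable structs from the post above:

```cpp
#include <vector>

// Hypothetical flat scene storage; a real ECS would iterate its own way.
struct Scene {
    std::vector<Camera>     cameras;
    std::vector<Renderable> renderables;
};

void renderAll(Scene& scene) {
    for (const Camera& cam : scene.cameras) {
        // bind cam's viewport or off-screen render target here
        for (const Renderable& r : scene.renderables) {
            if (cam.cullMask & r.layers) {
                // submit the draw call for this (camera, renderable) pair
            }
        }
    }
}
```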

I'd like enough details to avoid making bad decisions.

