C++ Core Engine with C# Entity Component System


As the title suggests, I'm wondering what the performance implications of doing this would be.

Some context first, to explain why I would even want to do this. I have a pretty simple game engine project that I've recently brought back out of mothballs. It's not much yet. All I've built with it is a recreation of Space Invaders, which I did mainly so I could create the basic, reusable components (TransformComponent, SpriteComponent, RenderingSystem, etc.) and then merge them into the core engine DLL.

The whole thing is in C++ at the moment, and to be honest, I'm pretty rusty since I haven't done much active development in C++ for quite a while. C# on the other hand is my day job, and something I use in other hobby projects, so I'm quite comfortable in it.

As a result, I'm thinking about writing a C# layer on top of the core C++ library, which would consist of the Entity Component System, and anything else that would be required to facilitate that. Part of the appeal is that I can't really be bothered to implement an entire scripting system, nor do I want to deal with the nonsense of C++ when I'm just doing basic gameplay programming, and I think C# is a nice middle ground.

So, here's the question: how ill advised would that be from a performance perspective? I don't want to spend all this time rewriting the engine just to profile it at the end and realize I've been marching towards a dead end.

An entity component system typically consists of many small objects, so would C# just have too much overhead to handle it?

And what about P/Invoke? Everything I've read about it suggests that it may be a real performance bottleneck, especially if I make managed-to-unmanaged calls frequently.

I know that Unity uses C#, but let's face it, they have far more time and resources to overcome any problems that would arise.

Any input on this approach would be greatly appreciated.



Well, you are describing the future-looking branch of the Unity engine. They have an ECS written in C++ with the basic gameplay code in C#. Their performance is good, so it can be done. The problem is, their code is a MESS. The C# stuff is hyper-verbose and the C++ implementation details leak out everywhere, right down to their choice of ‘chunks’ for storage.

So when you suggest using C# instead of a scripting system, that raises a red flag for me: not only is C# not a particularly high-level language anyway, but when you're interfacing with high-performance C++ code you will have to work doubly hard to abstract away those aspects.

If you do want to stick with C# then consider using unsafe code which will let you do more with stack allocated memory and avoid the ridiculous GC churn that you are likely to get otherwise.
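
For example (a minimal sketch with made-up component data, nothing from the engine in question), an unsafe method can work entirely on stack-allocated values without the GC ever being involved:

struct ScratchPosition { public float X, Y; }   // plain value type, no GC object behind it

static class UnsafeExample
{
    // Build with unsafe code enabled. The scratch buffer lives on the stack,
    // so the GC never allocates, tracks, or collects it.
    public static unsafe void FillScratch(float dt)
    {
        ScratchPosition* scratch = stackalloc ScratchPosition[64];

        for (int i = 0; i < 64; i++)
        {
            scratch[i].X = i * dt;      // placeholder work on the stack-allocated data
            scratch[i].Y = 0.0f;
        }
    }
}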

Unity is driven to put more and more C# hull around their real C++ core. With Unity 2018 and later, using the data-driven asynchronous systems, you finally have access to arrays allocated in C++ code rather than needing to shuttle your C# arrays from one side to the other. They are also rewriting their ECS to work with that system: components are no longer classes, as is proper, but plain data that is processed by systems.
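
In case that data-oriented shape is unfamiliar, here is a rough sketch of the general pattern (this is deliberately not Unity's actual API, just the idea): components are plain structs and a system loops over packed arrays of them:

// Components are plain data with no behavior of their own.
public struct Translation { public float X, Y; }
public struct Velocity { public float X, Y; }

// A system is just code that runs over every matching pair of components.
public static class MoveSystem
{
    public static void Update(Translation[] positions, Velocity[] velocities, float dt)
    {
        for (int i = 0; i < positions.Length; i++)
        {
            positions[i].X += velocities[i].X * dt;
            positions[i].Y += velocities[i].Y * dt;
        }
    }
}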

C++/CLI is not an option here because it is still Windows-only and a pain to write, so instead you could concentrate on just transferring data and let the C++ layer do everything else. You can use unsafe code, as already mentioned, or pinned memory, which never changes its address under the GC and exists for exactly these cases, or simply let the C++ code allocate and manage the memory for you and only pass pointers to C#.

The last option is what I do: I hand-wrote code to bridge my engine and editor together, the engine in C++ and the editor (like all of my tools) in C#. It is similar to the UnityEngine.dll assembly referenced from C# code. This is simple for classes that are managed on the C++ side: you just store an IntPtr in C# that points at the native memory location and provide some p/invoke functions to work with it. The overhead is not noticeable as long as you don't marshal data back and forth.
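
A minimal sketch of that pattern, where the NativeEngine library name and the Sprite_* entry points are invented for illustration: the C++ side exports plain C functions for create/destroy/use, and the C# class just carries the IntPtr:

using System;
using System.Runtime.InteropServices;

// C# wrapper around an object that lives entirely in native memory.
// Assumed matching C exports on the C++ side, e.g.:
//   extern "C" __declspec(dllexport) void* Sprite_Create();
//   extern "C" __declspec(dllexport) void  Sprite_Destroy(void* sprite);
//   extern "C" __declspec(dllexport) void  Sprite_SetPosition(void* sprite, float x, float y);
public sealed class NativeSprite : IDisposable
{
    [DllImport("NativeEngine", CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr Sprite_Create();

    [DllImport("NativeEngine", CallingConvention = CallingConvention.Cdecl)]
    private static extern void Sprite_Destroy(IntPtr sprite);

    [DllImport("NativeEngine", CallingConvention = CallingConvention.Cdecl)]
    private static extern void Sprite_SetPosition(IntPtr sprite, float x, float y);

    private IntPtr handle;   // the only state on the C# side: a pointer to the native object

    public NativeSprite() { handle = Sprite_Create(); }

    public void SetPosition(float x, float y) { Sprite_SetPosition(handle, x, y); }

    public void Dispose()
    {
        if (handle != IntPtr.Zero)
        {
            Sprite_Destroy(handle);
            handle = IntPtr.Zero;
        }
    }
}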

The CLR is quite good at moving native data from one side to the other without any noticeable overhead; it gets more difficult when structs are involved. They have to be marshalled across the boundary, which causes overhead when copying them into C# managed memory and vice versa.

If you still need to work with structs, you could also pin a block of managed memory and use it as a transfer layer between C++ and C#. This works quite well; I use this method to speed up my editor drawing in WinForms. I have a Bitmap object that points into the pinned memory and have GDI render that bitmap on every update instead of calling the costly GDI drawing functions. Each subitem gets a slice of the pinned memory at an offset related to its position, and GDI draws into the same memory that buffers my general rendering. That boosted the editor to 70 fps in WinForms.
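
Roughly, that setup looks like the following (the sizes and pixel format are just assumptions for the sketch): pin a managed byte array once, hand its address to the native renderer, and build a Bitmap over the same memory for GDI to present:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class PinnedBackBuffer
{
    public static void Demo()
    {
        int width = 640, height = 480;
        int stride = width * 4;                                        // 32 bits per pixel
        byte[] pixels = new byte[stride * height];

        // Pin the array so the GC can never move it, then grab its address.
        GCHandle pin = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        IntPtr scan0 = pin.AddrOfPinnedObject();                       // pass this to the C++ renderer via p/invoke

        // This Bitmap reads the very same memory the native side writes into.
        using (var backBuffer = new Bitmap(width, height, stride, PixelFormat.Format32bppArgb, scan0))
        {
            // In a Paint handler: e.Graphics.DrawImageUnscaled(backBuffer, 0, 0);
        }

        pin.Free();                                                    // unpin once the buffer is no longer needed
    }
}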

TL;DR: using a pinned byte array in C# (or at least one per thread, to avoid having to fiddle with locks) is a good way of bypassing the marshalling process. You can pass the IntPtr to native C++ code; it is fast to transfer primitive types (a pointer, for example, just covers a void* field), and your C++ code can work with those pointers, for example to memcpy data into the memory region. On the C# side, as long as you know what data has been written into your managed byte array, it is very easy to use the BitConverter class to extract the primitive fields so they are usable in managed C# code. Maybe you want to add a

public static MyType Clone(byte[] data)

function to those structs, but that is up to you.
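
A hedged sketch of that whole flow, where the Engine_FillTransforms export and the MyType layout are invented for the example: pin the byte array, hand its address to C++, and let BitConverter rebuild the struct on the C# side:

using System;
using System.Runtime.InteropServices;

public struct MyType
{
    public int Id;
    public float X, Y;

    // Rebuild the struct from the raw bytes the native side wrote into the buffer.
    public static MyType Clone(byte[] data)
    {
        return new MyType
        {
            Id = BitConverter.ToInt32(data, 0),
            X = BitConverter.ToSingle(data, 4),
            Y = BitConverter.ToSingle(data, 8),
        };
    }
}

public static class TransferBuffer
{
    // Assumed native export: extern "C" void Engine_FillTransforms(void* buffer, int size);
    [DllImport("NativeEngine", CallingConvention = CallingConvention.Cdecl)]
    private static extern void Engine_FillTransforms(IntPtr buffer, int size);

    public static MyType ReadOne()
    {
        byte[] buffer = new byte[12];                                // one MyType worth of bytes
        GCHandle pin = GCHandle.Alloc(buffer, GCHandleType.Pinned);  // keep the GC from moving it
        try
        {
            Engine_FillTransforms(pin.AddrOfPinnedObject(), buffer.Length);
            return MyType.Clone(buffer);                             // no marshalling attributes involved
        }
        finally
        {
            pin.Free();
        }
    }
}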

Last but not least, a word about p/invoke (even if you already know this, someone else coming across this thread might not): choosing the correct calling convention on the C# side is important, as is building the native library of your engine in release mode, or else you'll see strange behavior in C#; the CLR isn't able to mix native release and debug code at all. It is also recommended to avoid namespaces and to write your abstraction layer in pure C style so you don't fall into the name-mangling trap.

This is something that might interest you. Using mono as a scripting layer. Seems pretty easy and straightforward to set up.

https://www.mono-project.com/docs/advanced/embedding/scripting/

https://www.mono-project.com/docs/advanced/embedding/

I haven't tried it myself but I would if I didn't already have a kick-ass scripting layer in the form of AngelScript.

Embedding Mono is an ugly thing. We worked with Edge.js (a transfer layer between JavaScript and .NET) and had trouble with the embedded Mono runtime, especially when marshalling data, because Mono sometimes marshals data differently from .NET, and pushing too much data through the layers slows the application down.

Just my two cents

Why not write the engine (ECS) in C# and only use C++ for really performance-sensitive stuff like rendering or physics?

Honestly, if you're just looking for the best path for the project, the thing to do would be to simply go pure C# across the board. There's nothing functional that C# will prevent you from doing, and you can consider moving specific things out to C++ if you find that performance isn't what you need it to be. From your description, it doesn't sound like the C++ base is especially large or hard to reconstruct, and it's almost certainly going to be less painful to port it rather than merge the two languages. Been there, done that, would prefer not to return.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Shaarigan said:

[…]

TL;DR: using a pinned byte array in C# (or at least one per thread, to avoid having to fiddle with locks) is a good way of bypassing the marshalling process. You can pass the IntPtr to native C++ code; it is fast to transfer primitive types (a pointer, for example, just covers a void* field), and your C++ code can work with those pointers, for example to memcpy data into the memory region. On the C# side, as long as you know what data has been written into your managed byte array, it is very easy to use the BitConverter class to extract the primitive fields so they are usable in managed C# code. Maybe you want to add a

public static MyType Clone(byte[] data)

[…]

The IntPtr idea sounds a lot like how I had envisioned approaching this (I thought I had mentioned it in the opening post, but apparently not).

Dirk Gregorius said:

Why not write the engine (ECS) in C# and only use C++ for really performance sensitive stuff like rendering or physics.

Perhaps my intent got lost in all the waffling of my post, but that's the essential idea I was going for: a C# ECS and a C++ core engine.

Promit said:

Honestly, if you're just looking for the best path for the project, the thing to do would be to simply go pure C# across the board. There's nothing functional that C# will prevent you from doing, and you can consider moving specific things out to C++ if you find that performance isn't what you need it to be. From your description, it doesn't sound like the C++ base is especially large or hard to reconstruct, and it's almost certainly going to be less painful to port it rather than merge the two languages. Been there, done that, would prefer not to return.

Interesting. I hadn't even considered the idea of doing the whole thing in C#. You're right that the engine isn't particularly large, but I still don't think this would be a good idea for me. First of all, rewriting it seems like more work than just writing a C# layer, and secondly, while I would rather work in C# for the day to day gameplay programming, I don't want to cut C++ out entirely. Keeping the C++ core at least gives me an area of the engine to sharpen my C++ skills.

Additionally, even if it's a bit of work to combine the two layers, doing so would be a useful learning experience.

Having read the replies here and thought about it a bit more, I've decided I'll proceed with the plan to write the C# layer. The right decision? Maybe, maybe not, but this is a hobby project so even if it fails it's not going to cause any studio to go belly up.


