Alternative to Task Manager for a Game Engine Kernel


Hello,

I have a question about task managers and parallel computing. I am reading the book Parallel and Distributed Programming Using C++, and while reading it I came across the idea of just sending functors, which I call IStep objects, though I have not had the chance to explore the idea yet. The idea is to group the IStep objects and request them in parallel from the distributor (aka the master parallel object), which contains the lists. This way the IStep objects can be analyzed, sorted, and then processed by some algorithm or set of algorithms. For example, one could have a graphics group where objects are analyzed for their z-axis position, sorted in z-order if the orientation is correct for your perspective, then culled and dynamically tessellated depending on distance. Then they can be passed to a sequence object that renders the IRenderables. That is a simplification, I hope you understand; one would want to perform these steps on a batch of objects, though.
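To sketch what I mean in code (every name here is illustrative, nothing is a fixed API yet):

  // All names here are illustrative; nothing is a fixed API yet.
  #include <algorithm>
  #include <memory>
  #include <vector>

  struct IStep {
    virtual ~IStep() = default;
    virtual void Analyze() = 0;        // e.g. compute view-space depth
    virtual float SortKey() const = 0; // e.g. z value used for ordering
    virtual void Process() = 0;        // e.g. cull, tessellate, render
  };

  // Analyze, sort, then process one batch of steps in order.
  void RunBatch(std::vector<std::shared_ptr<IStep>>& batch) {
    for (auto& step : batch)
      step->Analyze();
    std::sort(batch.begin(), batch.end(),
              [](const auto& a, const auto& b) { return a->SortKey() < b->SortKey(); });
    for (auto& step : batch)
      step->Process();
  }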

Does anyone have any thoughts on the subject? All you would need to do is signal new frames to perform the batch operations. This is parallel computing, so I know it's an advanced topic.


Sounds interesting! Where will the sorting take place, and how frequently will it need to be done?

I was thinking of a thread function that might look like the following:

  // Thread entry point: initialize a sequencer against the distributor
  // singleton and keep updating until it signals release or an error.
  void* ThreadSequencer(void*) {
    CSequencer _cSequencer;
    sResult _sResult;

    _sResult = _cSequencer.Initialize(&(CDistributor::getSingleton()));
    while (_sResult > SCRUT_RESULT_RELEASE)
      _sResult = _cSequencer.Update();

    return nullptr;
  }

where sResult is a signed integer type; a positive value means success.
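Something like the following, for example (the values are placeholders; the only fixed convention is that positive means keep running):

  // Placeholder values; anything at or below SCRUT_RESULT_RELEASE stops the loop.
  typedef int sResult;
  enum {
    SCRUT_RESULT_ERROR   = -1,
    SCRUT_RESULT_RELEASE =  0,
    SCRUT_RESULT_OK      =  1
  };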

CDistributor is a simple class that holds the IStep objects in classic std::vector arrays, where each list is protected by a mutex. CSequencer is the class that requests IStep objects from the CDistributor class. When you call the Update method, it requests the entire array and stores the result in a member array (an array of IStep object interface pointers). CSequencer does not need to be protected by a mutex because it is only manipulated by the containing thread and is not accessed in parallel. After requesting the array, it has methods called Analyze, Sort, and Process, which are called in that order.
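A stripped-down sketch of that, reusing the IStep interface from my first post:

  #include <memory>
  #include <mutex>
  #include <vector>

  class CDistributor {
  public:
    void Add(std::shared_ptr<IStep> step) {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_steps.push_back(std::move(step));
    }

    // Hand back a snapshot of the list. The copy is made while the lock is
    // held, so the calling CSequencer can analyze/sort/process its own copy
    // without any further locking.
    std::vector<std::shared_ptr<IStep>> RequestAll() {
      std::lock_guard<std::mutex> lock(m_mutex);
      return m_steps;
    }

  private:
    std::mutex m_mutex;
    std::vector<std::shared_ptr<IStep>> m_steps;
  };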

CSequencer is not the real name of the class I am programming for the OpenGL CRenderer I have, of course; it is CGraphicsSequencer, and it inherits the ISequencer interface. Reply if you need any more info. And of course this is all still in the making…

You're building a VM in terms of IStep Objects and things that convert/process such objects?

(In my view, a VM is just an abstraction level for expressing a computation, so you can construct computations at that abstraction level and/or reason about what they do. As such, VMs exist in many forms: CPU instructions, JVM instructions, a collection of interfaces or classes, or even a programming language.)

I would not call it a scripting language, but I started a project here on gamedev.net for a tool set and library to include my language in my projects; my game engine will use a modified scripting engine and high-level programming language. I am compiling my language down to machine code, which is why I would not call it a scripting language. The IStep interface in my engine inherits from IObject, a managed object, and I might make IDataResource inherit the same interface, such as IResource, because I consider the IStep interface an executional resource in my engine.

It would have some abstraction layer, to be honest, so I suppose it can be seen as a virtual machine, and I do see what you are saying in your post. But I would use smart execution of the IStep for script: run a certain amount of machine code, then place the object back on my CSequence list and continue the next time the sequencer reaches that IStep interface object, removing it from the list when it is complete.
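Roughly like this (IResumableStep, StepBudgeted, and the budget parameter are placeholder names for the real mechanism):

  #include <cstddef>
  #include <deque>
  #include <memory>

  struct IResumableStep {
    virtual ~IResumableStep() = default;
    // Run up to `budget` units of work; return true once fully complete.
    virtual bool StepBudgeted(int budget) = 0;
  };

  // One pass over the list: finished steps are dropped, unfinished steps are
  // re-queued and continue where they left off on the next pass.
  void PumpOnce(std::deque<std::shared_ptr<IResumableStep>>& list, int budget) {
    std::size_t count = list.size();
    for (std::size_t i = 0; i < count; ++i) {
      std::shared_ptr<IResumableStep> step = std::move(list.front());
      list.pop_front();
      if (!step->StepBudgeted(budget))
        list.push_back(std::move(step));
    }
  }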

You could lock until the IStep interface is manually removed from the CSequence list; it would then re-run unless I had a setup on the abstraction layer to update the script by calling an Update exported function located in the object file when loaded. I am using Scrut Object Files, my own file format and type, to store all my IObject data; when I say object file (as opposed to Scrut Object File or .sof) I mean a file containing machine code, so it is both an object file and has the file type .sof.

The IStep object interface would then be inherited by CScript and other classes that contain either a functor or hard-coded behavior. I convert my network tokens into objects that inherit the IStep object interface and pass them to the distributor while updating the layers of my engine, since I give each layer in my engine input and output streams. This lets me group my tokens in the layers and keep an array of IStep interface smart pointers (indexing matters here for efficiency), then index the array to get a shared pointer to the IObject interface. The interface itself would not contain the data acted upon by the IStep; that is hard-coded in the derived classes, except for CScript, as noted.

So to answer your question, it is not purely a virtual machine for script; I compare it more to a task manager acting as a dispatcher, with threads that execute the ITasks (aka ISteps) in parallel. So no.

scruthut said:
For example, one could have a graphics group where objects are analyzed for their z-axis position, sorted in z-order if the orientation is correct for your perspective, then culled and dynamically tessellated depending on distance. Then they can be passed to a sequence object that renders the IRenderables. That is a simplification, I hope you understand; one would want to perform these steps on a batch of objects, though.

The IStep interface in my engine inherits from IObject, a managed object, and I might make IDataResource inherit the same interface, such as IResource, because I consider the IStep interface an executional resource in my engine.

For parallel processing, the reason for doing it in parallel is paramount, quite as important as the algorithms you choose; often they are the same core issue.

In parallel processing, data mapping and process communications are usually the primary drivers. There is often a tremendous effort in discovering the limitations, finding the longest spans of sequential work, and minimizing the data dependencies between them. It is about building algorithms that let you actively use all your processors or memory space to solve the problem with a parallel speedup. In your early example you have a situation where sorting plays a significant role in the processing: you are reordering work and basing communication between work units on the viewpoint, saving work by reordering compared to what would be done if kept serial. That doesn't hold for what you wrote in your later posts, where the IStep is just a generic placeholder for a unit of work rather than something that leverages parallel algorithms.

In games, remember that your consumer-facing system generally needs to work just as well with 4 cores as it does with 64. Server-side systems are generally constrained to VMs and low logical CPU counts. We're not building or using supercomputers. As a result, we often build systems that are task based, organized by priority into several sets: the "must-process" tasks that get done first, the "nice to have" tasks like fine-step processing, and the "because we can" tasks like extremely detailed cloth simulation. Those aren't parallel processing algorithms; they're merely tasks that happen to be scheduled in parallel.

In game simulation processing, that is exactly why the concept of a task manager is powerful and common. It is rare to use algorithms that rely on parallel processing for computational benefits, outside of a few specialty cases like hardware-accelerated physics or cook-time mesh refinements. We rarely need tasks like parallel large sparse matrix manipulation, parallel searching of enormous data sets, parallel all-pairs processing, or others that see significant benefit from parallel algorithms. However, it is quite common to build up huge collections of tasks as small bundles of work, where the small independent tasks can be scheduled or ordered in no particular way, with no particular benefit other than being more convenient to schedule and exposing the opportunity for optional processing tasks on hardware that has many cycles to spare.
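A rough sketch of such a priority-bucketed task list (the bucket names mirror the three sets above; everything else is illustrative, and a real scheduler would hand tasks to worker threads under a frame budget):

  #include <functional>
  #include <vector>

  enum class Priority { MustProcess, NiceToHave, BecauseWeCan };

  class TaskBuckets {
  public:
    void Add(Priority p, std::function<void()> task) {
      m_buckets[static_cast<int>(p)].push_back(std::move(task));
    }

    // Drain the buckets in priority order. A real scheduler would dispatch
    // tasks to worker threads and stop early once the frame budget is spent.
    void RunFrame() {
      for (auto& bucket : m_buckets) {
        for (auto& task : bucket)
          task();
        bucket.clear();
      }
    }

  private:
    std::vector<std::function<void()>> m_buckets[3];
  };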

You might be confusing an abstract interface definition (i.e., a VM) with an implementation of that VM (i.e., a program or set of code that provides the service promised by the VM definition).

You are sitting in the chair of an implementor of the VM. That chair comes with a lot of details about how all the internals work, what data should be saved in what format, etc. So of course your mind is filled with all the internal stuff, and that's fine, it should be! (Or you wouldn't be able to realize an implementation.)

A VM definition is however **not** about implementation details. It just describes the things that you can use, together with how different things in the VM relate to each other such that you can get a particular effect.

As a more concrete example of that difference, take the language definition of any programming language. That definition is a VM, describing what classes/statements/interfaces/modules/etc. exist to perform a computation. None of those definitions explicitly states how an implementation actually realizes its functionality. For example, the C programming language book doesn't say C **must** be compiled. It also doesn't tell you how to build a C compiler. That book is a pure interface description (with a strong link to what will happen at CPU level if you perform an operation with certain C types). It tells you that "int + uint" gives a "uint", but not how to perform that "+" in a computer.
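To make that concrete, here is the rule in action; nothing VM-specific, just the usual arithmetic conversions:

  #include <cstdio>

  int main() {
    int i = -1;
    unsigned int u = 1u;
    // i is converted to unsigned: -1 becomes UINT_MAX, so the comparison is
    // false and the addition wraps around to 0.
    std::printf("%d\n", i < u); // prints 0 (false)
    std::printf("%u\n", i + u); // prints 0 (UINT_MAX + 1 wraps)
    return 0;
  }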

In my experience it helps if you split your interfaces/classes into the "VM level" (that is, the primitives that an arbitrary user of your VM should know and use) and "all the stuff below it" that you need to realize an implementation. The latter may include file formats as well (i.e., the C programming book doesn't tell you about ".obj" files, while all C compilers that I know of do produce them).

It also helps if you write a manual for the VM (a lot of work, though). Basically, all of this is about fixing the public interface, to reduce the danger of interface creep once you start digging into the details of the implementation. It also raises important decisions, e.g., your "task manager": is that public or internal? Is "dispatcher" a better name for it if it is public? (My 2 cents here: avoid the word "manager" and try to find a more specific name for what an object does, to reduce the temptation of sticking unrelated functionality under the umbrella of the too-wide coverage suggested by the name.)

