Juliean said:
I cannot really agree that this is a unanimously good use - sorry to say. But it does sound like it's a list, at least :D It does have exactly the same issue that normal lists have, namely, that iteration-overhead is going to be pretty bad. Now, it will outperform O(n^2) brute-force solutions at a reasonable number of elements, which I'm just going to assume you have. Though I'm still not convinced that it's the best possible solution.
I didn't say it's an optimal solution, just a trivial example of putting a list to good use.
But I wouldn't say iteration overhead is an issue. The complexity of iteration is minimal: following a pointer to the next object is O(1), just as much as increasing your index when iterating an array sequentially is O(1).
The random memory access isn't an issue either, because in practice you will only iterate a small number of objects.
If you go brute force, testing each object against every other in O(n^2), random access would of course be an issue. But that argument no longer holds once we have an acceleration structure to prevent large iteration counts.
So you have to agree the example demonstrates good use. You have to, because I did not specify how large N is, so big O notation applies.
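To make the example concrete, here is a minimal sketch of the kind of list use I mean: a uniform grid where each cell holds an intrusive singly linked list, and a proximity query only walks the lists of nearby cells instead of testing all N objects. All names, the grid size, and the cell size are my own hypothetical choices for illustration.

```cpp
#include <cassert>
#include <cmath>

// Each object carries its own `next` link (intrusive list),
// so insertion into a cell is O(1) and allocation-free.
struct Object {
    float x, y;
    Object* next = nullptr;  // intrusive list link
};

struct Grid {
    static constexpr int W = 8, H = 8;   // hypothetical grid size
    static constexpr float cell = 1.0f;  // hypothetical cell size
    Object* heads[W * H] = {};           // one list head per cell

    int cellIndex(float x, float y) const {
        int cx = (int)std::floor(x / cell);
        int cy = (int)std::floor(y / cell);
        return cy * W + cx;
    }
    void insert(Object* o) {
        Object*& head = heads[cellIndex(o->x, o->y)];
        o->next = head;  // push front: O(1)
        head = o;
    }
    // Count objects within radius r of (x, y). Only the 3x3 block of
    // nearby cells is iterated - this is the acceleration structure
    // that prevents the O(n^2) all-pairs test.
    int countNear(float x, float y, float r) const {
        int n = 0;
        int cx = (int)std::floor(x / cell), cy = (int)std::floor(y / cell);
        for (int gy = cy - 1; gy <= cy + 1; ++gy)
            for (int gx = cx - 1; gx <= cx + 1; ++gx) {
                if (gx < 0 || gy < 0 || gx >= W || gy >= H) continue;
                for (Object* o = heads[gy * W + gx]; o; o = o->next) {
                    float dx = o->x - x, dy = o->y - y;
                    if (dx * dx + dy * dy <= r * r) ++n;
                }
            }
        return n;
    }
};
```

Following the `next` pointers is the O(1)-per-step iteration discussed above; the per-cell lists just keep the number of such steps small.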
In the questions and discussions brought up here, N mostly isn't specified. In those cases the general advice should point towards a solution with better time complexity, not a solution of maximized simplicity, imo.
To propose a brute force solution, we have to build up additional context specifying how large N is. Which you and others do, but often only after a guy like me complains.
So I observe a culture of brute force mentality in game development. And it gets worse now that most functions requiring some serious engineering are abstracted away in U engines or libraries.
This goes so far that game developers degrade into content creators. And I don't think this does our medium a favor, where pushing tech has always been a dominating factor and a road to success.
If we lose this factor of tech progress, we will see stagnation in games, including game design. One game will be like the other, and we will see the same downfall that happened to the music industry: out of ideas, selling out to streaming services, game over.
We depend on this tech progress. We must keep it up, no matter the cost. But we can't move the cost to our customers by letting them pay for unaffordable HW. It won't work.
So we have to move the progress from the HW side to the SW side. It's the only way I personally see.
You see, my motivation to bring those things up again and again has nothing to do with programming topics. I'm trying to discuss the growing disappointment with games, and what we could do about it.
But for reasons I do not really understand, rarely does anybody respond. It feels like the industry is ignoring the issue. So I become more aggressive and insist, but well - it doesn't help. Nobody is willing to talk about it.
Watching game awards but ignoring the iceberg in front…
However, please notice I do not criticize your opinion or work. You are making a 2D RPG which does not need pushing tech. Brute force collision detection is fast enough; you do not even need multithreading.
That's fine. I'm not disappointed because you don't offer photorealism, massive destruction, insane detail, etc. I don't think every game should have this, just some of them.
Impressive tech is not what makes a good game. I only think we need this constant progress so the medium doesn't get stuck in the long run.
But currently, in a time of global crisis, keeping min specs low should be the priority, I think. Your 2D RPG feels right; Portal RTX feels wrong.
Juliean said:
I really don't find separate std::vectors per cell to be worse than using lists.
That's interesting. I've tried std::list just once, saw it was very slow, and never considered it again.
But vector<vector<>> is even worse. Much worse. It's just hard to avoid in some cases, so I have to use it sometimes for my tools. But it's a real performance killer. Often I got better results using some map instead.
Juliean said:
the fact that you rebuild the entire tree each frame screams to me that you would be able to compact the “list” or “vectors” from each node into one linear area of memory
Yeah, that's heading towards an ‘optimal solution’, as you said before. But that's much too complicated for a minimal example about using a list efficiently.
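For what it's worth, the compaction idea can be sketched in a few lines: since the tree is rebuilt every frame anyway, all per-cell items can be gathered into one flat array plus a per-cell offset table (a counting-sort / CSR-style layout). This is my own illustration, not code from either of our projects.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// One linear area of memory instead of one std::vector per cell:
// `items` holds all items grouped by cell, `offsets` marks the
// range [offsets[c], offsets[c+1]) belonging to cell c.
struct FlatGrid {
    std::vector<int> items;
    std::vector<int> offsets;

    // Rebuild from (cell, item) pairs each frame.
    void build(int numCells, const std::vector<std::pair<int, int>>& pairs) {
        offsets.assign(numCells + 1, 0);
        for (auto& p : pairs) ++offsets[p.first + 1];          // count per cell
        for (int c = 0; c < numCells; ++c)
            offsets[c + 1] += offsets[c];                      // prefix sum
        items.resize(pairs.size());
        std::vector<int> cursor(offsets.begin(), offsets.end() - 1);
        for (auto& p : pairs) items[cursor[p.first]++] = p.second;  // scatter
    }
};
```

Iterating cell `c` is then a tight loop from `offsets[c]` to `offsets[c+1]` over contiguous memory, with no pointer chasing at all.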
Btw, the real source of complexity here is multithreading. I have such an efficient sparse grid for my fluid simulator, for example. Updating the grid so that most processing remains lock free becomes the major challenge.
That's also the reason I cannot create reusable code for such things. I have building blocks such as pools for sparse blocks, traversal of the sparse grid, etc. (I even used inheritance for those things, and templates! \:D/ You would like it.)
But it's just a little bit of code. Trivial stuff, not really reducing the work if I reuse the sparse grid for other things in the future.
Updating the grid is currently too specific to the fluid simulator to turn it into general functionality. It depends on assumptions, e.g. that many particles are in a cell, and that only a small number of them move to another sparse block in one step.
Maybe, if I could live to be 200 years old, I would become good enough to turn such things into a good, reusable code base. Or if I spent years on just such specific problems.
But currently I'm faster doing things from scratch, specific to current needs. I feel incompetent, but it works. : )
Coming back to the grid, it's interesting that the final reordering of particles in memory, so they are sequentially ordered per grid cell, takes the same time as all the complex stuff (updating the octree for the sparse blocks, binning particles to blocks and cells, etc.). The reordering is just a simple copy operation per particle. It's bandwidth limited, thus multithreading barely helps to make it faster.
The interesting observation here is this: for a real-time fluid sim, you would not use a sparse grid. You would use a simple regular grid, causing the restriction of ‘Yes, you can do real-time 3D fluid sim now! But only in a small box, so you need to design around this limitation.’ That's the current state of the art. A regular grid avoids all the complexity and cost of a sparse grid.
But the reordering is needed no matter what; otherwise your performance will drastically decrease over time.
Thus, if the sparse update costs only as much as the reordering step, we could actually consider using it for real time too, lifting the box limitation and enabling large-scale fluid simulations.
So that's an area where I expect some progress in the next years. Which would then be an example where replacing brute force with complexity gives us new things.
Juliean said:
So if there is any way to keep the same time complexity, even if amortized, but use contiguous memory, it is very likely to be faster
That's what I meant when talking about ‘tuning for granularity’ (lacking a better word).
Basically, to beat brute force with complex solutions in real time, we want to generate small batches of brute force work at the lowest level. So we still have advantages like contiguous memory, low divergence, etc., but we also manage to reduce the work.
That's a general pattern I see. Better time complexity is still the highest goal, but we need to respect how modern HW works at the low level. HW wants brute force.
So we seek the best compromise. It works, but of course N still needs to be large enough to be worth it.
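A familiar instance of this pattern from outside collision detection: hybrid sorts keep the O(n log n) divide at the top, but switch to a brute-force O(k^2) insertion sort for small sub-ranges, where the simple loop over a contiguous batch wins on modern HW. A sketch under my own assumptions (the cutoff value is arbitrary and would be tuned per platform):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

constexpr int kCutoff = 16;  // hypothetical; tune per platform

// Brute force for small contiguous batches: O(k^2), but cache-friendly
// and branch-predictable.
void insertionSort(std::vector<int>& a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; ++i) {
        int v = a[i], j = i - 1;
        while (j >= lo && a[j] > v) { a[j + 1] = a[j]; --j; }
        a[j + 1] = v;
    }
}

// Better time complexity at the top: quicksort partitioning generates
// ever-smaller batches, which are finished off with brute force.
void hybridSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo + 1 <= kCutoff) { insertionSort(a, lo, hi); return; }
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo, j = hi;
    while (i <= j) {  // Hoare-style partition
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) std::swap(a[i++], a[j--]);
    }
    hybridSort(a, lo, j);
    hybridSort(a, i, hi);
}
```

The same compromise shows up in the grid example: the structure reduces the candidate set, and the per-cell work is then a small brute-force batch over contiguous data.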
Juliean said:
Nanite, from my limited understanding, is a very complex system that does achieve some impressive things, and might not be possible without the features that you describe. But it's also probably more than the sum of its parts, and not a reason to now use trees and stuff everywhere when you don't need to
But you don't know that you might need Nanite before you have it. Only after you take the risk of investing serious effort into solving an open problem, and it actually works after a path of failures, only then do you see what it gives you.
So what I miss from the industry is the will to invest in risky research.
Maybe it's indeed a good thing Epic became such a tech behemoth, so they can make such investments. Not sure.
Regarding trees, that's just a main subject for me personally. There are surely other promising things worth exploring. But trees rock, and I love them. So you can't talk down precious trees without expecting my intervention. :D
Juliean said:
I do find many things about the larger gaming landscape disturbing too, like how a shitpile like Unity got so much traction, and how horribly (and IMHO unnecessarily) bad, performance-wise, Blueprints are in an engine I otherwise enjoy (Unreal). But I don't see things as pessimistic, I guess.
Ahhh… thanks. : ) I feel better now.