2D acceleration on modern cards

5 comments, last by Digitalfragment 12 years, 6 months ago
I'm wondering if anybody knows whether or not modern consumer GPUs on PC (ATI, Nvidia, Intel) still have dedicated hardware for 2D rendering operations, or if the drivers repurpose the 3D hardware for 2D operations now? It seems like it would be easy enough to emulate most 2D blitting operations with screen aligned 3D primitives.
3D operations end up being 2D operations by the time pixels are being processed. Yes, the common way of performing 2D processing is with flat "3D" primitives.
Considering that 2D is really just 3D with everything at the same Z, I don't see any reason why dedicated hardware would even be required.
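Roughly, it ends up looking something like this. Just a made-up C++ sketch (not any particular engine or API) of a "blit" expressed as two screen-aligned triangles at a constant depth, with pixel coordinates mapped to clip space by a simple orthographic transform:

// Hypothetical sketch: a 2D "blit" fed to the 3D pipeline as two flat triangles.
// Names are made up for illustration; a real driver/engine would differ in detail.
#include <array>

struct Vertex2D {
    float x, y, z;   // z is the same for every vertex -- no perspective needed
    float u, v;      // texture coordinates for sampling the source image
};

// Build two triangles covering the destination rectangle, already in clip space.
std::array<Vertex2D, 6> MakeBlitQuad(float dstX, float dstY, float dstW, float dstH,
                                     float screenW, float screenH)
{
    // Orthographic mapping: pixel (0,0)..(screenW,screenH) -> clip (-1,1)..(1,-1)
    auto toClipX = [&](float px) { return  px / screenW * 2.0f - 1.0f; };
    auto toClipY = [&](float py) { return -(py / screenH * 2.0f - 1.0f); };

    const float x0 = toClipX(dstX),        y0 = toClipY(dstY);
    const float x1 = toClipX(dstX + dstW), y1 = toClipY(dstY + dstH);
    const float z  = 0.0f; // flat: every vertex at the same depth

    return {{
        { x0, y0, z, 0.0f, 0.0f }, { x1, y0, z, 1.0f, 0.0f }, { x0, y1, z, 0.0f, 1.0f },
        { x0, y1, z, 0.0f, 1.0f }, { x1, y0, z, 1.0f, 0.0f }, { x1, y1, z, 1.0f, 1.0f },
    }};
}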

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

There used to be dedicated 2D hardware once upon a time, when 3D graphics accelerators were out of the question and blitting with a reduced CPU and bus workload was a godsend. Around the mid-1990s, 3D acceleration transitioned from truly bleeding edge (e.g. the early Voodoo add-on cards that worked alongside a regular graphics card) to a barely more expensive alternative to a less capable, 2D-only mainstream card.

Omae Wa Mou Shindeiru

I don't know if this is still relevant in 2011, but out of interest -- at work, there's one particular GPU from ~5 years ago that I've got complete control over (as in, I can manually write words to its instruction stream and control its program counter, instead of using an API like D3D).
On this GPU, there are still commands to switch it between 3D mode (which allows it to act like a GPU) and 2D mode (which allows it to perform raw memory copying operations in VRAM).
However, it also has a very deep pipeline, and switching modes causes a pipeline flush (which is almost the worst thing you can do to harm performance). So, in our new engine, we avoid using the 2D mode *at all* and perform any "2D" type operations in 3D mode.
Thanks guys. That's one suspicion verified :).

As a further question, what about acceleration of primitives that don't have any easy 3D analogue? Blits and lines are pretty easy to do (as you say, rasterize in "3D" hardware without any perspective correction), but I'm curious about more complex 2D primitives such as filled multi-point polys and arcs. I think (although I'm not positive) that older cards had real 2D hardware dedicated to raster operations such as arcs and complex polygons that the GDI could use. I can see how these things could be emulated using line and triangle strips with a bit of CPU preprocessing in the graphics driver, but is that how it's actually done?

Hodgman, out of curiosity what chip is it?

Quote: As a further question, what about acceleration of primitives that don't have any easy 3D analogue? Blits and lines are pretty easy to do (as you say, rasterize in "3D" hardware without any perspective correction), but I'm curious about more complex 2D primitives such as filled multi-point polys and arcs. I think (although I'm not positive) that older cards had real 2D hardware dedicated to raster operations such as arcs and complex polygons that the GDI could use. I can see how these things could be emulated using line and triangle strips with a bit of CPU preprocessing in the graphics driver, but is that how it's actually done?

3D video cards only rasterize triangles (some /may/ have dedicated quad rendering paths, but most will still split these into two triangles). Concave polygons are required to be pre-tessellated, and from what I've read, that functionality is no longer hardware accelerated. Though you could quite easily do so via compute shaders.
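To illustrate the pre-tessellation idea, here's a rough, hypothetical C++ sketch of how an arc might be flattened on the CPU into a triangle fan before being handed to the 3D pipeline (the names and fixed segment count are made up; a real implementation would pick the segment count adaptively from the radius and a tolerance):

// Sketch of CPU-side pre-tessellation: a filled arc approximated as a triangle fan.
#include <cmath>
#include <vector>

struct Point { float x, y; };

// Returns vertices for a triangle fan: centre first, then points along the arc.
// startAngle/endAngle are in radians; segments controls the smoothness.
std::vector<Point> TessellateFilledArc(Point centre, float radius,
                                       float startAngle, float endAngle,
                                       int segments)
{
    std::vector<Point> fan;
    fan.reserve(segments + 2);
    fan.push_back(centre); // fan centre

    for (int i = 0; i <= segments; ++i) {
        const float t = startAngle + (endAngle - startAngle) * i / segments;
        fan.push_back({ centre.x + radius * std::cos(t),
                        centre.y + radius * std::sin(t) });
    }
    return fan; // draw as a triangle fan (or expand to a triangle list), flat Z as above
}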


Quote: Hodgman, out of curiosity what chip is it?

Vendor NDA will prevent him from answering that, I'm pretty sure. The hardware in question is both a blessing and a curse to work with, though.

