Accessing video memory in C++

Quote:Original post by Phynix
And the reason I wanted to stay away from D3D is because it would be redundant. I don't find any point in creating D3D from D3D (creating a graphics library that has the same or less functionality than the thing it was based upon). Direct3D is already a 3D library, and it doesn't make sense to make a 3D library out of a 3D library.
Which is why you should use PixelToaster rather than mucking about with GDI. PixelToaster will abstract away all the little details that don't matter to your 3D engine, and it will probably perform better than GDI into the bargain [smile]
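For reference, getting a window and a flat pixel buffer up with PixelToaster looks roughly like this. This is a minimal sketch in the style of the library's bundled examples; the exact type names (Display, Pixel) may differ slightly between versions:

```cpp
#include "PixelToaster.h"
#include <vector>

int main()
{
    const int width  = 320;
    const int height = 240;

    // Open a window/display of the requested size.
    PixelToaster::Display display("Software Renderer", width, height);

    // One floating-point RGBA pixel per screen location.
    std::vector<PixelToaster::Pixel> pixels(width * height);

    while (display.open())
    {
        // Your rasteriser writes straight into this flat buffer.
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                PixelToaster::Pixel& p = pixels[y * width + x];
                p.r = x / float(width);
                p.g = y / float(height);
                p.b = 0.25f;
            }
        }

        display.update(pixels);   // copy the buffer to the screen
    }
}
```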

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

OK, I'll use PixelToaster then. It seems like the best option around, even if it does use D3D.
Why not simply use SDL?
My iOS 3D action/hacking game: http://itunes.apple....73873?ls=1&mt=8
Blog
Quote:Original post by EngineCoder
Why not simply use SDL?


If you don't know any of them, and you're only looking to fill some surface with pixels, then PixelToaster is probably the easier and more appropriate choice.
So you're going to use D3D, via PixelToaster, to write a D3D-like API/library? Why?

Quote:is it possible to draw individual pixels in the WinAPI, without having to get the DirectX sdk, wasting a bunch of RAM, etc?


Your plan is hardly going to improve your memory usage. I don't think anyone really understands why you want to do this. If it's for your own understanding, then why are you so averse to using D3D? If it's to try to create a usable 3D API then it's hardly worth bothering, as you're not going to be able to better D3D or OpenGL, which have access to the graphics hardware through the graphics drivers.

Basically you are going to have to use D3D or OpenGL at some level to get your graphics data to the graphics card's display memory, even if you do all the rendering with the CPU.

EDIT: If you really want to write your own software renderer, then use D3D with a lockable backbuffer, and just change the backbuffer pixels in memory (perhaps using a simple #define macro). It's not a very good way to draw 3D graphics, but then ignoring the graphics card's capabilities isn't a very good way to draw 3D graphics either. That way, you'll only be using D3D to get a pointer to the video memory. You can't really get much further from "using" it than that.
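As a rough sketch of what that looks like (assuming a D3D9 device whose present parameters were created with Flags = D3DPRESENTFLAG_LOCKABLE_BACKBUFFER; the PUTPIXEL macro and the function name here are just illustrative):

```cpp
#include <d3d9.h>

// Simple plot macro of the kind mentioned above.
#define PUTPIXEL(bits, pitch, x, y, c) \
    (((DWORD*)((BYTE*)(bits) + (y) * (pitch)))[(x)] = (c))

void presentSoftwareFrame(IDirect3DDevice9* device, int width, int height)
{
    IDirect3DSurface9* backBuffer = NULL;
    if (FAILED(device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer)))
        return;

    D3DLOCKED_RECT locked;
    if (SUCCEEDED(backBuffer->LockRect(&locked, NULL, 0)))
    {
        // The CPU-side renderer writes X8R8G8B8 pixels straight into the backbuffer.
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                PUTPIXEL(locked.pBits, locked.Pitch, x, y,
                         D3DCOLOR_XRGB(x & 0xFF, y & 0xFF, 64));

        backBuffer->UnlockRect();
    }

    backBuffer->Release();
    device->Present(NULL, NULL, NULL, NULL);
}
```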
Quote:Original post by EngineCoder
Why not simply use SDL?
Because PixelToaster is optimised for software rasterisers/ray-tracers, while SDL is designed for sprite blitting. The performance difference may be negligible, but PixelToaster is a lot simpler to use (you don't have to worry about surface formats, etc.).
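For comparison, here's roughly what the same "fill the screen with pixels" loop looks like in SDL 1.2, where you deal with locking, pitch and the surface's pixel format yourself. A sketch only; it assumes the requested 32-bit mode is honoured:

```cpp
#include <SDL.h>

int main(int, char**)
{
    const int width = 320, height = 240;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen = SDL_SetVideoMode(width, height, 32, SDL_SWSURFACE);

    // With SDL you handle locking and pitch yourself.
    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);

    for (int y = 0; y < height; ++y)
    {
        Uint32* row = (Uint32*)((Uint8*)screen->pixels + y * screen->pitch);
        for (int x = 0; x < width; ++x)
            row[x] = SDL_MapRGB(screen->format, x % 256, y % 256, 64);
    }

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);

    SDL_Flip(screen);   // present the frame
    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}
```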

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Or better yet, look at the code in SDL and PixelToaster (if it's open) and aim for as low a level as possible.
Quote:Original post by stonemetal
Or better yet, look at the code in SDL and PixelToaster (if it's open) and aim for as low a level as possible.


Why would that be better?
Quote:Original post by stonemetal
Or better yet, look at the code in SDL and PixelToaster (if it's open) and aim for as low a level as possible.
Not sure why you'd want to do that. If the goal is a software rasterizer (which is what it seems to be) then doing all the "low-level" stuff would just be a waste of time. You're much better off using something like PixelToaster to handle that for you and just work with the flat buffer it gives you. Then you can concentrate on all the "interesting" stuff!
One downside to PixelToaster is that it only supports 32bit ARGB and 128bit (4xfloat) ARGB color formats. Granted 32bit is pretty standard, even for software renderers, but going to 16bit color can essentially double your fill-rate for free.

If you want to support 16bit (or even 8bit CLUT) color formats, you'll need to go through GDI/Win32.
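Something along these lines, for instance: a sketch of a 16bpp (5-6-5) DIB section blitted to a window with GDI. The function name is made up, and a real renderer would create the DIB section once and reuse it rather than rebuilding it every frame:

```cpp
#include <windows.h>

void presentFrame565(HWND hwnd, int width, int height)
{
    // BITMAPINFOHEADER followed by three DWORD colour masks for BI_BITFIELDS.
    struct { BITMAPINFOHEADER bmiHeader; DWORD masks[3]; } bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;          // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 16;
    bmi.bmiHeader.biCompression = BI_BITFIELDS;
    bmi.masks[0] = 0xF800;                          // red   (5 bits)
    bmi.masks[1] = 0x07E0;                          // green (6 bits)
    bmi.masks[2] = 0x001F;                          // blue  (5 bits)

    void* bits = NULL;
    HDC windowDC = GetDC(hwnd);
    HDC memDC    = CreateCompatibleDC(windowDC);
    HBITMAP dib  = CreateDIBSection(memDC, (BITMAPINFO*)&bmi, DIB_RGB_COLORS,
                                    &bits, NULL, 0);
    HGDIOBJ old  = SelectObject(memDC, dib);

    // DIB rows are padded to 4-byte boundaries; stride here is in 16-bit pixels.
    const int stride = ((width * 16 + 31) / 32) * 2;

    unsigned short* pixels = (unsigned short*)bits;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            pixels[y * stride + x] =
                (unsigned short)(((x & 0x1F) << 11) | ((y & 0x3F) << 5));

    // Blit the finished software-rendered frame to the window.
    BitBlt(windowDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, old);
    DeleteObject(dib);
    DeleteDC(memDC);
    ReleaseDC(hwnd, windowDC);
}
```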

throw table_exception("(╯°□°)╯︵ ┻━┻");

