Encapsulating an API, and using a team of programmers
Ok, here's the gist of my problem. I'm working on a game, and would like to be able to switch to the newest API quickly and easily. To do this, my friend suggested I use a DLL to encapsulate the API. I'm not 100% clear on what he meant by that, but I have an idea. What I was thinking I'd do is create a DLL which has many defines inside of it. It would basically rename every DirectX feature to a name that I could use inside my engine; then in future versions I could change those defines to the new DirectX function names, and add functions where necessary. The thing is that I'm not sure whether this will work for OpenGL, or even for future versions of DirectX. Also, is it even possible to attach a whole API to a DLL and allow it to draw things to the screen?
Another thing that concerns me is having a team of programmers working on this project. Right now I'm thinking that I could just create DLLs for such things as the AI and networking... But I have heard that other programmers (specifically John Carmack) get other people to handle the gameplay. This is something I simply can't comprehend: how would someone else program the gameplay?
Thanks, and please feel free to make any comments and/or suggestions (I could really use them at this point in time)
You shouldn't "encapsulate an API" because of the vast differences between different APIs. There are different ways to perform similar tasks in DX and GL, and simple defines won't help. You can't provide a DX interface to GL or a GL interface to DX, because such an interface would be ridiculously ugly and slow. Similarly, you can't provide a common interface to both, because they do the same things in different ways.
Think in higher-level terms. Instead of wrapping a "triangle" or a "vertex", create a "model" class. It should provide methods for loading and displaying models, but its whole implementation will be in renderer DLLs and will most likely be highly specific to a particular API. The same thing with particle systems, billboards, and so on.
Your game engine should think in objects like "fog", "fire", "character". It should ask the renderer to draw these entities, and the renderer's job is to translate generic high-level requests into API-specific low-level commands.
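A minimal sketch of that split in C++ (all names here are illustrative, not from any real engine): the engine talks only to an abstract Renderer, and each API gets its own concrete implementation where every API-specific call lives.

```cpp
#include <memory>
#include <string>

// Abstract interface the engine sees: high-level requests only.
struct Renderer {
    virtual ~Renderer() = default;
    virtual std::string Name() const = 0;
    virtual void DrawModel(const std::string& model) = 0;
    virtual void ApplyFog(float density) = 0;
};

// One concrete renderer per API; all API-specific calls live here.
struct D3DRenderer : Renderer {
    std::string Name() const override { return "D3D"; }
    void DrawModel(const std::string&) override { /* D3D-specific calls */ }
    void ApplyFog(float) override { /* e.g. one SetRenderState-style call */ }
};

struct GLRenderer : Renderer {
    std::string Name() const override { return "OpenGL"; }
    void DrawModel(const std::string&) override { /* GL-specific calls */ }
    void ApplyFog(float) override { /* e.g. several glFog*-style calls */ }
};

// The engine only ever holds a Renderer pointer; swapping APIs
// touches none of the engine code.
std::unique_ptr<Renderer> MakeRenderer(bool useD3D) {
    if (useD3D) return std::make_unique<D3DRenderer>();
    return std::make_unique<GLRenderer>();
}
```

The key point is that the interface names things the game cares about (models, fog), not API primitives (vertex buffers, render states).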
DLLs aren't always the right answer. Quake3, for example, has one EXE file and no DLLs. You don't have to have different people create their own small DLLs; instead, you can use stub or proxy classes to stand in for missing features or drivers.
I'm not sure what you mean by programming gameplay.
But how exactly would you program the "model" classes, etc.? And wouldn't this approach create a massive number of DLLs?
Also, why do you say it'd be slow and ugly?
quote:Original post by hello2k1
But how exactly would you program the "model" classes, etc?
Well, think about what you need your model class to do. In the simplest model class, you need methods to load models from file and render them. For keyframe animation, for instance, you need to tell the model class to draw frame X. For the fog class, you specify fog parameters, such as density and color, and ask the fog to apply itself. For the particle systems, you specify the position and characteristics of the emitter and tell the particle system to "update" itself each frame. The game engine doesn't know, and doesn't care, how exactly these actions are performed. This will allow the renderer to perform them in any way it wants.
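As a rough sketch of what such a model class's public face might look like (the class and method names are hypothetical, and the drawing itself is elided since it would be API-specific):

```cpp
#include <string>

// Hypothetical keyframe model class; only the interface matters here.
class Model {
public:
    // A real Load would parse a model file; here we just record the
    // frame count so the interface can be exercised.
    bool Load(const std::string& /*filename*/, int frameCount) {
        frames_ = frameCount;
        return frames_ > 0;
    }
    int FrameCount() const { return frames_; }
    // The engine asks for "frame X"; how it gets drawn is the
    // renderer implementation's business, not the engine's.
    bool RenderFrame(int frame) const {
        if (frame < 0 || frame >= frames_) return false; // invalid frame
        // ... API-specific drawing would happen behind this call ...
        return true;
    }
private:
    int frames_ = 0;
};
```

The engine calls `Load` and `RenderFrame(x)` and nothing else; a D3D build and a GL build can implement the body however they like.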
For D3D, you might be using the ID3DXMesh class for model manipulation, while in GL you'll most likely end up implementing loading and rendering everything from scratch. Particle systems in D3D might be implemented using point sprites, which don't exist in GL, and in GL you might use quads, which can't be rendered in D3D.
You might want to read some stuff on object-oriented design.
quote:
and wouldn't this approach create a massive number of DLLs?
Stop thinking "can I use a DLL for this class?" DLLs are not for everyone. The most important reason for DLLs is that they allow code to be reused among different applications. For example, the CRT code is in msvcrt.dll, and all applications that use some of it can link to that DLL instead of including that code in their own EXEs. This results in smaller executable sizes, a smaller memory footprint, and faster program execution (less paging, for example). You might also want to create a DLL if you're giving someone your technology but don't want to share the source code. But it's pretty much pointless to write a DLL that stores a class that will be used by only one EXE file -- yours. In fact, if you're writing a game and all the code is written by you, you shouldn't make DLLs at all.
quote:
Also, why do you say it'd be slow and ugly?
To set up a light, for instance, you need one D3D call but several GL calls. You can't provide a GL-style interface to setting up lights in your engine, because it won't work with DX. At the same time, D3D doesn't support quads, so if you want to provide a common low-level interface to D3D and GL, you can't use quads anywhere -- yet they are a very good fit for a particle system implementation in GL. D3D allows you to specify flags for vertex buffers to make them static or dynamic, and to place them in system or video memory; in GL, you must use the VAR/VAO extensions to put your geometry in video memory. In D3D, you replace the current transforms with new ones; in GL, you combine them. These and other differences will make the common interface, if you implement it, look half-D3D and half-GL. You won't be able to use the features that are present in both APIs but implemented differently. So, you'll end up with a slow and ugly engine.
First of all, DirectX does have quads.. just make 4 verts and use a triangle strip
Anyways.. do you think I should even be using OpenGL? The only reason I thought of OpenGL was to get it to run on a PS2. Also, not all the code will be written by me. I want to have the GUI, the AI, and the network code all in different libs (trying to think of more though, since that leaves me with doing the gameplay, the gfx, and the network code). That reminds me.. how would I go about making the GUI in a separate lib? Basically what I want the GUI lib to do is to manage and draw the windows.
I would also like to hear any other suggestions you have, and how many large scale (20k lines+) projects you've worked on.
quote:Original post by hello2k1
First of all, DirectX does have quads.. just make 4 verts and use a triangle strip
Ack! I didn't even suspect this was possible!
quote:
do you think I should even be using OpenGL?
It depends on what you want, obviously.
quote:
The only reason I thought of OpenGL was to get it to run on a PS2.
I can't comment on PS2, mostly because the only console I owned was a Dendy (that was before Nintendo put their consoles on the market). So, I'll just say this: if you have little or no knowledge of PS2 and/or GL, and are trying to learn D3D, then you might concentrate on just D3D. Since D3D has no extensions, and the latest features are built into the API, to me it's easier to use than GL. D3D has very good documentation, so you might want to stick with it and forget about GL for a while. Now, if you do have D3D experience and are creating a game that specifically should run on non-Windows platforms, then it's a whole different story.
quote:
Also, not all the code will be written by me. I want to have the GUI, the AI, and the network code all in different libs (trying to think of more though, since that leaves me with doing the gameplay, the gfx, and the network code). That reminds me.. how would I go about making the GUI in a separate lib? Basically what I want the GUI lib to do is to manage and draw the windows.
Suppose you are implementing feature X and someone else is implementing feature Y. Now regardless of whether the other programmer is writing a DLL or his code is linked with your code into a single exe, you can't run your app without feature Y. You can write a small stub for feature Y that does nothing but allows you to build your program. However, it doesn't matter whether that stub is in a DLL or not because it doesn't do anything anyway and you write it. So, just because several people work on the same project doesn't mean you should use DLLs. Think in objects and classes, but not in libraries.
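A stub of that kind can be tiny. Here is one possible sketch (the `Network` interface and its methods are invented for illustration): the stub accepts every call, does nothing interesting, and lets the rest of the team build and run before the real implementation exists.

```cpp
#include <string>
#include <vector>

// The interface a teammate will eventually implement (hypothetical names).
struct Network {
    virtual ~Network() = default;
    virtual bool Send(const std::string& msg) = 0;
};

// A stub that lets you build and run before the real code exists.
// It accepts every call and pretends it succeeded; logging the calls
// makes it easy to see what the engine asked for.
struct StubNetwork : Network {
    std::vector<std::string> log; // record of every Send() call
    bool Send(const std::string& msg) override {
        log.push_back(msg);
        return true; // pretend it worked
    }
};
```

When the real networking code arrives, it implements the same `Network` interface and the stub is simply swapped out; whether either lives in a DLL is irrelevant to the design.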
quote:
I would also like to hear any other suggestions you have
Well, the rest of this post is my personal experience. I originally started with DX. I had this nice idea for my programs: have a "core" library (DLL) to contain the common code, have a driver program to handle window creation and message processing, and a number of plugins (implemented as COM DLLs) to do the interesting stuff. It was sensible, except I wrote my own wrapper classes for all the DX ones; each basically called the appropriate DX function, checked for errors, and converted errors to exceptions so that the client code didn't have to check HRESULTs. Soon I had my core DLL exporting 800+ functions, of which maybe 100 had more than five lines. I thought, "But my code checks ALL return values!" Then I realized that the debug runtime checks errors for me anyway. If I mess up, I usually hit two breakpoints inside D3D8.DLL and get an error message before a D3D call returns D3DERR_INVALIDCALL. At the same time, the release runtime didn't check most errors at all, so my error-checking code was useless if I wasn't running the debug runtime. So I removed all my do-nothing wrappers and went back to plain D3D calls with CComPtr's.
Then I discovered OpenGL. I didn't believe that it was easier than D3D until I tried it, and in fact it turned out to be! I remembered someone saying this on GameDev: "I know many people who went from D3D to OGL, but none who went the other way." Must be so, I thought to myself. Now I had to make my driver-plugin architecture handle both APIs.
The way I accomplished it was pretty simple. The core library was just that: the library. I added functions and classes to handle GL stuff, and left D3D as they were. The driver's CMainFrame (I was using MFC) got another data member, m_Renderer, that could be either D3D or OGL. Each function had two implementations, and the code looked like so:
void CMainFrame::CreateRenderer()
{
    if (m_Renderer == R_D3D)
        CreateD3DRenderer();
    else if (m_Renderer == R_OGL)
        CreateOGLRenderer();
    else
        ASSERT(FALSE); // shouldn't be here
}
The plugin was either D3D or OGL; it could use either API, but not both. I didn't provide a common interface; I provided the code to handle common tasks for both, but the plugin must choose only one API and use the low-level calls to do stuff. That worked. I didn't need a common interface because I was learning, and I learned one thing at a time.
Once I got to the advanced GL stuff, however, I became seriously disappointed with the extensions. Ignoring the code needed to set up extensions (there are many libraries available that make this process transparent), I had to write the code three times (once for NV, once for ATI, and once for neither), and there was no way I could test the ATI code with my NV hardware. It was then that I realized the beauty of D3D and proprietary specs: I need to write my code only once, and all the latest features are right in the API, with no clumsy extensions. Now I know a person who tried both D3D and OGL and stayed with D3D.
This is how I got to my present application. I decided to drop GL support, since GL wasn't easier than D3D anymore. I switched from MFC to WTL, which is more lightweight and undocumented but offers approximately the same functionality and is much less intrusive. Presently, I have classes for "skybox", "model", "keyframe model", etc. I have base window classes that do D3D and DI init in WM_CREATE handler. The main class is quite small, and mostly it only calls other classes:
CSkyBox m_SkyBox;
...
m_SkyBox.Load("left", ...);
...
D3DXMATRIX mat = ...;
m_SkyBox.Render(mat);
And similar code for loading and rendering models and text. I think this code organization is the best way for me to learn new features and incorporate them into my engine. While my app is built around DX, my main class isn't using much of D3D directly. Instead, it calls high-level methods of other classes that do all the work.
While my driver-plugin project used a core DLL and one DLL per plugin (I wrote four), my present project doesn't use any. I concluded that DLLs didn't offer any significant benefits to me. For example, each DLL using D3DX must have its own static copy of the D3DX code, so if you have several DLLs that use D3DX, you're wasting space. Also, the linker can remove unused functions from EXEs, but not from DLLs if those functions are exported. This is why I urge you to consider whether you really need DLLs; most likely, you don't.
quote:
and how many large scale (20k lines+) projects you've worked on.
I'm afraid the answer to this one is none. My current project (not really a game, and not a demo) is about 10k lines, most of them in "common" code (model loading, text rendering, etc). My biggest problem is completing projects; I know many things, but don't have the patience to put them all together into something big. I'm working on it, though.
Heh, that was pretty long.
Edit: had to make typos in this.
[edited by - IndirectX on June 2, 2002 3:22:57 AM]
LOL You could make an article out of your post
But, I did read it all. The fact that you haven't done any large-scale projects scares me. My last 2 projects have both been about 5-10k lines (the code is spread out, so I can't really tell), and taking on something this big baffles me.
Also, I have classes much like you are saying. My friend taught me that I should start doing it that way after my code was getting really messy. My old code was about 100 lines in the Render() function, and now it's just one call, which is flexible enough to allow me to add a new object type within 5 minutes. This also proved to be very beneficial with my scripting: it only uses 3 functions: AddScript(), Execute(), and DeleteAll(). The code is very clean, and because of that it is very flexible.
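A three-function script manager like the one described above could be sketched like this (the internals are guesswork; only the AddScript/Execute/DeleteAll surface comes from the post, and std::function stands in for whatever the scripts actually are):

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch of the three-function script manager described above.
class ScriptManager {
public:
    // Queue a script to run; here a script is just a callable.
    void AddScript(std::function<void()> script) {
        scripts_.push_back(std::move(script));
    }
    // Run every queued script in order.
    void Execute() {
        for (auto& s : scripts_) s();
    }
    // Drop all queued scripts.
    void DeleteAll() { scripts_.clear(); }
    std::size_t Count() const { return scripts_.size(); }
private:
    std::vector<std::function<void()>> scripts_;
};
```

The narrow surface is what makes the design flexible: callers never see how scripts are stored or executed, so the internals can change freely.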
But back onto my main concern.. When I made this post I was mostly thinking about how I would convert my app to newer versions of DirectX (i.e. DX8.1->DX9 for now). I had an idea to make defines so that I wouldn't have to change the names (i.e. D3DX8Func()->D3DX9Func()). Then an idea popped into my head to support OGL, since it's multiplatform - you have pushed me away from this idea. I don't know many people that know both OGL and DX fluently, but you saying that OGL and DX are very different scares me. I want multi-platform support, but not at the cost of an extra level of complexity and a shoddy game..
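For the record, the define-aliasing idea could look something like this (the function names are entirely made up, not real D3DX entry points), which also shows its limit: it only covers renames, not functions whose parameters or semantics change between versions.

```cpp
// Hypothetical stand-ins for two versions of the same API function.
int D3DX8CreateEffect(int v) { return v + 1; }
int D3DX9CreateEffect(int v) { return v + 1; }

// The engine calls only the alias; retargeting to a new version means
// editing one line here. This handles pure renames, but if the new
// version changes a signature or its behavior, every call site still
// has to be revisited.
#define EngineCreateEffect D3DX8CreateEffect
```

That limit is exactly why the replies above push toward wrapping at the level of models and renderers rather than individual function names.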
I guess there is only one question left (well, two, because one must first confirm you have done it before): Have you ever had to convert one of your projects to a higher level of DirectX? If so, then what exactly did you have to do?
Thanks for all your help, I really appreciate it.
quote:Original post by hello2k1
Have you ever had to convert one of your projects to a higher level of DirectX?
No. The thing is, differences between DX versions may be even bigger than differences between DX and GL. I heard that DX9 doesn't use the fixed-function pipeline at all, which is very different from DX8. I'm not going to start thinking about conversions until I get the DX9 SDK and actually see what the new code is like. I think that it's pretty much pointless to guess what I can do in DX9, so for now I just write DX8 code without concerning myself with upward compatibility.
Seems logical.. But have you read the .ppt on DX9? It tells of cmp/jmp instructions in shaders, and the 2 displacement mapping techniques. Such info should help you decide which features you should be adding.
For example, I was thinking about making a tool like what I heard Carmack made, where he takes high-poly models and makes low-poly pixel-shaded models. In DirectX 9, however, there is a tool which will do this for me using displacement mapping.
Being concerned about what will come next has saved me at least a month of work in that instance. Also, I was planning to implement terrain which would be shaped using vertex shaders.. but in DirectX 9 it will be possible to do that with displacement mapping. In that instance it has saved me at least 2 months' worth of work, and I will most likely end up with terrain that runs 2x as fast and looks better.
You should never "just write DX8 code without concerning myself with upward compatibility". I can guarantee it'll end up screwing you in the end (unless you're working on a project that'll only last about a year).
ADD ON: I know this is a little off topic.. but it's driving me nuts and I don't feel like making another post. How do I make the red value of a texture become the alpha value in a pixel shader? According to the docs, it's supposed to be "mov r0.a, t0.r".. but that doesn't work.
[edited by - hello2k1 on June 2, 2002 6:11:03 PM]
What Carmack's doing with Doom3 isn't displacement mapping, it's normal mapping. Displacement mapping will only work on DX9 hardware (plus the new Matrox, which is DX8.5-class hardware), which puts quite a high barrier of entry on your project...
Essentially, normal mapping is a close relative of bump mapping. Here's a blurb by Marco from the forum of CryTek (which has a related system running):
"
A normal map is different from a greyscale bump map. A normal map is similar to the usual texture map, but instead of containing color values (as RGB) it contains normal values (as XYZ). A normal map is used for hw dot3 bump mapping. Of course you can calculate a normal map from a greyscale bump map by simply treating each pixel in the texture as a vertex of a triangle and calculating the difference of the color values between these three points - but polybump directly creates a normal map of higher quality than just a conversion from a greyscale bump map. Anyway, if you're interested in all the details about bump mapping and per-pixel lighting, I suggest you download some papers and examples from the Nvidia, ATI, or Matrox web sites.
"
You can have a look at the routine in action over here:
http://www.crytek.com/screenshots/index.php?sx=polybump
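The height-map-to-normal-map conversion the quote describes can be sketched in plain C++ (the function name and the gradient-based formulation are one common approach, not CryTek's actual code): each pixel's neighbors give a height gradient, and the normal tilts away from that gradient.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Convert a greyscale height map (values 0..1, row-major, w x ht pixels)
// into per-pixel normals. 'strength' scales the apparent bumpiness.
std::vector<Vec3> HeightToNormals(const std::vector<float>& h,
                                  int w, int ht, float strength = 1.0f) {
    auto at = [&](int x, int y) {
        // Clamp to edges so border pixels still have neighbors.
        x = std::max(0, std::min(w - 1, x));
        y = std::max(0, std::min(ht - 1, y));
        return h[y * w + x];
    };
    std::vector<Vec3> normals(h.size());
    for (int y = 0; y < ht; ++y)
        for (int x = 0; x < w; ++x) {
            // Central differences approximate the height gradient.
            float dx = (at(x + 1, y) - at(x - 1, y)) * strength;
            float dy = (at(x, y + 1) - at(x, y - 1)) * strength;
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            normals[y * w + x] = { -dx / len, -dy / len, 1.0f / len };
        }
    return normals; // remap from [-1,1] to [0,1] RGB to store in a texture
}
```

A tool like polybump skips this conversion entirely and derives normals from the high-poly geometry itself, which is why the quote calls its output higher quality.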
Odin