Comments
Hi,
I believe it would be helpful for newbies to elaborate a little on the use of two colorbuffers for rendering.
Something along the lines of:
If one were to use a single colorbuffer, the scene would be composed directly on the screen (the user would see objects "pop up" as the 3D scene is being rendered). This is why two buffers are used: the current frame (the one currently shown on the display) is held in the frontbuffer, while the frame being rendered is drawn to the backbuffer.
When the task of rendering the scene to the backbuffer is completed, the front- and backbuffer are swapped, so that the backbuffer is now the front (that is, the scene you rendered is shown on the display) and the buffer which *was* the frontbuffer is now the render target for the next frame. This process of "swapping" the buffers is the reason for the name "swapchain".
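In code, the swap typically boils down to a single call at the end of each frame. Here's a minimal sketch using GLFW (my choice of windowing library for illustration; the article doesn't prescribe one, and the actual drawing is elided):

    #include <GLFW/glfw3.h>

    int main(void)
    {
        if (!glfwInit())
            return -1;

        /* GLFW gives us a double-buffered framebuffer by default. */
        GLFWwindow *window = glfwCreateWindow(640, 480, "swap demo", NULL, NULL);
        if (!window) { glfwTerminate(); return -1; }
        glfwMakeContextCurrent(window);

        while (!glfwWindowShouldClose(window))
        {
            /* Everything here renders to the (invisible) backbuffer... */
            glClear(GL_COLOR_BUFFER_BIT);
            /* ... draw the scene ... */

            /* ...and only a finished frame is ever presented: front and
               back are swapped, so the user never sees a half-drawn scene. */
            glfwSwapBuffers(window);
            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }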
I know you mentioned it in the article, just my 0.02 - Good work though
Why did you leave the blend stage out? It has an impact even on rendering a simple triangle. I can see why you left out the newer shader stages, but the blend stage I can't.
I do agree that the blending stage is of high importance in current graphics, and I was in doubt whether or not to write something about it. In the end, I decided not to.
The blending stage can be disabled in OpenGL; it isn't mandatory, and I felt that it would therefore add a needless extra layer of "complexity" to how pixels "come to be" (explaining which was the main goal of this article).
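For what it's worth, that opt-in nature is visible in the API itself. A quick sketch of the state involved (the particular blend function is just the classic alpha-blend example, not something the article uses):

    /* Blending is disabled by default: an incoming fragment's color simply
       replaces whatever is already in the colorbuffer. */
    glDisable(GL_BLEND);

    /* Opting in: standard "over" compositing based on the source alpha. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);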
I don't think we should compare the shader stages to the blending stage, as for example the geometry shader certainly has its merits!
You are absolutely right; I treated the subject rather dodgily. I remember how the OpenGL Red Book, 7th edition, had the example of a rotating square, where a "ghosting" effect was visible when you rendered with only one colorbuffer. A double-buffered framebuffer resolved this quite nicely.
I'll have a look at whether I can improve this article with some pictures/figures to make it somewhat clearer. In the meantime, I'll leave it as an exercise to the reader :)
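For anyone who wants to reproduce that Red Book experiment, the difference comes down to a single display-mode flag. A sketch in the (free)glut style of that era, with the actual square-drawing elided:

    #include <GL/glut.h>

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        /* ... draw the rotating square here ... */

        glutSwapBuffers();   /* double-buffered: only finished frames appear */
        /* With GLUT_SINGLE you would call glFlush() here instead, and
           partially drawn frames become visible: the "ghosting" effect. */
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        /* Swap GLUT_DOUBLE for GLUT_SINGLE to see the ghosting for yourself. */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("rotating square");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }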
Hmm, but if you consider two colour buffers and alpha blending, then you need stencil states and gamma ramps, fog, slope scale, near and far planes, and suddenly the permutations have become so large that you need to write a 10-page article... I would just add the GS and tessellation shaders, then mention compute shaders as a non-cooperative part, for completeness. However, I suppose that since fixed function is basically obsolete, you might be right to mention just the minimal vertex and fragment shaders.
I was able to follow along vaguely as a beginner, but I have to admit that if I didn't already understand what a buffer is from 2D game programming, I'd have been utterly lost. A little explanation of terms like that can make a huge difference.
Also, be careful of repeating yourself needlessly. For example, from the article: "In graphics programming, we tend to add some more meaning to a vertex than its mathematical definition. In mathematics you could say that a vertex defines the location of a point in space. In graphics programming however, we generally add some additional information."
Otherwise, it was beautifully concise and clear. No dragging on and on. I was actually surprised when I realized I'd finished the article. I was ready for 2,000 words. But you were able to have the article end before my attention span split in two and started beating the hell out of itself.
Thanks.
I decided not to modify the article. Nevertheless, you can see the effect of double-buffering (no ghosting) versus single-buffering (ghosting) in the following two YouTube videos I just uploaded:
This article is mainly intended to give some introductory background information on the graphics pipeline in a triangle-based rendering scheme and how it maps to the different system components. We'll only cover the parts of the pipeline that are relevant to understanding the rendering of a single triangle with OpenGL.