[SOURCE]
#include <math.h>   // sqrt
#include <stdlib.h> // qsort

enum { MC_X = 0, MC_Y = 1, MC_Z = 2 }; // Axis indices (assumed; defined elsewhere in the original)

struct MCtri
{
    int   vi[3];  // Vertex indices into the scene's vertex array
    float dist;   // Distance from camera
};
struct MCscene
{
    int    numVert;   // Number of vertices
    float (*vert)[3]; // Vertex array: numVert entries of three floats each
    int    numTri;    // Number of triangles
    MCtri *tri;       // Triangles
    float  camPos[3]; // Camera position
};
// qsort comparison function: orders triangles back to front (farthest first)
int mcCompare( const void *a, const void *b )
{
    const MCtri *elem1 = (const MCtri *)a;
    const MCtri *elem2 = (const MCtri *)b;
    if ( elem1->dist > elem2->dist )      // Farther from the camera: draw first
        return -1;
    else if ( elem1->dist < elem2->dist ) // Nearer to the camera: draw last
        return 1;
    else                                  // Equal distances
        return 0;
}
// Calculate the distance between two points in 3D
float mcDistance3d( const float ptA[3], const float ptB[3] )
{
    float tempa = ptA[MC_X] - ptB[MC_X];
    float tempb = ptA[MC_Y] - ptB[MC_Y];
    float tempc = ptA[MC_Z] - ptB[MC_Z];
    return sqrt( tempa*tempa + tempb*tempb + tempc*tempc );
}
// Returns the smallest of three numbers (a three-way version of std::min)
float mcSmallest( float A, float B, float C )
{
    float smallest;
    if ( A <= B )
    {
        if ( A <= C ) smallest = A;
        else          smallest = C;
    }
    else if ( B <= C ) smallest = B;
    else               smallest = C;
    return smallest;
}
// For each triangle, store the distance from the camera to its nearest vertex
void mcMeasureTriDist( MCscene *s )
{
    int mc_t;
    float dist[3];
    for ( mc_t = 0; mc_t < s->numTri; mc_t++ )
    {
        dist[0] = mcDistance3d( s->camPos, s->vert[ s->tri[mc_t].vi[0] ] );
        dist[1] = mcDistance3d( s->camPos, s->vert[ s->tri[mc_t].vi[1] ] );
        dist[2] = mcDistance3d( s->camPos, s->vert[ s->tri[mc_t].vi[2] ] );
        s->tri[mc_t].dist = mcSmallest( dist[0], dist[1], dist[2] );
    }
}
void mcRender( MCscene *s )
{
    int mc_t;
    // Measure distance to camera for each triangle
    mcMeasureTriDist( s );
    // Sort triangles from back to front (mcCompare puts the farthest first)
    qsort( s->tri, s->numTri, sizeof(MCtri), mcCompare );
    // Render each triangle
    glBegin( GL_TRIANGLES );
    for ( mc_t = 0; mc_t < s->numTri; mc_t++ )
    {
        glVertex3fv( s->vert[ s->tri[mc_t].vi[0] ] );
        glVertex3fv( s->vert[ s->tri[mc_t].vi[1] ] );
        glVertex3fv( s->vert[ s->tri[mc_t].vi[2] ] );
    }
    glEnd();
}
[/SOURCE]
qsort - what am I doing wrong? (code included)
Hi,
I am trying to sort my triangles from back to front because of several transparent surfaces, but I get this:
Here is the code:
Thanks for your thoughts,
Keer:-)
??? The edit button is gone, so here's the image:
As you can see, one of the back triangles is not being rendered as it should, and when the camera moves around, triangles switch from back to front of each other, which clearly means they are not being sorted right.
Unfortunately, sorting by closest point is not sufficient to determine a draw order. Consider the bottom of the nearest part of your arch: there are three polygons whose nearest point is the same, yet those three polygons have a definite sort order (the base must be rendered first, followed by the front and side polygons in any order). Take a look at binary space partitioning (BSP) to see one way to correctly z-sort polygons.
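For a rough idea of the render side of that approach, here is a minimal sketch (my illustration, not Enigma's code; the BSPNode layout is an assumption and tree construction is omitted). At each node you test which side of the splitting plane the camera is on, then draw the far subtree, the node's own polygons, and the near subtree, in that order:
[SOURCE]
// Sketch: back-to-front traversal of a BSP tree (node layout is assumed).
struct BSPNode
{
    float    plane[4]; // Splitting plane: ax + by + cz + d = 0
    BSPNode *front;    // Subtree on the positive side of the plane
    BSPNode *back;     // Subtree on the negative side of the plane
    MCtri   *tris;     // Triangles lying in this node's plane
    int      numTris;
};

void bspRenderBackToFront( const BSPNode *n, const float camPos[3] )
{
    if ( n == NULL ) return;
    // Signed distance from the camera to the splitting plane
    float side = n->plane[0]*camPos[0] + n->plane[1]*camPos[1]
               + n->plane[2]*camPos[2] + n->plane[3];
    const BSPNode *farSide  = ( side >= 0.0f ) ? n->back  : n->front;
    const BSPNode *nearSide = ( side >= 0.0f ) ? n->front : n->back;
    bspRenderBackToFront( farSide, camPos );
    // ... draw n->tris here ...
    bspRenderBackToFront( nearSide, camPos );
}
[/SOURCE]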
Enigma
In case BSP turns out to be impractical (for sorting entities, for example), consider dropping qsort for insertion sort.
Wait, did you read "insertion sort"?
Yes, I really meant to write insertion sort.
Here's why: the whole theory of asymptotic analysis, while useful for researching better algorithms, breaks down for some real-world uses, especially in computer graphics.
Think about it. It assumes you have a "big enough" data set, and it assumes you're not interested in the constant factor, which is basically the same as saying that sqrt costs the same as ADD.
Neither of those assumptions holds in CG, especially in gaming-oriented applications.
The question is: how big is our dataset?
If you spend some time on the nVIDIA developer website, you'll find a paper saying that over 90% of renderings have fewer than 8 transparent layers.
At 95% you get 12 layers, and at 99% you get 15-18 layers (I can't remember the exact numbers).
Put that into asymptotic analysis.
It happens that on my Athlon, InsertionSort beats QuickSort when the dataset is smaller than 60-70 elements. On P4s I expect the break-even point to be even higher, and it will keep rising on newer CPUs (I am testing on an XP1800).
Now, say you have 40 transparent layers. That is already overkill, and QuickSort would still end up giving you worse performance. Not by much; I hardly believe it would be noticeable.
I posted this because everybody talks about QuickSort and MergeSort, but both are overcomplicated for small datasets like this. By hybridizing QuickSort with InsertionSort, you still get a 20-50% performance increase. Not bad for an algorithm nobody talks about!
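For concreteness, an insertion sort over the MCtri array from the original post is only a few lines (a sketch of mine, not Krohm's code; it reuses the dist field and sorts farthest-first):
[SOURCE]
// Sketch: insertion sort of MCtri by distance, back to front (descending dist).
// For the handful of transparent layers typical of a frame, this is hard to beat.
void mcInsertionSort( MCtri *tri, int n )
{
    int i, j;
    for ( i = 1; i < n; i++ )
    {
        MCtri key = tri[i];  // Triangle being inserted into the sorted prefix
        for ( j = i - 1; j >= 0 && tri[j].dist < key.dist; j-- )
            tri[j + 1] = tri[j];  // Shift nearer triangles toward the end
        tri[j + 1] = key;
    }
}
[/SOURCE]
It is also adaptive: when the camera moves only a little between frames, the array is already nearly sorted and the inner loop rarely runs.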
Previously "Krohm"
Perhaps try using the center of each triangle for the sorting distance, rather than its closest or farthest point...
It doesn't always work, but it can be a good compromise!
Then consider rendering the thing in your current order but with front-face culling enabled...
...then re-render with back-face culling enabled (a sketch of both ideas follows below).
But if you want perfect, uncompromised results, you need a BSP tree.
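Putting those two suggestions into code might look something like this (my sketch, not the poster's; it reuses the types and functions from the original post, and note the two-pass trick only gives a correct order within convex pieces):
[SOURCE]
// Sketch: sort by distance to each triangle's centroid instead of its
// nearest vertex.
void mcMeasureTriDistCentroid( MCscene *s )
{
    int t, k;
    float centroid[3];
    for ( t = 0; t < s->numTri; t++ )
    {
        for ( k = 0; k < 3; k++ )
            centroid[k] = ( s->vert[ s->tri[t].vi[0] ][k]
                          + s->vert[ s->tri[t].vi[1] ][k]
                          + s->vert[ s->tri[t].vi[2] ][k] ) / 3.0f;
        s->tri[t].dist = mcDistance3d( s->camPos, centroid );
    }
}

// Sketch: two-pass rendering, back faces first, then front faces.
void mcRenderTwoPass( MCscene *s )
{
    glEnable( GL_CULL_FACE );
    glCullFace( GL_FRONT ); // Pass 1: cull front faces, so only back faces draw
    mcRender( s );
    glCullFace( GL_BACK );  // Pass 2: cull back faces, so only front faces draw
    mcRender( s );
}
[/SOURCE]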
----------------------------http://djoubert.co.uk
Or, even worse, depth peeling for order-independent transparency. Just in case you have two transparent entities roaming around the world and you don't want to sort their triangles against each other every frame.
Previously "Krohm"
Hi Keermalec, I can see what you're trying to do. That stuff is hard, and even professionals avoid it if they can.
So I am going to give you some advice on dealing with transparent polygons in a 3D engine.
* Always draw all opaque polygons first; even the opaque areas of a transparent polygon should be drawn in that pass.
* Always use transparent polygons sparingly.
* Always draw them back to front (duh), but it is easier if you sort them into groups first and keep the groups separated (non-intersecting).
* Always use simple objects, often just one or two polygons each. If you're doing something more complex, it is better to render it fully opaque to a texture and then render that texture into the scene partially transparent.
* Apparently, sorting algorithms work better with polygons of similar size.
* Only with convex hulls can you use the old render-backfacing-then-frontfacing-polygons trick.
* Particle systems should not be counted as transparent geometry, and they should not need to be sorted; they shouldn't even be writing to the z-buffer (see the sketch below).
But because of their nature, particle systems could be sorted if needed without any problems, apart from the obvious speed cost.
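The z-buffer point amounts to disabling depth writes while the particles draw, keeping depth testing on so opaque geometry still hides them. A minimal sketch (my illustration; drawParticles is a hypothetical helper, not from the post):
[SOURCE]
// Sketch: draw particles with depth testing on but depth writes off.
void renderParticles( MCscene *s )
{
    glDepthMask( GL_FALSE ); // Particles no longer write to the z-buffer
    drawParticles( s );      // Hypothetical particle-drawing helper
    glDepthMask( GL_TRUE );  // Restore depth writes for the rest of the frame
}
[/SOURCE]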
So in the case of your problem, try subdividing the object a little so that you get (in this case) five convex hulls. You could group them if you want, but it is not vital for this to work.
That will give you better data to work with.
www.flashbang.se | www.thegeekstate.com | nehe.gamedev.net | glAux fix for lesson 6 | [twitter]thegeekstate[/twitter]
Sorry, I checked the results of my previous tests and they were somewhat different from what I remembered... I wanted to provide them anyway so you can check the results yourself.
Graph zoom of the interesting datasets. Unfortunately I have very little detail to show here, but I am still confident in saying the break-even point is at ~35 elements. My apologies for putting it at 60-70 earlier; I believe that figure came from the other machine (but I no longer have those benchmarks).
QuickSort versus the QuickSort + InsertionSort hybrid: the hybrid is always faster on large datasets...
...and, unsurprisingly, also on the small datasets we are interested in.
Detail of the numerical outputs. The speed-up seems to shrink as the dataset grows; in my tests it never went below 1.4x.
EDIT: broken links... (I hate you broken server!)
I hope you find this useful or at least interesting.
I believe it is quite enough.
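For reference, the hybrid being benchmarked is usually structured like this (a generic sketch of mine, not Krohm's actual test code): QuickSort recursion that hands any partition below a cutoff to insertion sort. The cutoff of 35 is taken from the break-even point measured above.
[SOURCE]
#define MC_CUTOFF 35 // Break-even point from the benchmarks above (assumed)

// Sketch: QuickSort + InsertionSort hybrid over MCtri, farthest first.
void mcHybridSort( MCtri *tri, int lo, int hi )
{
    while ( hi - lo > MC_CUTOFF )
    {
        float pivot = tri[ (lo + hi) / 2 ].dist;
        int i = lo, j = hi;
        while ( i <= j ) // Hoare-style partition, descending by dist
        {
            while ( tri[i].dist > pivot ) i++;
            while ( tri[j].dist < pivot ) j--;
            if ( i <= j )
            {
                MCtri tmp = tri[i]; tri[i] = tri[j]; tri[j] = tmp;
                i++; j--;
            }
        }
        mcHybridSort( tri, lo, j ); // Recurse into the left part...
        lo = i;                     // ...and loop on the right part
    }
    // Small partition: finish with insertion sort
    for ( int i = lo + 1; i <= hi; i++ )
    {
        MCtri key = tri[i];
        int j = i - 1;
        while ( j >= lo && tri[j].dist < key.dist ) { tri[j + 1] = tri[j]; j--; }
        tri[j + 1] = key;
    }
}
[/SOURCE]
Calling mcHybridSort( s->tri, 0, s->numTri - 1 ) would then replace the qsort call in mcRender.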
Previously "Krohm"