
YCoCg to RGB in a Pixel Shader. Almost there, one last problem.

Started by SymLinked, December 10, 2008 05:14 PM
3 comments, last by patw
Hi guys, I've been playing around with converting RGB data to YCoCg before compressing to DXT, and the quality has really improved, but I'm facing one last problem, which I believe is illumination. The image looks slightly more colorful and brighter. I followed an article at NVidia and came up with this in the pixel shader:

   float Co = color.r - 0.5; // bias; NVidia's constant is the exact 8-bit value ( 0.5 * 256.0 / 255.0 )
   float Cg = color.g - 0.5; // ( 0.5 * 256.0 / 255.0 )
   float Y = color.a;        // luma is stored in the DXT5 alpha channel
   resultColor.r = Y + Co - Cg;
   resultColor.g = Y + Cg;
   resultColor.b = Y - Co - Cg;
   resultColor.a = depthColor.b; // alpha comes from a separate depth texture
   //resultColor *= 0.98; // hack: scale down or it's too bright
I have to scale the colors down a bit or they are too bright and colorful, which suggests to me that I'm doing something wrong to begin with. NVidia's site shows an example in shader assembly:

DP4 result.color.x, color, {  1.0, -1.0,  0.0 * 256.0 / 255.0, 1.0 };
DP4 result.color.y, color, {  0.0,  1.0, -0.5 * 256.0 / 255.0, 1.0 };
DP4 result.color.z, color, { -1.0, -1.0, 1.0 * 256.0 / 255.0, 1.0 };
Looks like I'm missing something, right? I'm not using any dot products. Should I be? If anyone knows of any example shaders that do this conversion, or if you have any clue what's going on, I'd appreciate it a lot. Thanks! [Edited by - SymLinked on December 10, 2008 5:31:06 PM]
Don't be confused by the use of a dot product operation here; it isn't only the scalar product of two 3D vectors.
DP4 result.color.x, color, {  1.0, -1.0,  0.0 * 256.0 / 255.0, 1.0 };
DP4 result.color.y, color, {  0.0,  1.0, -0.5 * 256.0 / 255.0, 1.0 };
DP4 result.color.z, color, { -1.0, -1.0,  1.0 * 256.0 / 255.0, 1.0 };

This breaks down to:
result.color.x = (color.r * 1.0) + (color.g * -1.0) + (color.b * 0.0) + (color.a * 1.0);

Your assignment is:
resultColor.r = Y + Co - Cg;

where
float Co = color.r - 0.5; // ( 0.5 * 256.0 / 255.0 );
float Cg = color.g - 0.5; // ( 0.5 * 256.0 / 255.0 );
float Y = color.a;

So this results in:
resultColor.r = color.a + (color.r - 0.5) - (color.g - 0.5);

So your code does give different results than the NVidia code.

A dot product operation is simply:
dot(a, b) = (a.x * b.x) + (a.y * b.y) + (a.z * b.z) + (a.w * b.w);
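
In HLSL that's just one dot() per channel. A minimal sketch, assuming the sampled texel is laid out as (Co + bias, Cg + bias, 1.0, Y) with bias = 0.5 * 256.0 / 255.0, which is what makes the blue coefficients above cancel out:

	// HLSL equivalent of the three DP4 instructions above.
	// Assumes color = (Co + bias, Cg + bias, 1.0, Y), bias = 0.5 * 256.0 / 255.0.
	float3 YCoCgToRGB(float4 color)
	{
		float3 rgb;
		rgb.r = dot(color, float4( 1.0, -1.0,  0.0 * 256.0 / 255.0, 1.0));
		rgb.g = dot(color, float4( 0.0,  1.0, -0.5 * 256.0 / 255.0, 1.0));
		rgb.b = dot(color, float4(-1.0, -1.0,  1.0 * 256.0 / 255.0, 1.0));
		return rgb;
	}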
patw: why does it give a different result from the NVidia code? These two expressions do the same thing, don't they?
result.color.x = (color.r * 1.0) + (color.g * -1.0) + (color.b * 0.0) + (color.a * 1.0) = color.r - color.g + color.a

and
resultColor.r = color.a + (color.r - 0.5) - (color.g - 0.5) = color.a + color.r - color.g
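
(The .x row really is identical, since its blue coefficient is zero. The .y and .z rows are where they can differ: the NVidia version subtracts 0.5 * 256.0 / 255.0 * color.b = 0.50196 * color.b instead of a flat 0.5, so for color.b = 1.0 the outputs differ by about 0.002 per channel.)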


On NVidia's page there is the OpenGL SDK 10 sample "Compress YCoCg-DXT": http://developer.download.nvidia.com/SDK/10.5/opengl/samples.html
It also has Cg shader code for decompressing YCoCg to RGB, in the OpenGL\src\compress_YCoCgDXT\compress_YCoCgDXT.cg file, in the display_fp function. Maybe take a look there.
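
From what I remember, the decode there also applies a per-block scale factor stored in the blue channel, roughly like this (a sketch of the idea, not the exact SDK code):

	// Rough idea of the YCoCg-DXT decode with a per-block scale in blue.
	// Approximate; see the actual SDK file for the real thing.
	float4 decodeYCoCg(float4 color)
	{
		const float offset = 0.5 * 256.0 / 255.0;          // exact 8-bit bias
		float scale = 1.0 / ((255.0 / 8.0) * color.b + 1.0); // undo per-block scaling
		float Y  = color.a;
		float Co = (color.r - offset) * scale;
		float Cg = (color.g - offset) * scale;
		return float4(Y + Co - Cg, Y + Cg, Y - Co - Cg, 1.0);
	}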
Most appreciated, Pat! I'm trying to get this into the procedural shader system in TGEA, so it's funny you replied. ;) You're awesome!

The overbrightness was caused by a mistake on my part and wasn't related to the code here. However, when I modified my function from this:
	float4 outColor;
	float Co = inColor.r - 0.5;
	float Cg = inColor.g - 0.5;
	float Y = inColor.a;
	outColor.r = Y + Co - Cg;
	outColor.g = Y + Cg;
	outColor.a = inColor.b;
	outColor.b = Y - Co - Cg;
	return outColor;

To this:
	float4 outColor;
	outColor.r = (inColor.r * 1.0) + (inColor.g * -1.0) + (inColor.b * (0.0 * 256.0 / 255.0)) + (inColor.a * 1.0);
	outColor.g = (inColor.r * 0.0) + (inColor.g * 1.0) + (inColor.b * (-0.5 * 256.0 / 255.0)) + (inColor.a * 1.0);
	outColor.b = (inColor.r * -1.0) + (inColor.g * -1.0) + (inColor.b * (1.0 * 256.0 / 255.0)) + (inColor.a * 1.0);
	outColor.a = inColor.b; // alpha from the blue channel, as in the first version
	return outColor;
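
Folding the three rows into a single matrix multiply also works and is a bit tidier; a quick sketch (my own naming):

	// Same math as the three lines above, folded into one mul().
	static const float3x4 YCoCgDecodeMatrix =
	{
		 1.0, -1.0,  0.0 * 256.0 / 255.0, 1.0,
		 0.0,  1.0, -0.5 * 256.0 / 255.0, 1.0,
		-1.0, -1.0,  1.0 * 256.0 / 255.0, 1.0,
	};

	float4 decode(float4 inColor)
	{
		float4 outColor;
		outColor.rgb = mul(YCoCgDecodeMatrix, inColor);
		outColor.a = inColor.b; // alpha handled as in the original function
		return outColor;
	}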


(Edit) The results look identical. I assume the differences are very small, but in any case, correct is correct. I'm just confused why NVidia's pseudocode shows an example much closer to mine, while they later show assembly code which is clearly different.
(Second Edit) Running Photoshop's difference filter on the two output images from the above two methods does show that the results are indeed different. I can't see it, but it's there.
(Third Edit) Pat's method deviates quite a bit less from the source material than the one I made.

(And thanks Bubu LV, as well! Found some scaling code in there.)

[Edited by - SymLinked on December 10, 2008 7:19:02 PM]
The results could be different simply due to hardware. Wherever you see something like:
-0.5 * 256.0 / 255.0

The shader author is saying that the evaluation of this expression could vary depending on the GPU, so they don't want to use a pre-evaluated constant. I've seen this mostly in bit-packing code.
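
(For this particular constant: 0.5 * 256.0 / 255.0 = 128.0 / 255.0, about 0.50196. If I remember the scheme right, that is exactly what a bias byte of 128 reads back as from an 8-bit texture channel, so writing it this way compensates for the quantization instead of assuming an exact 0.5.)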

Glad it worked out. (Helpful hint while messing with ShaderGen: Features are currently encoded into a 32-bit field. If you have more than 32 shader features, things will start doing strange things without telling you what is wrong. That will be fixed.)

