GLSL Working on Nvidia & not on AMD

Started by
15 comments, last by Ashaman73 9 years, 1 month ago

I have some simple shaders I made that are working on my Nvidia card but not on my friend's AMD card. They are compiling on his card but not rendering anything. I have heard AMD is stricter than Nvidia on sticking to the specifications but I have followed them as far as I can tell. Any suggestions?

Vertex shader:


#version 150

in vec4 in_Position;
in vec4 in_Color;
in vec2 in_TextureCoord;

uniform int width;
uniform int height;
uniform int xOffset;
uniform int yOffset;
uniform int zOffset;

out vec4 pass_Color;
out vec2 pass_TextureCoord;

void main(void) {
    gl_Position.z = in_Position.z + (float(zOffset) / 100.0f);
    gl_Position.w = in_Position.w;
    gl_Position.x = ((0.5f / float(width)) + ((in_Position.x + float(xOffset)) / float(width))) * 2.0f - 1.0f;
    gl_Position.y = ((0.5f / float(height)) + ((in_Position.y + float(yOffset)) / float(height))) * 2.0f - 1.0f;
    pass_Color = in_Color;
    pass_TextureCoord = in_TextureCoord;
}

Fragment shader:


#version 150

uniform sampler2D texture_diffuse;
uniform int final;

in vec4 pass_Color;
in vec2 pass_TextureCoord;

out vec4 out_Color;

void main(void) {
    out_Color = pass_Color;
    out_Color = texture(texture_diffuse, pass_TextureCoord);
    if(out_Color.w != 0.0f && final == 1){
        out_Color.w = 1.0f;
    }
}
If you have verified the shader compiles and links without errors or warnings, consider using glValidateProgram to check the program state at the exact point where you would normally render.
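A minimal sketch of that check, assuming a current GL context and a linked program object named `program`:

```c
/* Validate the program against the current GL state just before drawing. */
GLint status = GL_FALSE;
glValidateProgram(program);
glGetProgramiv(program, GL_VALIDATE_STATUS, &status);
if (status != GL_TRUE) {
    char log[2048];
    glGetProgramInfoLog(program, sizeof log, NULL, log);
    fprintf(stderr, "Program validation failed:\n%s\n", log);
}
```

Validation results depend on the state bound at the moment of the call, so run it at the exact point where the draw call would happen, not at init time.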
I have some simple shaders I made that are working on my Nvidia card but not on my friend's AMD card. They are compiling on his card but not rendering anything. I have heard AMD is stricter than Nvidia on sticking to the specifications but I have followed them as far as I can tell. Any suggestions?

Check if the shaders compile properly using the GLSL Reference Compiler. This is the GLSL compiler written by Khronos and is the "gold standard" for all GLSL compilers so it should tell you which vendor is handling the shaders incorrectly.
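If you have the reference compiler installed (the Khronos glslang project ships it as `glslangValidator`), checking both stages is one command; the file names here are illustrative:

```shell
# Stage is inferred from the file extension (.vert / .frag)
glslangValidator shader.vert shader.frag
```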

And check for obvious problems as well.

For example, in the pixel shader you have:


void main(void) {
    out_Color = pass_Color;
    out_Color = texture(texture_diffuse, pass_TextureCoord);

Now you would expect any decent compiler to handle that, but I have had cases where things like that have broken code.

I would change it to


void main(void) {
    out_Color = pass_Color;
    out_Color *= texture(texture_diffuse, pass_TextureCoord);

And see what happens

Then I would change the pixel shader to write to gl_FragColor and see what happens (keep in mind gl_FragColor is deprecated in #version 150, so you would also have to drop the out_Color declaration for that experiment).

Check that your API calls are correct, for instance not forgetting glEnableVertexAttribArray, or using the wrong glUniform variant (float instead of int).
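Since every uniform in these shaders is declared `int`, that means the `1i` variants; a sketch, assuming `program` is the linked program and the names match the shaders (`fbWidth`, `fbHeight`, `positionLocation` are placeholders):

```c
/* glUniform1f on an int uniform raises GL_INVALID_OPERATION and the value stays 0. */
glUseProgram(program);  /* glUniform* affects the currently bound program */
glUniform1i(glGetUniformLocation(program, "width"),  fbWidth);   /* not glUniform1f! */
glUniform1i(glGetUniformLocation(program, "height"), fbHeight);
glUniform1i(glGetUniformLocation(program, "final"),  1);

/* Also easy to forget: each attribute array must be enabled explicitly. */
glEnableVertexAttribArray(positionLocation);
```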

Pretty sure AMD's GLSL compiler will complain if you put 'f' at the end of float literals (because it's not valid GLSL).

But yeah, when each shader is compiled, check whether it compiled correctly and print the info log if it didn't. Then link the program and check whether it linked correctly, again printing the log on failure. You won't get anywhere until you start doing that.
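That check is only a few lines per shader; a sketch assuming a current GL context (the same pattern with glGetProgramiv and GL_LINK_STATUS covers the link step):

```c
GLint compiled = GL_FALSE;
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (compiled != GL_TRUE) {
    char log[4096];
    glGetShaderInfoLog(shader, sizeof log, NULL, log);
    fprintf(stderr, "Shader compile failed:\n%s\n", log);
}
```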

Also, use ARB_debug_output/KHR_debug.
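Hooking up KHR_debug is only a few lines on a debug context; a sketch, with the callback signature as given in the KHR_debug spec:

```c
/* Assumes a context created with the debug flag (e.g. GLFW_OPENGL_DEBUG_CONTEXT). */
void GLAPIENTRY onDebugMessage(GLenum source, GLenum type, GLuint id,
                               GLenum severity, GLsizei length,
                               const GLchar *message, const void *userParam)
{
    (void)source; (void)type; (void)id; (void)severity; (void)length; (void)userParam;
    fprintf(stderr, "GL debug: %s\n", message);
}

/* During init: */
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  /* deliver messages on the offending call */
glDebugMessageCallback(onDebugMessage, NULL);
```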

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Pretty sure AMD's GLSL compiler will complain if you put 'f' at the end of float literals (because it's not valid GLSL).

Good spot on the 'f' suffixes :) That certainly isn't valid GLSL, even though some vendors accept it.
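Whether or not the suffix turns out to be the culprit, it costs nothing to rule it out; the stricter spelling of the affected lines simply drops the 'f', e.g.:

```glsl
// Same math with plain float literals, accepted by every GLSL version
gl_Position.z = in_Position.z + (float(zOffset) / 100.0);
gl_Position.x = ((0.5 / float(width)) + ((in_Position.x + float(xOffset)) / float(width))) * 2.0 - 1.0;
```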

I read in places that removing it fixed some people's issues with AMD, and that the suffix differentiates between a float and a double? Although I see no reference to doubles in the GLSL reference sheet.

Just ran them both through the GLSL Reference Compiler and no issues. I will try the rest of the suggestions! Thanks for being so helpful!

Using 'f' for floats should be legal from GLSL 1.2 onwards. If that were the problem you would see a clear compile error for the shader. You do remember to call glGetShaderiv with GL_COMPILE_STATUS though? Failure to compile a shader will not set a normal GL error. The same goes for linking.

Have you verified your code catches actual shader compile or linker errors? If you did, it's most likely an issue with the current program state which glValidateProgram should be able to catch. NVidia unfortunately allows a lot of corner cases to just 'work and do something' even if the specification says otherwise...


Using 'f' for floats should be legal from GLSL 1.2 onwards.

As far as I can tell, it's not supported at all on Mac or iPhone, so I'd advise avoiding it.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

This topic is closed to new replies.
