How to develop/test shaders for lower precision? (e.g. Wear OS)


Hello,

I'm currently making a game for my Galaxy Watch 4. I have a library of my own shaders that I've written over the years. Some work great on the watch; however, some don't work at all.

I think the issue has to do with precision. For example, my rand function doesn't work at all:

float rand(vec2 n) {
  return fract(sin(dot(n, vec2(12.9898,12.1414))) * 83758.5453);
}

Presumably this is because 83758.5453 is larger than whatever range floats have on my watch's GPU.
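
For comparison, a sine-free hash (based, I believe, on Dave Hoskins' “hash without sine”) keeps every intermediate value small, so something along these lines would presumably be friendlier to mediump/lowp hardware. The constants below are the commonly circulated ones, not anything tuned for my watch:

float rand_nosin(vec2 n) {
  // Sine-free hash: all intermediate values stay well below the fp16 maximum (~65504).
  vec3 p = fract(vec3(n.xyx) * 0.1031);
  p += dot(p, p.yzx + 33.33);
  return fract((p.x + p.y) * p.z);
}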

I've been able to “optimise” some of my shaders for my watch; however, this is extremely laborious, because deploying to the watch for every tiny tweak to the shader is slow.

Any tips on how I could force my PC to run shaders at lower precision, similar to my watch?

I don't mind if it's not an exact emulation of the watch; however, at the very least, I want it to behave in the same way in terms of precision.

Any help would be greatly appreciated.


I would expect you can get SDKs with proper emulators?
Seems the answer is yes: https://developer.android.com/training/wearables/get-started/creating

No, this doesn't work for testing.

I.e. the emulator works fine, and the shaders behave exactly as they do on PC. I'm assuming that the emulator doesn't bother to actually emulate the whole of OpenGL ES, but rather just uses whatever OpenGL is on the PC…

Or perhaps there's some way to configure the graphics emulation? - I wasn't able to find anything.

james_lohr said:
I'm assuming that the emulator doesn't bother to actually emulate the whole of OpenGL ES, but rather just uses whatever OpenGL is on the PC…

Probably. It would be a huge effort to emulate the GPU + API entirely in software. :(

You can use 16-bit floats on recent GPUs. Here is a related blog post. Personally I lack experience with this, because only my newest GPU can finally do it, and I still have to learn how to enable it myself.
But I guess this could help. No GPU should go below 16-bit floating point, I think (aside from int8, which is only used for machine learning stuff).
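
As a rough sketch, this is what the precision qualifiers look like in GLSL ES. The assumption is that the watch driver really maps mediump to fp16, while desktop drivers are allowed to treat the qualifiers as hints and compute everything at fp32, which would explain why the PC hides the problem (uSeed is just a placeholder input):

#version 300 es
// Fragment shaders in GLSL ES have no default float precision, so it must be declared.
// mediump may be as small as fp16 (largest finite value ~65504); lowp may be even smaller.
precision mediump float;

uniform vec2 uSeed;   // placeholder input, e.g. fragment coordinates
out vec4 fragColor;

void main() {
  // On real fp16 hardware the multiply can exceed the mediump range and overflow,
  // so the fract() trick falls apart. On desktop this usually runs at fp32 and looks fine.
  float s = sin(dot(uSeed, vec2(12.9898, 12.1414))) * 83758.5453;
  fragColor = vec4(fract(s), 0.0, 0.0, 1.0);
}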

About your example: issues with random number generation are expected. I use integer hash functions, and when I tried to port them from 32-bit to 64-bit integers they no longer worked properly; I got visible patterns in my procedural textures.
So it also breaks if precision increases. (That said, 16-bit floats top out at about 65504, so a constant like 83758 is already outside their range.)
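
For illustration (not the exact hash I use), here is one widely circulated 32-bit integer hash. Its constants rely on 32-bit wraparound, which is exactly why naively widening such a function to 64-bit changes the output:

// A common 32-bit integer hash (often attributed to Thomas Wang).
// Needs GLSL ES 3.00+ for uint and the bit operators.
uint wang_hash(uint seed) {
  seed = (seed ^ 61u) ^ (seed >> 16);
  seed *= 9u;
  seed = seed ^ (seed >> 4);
  seed *= 0x27d4eb2du;
  seed = seed ^ (seed >> 15);
  return seed;
}

// Map the hash to a float in [0, 1).
float hash_to_float(uint h) {
  return float(h) * (1.0 / 4294967296.0);
}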

In the end I worked around this as follows:

1) I set “precision highp float;” in my shader, so that it uses 32-bit floats even on OpenGL ES

2) Then, to improve performance, I first render to a smaller FBO texture and then scale it up to the screen (a sketch of both pieces is below)
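
Roughly, the two pieces look like this in GLSL. Assumptions: GLSL ES 3.00, a full-screen quad for the upscale pass, and uLowResScene / vUV are placeholder names for the small FBO's colour texture and the quad's UVs; the FBO creation itself happens on the CPU side and isn't shown:

#version 300 es
// 1) Force 32-bit floats so the noise/hash shaders behave as they do on PC.
precision highp float;

// 2) Upscale pass: sample the scene that was rendered into the smaller FBO
//    and stretch it over the whole screen. A linearly filtered sampler does
//    the scaling for free.
uniform sampler2D uLowResScene;   // colour attachment of the small FBO (placeholder name)
in vec2 vUV;                      // full-screen quad UVs in [0, 1]
out vec4 fragColor;

void main() {
  fragColor = texture(uLowResScene, vUV);
}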

