Previously (with my DX9 renderer) I had each letter as a separate texture; at runtime these letter textures were applied/changed on a wall made of fixed squares, like the old-fashioned split-flap panels airports used to announce arrivals/departures.
Now I want to use one large texture containing all the letters and then play with the texture coordinates on the square to display the right letter.
I have used this tool to generate such a texture + UV atlas. Pretty nice.
My question is: do I need a separate vertex buffer for each square (4 vertices)? A lot of text would mean a huge pile of vertex buffers to generate at runtime, and it also means I'd have to translate each one into the right position.
You can use one large vertex buffer: generate all the text quads on the CPU into it, then upload and render with a single draw call. Uploads and draw calls are what you want to limit most, so you may be able to cover your whole GUI with just a few such draws. Rebuilding this from scratch every frame should not cause performance problems, but it makes sense to display with a latency of one or more frames so the data transfer does not cause idle waiting.
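A minimal CPU-side sketch of the idea above, building every letter quad into one array that would then be uploaded to a single vertex buffer and drawn in one call. The `Vertex` layout, the function names, and the assumption of a 16x16 atlas grid ordered by ASCII code are all illustrative, not from the thread:

```cpp
#include <vector>
#include <string>

// One vertex: screen position plus texture coordinates into the letter atlas.
struct Vertex { float x, y, u, v; };

// Append the six vertices (two triangles) for one character quad.
// cellU/cellV is the top-left atlas coordinate of this letter's cell,
// cellW/cellH the size of one cell in texture space.
static void appendQuad(std::vector<Vertex>& out,
                       float x, float y, float w, float h,
                       float cellU, float cellV, float cellW, float cellH)
{
    Vertex tl{x,     y,     cellU,         cellV};
    Vertex tr{x + w, y,     cellU + cellW, cellV};
    Vertex bl{x,     y + h, cellU,         cellV + cellH};
    Vertex br{x + w, y + h, cellU + cellW, cellV + cellH};
    out.push_back(tl); out.push_back(tr); out.push_back(bl);  // triangle 1
    out.push_back(tr); out.push_back(br); out.push_back(bl);  // triangle 2
}

// Build one vertex array for a whole string: upload once, draw once.
// Assumes (hypothetically) a 16x16 grid atlas indexed by ASCII code.
std::vector<Vertex> buildText(const std::string& text, float x, float y,
                              float charW, float charH)
{
    std::vector<Vertex> verts;
    verts.reserve(text.size() * 6);
    const float cellW = 1.0f / 16.0f, cellH = 1.0f / 16.0f;
    for (unsigned char c : text) {
        float u = (c % 16) * cellW;
        float v = (c / 16) * cellH;
        appendQuad(verts, x, y, charW, charH, u, v, cellW, cellH);
        x += charW;  // advance the pen position for the next letter
    }
    return verts;
}
```

The returned array maps directly onto a dynamic vertex buffer filled once per text rebuild rather than once per letter.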
Thanks JoeJ, do you think it's silly to use something like 60 to 100 vertex buffers? It's not going to be a slaughter, right? (Dealing with two workspaces, shader and CPU, is a bit confusing; I'd rather do everything within the confines of the cpp file.)
Calin said: Thanks JoeJ, do you think it's silly to use something like 60 to 100 vertex buffers? It's not going to be a slaughter, right? (Dealing with two workspaces, shader and CPU, is a bit confusing)
No, and I constantly ignore my own performance advice myself : ) It's just good to know, so if issues pile up we have an idea about potential reasons. (Also, my comment about latency is probably more confusing than helpful.)
But what's surely not ideal is sending one matrix and one draw call per letter. If that's what you do and it becomes slow, instancing might help a lot, but IDK. Building the text meshes on the CPU should be no extra work for you; it stays the same, since calculating a position per letter is necessary in either case.
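If the per-letter-matrix path ever did become the bottleneck, instancing would replace each matrix with a small per-instance record while the quad geometry is shared. A sketch of what that per-instance data could look like (the `LetterInstance` layout and the ASCII-ordered 16x16 atlas are assumptions for illustration):

```cpp
#include <vector>
#include <string>

// Per-instance data for one letter: where the shared quad goes on screen
// and which atlas cell to sample. This replaces a full per-letter matrix.
struct LetterInstance {
    float screenX, screenY;  // quad position on screen
    float atlasU,  atlasV;   // top-left UV of the letter's atlas cell
};

// Fill the instance buffer for a string; one instanced draw renders it all.
// Assumes (hypothetically) a 16x16 atlas grid ordered by ASCII code.
std::vector<LetterInstance> buildInstances(const std::string& text,
                                           float x, float y, float advance)
{
    std::vector<LetterInstance> out;
    out.reserve(text.size());
    for (unsigned char c : text) {
        out.push_back({x, y, (c % 16) / 16.0f, (c / 16) / 16.0f});
        x += advance;  // move the pen right for the next letter
    }
    return out;
}
```

The vertex shader would then offset the shared unit quad by `screenX/screenY` and add `atlasU/atlasV` to its base UVs.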
If I were you, I would integrate ImGui and use it to render text output. It's not good enough for a final GUI, but for game design it is, and it's super helpful for debug output and for interacting with the running program to test all kinds of stuff.
@joej No need to use matrices, I think. You can bake the on-screen position of each letter in at creation time; if you need to move text on screen, you throw the vertex buffers in the bin (release them) and run the letter-creation algorithm again. I don't need smooth text movement, so the creation algorithm only runs every now and then.
@taby Is the ordering of the characters within the image/texture the same as the ordering of C++ char codes?
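Whether the atlas cells match the char-code order depends entirely on the generator tool; if they don't, a small remap table decouples the character code from the atlas cell index. A sketch (identity mapping as a placeholder; the real table would be filled from the tool's ordering):

```cpp
#include <array>

// Maps a char code (0..255) to an atlas cell index.
// Identity here is a placeholder for an ASCII-ordered atlas; if the
// generator uses a different order, fill this from its layout instead.
std::array<int, 256> makeAtlasRemap()
{
    std::array<int, 256> remap{};
    for (int c = 0; c < 256; ++c)
        remap[c] = c;  // placeholder: cell index equals char code
    return remap;
}

// Look up the atlas cell for one character via the remap table.
int charToCell(const std::array<int, 256>& remap, unsigned char c)
{
    return remap[c];
}
```

With the remap in place, the rest of the text-building code never needs to know how the atlas is actually laid out.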