
How to do a tiled map

Started August 10, 2019 04:34 PM
6 comments, last by Alberth 5 years, 5 months ago

Hello,

I have been reading and working through learnopengl.com, and it's fun. However, when trying to use it in a game, I have a few questions that I'd like to discuss.

This topic is about how to do a map with tiles in OpenGL. Think of a horizontal shooter with a tiled background, or a top-down game. It's all 2D, with a few layers on top of each other.

As an example (to make the discussion more concrete), assume my tiles are 100x100 pixels, which is also the size of a tile when displayed on screen, and I have a map of 300x200 tiles. For simplicity, let's assume a pixel has size 1x1 in OpenGL coordinates (it's all very scalable; many options there).

 

The first solution I could think of is to upload the entire map, 30000x20000 units, as 2 triangles per 100x100 tile, with each pair of triangles referencing the texture to display at that point in the map (i.e. the tile images are shared across the map). Advantage: it's simple. Disadvantage: the GPU has to skip most of the tiles, since the total map is much larger than the display. In OpenGL you just translate to the correct position to display that part of the map.
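A minimal sketch of that first option (all names, the atlas layout, and the UV scheme are my own assumptions, not from this thread): build one big CPU-side vertex array for the whole map, two triangles per tile, ready to be uploaded once with glBufferData(). Each vertex is an x,y position plus u,v into a shared tile atlas.

```c
#include <stdlib.h>

#define TILE_PX         100
#define MAP_W           300  /* tiles */
#define MAP_H           200  /* tiles */
#define VERTS_PER_TILE  6    /* 2 triangles */
#define FLOATS_PER_VERT 4    /* x, y, u, v */

/* Build the full static vertex buffer on the CPU. tile_ids holds one
 * tile-type per map cell; the caller frees the returned array. */
static float *build_map_vertices(const int *tile_ids, size_t *out_floats)
{
    size_t n = (size_t)MAP_W * MAP_H * VERTS_PER_TILE * FLOATS_PER_VERT;
    float *v = malloc(n * sizeof *v);
    size_t i = 0;
    for (int ty = 0; ty < MAP_H; ++ty) {
        for (int tx = 0; tx < MAP_W; ++tx) {
            float x0 = tx * TILE_PX, y0 = ty * TILE_PX;
            float x1 = x0 + TILE_PX, y1 = y0 + TILE_PX;
            /* Hypothetical atlas: tile_ids selects a column in a
             * one-row strip of tile images (placeholder UV math). */
            int id = tile_ids[ty * MAP_W + tx];
            float u0 = id * 0.1f, u1 = u0 + 0.1f;
            float quad[VERTS_PER_TILE][FLOATS_PER_VERT] = {
                { x0, y0, u0, 0.f }, { x1, y0, u1, 0.f }, { x1, y1, u1, 1.f },
                { x0, y0, u0, 0.f }, { x1, y1, u1, 1.f }, { x0, y1, u0, 1.f },
            };
            for (int k = 0; k < VERTS_PER_TILE; ++k)
                for (int c = 0; c < FLOATS_PER_VERT; ++c)
                    v[i++] = quad[k][c];
        }
    }
    *out_floats = n;
    return v;
}
```

For the example's 300x200 map this comes to 1,440,000 floats (about 5.5 MB), uploaded once and then only translated each frame.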

The second solution I could think of is to create the same kind of mesh, but only a little larger than the display. In other words, it covers only the displayed portion plus a bit extra to allow smooth scrolling. To move further, I guess you either re-upload the triangle data with different texture indices every once in a while, or you put the tile information in a separate texture indexed relative to, e.g., the top-left corner of the displayed area (that is, each triangle first queries that tile texture to find the tile type, and uses it to access the correct tile image for display). Advantage: far fewer triangles for the GPU (few skipped triangles, if any). Disadvantage: it's more complicated to implement.
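A sketch of the bookkeeping the second option needs (helper names are mine): given the camera position in pixels, compute which tiles intersect the view so you know which rows/columns entered the window and need their data (re-)uploaded.

```c
/* Half-open rectangle in tile coordinates: tiles [x0, x1) x [y0, y1). */
typedef struct { int x0, y0, x1, y1; } TileRect;

/* Tiles intersecting a camera rectangle given in pixels.
 * Assumes non-negative camera coordinates (map origin at 0,0). */
static TileRect visible_tiles(int cam_x, int cam_y,
                              int view_w, int view_h, int tile_px)
{
    TileRect r;
    r.x0 = cam_x / tile_px;
    r.y0 = cam_y / tile_px;
    r.x1 = (cam_x + view_w + tile_px - 1) / tile_px; /* exclusive */
    r.y1 = (cam_y + view_h + tile_px - 1) / tile_px;
    return r;
}
```

Comparing the rectangle from the previous frame with the current one tells you exactly which strip of tiles to refresh when the camera scrolls.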

 

Conceptually, both look like they should work. What I don't know is which one is preferable. The actual map size is likely a factor here, so at some point a larger map means the second solution becomes better (I think?). But where is that point, roughly?

Also, are there better options to display a tiled map in 2D?

As you said, it all depends on the map size. If you are planning on a huge map, then keeping only the necessary parts in memory is preferable. However, I don't think memory is ever a real problem here:

Let's say your map is 1000x1000 tiles. Then you need the same number of quads to render all the tiles. Now, assuming you only need your 2D world and texture coordinates, each quad is 96 bytes (6 vertices of 16 bytes). Therefore, the complete map only needs about 91 MB of VRAM for the vertices. For the actual textures, one 8k texture gives you 6561 unique 100x100 tiles, which costs you 256 MB of VRAM (RGBA8 format). All in all, memory is not an issue imo.
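The arithmetic above can be checked with a few one-liners (a sanity calculation only; the sizes are the example's numbers):

```c
/* Vertex data for a w x h tile map: 6 vertices per quad, each
 * vec2 position + vec2 UV = 4 floats = 16 bytes, so 96 B per quad. */
static long long map_vertex_bytes(long long w, long long h)
{
    return w * h * 6 * 4 * 4;
}

/* How many tile_px x tile_px tiles fit in a square atlas_px atlas. */
static long long tiles_per_atlas(long long atlas_px, long long tile_px)
{
    long long n = atlas_px / tile_px;
    return n * n;
}

/* RGBA8 = 4 bytes per texel. */
static long long atlas_bytes_rgba8(long long atlas_px)
{
    return atlas_px * atlas_px * 4;
}
```

map_vertex_bytes(1000, 1000) gives 96,000,000 bytes (about 91.6 MiB), tiles_per_atlas(8192, 100) gives 6561, and atlas_bytes_rgba8(8192) gives 268,435,456 bytes (256 MiB), matching the figures above.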

For actually rendering the world you have several options, depending on what kind of hardware you want to support. The nicest option would be to perform culling on the GPU and render indirectly; that way you only render the triangles you can see, without ever having to re-upload anything to the GPU. If you need to support older hardware, you can still keep all the vertices on the GPU and just upload the indices per frame (16 bytes per quad at 4 indices each), which comes to only around 220 quads for a 1920x1080 screen (20x11 tiles of 100 px, counting partially visible ones). You would then perform the culling on the CPU and determine which quads to render. Always rendering the whole map can also work, but that of course depends heavily on the map size, so one of the former approaches is preferable imo.
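The CPU-culling fallback could look like this sketch (the 4-vertices-per-tile VBO layout and all names are assumptions on my part): the whole map's vertices stay resident, and each frame you emit 4 indices per visible quad, e.g. for a triangle strip with primitive restart.

```c
#include <stdint.h>
#include <stdlib.h>

/* Emit 4 uint32 indices (16 bytes) per visible quad. The map VBO is
 * assumed to store 4 vertices per tile, row-major, map_w tiles wide.
 * Caller frees the returned array. */
static uint32_t *visible_quad_indices(int first_col, int first_row,
                                      int cols, int rows, int map_w,
                                      size_t *out_count)
{
    size_t n = (size_t)cols * rows * 4;
    uint32_t *idx = malloc(n * sizeof *idx);
    size_t i = 0;
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            uint32_t base = 4u * (uint32_t)((first_row + r) * map_w
                                            + (first_col + c));
            idx[i++] = base;
            idx[i++] = base + 1;
            idx[i++] = base + 2;
            idx[i++] = base + 3;
        }
    }
    *out_count = n;
    return idx;
}
```

For a 20x11 visible window that is 880 indices, about 3.5 KB uploaded per frame, which is negligible.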


I have not done a tiled map myself, but it reminds me of what I did in my LOD algorithm. I have not thought the following through to the end, but maybe the principle works.

- Partition the world map into manageable parts (chunks), say 2000x2000 units (a magic number; it can be anything) that you can load and unload depending on visibility. I assume you never have the whole 30000-unit map in sight; if you do, it must be LODed anyway.

- A chunk can then be divided into, say, 20x20 nodes of 100x100 units. Each node is a quad, as you said, with a sprite or texture attached, or texture coordinates into the world texture. Build it so that you have a fixed offset per chunk and per node within the chunk.

- Call glMultiDrawArraysIndirect(), because you only need a single quad of 4 vertices, drawn instanced for every node. That way you avoid an OpenGL call for every single quad. In the vertex shader you can use the instance index (gl_InstanceID) to calculate the node's world position and, from that, the texture coordinates.

This way you shouldn't have problems with the world/texture size, at the cost of a little preprocessing when loading the texture.
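The gl_InstanceID arithmetic from the last bullet, mirrored in plain C so it can be checked on the CPU (the 20x20-node, 100-unit sizes are the example's numbers; the function name is mine):

```c
#define NODES_PER_ROW 20   /* nodes per chunk row, 20x20 per chunk */
#define NODE_SIZE     100  /* world units per node */

typedef struct { float x, y; } Vec2;

/* World position of a node's lower-left corner, derived from the
 * instance index exactly as a vertex shader would derive it from
 * gl_InstanceID plus a per-chunk origin uniform. */
static Vec2 node_world_pos(int instance_id,
                           float chunk_origin_x, float chunk_origin_y)
{
    Vec2 p;
    p.x = chunk_origin_x + (instance_id % NODES_PER_ROW) * NODE_SIZE;
    p.y = chunk_origin_y + (instance_id / NODES_PER_ROW) * NODE_SIZE;
    return p;
}
```

In the shader the same two lines of modulo/divide arithmetic replace any per-node vertex data, so the only per-chunk upload is the origin.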

 

Thanks both for your replies. I don't plan on having a large map, as it's hand-made. If it gets bigger, it may be simpler to split it into "levels", which also gives nice entry points for restarting or resuming the game, I guess (but that's all future; one "level" first).

I didn't think of the map partitioning, but indeed, that can be a good option too.

For now, I think I'll go with my first option, since you both didn't consider it problematic. Store everything on the GPU, and just translate to the correct position for the display. Once that works, I can check how fast (or not fast) it is, and try another technique.

 

If you need a lot of custom effects on a passive background but want to stick with pure OpenGL, without extensions, for compatibility and cleaner code, you can pre-draw the background on the CPU with all the fancy effects and then send only the updated regions to the background texture when tiles change or become visible along the edge. This allows aligning content to whole pixels, without the random seams caused by inaccuracy, though pixel size may still vary globally. Make the texture at least twice as large as the screen and use it as a seamless cyclic buffer: fill the visible area, which may wrap around the edge, using dynamic texture coordinates on the background's full-screen quad. This also makes it easy to provide a pure software fallback for when no OpenGL driver is found or the driver is too buggy.
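The cyclic-buffer addressing described above boils down to a wrapping modulo (helper names are mine; this is a sketch of the idea, not a full implementation): scrolling never moves pixels, it only moves the read window, and a world pixel always maps to the same texel until it is overwritten.

```c
/* Euclidean wrap: always returns a value in [0, size), even for
 * negative v (C's % operator can return negatives). */
static int wrap(int v, int size)
{
    int m = v % size;
    return m < 0 ? m + size : m;
}

/* Texel in the cyclic background buffer holding world pixel (wx, wy),
 * for a buffer of buf_w x buf_h texels (at least 2x the screen size). */
static void cyclic_texel(int wx, int wy, int buf_w, int buf_h,
                         int *tx, int *ty)
{
    *tx = wrap(wx, buf_w);
    *ty = wrap(wy, buf_h);
}
```

The full-screen quad then just samples with texture coordinates offset by the camera position; the wrap happens for free with GL_REPEAT, and the same function tells the CPU side where to write freshly exposed tiles.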

30000 x 20000 at 8-bit RGBA (4 bytes per pixel) is almost 2.4 GB uncompressed, if my calculation doesn't fail me after the third Cold One. It may not be the most performant solution, but it would work. It might be a good idea to check the maximum texture size, though. Most graphics cards will (probably hopefully maybe probably not) support something this large.

I suggest checking glGetIntegerv() with GL_MAX_TEXTURE_SIZE and/or GL_MAX_TEXTURE_BUFFER_SIZE, and using compression.

 

Edit: just checked my GTX970 clone, Linux driver 418.74:


Maximum texture size: 16384.
Maximum texture buffer size: 134217728.

 

You can try and use a GL_PROXY_TEXTURE_2D target to see if it fits.

 

'Nother edit:


GLuint t;
glCreateTextures( GL_TEXTURE_2D, 1, &t );
glTextureStorage2D( t, 1, GL_RGBA8, 30000, 20000 );

results in


Debug message (1281): Source: API; Type: Error; Severity: high; GL_INVALID_VALUE error generated. Invalid texture dimensions.

It definitely works with 8 and 16k textures ...


Whoo, you guys are so far ahead of me.

@Dawoodoz Drawing the image on the CPU first, eh? Sounds like a good idea to keep in mind. I don't think it's needed at this time.

@Green_Baron 30000 * 20000 * 4 bytes = 2,400,000,000 bytes, so yeah, 2.4 GB. I don't have much of an idea how much memory a video card has, but at the GB range I do think twice before going there :)

I was going for the tiles approach, i.e. each 100x100 rectangle (quad, in your terms) will have a position pointing to the sprite to draw there, like Batzer described. So it's one sprite sheet and a bunch of rectangle coordinates only.

 

I am going to have so much fun exploring the options of OpenGL :D

Thanks all

This topic is closed to new replies.
