I have a large heightmap, split into fixed-size patches (e.g. 32×32), inserted into a quad-tree for view culling on the CPU. Everything is rendered as instanced draws of a simple 4-vertex quad, with each patch having its own world transform (uniform scale for all). The tessellation of each patch depends on its height variance and screen-space size. Fair enough, that works mostly fine, although there are two problematic cases:
- Performance, when zoomed out far enough that nearly all patches are visible
- Loss of detail, when not in a top-down view or when zoomed in very close
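For reference, my per-patch tessellation selection looks roughly like the sketch below. The function name, the pixels-per-segment budget, and the variance cutoff are made-up placeholders, not my exact values; the point is just that the level is driven by projected size and clamped down for flat patches:

```cpp
#include <algorithm>
#include <cmath>

// Rough sketch of my current scheme: pick a tessellation level for one patch
// from its height variance and projected screen-space size. The constants
// (pxPerSegment, maxLevel, variance cutoff) are placeholder tuning values.
int chooseTessLevel(float heightVariance,   // world-space height variance of the patch
                    float screenSizePx,     // projected edge length in pixels
                    float pxPerSegment = 16.0f,
                    int   maxLevel = 6)
{
    // Enough segments that each one covers roughly pxPerSegment pixels.
    float segments = screenSizePx / pxPerSegment;
    int level = (int)std::ceil(std::log2(std::max(segments, 1.0f)));

    // Nearly flat patches don't need much detail regardless of screen size.
    if (heightVariance < 0.01f)
        level = std::min(level, 1);

    return std::clamp(level, 0, maxLevel);
}
```

The first problem above is exactly that when zoomed out, almost every patch passes culling and gets drawn, even though most of them project to only a few pixels.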
So I'm thinking of doing non-uniform sized patches like this image:
A quad-tree like this is different from mine, since it's built with the camera at the root, so I'd need to change that.
As for determining when to merge smaller patches into a larger one, is there a smart metric to use? The simplest one I can think of is measuring distance from the camera in absolute world units, but I'd prefer something more adaptive, so it can handle both very large and very small terrains without depending on such static boundaries. Any ideas?
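To be concrete, the naive distance metric I mean would be something like this; `k` is a made-up tuning constant, and normalizing by the patch's own size is the only concession to scale-independence I've come up with so far:

```cpp
// The naive metric: split a patch when the camera is closer than some
// multiple of the patch's world-space size, merge when farther away.
// k is an arbitrary tuning constant, which is exactly the kind of static
// boundary I'd like to avoid.
bool shouldSplit(float distToCamera, float patchSize, float k = 2.5f)
{
    return distToCamera < k * patchSize;
}

bool shouldMerge(float distToCamera, float patchSize, float k = 2.5f)
{
    // Slightly larger threshold than the split, so patches don't
    // flicker between levels right at the boundary.
    return distToCamera > 1.2f * k * patchSize;
}
```

Scaling by `patchSize` at least ties the boundaries to the terrain rather than absolute world units, but I suspect there's a better, more view-dependent criterion.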