A few years ago I started building a procedural planet engine/renderer for a game in Unity, but after a couple of years I had to stop developing it due to lack of time. At the time I didn't know much about shaders, so I did everything on the CPU. Now that I have plenty of time and a better idea of what shaders can do, I'd like to resume development with the GPU in mind.
For the terrain mesh I'm using a cubed-sphere with chunked LODs. The way I calculate heights is rather complex, since it's based on a noise tree: leaf nodes are noise generators (Simplex, Value, Sine, Voronoi, etc.) and branch nodes are filters and modifiers (FBM, abs, neg, sum, ridged, billow, blender, selectors, etc.). To calculate the heights for a mesh you call `void CalcHeights(Vector3[] meshVertices, out float[] heights)` on the tree's root node, somewhere in a Unity script. This approach offers a lot of flexibility, but it also puts a lot of load on the CPU.

The first obvious step would be (I guess) to move all the generators to the GPU via compute shaders, then do the same for the filters and modifiers. But depending on the complexity of the noise tree, a single call to `CalcHeights` could then cause dozens of calls back and forth between the CPU and the GPU, which I'm not sure is a good thing.
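To make the tree concrete, here's a stripped-down sketch of the node interface in C#. Only the `CalcHeights` signature matches my real code; the class names and the noise function are placeholders:

```csharp
using UnityEngine;

// Base class: leaves are generators, branches are filters/modifiers.
public abstract class NoiseNode
{
    public abstract void CalcHeights(Vector3[] meshVertices, out float[] heights);
}

// Leaf example: a raw generator (Simplex, Value, Voronoi, ...).
public class SimplexNode : NoiseNode
{
    public override void CalcHeights(Vector3[] meshVertices, out float[] heights)
    {
        heights = new float[meshVertices.Length];
        for (int i = 0; i < meshVertices.Length; i++)
            heights[i] = Simplex(meshVertices[i]); // placeholder noise function
    }

    static float Simplex(Vector3 p) { /* ... */ return 0f; }
}

// Branch example: a modifier that sums its children's outputs.
public class SumNode : NoiseNode
{
    public NoiseNode[] Children;

    public override void CalcHeights(Vector3[] meshVertices, out float[] heights)
    {
        heights = new float[meshVertices.Length];
        foreach (NoiseNode child in Children)
        {
            child.CalcHeights(meshVertices, out float[] childHeights);
            for (int i = 0; i < heights.Length; i++)
                heights[i] += childHeights[i];
        }
    }
}
```

And this is the round-trip pattern I'm worried about if I port the tree node by node: each node type gets its own kernel, and every evaluation dispatches and then blocks on a readback so the parent node can continue on the CPU (the kernel and buffer names here are made up):

```csharp
// Naive per-node GPU evaluation: one dispatch plus one blocking readback
// per node, so a deep tree means many CPU<->GPU round trips per chunk.
float[] EvalNodeOnGpu(ComputeShader shader, string kernelName, Vector3[] vertices)
{
    int kernel = shader.FindKernel(kernelName);
    var input  = new ComputeBuffer(vertices.Length, sizeof(float) * 3); // float3 per vertex
    var output = new ComputeBuffer(vertices.Length, sizeof(float));     // one height per vertex
    input.SetData(vertices);

    shader.SetBuffer(kernel, "_Vertices", input);
    shader.SetBuffer(kernel, "_Heights", output);
    shader.Dispatch(kernel, Mathf.CeilToInt(vertices.Length / 64f), 1, 1);

    float[] heights = new float[vertices.Length];
    output.GetData(heights); // blocking readback: the CPU stalls here, once per node

    input.Release();
    output.Release();
    return heights;
}
```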
How should I go about this? Is there a good way to evaluate the whole tree on the GPU without all those intermediate round trips?
Thanks.