Procedural texture shader generation from input texture

The idea here is to generate a shader that produces an unlimited area similar to a given texture. So you put in grass, water, rock, dirt, etc. textures, and you get an auto-generated shader that makes unlimited non-repeating areas of them.

This is called exemplar-based texture synthesis, and was first done about 20 years ago. See this paper:

https://www.researchgate.net/publication/270486230_Exemplar-based_Texture_Synthesis_the_Efros-Leung_Algorithm

Here's an overview of more recent systems. This seems to have been a hot topic from the late 1990s until about a decade ago.

Conceptually, this is a lot like Adobe Photoshop's Content-Aware Fill. Here's how that works: https://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/patchmatch.pdf

You can think of it as generating an output image by merging small samples from various areas of the input image. What it's doing isn't really that complicated. If you were doing this in a shader, you'd have a starter image and some procedural mechanism to composite a pattern of small clips from that image. The hard part is figuring out the reassembly plan; executing the plan isn't that tough.
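For a concrete (if toy) picture of the "execute the plan" half, here's a GLSL sketch where the plan is just one random source offset per output cell; a real synthesizer would pick offsets that match neighboring clips and blend the seams. The uniform name and hash function are made up for illustration:

uniform sampler2D uExemplar; // the input texture being resampled

// Cheap per-cell hash into [0,1)^2 (illustrative, not high quality).
vec2 hash2(vec2 cell)
{
    return fract(sin(vec2(dot(cell, vec2(127.1, 311.7)),
                          dot(cell, vec2(269.5, 183.3)))) * 43758.5453);
}

vec4 sampleComposited(vec2 uv, float cells)
{
    vec2 cell  = floor(uv * cells);  // which clip this pixel falls in
    vec2 local = fract(uv * cells);  // position inside that clip
    vec2 src   = hash2(cell);        // random source corner in the exemplar
    // Assumes the sampler wraps, so src + local/cells can exceed 1.
    return texture(uExemplar, src + local / cells);
}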

So, has this technology ever made it to game background textures?

This one seems pretty nice: https://jcgt.org/published/0011/03/05/

Oh, nice! With code, even.

And this, for offline needs: https://github.com/EmbarkStudios/texture-synthesis

I've been working on this stuff a bit, but it's far from complete. I have 3D, 4D, and 5D simplex noise routines that work on the GPU. Here is raw 4D noise over a planet. I use the 4th dimension for altitude and compress it; since the planet is spherical, I can't just use one of the three normal dimensions for that effect.
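Roughly, the altitude-as-4th-dimension idea looks like this (assuming a GPU 4D simplex function such as Gustavson/McEwen's snoise(vec4) is linked in; the compression factor is just a tuning knob):

float snoise(vec4 v); // 4D simplex noise, defined elsewhere

float planetNoise(vec3 unitSpherePos, float altitude)
{
    // Compress altitude so the 4th axis varies much more slowly than
    // the three spatial axes carrying the spherical position.
    float w = altitude * 0.05;
    return snoise(vec4(unitSpherePos * 8.0, w));
}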

However, I find that noise really can't do everything I want it to, so more recently I started working on cellular-type shading. This basically puts random points around the planet. My current algorithm uses something like a "jitter grid", but it's based on a triangular grid since I have to wrap it around a sphere. I found that 3D cellular shading produced too many artifacts and had some limitations. For instance, if you have a surface and you are using 2D cellular shading, the surface will always pass through the points. With 3D cellular shading, that's rarely the case, so it's hard to get consistent patterns around a sphere. There is also the issue that your terrain passes at varying angles through the cells depending on what part of the planet you are on.
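For reference, the flat square-grid version of that jitter-grid idea looks like this (my actual version is on a triangular grid wrapped around the sphere, which is considerably messier):

vec2 hash2(vec2 cell)
{
    return fract(sin(vec2(dot(cell, vec2(127.1, 311.7)),
                          dot(cell, vec2(269.5, 183.3)))) * 43758.5453);
}

// Distance to the nearest jittered feature point, scanning the 3x3
// neighborhood of cells around p.
float cellular(vec2 p)
{
    vec2 base = floor(p);
    float best = 1e9;
    for (int y = -1; y <= 1; ++y)
    for (int x = -1; x <= 1; ++x)
    {
        vec2 cell  = base + vec2(float(x), float(y));
        vec2 point = cell + hash2(cell); // one jittered point per cell
        best = min(best, distance(p, point));
    }
    return best;
}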

Hence, I developed a 2D coordinate system based on an icosahedron. There are 10 diamond-shaped areas, with axes at the edges. I use the skew operation (stolen from simplex noise) to compensate for the fact that they are not even close to square. I also have what I call stitching to make area transitions seamless. This all gives me access to 7 random points around any pixel I'm shading; typically, I only need 2 or 3 of them at most.
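The skew itself is just the standard 2D simplex pair; mapped through it, a triangular grid becomes a square grid where floor() picks cells (the diamond stitching is the part specific to my layout and isn't shown here):

const float F2 = 0.36602540378; // (sqrt(3) - 1) / 2
const float G2 = 0.21132486540; // (3 - sqrt(3)) / 6

vec2 skew(vec2 p)   { return p + (p.x + p.y) * F2; }
vec2 unskew(vec2 p) { return p - (p.x + p.y) * G2; }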

Here is a test where I'm using it for rather plain Voronoi shading.

This was actually somewhat of a pain even though it looks simple. The main problem is that I need it to work over an Earth-sized planet, be seamless, and support fine detail. Hence the texture coordinates (for each diamond-shaped area) are in double precision. However, my first version of it was WAY too slow; it was down to 10 frames a second because I was doing a lot of double-precision calculations. In this new version, I quickly convert to a local coordinate system and do almost everything in float and int.
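The speedup boils down to doing the double-precision work once up front: split the big coordinate into an exact integer cell plus a small float remainder, then run everything else in float (names here are illustrative, not my actual code):

#extension GL_ARB_gpu_shader_fp64 : enable

// Split a high-precision coordinate into an integer cell index plus a
// small local offset that fits comfortably in float.
void splitCoord(dvec2 worldUV, double cellSize,
                out ivec2 cell, out vec2 local)
{
    dvec2 scaled = worldUV / cellSize;
    cell  = ivec2(floor(scaled));       // exact integer part
    local = vec2(scaled - dvec2(cell)); // tiny residual, safe in float
}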

Ideally you would want an editor that would let you combine algorithms, something like all the procedural shading tools out there. However, those mostly generate bitmaps, and I want something that generates code. That's still a long way off for me, though.

Gnollrunner said:
Here is a test where I'm using it for rather plain Voronoi shading.

Actually, if you still have low-frequency density volume information around, and ideally the density gradient, you could create one random planar UV parametrization patch per Voronoi cell, pointing outward from the mesh and well aligned to the local surface.
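Something like this, as a rough sketch (the gradient acts as the surface normal; all names are illustrative):

// Build a planar UV frame for one Voronoi cell from the density
// gradient at its feature point, then project the shaded position
// into that frame. A per-cell random rotation of (t, b) could be
// added to break up alignment between cells.
vec2 cellPatchUV(vec3 gradient, vec3 cellPoint, vec3 worldPos)
{
    vec3 n  = normalize(gradient); // local "outward" direction
    vec3 up = abs(n.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 t  = normalize(cross(up, n));
    vec3 b  = cross(n, t);
    vec3 d  = worldPos - cellPoint;
    return vec2(dot(d, t), dot(d, b)); // planar patch coordinates
}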

And then you could drive content-aware blending like in the first paper I posted above. With this you could apply highly detailed texturing as usual from a bunch of texture samples.

Quite some effort and cost, maybe, but I guess it looks much better than tri-planar mapping.

But I'm not sure about height. Because your Voronoi cells are 2D, we would get a similar problem as with height maps and a planar texture: stretching on cliffs and mirroring in caves. Not sure if a trick like your 4D noise could help this.
But maybe, due to the blending, it would be no problem if a surface is closely aligned to cell boundaries.

Edit: I just realized that the original paper from Eric Heitz about blending with histogram preservation very likely shows a similar idea of procedural free-shape texturing. Might be worth a look; it's surely in the references of the linked paper.

JoeJ said: This one seems pretty nice: https://jcgt.org/published/0011/03/05/

That's the most useful one for routine dirt, rocks, etc. All it does is divide the texture into hexes, spin the hexes randomly, fuzz the edges, and recombine. That can be done in a shader, and it's enough to break up annoying texture repeats without too much compute. The machine-learning-based approaches are for the more complicated problems.
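The cheapest member of that family, for comparison, is the triangle-grid variant from Heitz & Neyret's histogram-preserving blending paper: skew the UV into a triangle grid, give each grid vertex a random offset into the source texture, and blend the three surrounding samples. This sketch uses a plain linear blend; the hex paper refines it with hex tiles and histogram-preserving blending:

vec2 hash2(vec2 v)
{
    return fract(sin(vec2(dot(v, vec2(127.1, 311.7)),
                          dot(v, vec2(269.5, 183.3)))) * 43758.5453);
}

// Find the triangle-grid vertices around uv and their barycentric weights.
void triangleGrid(vec2 uv, out vec3 w,
                  out ivec2 v1, out ivec2 v2, out ivec2 v3)
{
    const mat2 toSkewed = mat2(1.0, 0.0, -0.57735027, 1.15470054);
    vec2 skewed = toSkewed * (uv * 3.464); // 2 * sqrt(3) scale
    ivec2 base = ivec2(floor(skewed));
    vec3 t = vec3(fract(skewed), 0.0);
    t.z = 1.0 - t.x - t.y;
    if (t.z > 0.0) { w = vec3(t.z, t.y, t.x);
                     v1 = base; v2 = base + ivec2(0, 1); v3 = base + ivec2(1, 0); }
    else           { w = vec3(-t.z, 1.0 - t.y, 1.0 - t.x);
                     v1 = base + ivec2(1, 1); v2 = base + ivec2(1, 0); v3 = base + ivec2(0, 1); }
}

vec4 stochasticSample(sampler2D tex, vec2 uv)
{
    vec3 w; ivec2 v1, v2, v3;
    triangleGrid(uv, w, v1, v2, v3);
    // Each vertex reshuffles the texture with its own random offset.
    // Production code would use textureGrad so mip selection doesn't
    // break at triangle borders.
    return w.x * texture(tex, uv + hash2(vec2(v1)))
         + w.y * texture(tex, uv + hash2(vec2(v2)))
         + w.z * texture(tex, uv + hash2(vec2(v3)));
}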
