How to innovate the first-person shooter (FPS) genre?


Ok, makes sense now.

RmbRT said:
The thing about just using a LOD image is that it only represents magnitudes at the center of each sample. But I want the samples themselves to be irregularly distributed, so that I have peaks and valleys that are not on a uniform grid.

If you look at my images showing the LOD pyramid of the eroded heightmap, do you think the terrain is somehow snapped or aligned to the global grid? And does your impression change if you go down from high to low detail?

Or, for a better example: If you look at a digital real-world photo, do you notice its pixels form a grid? And if you reduce it to just 8 x 8, does it feel like the grid constrains the content of the image in any way?

If the answer to any of those questions is no (although technically it's a yes), it means you underestimate the option of generating content directly in a grid. It works well if done right, and it's by far the easiest way to do it.
Say, for example, we want to draw a point to the grid, but the point is at a subpixel position of (0.3, 0.8). How do we do this? Well, we can do the inverse of a simple bilinear filter, distributing the color of the point to a 2x2 region of pixels.
What i mean is: the domain, whether it's regular or not, does not hinder you from expressing any content. An image can give the impression of a circle, although its domain is a regular grid. And more importantly: the resolution does not change this. It limits the amount of detail you can show, but you can still depict and create the same content.
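In code, the splat could look roughly like this (a minimal sketch, names are just placeholders):

#include <cstdio>

// Splat a value at a subpixel position into a 2D grid by distributing it to the
// surrounding 2x2 pixels with bilinear weights (the inverse of a bilinear fetch).
// The grid is row-major, width x height floats.
static void SplatBilinear (float *grid, int width, int height, float x, float y, float value)
{
    int ix = (int)x, iy = (int)y;       // top-left pixel of the 2x2 region
    float fx = x - ix, fy = y - iy;     // fractional position inside that region
    if (ix < 0 || iy < 0 || ix + 1 >= width || iy + 1 >= height)
        return;                         // skip points too close to the border for this sketch
    grid[ iy      * width + ix    ] += value * (1 - fx) * (1 - fy);
    grid[ iy      * width + ix + 1] += value *      fx  * (1 - fy);
    grid[(iy + 1) * width + ix    ] += value * (1 - fx) *      fy;
    grid[(iy + 1) * width + ix + 1] += value *      fx  *      fy;
}

int main ()
{
    float grid[4 * 4] = {};
    SplatBilinear (grid, 4, 4, 0.3f, 0.8f, 1.0f);  // the (0.3, 0.8) example
    for (int y = 0; y < 4; y++, printf ("\n"))
        for (int x = 0; x < 4; x++)
            printf ("%.2f ", grid[y * 4 + x]);
    return 0;
}

The four weights always sum to one, so nothing is lost, no matter where the point lands inside the cell.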

I think that this approach will have different results compared to normal multi-layer noise with evenly spaced sampling & power frequencies.

But i think you actually could use multi-layer noise, and you would still get different results than other people also using multi-layer noise.

What i mean is, you seemingly ignore standard and established methods by intent. But they work. You should use them instead of ignoring them, and build on top of the work your forefathers have already done.
If you don't, you'll just come up with the same things anyway, while wasting time on reinventing wheels.
When i was in my mid-twenties i still felt mentally immortal, having all the time in the world to make my visions a reality.
But i tell you, that's totally wrong. This here is a race against the clock. Ignoring former work is a luxury you can only afford if you're already 100% sure you know a better way.

Ok, you have been warned. Now go exploring the unknown, knowing you're not the first who went there… : )

But now, coming back to this:

But I want the samples themselves to be irregularly distributed

Have you considered particles? I think that besides heightmaps and meshes, that's the third major option, and for me personally it's the one i currently use the most. I give the particle some shape, e.g. a box or sphere, or a procedural polyhedron, but it can be a whole mesh as well. Then i voxelize those shapes into a density or SDF volume, and generate an iso-surface from that. Some examples:

Tried to make a rock out of box shaped particles. Some iterative shrinking and constraints to adjacent particles gave the cracks.

Here is a section from the fluid particles seen in the background.

Some terrain again. But you can see the interior is modeled using larger particles to save resources, and it's all boxes, including the details on the surface.

You surely know IQ's demos showing Disney-like scenes at high quality, modeled from SDF primitives. That's basically the same stuff.

Rasterizing such primitives to a 2D heightfield is surely an option.
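A minimal sketch of what rasterizing such primitives to a heightfield could look like (spheres only, everything here is just illustrative):

#include <vector>
#include <cmath>
#include <algorithm>

struct SpherePrim { float x, y, z, radius; };  // a particle: position and size

// Rasterize sphere primitives into a 2D heightfield: for every texel, take the
// highest point of any sphere lying above that (x, y) location.
static std::vector<float> RasterizeSpheres (const std::vector<SpherePrim> &prims,
    int width, int height, float ground = 0.0f)
{
    std::vector<float> heights (size_t (width) * height, ground);
    for (const SpherePrim &p : prims)
    for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        float dx = x - p.x, dy = y - p.y;
        float d2 = p.radius * p.radius - (dx * dx + dy * dy);
        if (d2 > 0.0f)  // texel lies inside the sphere's footprint
        {
            float top = p.z + std::sqrt (d2);  // upper surface of the sphere here
            heights[size_t (y) * width + x] = std::max (heights[size_t (y) * width + x], top);
        }
    }
    return heights;
}

A real version would only loop over each sphere's bounding box, and boxes or other primitives would just need a different footprint test.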


JoeJ said:
But i think you actually could use multi-layer noise, and you would still get different results than other people also using multi-layer noise. What i mean is, you seemingly ignore standard and established methods by intent. But they work. You should use them instead of ignoring them, and build on top of the work your forefathers have already done.

Yeah, I do ignore that by intent, because I tried normal multi-layer noise years ago and it wasn't satisfactory. Maybe if my funky approach fails, I'll go back to a more regular sample distribution again. These are not my first steps on the journey to procedural landscapes.

JoeJ said:
while wasting time on reinventing wheels. When i was in my mid-twenties i still felt mentally immortal, having all the time in the world to make my visions a reality.

You say this to a guy who spent 8 years working on a language and is currently building an ISA, lol. However, my youthful vigour has also declined over the past years, but that just means I'll take longer to finish.

JoeJ said:
Have you considered particles? I think that besides heightmaps and meshes, that's the third major option, and for me personally it's the one i currently use the most. I give the particle some shape, e.g. a box or sphere, or a procedural polyhedron, but it can be a whole mesh as well.

Your approach is pretty interesting, very straightforward. I think what I was going for is the heightmap / 2D grid equivalent of what you're doing, but without the explicit geometrical approach. The math may certainly end up being quite similar. Basically I am trying to come up with some a priori procedural mountain shapes to use as particles. And then add traditional noise to make those more lifelike/detailed.

JoeJ said:
So the question is likely: At which higher resolution does the simple heightmap look better than your jittered grid, and at which lower resolution does the jitter add enough variety to be worth it? Hard to say ofc.

I think that for the very large-scale features, it is probably best to have them on a non-uniform grid, it should give much more variety to the macrofeatures when I really crank down the LOD.

JoeJ said:
I would propose fbm noise, which is pretty good to model mountains, clouds, etc. The idea is to recursively add higher frequencies with lower amplitudes. Really useful, simple and fast.

I know this as perlin noise. This is the kind of noise I was going for, but I don't like how the macrofeatures are all uniformly spaced. I know that you can offset them quite well with the higher frequencies, but the overall spacing still stays the same and you can't have drastic deviations from that grid. I want to try having these distorted samples of the macro features to see how much more interesting this kind of noise can become.

I'm currently basically only working on noise layer 1, which is explicitly designed to only cover macrodetails. The other noise layers will add interesting details and make everything a bit more uneven. Then there will also be a layer 0 which just adds basically continental-scale elevation changes, a super low-frequency smooth noise.

The problem I have with regular multi-frequency noise is that you basically need to apply most of those frequencies to still maintain the general shape of the terrain. And the more frequencies you remove, the more the grid becomes apparent. I want to make it so that the detailed stuff is optional and you only need to apply the first 2-3 layers and you can still generate a somewhat accurate very low-detail mesh that expresses the general shape. Mountain tops should not move their location, slopes should be where they are, etc. Removing lower frequencies should just remove some of the bumpiness etc.

I don't want to rasterise/quantise faraway mountains at high detail and then convert them to a LOD mesh again. I'm basically starting at the usage requirements of the mesh, not the optical requirements, in this departure from traditional layered noises. And trying to retroactively make it look roughly like full-detail layered noise again.

I think this is where all the confusion came from.

JoeJ said:
If you look at my images showing the LOD pyramid of the eroded heightmap, do you think the terrain is somehow snapped or aligned to the global grid? And does your impression change if you go down from high to low detail?

That's the crux. I will never have the final, fully-processed version of faraway terrain in memory, yet the LOD must look accurate across multiple LOD levels. And that's why the macrofeatures must be geometry-based, not raster-based. I don't want to compute a 1024×1024 sample region on the fly just to get the general outline of a mountainside. My approach may also give undesirable results once I get into LOD switching and all that. But I think that LOD switching or on-the-fly generation is much simpler if I start with a geometric shape as opposed to having to reconstruct the shape by sampling a function.

Walk with God.

RmbRT said:
You say this to a guy who spent 8 years working on

Yeah, and i feel kinda bad for expressing myself the way i do. But there is no disrespect or attempt to push you in other directions. I just have the impression you make some things harder than they need to be. Which happens to me as well pretty often.

RmbRT said:
I know this as perlin noise. This is the kind of noise I was going for, but I don't like how the macrofeatures are all uniformly spaced.

I hear you, but two important things:

Noise such as Perlin's MUST have those uniform distribution properties.
If you write a noise function which generates peaks at a sparse and irregular distribution, it might be more interesting, but it also becomes unpredictable and uncontrollable. In fact, it becomes useless. That's maybe a dilemma, but i'll try to explain the reasons:
If your noise function generates random characteristic features, those features are still not content. No mountains, no cracks through a rock, no tree or anything. Let's say the features are some ‘blotch’.
Because of the irregular blotches, we can no longer mix your interesting noise functions, because if we do, the blotches are still visible. They are not the content we desire, but they hinder us from creating it, since they remain visible and keep popping up where we don't want them. Our intended ‘interesting features’ become artifacts in practice.
Thus it's much better to design a noise function which shows randomness but with a uniform distribution. Using that, we can mix it with other stuff so we come closer to the desired results. We know the noise will give us values in the range of 0 to 1, and the average will be 0.5, for example. That's boring, but controllable and predictable. It's a useful building block, and only this way does modeling multiple frequencies become practically possible at all.
So, the truth is: ‘All you can expect from randomness is just that: randomness,’ i would say. And that's really the only purpose of those noise functions. They model randomness within an expected distribution of frequencies and amplitudes, not more.

The second thing is that Perlin noise is not fbm noise.
Perlin gives you just one frequency. Fbm is about adding multiple frequencies together, so you get details at any scale. And that's really the point where 1. the limitations and constraints mentioned above become essential, and 2. those basic noise functions start to become useful.
Perlin is a perfect fit for fbm, but the two are completely different ideas.
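For reference, fbm is basically just this (a minimal sketch; noise2D stands in for any smooth base noise in 0..1, e.g. Perlin or a 2D version of the value noise i post further down):

// Assumed to exist: any smooth base noise returning values in 0..1,
// e.g. Perlin noise or a 2D version of value noise.
float noise2D (float x, float y);

// fbm: sum several octaves of the base noise, doubling the frequency and halving
// the amplitude for each octave. 'octaves' controls how much fine detail is added.
float fbm2D (float x, float y, int octaves)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f, norm = 0.0f;
    for (int i = 0; i < octaves; i++)
    {
        sum  += amplitude * noise2D (x * frequency, y * frequency);
        norm += amplitude;
        frequency *= 2.0f;  // lacunarity of 2
        amplitude *= 0.5f;  // gain of 0.5
    }
    return sum / norm;      // normalize back to roughly 0..1
}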

I guess you know those things, but maybe you missed the potential of those tools while being disappointed about not achieving some larger goal which randomness can't deliver.

RmbRT said:
I know that you can offset them quite well with the higher frequencies, but the overall spacing still stays the same and you can't have drastic deviations from that grid.

I tried zooming out my fbm noise so that even the grid of the lowest frequency should become visible:

I can not see the grid, although my noise function uses a regular grid, no advanced skewed stuff.

But i agree the peak distribution feels pretty uniform (as expected and required).
I also see some longer ridges of connected peaks btw. A typical property of many noise functions.

If you want to break this uniform distribution, i would use a lower frequency signal to partially flatten the terrain.
There would be much more variety after that. Then i'd look for another improvement, implement it as well as i can, then continue with another idea.
Imagine those node graphs artists use to model materials, shaders, procedural effects, etc. They often use a hundred nodes. That's a lot of complexity and parameters to tweak. And that's where the good results finally come from. Perlin or fbm are just building blocks. To make it interesting, you still need to be creative with combining a lot of things.
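As a sketch of the ‘flatten with a lower frequency signal’ idea (reusing the hypothetical noise2D / fbm2D from the sketch above):

// Break the uniform peak distribution by multiplying the fbm terrain with a very
// low frequency mask: where the mask is near 0 the terrain flattens out, where it
// is near 1 the peaks survive. The smoothstep adds some contrast to the mask.
float TerrainHeight (float x, float y)
{
    float detail = fbm2D (x, y, 6);                 // the usual fbm mountains
    float mask   = noise2D (x * 0.05f, y * 0.05f);  // a much lower frequency signal
    mask = mask * mask * (3.0f - 2.0f * mask);      // smoothstep for contrast
    return detail * mask;                           // partially flattened terrain
}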

However, there are still some things you can do on the noise function itself. E.g., if i zoom in so the low-level grid fits my image, there is no uniform distribution pattern at all anymore:

Or, i can try to introduce some flow, using the gradient of the noise to offset my samples:

This starts to look ‘alien and disgusting’ pretty quickly, but some kind of fluid appearance is given. Zooming in a bit:

Here i've used a similar idea of tracing a procedural vector field for a blur effect:
(Interestingly, it looks like it has some lighting calculations, but there aren't any.)

Sadly i have no image showing a combination of multiple such patterns.
But imagine: We build a heightmap from alien fbm noise, and sometimes we grow those blurry blotches out of the surface. In other places we grow some swirls:

And all sorts of shit. We tint it in Atari rainbow colors with weird patterns. Mandelbulbs grow everywhere.
We can zoom in, discovering an entire universe. We can zoom out, discovering an entire universe. Parents take the game away from their kids, confusing it with digital LSD.

I talk about the retro gamedev vision of procedural generation, which maybe is exclusive to the Atari generation, so guys like me. But yours does not seem too different at its core.
So i tell you: Yes, you have discovered the weaknesses of those procedural patterns correctly, but you may have missed their strengths.

You force me to say this. Usually i say ‘replace this retro crap with simulation!’. But so far you have not shown interest in the simulation idea, and thus those random patterns might be all you have.

(Reading further, i notice you actually agree, and your plan is to follow this top-down approach of adding levels of smaller details.)

RmbRT said:
The problem I have with regular multi-frequency noise is that you basically need to apply most of those frequencies to still maintain the general shape of the terrain. And the more frequencies you remove, the more the grid becomes apparent.

Sounds like you want to low-pass-filter some frequencies instead of removing them.
Never tried this. You could blur the signal, but then you need to have intermediate results of nearby cells in memory. Or you could multi-sample the noise function, which is easy but slow.
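The multi-sampling option could look roughly like this (again just a sketch with the placeholder noise2D from above; easy, but every call costs 9 noise evaluations):

// Approximate a low-pass filtered (blurred) noise value by averaging several samples
// around the query point. 'radius' controls how much high frequency content is
// smoothed away.
float NoiseLowPass (float x, float y, float radius)
{
    float sum = 0.0f;
    for (int j = -1; j <= 1; j++)
    for (int i = -1; i <= 1; i++)
        sum += noise2D (x + i * radius, y + j * radius);  // 3x3 box of taps
    return sum / 9.0f;
}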

I want to make it so that the detailed stuff is optional and you only need to apply the first 2-3 layers and you can still generate a somewhat accurate very low-detail mesh that expresses the general shape. Mountain tops should not move their location, slopes should be where they are, etc. Removing lower frequencies should just remove some of the bumpiness etc.

But that's exactly what you should get?
Ok, i'll try this. Using 3 layers, but i add an option to scale each amplitude with an individual slider…

Using just the top level, the image covers 5×5 grid cells:

adding 2nd level:

3rd:

I've used those values to scale the amplitude for each level:

It works exactly like you wanted, no?
So if you get worse results, we should discuss your method in detail. Maybe something is wrong or can be changed for higher quality.

… i think i know what your problem is.

Likely you use a bilinear filter, so your noise shows discontinuities at grid edges, like with GPU textures in games.
Consequently your result is not smooth if you scale it up.
You see the edges of the grid, and that's your problem, right?

But you can't see those edges in my first images showing level 1, although there is a grid of 4 x 4 lines.

If you want the same, i'm using a cubic filter, which needs 3x3 samples so is twice as expensive.
But if you want to magnify, it's worth the cost and the cheapest option i know.

Here's example noise code i've used to make the images above:

#include <cstdint>
#include <cmath>
// also needs the Sony/Bullet Vectormath headers for Vectormath::Aos::Vector3

namespace C33
{
	using Vec3 = Vectormath::Aos::Vector3;

	// per-element square of a vector
	static inline const Vec3 square (const Vec3 &v)
	{
		return mulPerElem(v,v);
	}

	// hash an integer to a float in [0,1) (pcg style)
	static inline float randF (const uint32_t v)
	{
		uint32_t state = v * 747796405u + 2891336453u;
		uint32_t word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
		word = (word >> 22u) ^ word;

		return float(word) / float(0x100000000LL);
	}

	// Smooth 3D value noise: hashes the 3x3x3 lattice points around p and blends
	// them with quadratic B-spline weights, so the result is continuous across
	// cell borders (no bilinear creases).
	static float ValueNoise3D ( const float *p )
	{
		float iX = floorf(p[0]);
		float iY = floorf(p[1]);
		float iZ = floorf(p[2]);
		int sX = (int)iX;
		int sY = (int)iY;
		int sZ = (int)iZ;

		// per-axis filter weights for the 3 lattice rows around p
		Vec3 fw[3];
		{
			Vec3 fx = Vec3(p[0]-iX, p[1]-iY, p[2]-iZ) + Vec3(.5f); // 0.5 - 1.5
			fw[0] = mulPerElem (Vec3(.5f), square(Vec3(1.5f) - fx));
			fw[1] = Vec3(.75f) - square(fx - Vec3(1.0f));
			fw[2] = mulPerElem (Vec3(.5f), square(fx - Vec3(0.5f)));
		}

		// weighted sum of hashed lattice values over the 3x3x3 neighborhood
		float sum = 0;
		for (int k=0; k<3; k++)
		for (int j=0; j<3; j++)
		for (int i=0; i<3; i++)
		{
			float w = fw[i][0] * fw[j][1] * fw[k][2];
			int Z = sZ + k;
			int Y = sY + j;
			int X = sX + i;
			int h = (((Z&0x3FF)<<20) | ((Y&0x3FF)<<10) | (X&0x3FF)); // 10 bits per axis
			float val = randF(h);
			sum += val * w;
		}
		return sum;
	}
}

I'm sure this should help, so ask if something isn't clear.
The cubic filter works for many things ofc. It's more blurry, but smooth. At high contrast, grid artifacts can still become visible, e.g. if we calculate reflections for the surface. But mostly it's good enough.

JoeJ said:
If you want the same, i'm using a cubic filter, which needs 3x3 samples so is twice as expensive. But if you want to magnify, it's worth the cost and the cheapest option i know.

There is also quadratic interpolation (aka square-square interpolation), which is fairly easy to implement and also avoids the “creasing” artifacts of bilinear. This is what I'm using for upscaling in my terrain noise system and it works well. It's essentially applying two bilinear steps, where one step has a shifted grid.
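A rough sketch of one 2× upscaling step in that spirit (the classic 9-3-3-1 “square-square” weights; just an illustration, not my actual code):

#include <vector>
#include <algorithm>

// One 2x upscaling step of a heightfield with "square-square" interpolation: every
// child sample is a 9/16, 3/16, 3/16, 1/16 blend of the nearest 2x2 parent samples,
// i.e. a bilinear fetch on a grid shifted by a quarter texel.
static std::vector<float> UpsampleSquareSquare (const std::vector<float> &src, int w, int h)
{
    std::vector<float> dst (size_t (w) * 2 * h * 2);
    auto fetch = [&] (int x, int y)  // clamp-to-edge parent fetch
    {
        x = std::clamp (x, 0, w - 1);
        y = std::clamp (y, 0, h - 1);
        return src[size_t (y) * w + x];
    };
    for (int cy = 0; cy < h * 2; cy++)
    for (int cx = 0; cx < w * 2; cx++)
    {
        int   px = (cx & 1) ? cx / 2 : cx / 2 - 1;  // left parent of this child
        int   py = (cy & 1) ? cy / 2 : cy / 2 - 1;  // top parent of this child
        float fx = (cx & 1) ? 0.25f : 0.75f;        // weight toward the right parent
        float fy = (cy & 1) ? 0.25f : 0.75f;        // weight toward the bottom parent
        dst[size_t (cy) * w * 2 + cx] =
              fetch (px,     py    ) * (1 - fx) * (1 - fy)
            + fetch (px + 1, py    ) *      fx  * (1 - fy)
            + fetch (px,     py + 1) * (1 - fx) *      fy
            + fetch (px + 1, py + 1) *      fx  *      fy;  // 9-3-3-1 in the interior
    }
    return dst;
}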

Interesting. The result looks like Catmull-Clark subdivision, showing the same subtle artifacts on the resulting normals. I get them from the cubic filter as well.

Thinking of it, there is a cheaper continuous filter requiring only a 2x2 kernel, but assuming the signal is flat in the middle of a texel. It would look like this:

It would work by squaring the weights of a bilinear filter.
But it would create wobbles around a slope of constant angle like drawn on the right. Surely useless here.

Btw, i have also tried the cubic filter in the erosion sim, to fetch the sediment in its advection step.

Here's cubic result:

And here bilinear:

Bilinear remains more natural.

But the cubic is an interesting choice. Due to the blur, sediment diffuses, making the material ‘softer’.
Rivers can form more easily, are better defined and deeper. The overall shape is smoother, but ridges are better defined as well.
Sadly it also causes this unnatural alien effect.
But i'll experiment with mixing both options depending on amount of water or sediment…
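For context, the advection step is roughly of this shape (a simplified sketch, not my actual sim code; SampleFiltered is the placeholder where the bilinear vs. cubic fetch gets swapped):

// Assumed helper: fetch a field at a non-integer position with some filter.
// Swapping its implementation between bilinear and the smooth 3x3 filter is
// exactly what changes the look of the results above.
float SampleFiltered (const float *field, int w, int h, float x, float y);

// Semi-Lagrangian advection of the sediment field: for each cell, trace backwards
// along the local water velocity and fetch the sediment from there.
void AdvectSediment (const float *sediment, float *sedimentOut,
    const float *velX, const float *velY, int w, int h, float dt)
{
    for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
    {
        int   i  = y * w + x;
        float sx = x - velX[i] * dt;  // back-traced source position
        float sy = y - velY[i] * dt;
        sedimentOut[i] = SampleFiltered (sediment, w, h, sx, sy);
    }
}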

JoeJ said:
Yeah, and i feel kinda bad for expressing myself the way i do. But there is no disrespect or attempt to push you in other directions. I just have the impression you make some things harder than they need to be. Which happens to me as well pretty often.

No worries.

JoeJ said:
I talk about the retro gamedev vision of procedural generation, which maybe is exclusive to the Atari generation, so guys like me. But yours does not seem too different at its core. So i tell you: Yes, you have discovered the weaknesses of those procedural patterns correctly, but you may have missed their strengths. You force me to say this. Usually i say ‘replace this retro crap with simulation!’. But so far you have not shown interest in the simulation idea, and thus those random patterns might be all you have.

If you use regular FBM noise but only apply let's say 3 frequencies, all your peaks are at 2⁻³ intervals of the grid. So there are really only 8 × 8 possible positions for a terrain feature within a cell at 3 octaves (assuming power of 2 frequency scaling). And you most likely get very round-looking results. Of course I could just use 16×16 samples and then turn those into vertices that are connected by simple triangles or something. But I want pointy mountain tops, sharp mountain ridges, but also smooth plains and also river beds etc. Which are impossible in a purely FBM approach if I do not calculate the octaves and maybe even apply erosion or something. It's like vector graphics vs. pixel graphics. I want to determine the semantic elements of the landscape, and then generate the shape based on that, instead of letting all semantics emerge on their own. This semantics-first approach lets me easily add stuff like roads, rivers, and other things, because I can perfectly plaster them on the terrain without erosion or searching or anything. I basically know the path it will take a priori because I already know the exact semantic structure of the entire region.

Of course you get more realistic results with erosion and all that simulation stuff. But let's say you want to build a town somewhere during the game, so after the landscape already exists. The terrain needs to be flattened out there retroactively. And let's say you then need to place a roadway connecting the town to another town. If I have a semantic description of how the mountains are shaped, I do not need to run a pathfinding & terraforming algorithm, I can simply decree where the road is based on the structure of the cell. And now imagine a player joins a server and needs to generate terrain in the distance. And there's a road passing through that terrain, but he does not see the start or end. In a simulation-based approach, he would need to generate all the surrounding geometry, then run the path finding to place the road, etc., and then throw the surroundings away again because he only wants to see a small part of the road in the distance.

I simply feel that a bottom-up description of a terrain does not allow for modular modifications, unless you either generate potentially large amounts of surroundings to get deterministic results (imagine a 50 km road that snakes through the terrain, maybe even having to cross rivers via a bridge or something; that would take ages to path correctly if you don't know all the terrain beforehand), or you do it like in Minecraft, where all modifications to the initial terrain are saved on disk and then loaded as needed.

JoeJ said:
… i think i know what's your problem.

Back in the day, like 2016 or so, when I last had a serious attempt at 3D terrain generation, I followed the tutorials that recommended multi-layered perlin noise with cosine interpolation. That looked terrible for low detail levels, because all interpolations form axis-aligned slopes. I also then used cubic interpolation, but that made things too slow and also could not achieve sharp macro features like mountain peaks. That meant to get cool looking mountains like Skyrim or something, I'd need high detail noise, which has exponential cost to generate. Since then, I am not a big fan of this approach, and have been on-and-off thinking about Voronoi-cell-based terrain. Instead of paying exponential computational effort to get a rasterised mountain ridge, I will simply define in code what a mountain ridge looks like and place it in my terrain. And what a mountain peak looks like. Or a grassy plain. And then I simply choose what's where and then connect them. I can still add high frequency noises on top of that to make the mesh I generate from those macrofeatures look better.

If I know what's where and what everything should look like, then I can easily create optimal LODs for everything. I can accurately maintain mountain shapes etc., and have clearly defined roads and rivers that run through the terrain, even at low LOD, without having to pre-bake them. I can know where to plant forests, etc. Multi-layer noise based terrain fails to clearly define its features, because it is a fractal. Without walking the edge on the rasterised fractal, you will not find the edge (for example determining whether an area is above or below sea level, or above or below snow level etc.).

And for simulation-based or even just FBM noise based terrain shapes, I simply do not see a way to have this cheap (even when single-threaded) generation without massive up-front work or memoisation on disk or whatever. I will use FBM noise, but only for minor terrain features and to add some actual randomness to surfaces.

JoeJ said:
So, the truth is: ‘All you can expect from randomness is just that: randomness,’ i would say. And that's really the only purpose of those noise functions. They model randomness within an expected distribution of frequencies and amplitudes, not more.

This FBM based approach is like layering noises and expecting a painting to emerge. You start a painting by drawing outlines and defining shapes, and then filling in details. This is the exact same approach I would use to generate a castle, a town, a road — anything. The only reason FBM works for terrain is that it creates a somewhat usable basis that you can then perform other techniques on. But it does not work for most other things. And what I'm generating is not strictly a terrain shape, but an entire world with artifacts of civilisation, such as towns, bridges, roads, trade routes, etc. My terrain needs to be shaped in a manner that explicitly allows the placement of a castle city in a meaningful location on a mountain within a cell. I don't want to have to analyse where cities could be placed. I want a cell that's tagged as “is mountain” and “can have castle”, and upon generation, it has a nice spot on the mountain where a castle can be placed. I do not want to have to extract this from a rasterised, quantised heightmap. And when I then say “connect this cell's castle to that cell's town via road”, it should not have to perform an A* search or something over a heightmap. It simply forces a road to appear along a single pre-defined, deterministic path, which crosses the cell boundary at a predefined location and can take into account the semantic structure of the cell.
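Just to illustrate, a rough sketch of the kind of cell description I mean (all names made up, nothing final):

#include <vector>
#include <cstdint>

// A terrain cell described by its semantics first; heightfields and meshes are
// derived from this later, at whatever LOD is needed.
enum class CellTag : uint8_t { Mountain, Plain, RiverBed, CanHaveCastle, CanHaveTown };

struct Ridge   { float x0, y0, x1, y1, peakHeight; };      // a sharp mountain ridge segment
struct RoadRef { int neighborCell; float exitX, exitY; };  // deterministic border crossing of a road

struct TerrainCell
{
    std::vector<CellTag> tags;    // "is mountain", "can have castle", ...
    std::vector<Ridge>   ridges;  // macro features placed a priori
    std::vector<RoadRef> roads;   // roads decreed along predefined paths
    float    baseElevation;       // continental-scale "layer 0" height
    uint64_t seed;                // everything finer is derived deterministically from this
};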

I need the terrain to lend itself to intelligent design and intelligent adaptation, and that is easiest on a semantics-first representation. And as a bonus, it makes it easy to generate the overall shape accurately (with sharp features for mountains, but smooth features for grasslands) for any LOD.

So where you take randomness and then post-process it to make it intelligible, I take intelligible things and make them look more real by adding randomness afterwards.

LOL I probably repeated myself quite a bit in this rant. Sorry.

Walk with God.

Hi, very interesting topic. I actually work with a team on a great project, and what I read here will probably help me. Thank you all for sharing this info. Here is our website: https://miragecreativelab.com/

This is (if permitted) a link to the project: https://lc.cx/mOEATc, an intuitive social survival quest game with impostors and crewmates. Spatial sound & 3D.

Feel free to give me a review if you want; I would appreciate it.

I will follow this thread.

RmbRT said:
I followed the tutorials that recommended multi-layered perlin noise with cosine interpolation. That looked terrible for low detail levels, because all interpolations form axis-aligned slopes. I also then used cubic interpolation, but that made things too slow and also could not achieve sharp macro features like mountain peaks. That meant to get cool looking mountains like Skyrim or something, I'd need high detail noise, which has exponential cost to generate.

I think you haven't fully explored all of the options available for noise generation. Perlin noise is not the only way to build FBM noise. You can also generate FBM noise in a much simpler way that is efficient and allows very high frequency details to be added very cheaply. This process is described in this blog post of mine.

The main idea is to start with a tile of uniform white noise, then to recursively upscale this noise using a smooth interpolation technique (I use quadratic or square-square interpolation). After each upscaling, you add more white noise with amplitude of 2^(-d), where d is the recursion depth (0, 1, 2…).

This approach solves both of the problems you describe. Not only does it allow generation of distant terrain (due to the very large and low resolution starting tile), but also very fine details up close. Due to the LOD pyramid built along the way it enables new tiles to be generated with very low cost. Each tile of noise at any LOD only requires two operations: interpolation of parent, and adding more white noise. Both of these operations are very cheap. Whereas with Perlin noise you would be computing N noise octaves and adding them for every single tile. The only advantage of Perlin noise I can see is that it supports domain warping, while the approach I describe does not.
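A compact sketch of the idea (not the code from the blog post, just an illustration; Upsample2x stands in for the quadratic upscaling, e.g. the square-square step sketched earlier in the thread):

#include <vector>
#include <random>
#include <cstdint>
#include <cmath>

// Assumed helper: some smooth 2x upscale of a tile.
std::vector<float> Upsample2x (const std::vector<float> &src, int w, int h);

// Build fbm-like noise by recursive upscaling: start from a coarse tile of white
// noise, then repeatedly upscale it and add white noise with amplitude 2^-d.
std::vector<float> NoisePyramid (int baseSize, int levels, uint32_t seed)
{
    std::mt19937 rng (seed);
    std::uniform_real_distribution<float> white (-1.0f, 1.0f);

    int w = baseSize;
    std::vector<float> tile (size_t (w) * w);
    for (float &v : tile)
        v = white (rng);                        // depth 0: plain white noise

    for (int d = 1; d <= levels; d++)
    {
        tile = Upsample2x (tile, w, w);         // smooth interpolation of the parent level
        w *= 2;
        float amp = std::pow (2.0f, -float (d));  // amplitude 2^-d at this depth
        for (float &v : tile)
            v += amp * white (rng);
    }
    return tile;                                // final resolution: baseSize * 2^levels per side
}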

As for grid artifacts, these are avoided by the quadratic interpolation. You should also familiarize yourself with signal processing concepts like the Nyquist sampling theorem and the Fourier transform. These show that any shape can be represented on a grid, as long as the grid spacing is at most half the period of the highest frequency present. You are barking up the wrong tree by discarding grids wholesale. Grids are far more efficient for generation than anything unstructured, and they are easy to accelerate with SIMD (4-8x speedup), while other representations are not.

