Handling animation translation

1 comment, last by JoeJ 4 months, 2 weeks ago

When you're working with animation, the animation data can contain x, y, z translation information for each joint. I am currently applying this translation data to my animation and I'm running into an issue: once the translation is applied, all of my animated characters end up floating above my terrain, which sits at 0,0,0.

I'm trying to set each animated character's y to the height of the terrain at its position so it looks like they're running on the surface, but they now end up well above the terrain. This wasn't the case before I started using the joint translation data from the animation file.

Would appreciate any thoughts on how you would deal with this.

// Accumulate the clip's translation into the joint's offset relative to its parent
jointOffsetRelativeToParent.x += translationData.x;
jointOffsetRelativeToParent.y += translationData.y;
jointOffsetRelativeToParent.z += translationData.z;

// Build the joint's local translation matrix from the accumulated offset
auto offset_matrix = Mat4::Translate({jointOffsetRelativeToParent.x, jointOffsetRelativeToParent.y, jointOffsetRelativeToParent.z});

// Each frame, snap the character's world position to the terrain surface
void Renderer::Update()
{
    float height = terrain.GetHeight(animationPos.x, animationPos.z);
    animationPos.y = height;
}
(Screenshot: running above the terrain)

I have not worked much with animation data; so far I have animated everything myself.

But I think it is common to use a root bone which represents the ground. This root bone parents the hip / pelvis, which is usually the first bone in the character hierarchy.
When animating a walking cycle, the root bone does not move, and the character does not move forwards either. It walks in place, like walking on a treadmill / conveyor belt.
To use it in a game, we place the root bone at ground level and move it at the desired walking speed. We may also speed up / slow down the animation playback to match the animation to the actual walking speed, so the feet do not slide on the floor.
Usually this root bone is the only one where we apply translation to move the character around. The pelvis will have translation too, e.g. changing height during steps or swaying slightly left and right, but we don't touch that, as the animation should already have it right.
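
A minimal sketch of that idea (the Vec3 struct and the ApplyRootMotion() helper are just assumptions for illustration; groundHeight would come from the terrain.GetHeight() call in the original post):

struct Vec3 { float x = 0, y = 0, z = 0; };

// rootDelta is this frame's root bone translation relative to the previous frame,
// extracted from the clip (not an absolute offset).
void ApplyRootMotion(Vec3& characterPos, const Vec3& rootDelta, float groundHeight)
{
    characterPos.x += rootDelta.x;  // horizontal root motion drives the character
    characterPos.z += rootDelta.z;
    characterPos.y = groundHeight;  // ignore the clip's vertical motion and snap to the terrain
}

// All other joints keep their translations as authored; they are relative to the root,
// so they no longer push the whole character above the terrain.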

That's at least my assumption about the standard workflow with animation.
But that only gives us the basics, which is no longer good enough for modern games. Levels are detailed now: the floor is not always flat, there may be stairs, terrain with various slope angles, etc. And characters can move at variable speeds as well.
So the whole pipeline, as I imagine it, is: we may have a walking and a running animation, both cycled and with a matching root bone representing the ground. But a step in the walking cycle takes longer than a step while running.
To blend both clips, we need to scale their playback speeds so they have the same step time. Then we can blend them to get something in between running and walking that matches our desired movement speed. (Not sure if it's common to blend walking and running, but just to give an example.)
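
A rough sketch of that speed matching, with made-up names (walkSpeed / runSpeed are the speeds the clips were authored for, walkStepTime / runStepTime the duration of one step in each clip):

#include <algorithm>

struct GaitBlend
{
    float walkRate;  // playback speed multiplier for the walk clip
    float runRate;   // playback speed multiplier for the run clip
    float blend;     // 0 = pure walk, 1 = pure run
};

GaitBlend ComputeGaitBlend(float desiredSpeed,
                           float walkSpeed, float runSpeed,
                           float walkStepTime, float runStepTime)
{
    GaitBlend g;
    g.blend = std::clamp((desiredSpeed - walkSpeed) / (runSpeed - walkSpeed), 0.0f, 1.0f);

    // Both clips must take the same time per step before their poses can be blended,
    // so scale each clip's playback to the blended step time.
    float stepTime = walkStepTime + (runStepTime - walkStepTime) * g.blend;
    g.walkRate = walkStepTime / stepTime;
    g.runRate  = runStepTime / stepTime;
    return g;
}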
To achieve the blending, we will likely blend quaternions and translations. (Detail: there is a method for spherically blending two (or four) quaternions, which does not work for three quaternions. But there is also a cheaper linear interpolation, which works for any number of quaternions and is usually good enough. These are usually called Slerp() and Lerp().)
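
A minimal sketch of the cheap linear blend (normalized lerp) over any number of rotations, assuming a simple Quat struct and weights that sum to one; the joint translations would just be lerped with the same weights:

#include <cmath>

struct Quat { float w, x, y, z; };

Quat BlendRotations(const Quat* q, const float* weight, int count)
{
    Quat acc{ 0, 0, 0, 0 };
    for (int i = 0; i < count; ++i)
    {
        // Flip quaternions into the same hemisphere as the first one,
        // otherwise we blend "the long way around".
        float dot = q[i].w * q[0].w + q[i].x * q[0].x + q[i].y * q[0].y + q[i].z * q[0].z;
        float s = (dot < 0.0f) ? -weight[i] : weight[i];
        acc.w += s * q[i].w;  acc.x += s * q[i].x;
        acc.y += s * q[i].y;  acc.z += s * q[i].z;
    }
    // Normalizing the weighted sum gives the approximate blend.
    float len = std::sqrt(acc.w * acc.w + acc.x * acc.x + acc.y * acc.y + acc.z * acc.z);
    return { acc.w / len, acc.x / len, acc.y / len, acc.z / len };
}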
After applying the blending, we may get some new problems, e.g. the feet sliding on the ground, which looks wrong and silly.
To fix this, we can apply inverse kinematics to modify the interpolated pose. This way we can place the foot on the ground so it does not slide, and we can respect slope angle and step height as well.
We may also apply IK to an arm, so it aims the gun at some target. And we may apply IK to the head, so it looks at the player, etc.
So we use IK to refine the pose and take care of the details.
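
For the foot placement, the typical building block is an analytic two-bone solve. Here's a minimal 2D sketch in the plane of the leg using the law of cosines (in 3D you would additionally choose that plane with a pole vector so the knee bends forward); all names are illustrative:

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// hip and target (the desired ankle position) lie in the plane of the leg;
// a = thigh length, b = shin length. Returns the knee position.
Vec2 SolveKnee2D(Vec2 hip, Vec2 target, float a, float b)
{
    float dx = target.x - hip.x, dy = target.y - hip.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    Vec2 dir = { dx / dist, dy / dist };                                   // hip -> target direction
    float d  = std::clamp(dist, std::fabs(a - b) + 1e-4f, a + b - 1e-4f); // keep the target reachable

    // Law of cosines: distance from the hip to the knee, projected onto the hip->target line.
    float along = (a * a - b * b + d * d) / (2.0f * d);
    float up    = std::sqrt(std::max(a * a - along * along, 0.0f));        // perpendicular offset

    Vec2 perp = { -dir.y, dir.x };  // which of the two sides the knee bends to
    return { hip.x + dir.x * along + perp.x * up,
             hip.y + dir.y * along + perp.y * up };
}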
At this point we can also cheat if needed. E.g. we can change the length of a bone to hit some target precisely. This would be a case where we apply translation modifications to the internal skeleton. Usually we never do this, but if we really need to, we can.
Or maybe we have some superhero character who can change the length of his limbs. Anything goes.

Personally I work on full body IK solvers, so I can talk about that in some more detail.
With IK solvers, we usually try to move the ends of the skeleton graph, typically feet or hands ('effector' bones), to some given target position and / or orientation.
The problem is then underdetermined: we don't know which bones to bend and by how much, and there is an infinite number of solutions.
The basic example is grabbing your mouse. While you hold it, you can still rotate your elbow about the line from shoulder to wrist. That's called the swivel angle, and it is usually the main open variable. To control it, we typically introduce secondary objectives, like minimizing the change of the swivel angle, maximizing the distance to joint limits, or minimizing acceleration / energy, etc.
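
To make the swivel angle concrete: rotating the elbow around the shoulder-to-wrist line looks roughly like this (Rodrigues' rotation formula; the Vec3 helpers and names are just for illustration):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  Mul(Vec3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }
static float Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; }

// Rotate the elbow around the shoulder->wrist line by 'angle' (the swivel angle).
// Shoulder and wrist stay fixed; only the elbow moves along its circle.
Vec3 SwivelElbow(Vec3 shoulder, Vec3 wrist, Vec3 elbow, float angle)
{
    Vec3 axis = Sub(wrist, shoulder);
    axis = Mul(axis, 1.0f / std::sqrt(Dot(axis, axis)));   // unit rotation axis

    Vec3 v = Sub(elbow, shoulder);
    // Rodrigues' rotation formula around the unit axis.
    Vec3 rotated = Add(Add(Mul(v, std::cos(angle)),
                           Mul(Cross(axis, v), std::sin(angle))),
                       Mul(axis, Dot(axis, v) * (1.0f - std::cos(angle))));
    return Add(shoulder, rotated);
}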
IK solvers are also usually iterative. I have a closed form analytical solution for an entire limb, including respecting the joint limits, so I can solve a limb in one step.
But for the whole body I still need some iterations so all limbs reach their targets, and personally I also need to bring the center of mass of the entire character to a target position so I can balance it. IK can thus become much more costly than animation blending.
But the real problem with IK is discontinuities. If our target is at a place which is hard to reach, we may flip back and forth between two very different solutions, generating very different poses. To prevent unnatural movement that teleports bones from one place to the other instantly, we can use an upper bound on acceleration or some other way to smooth things out. We may miss the target then, but that's better than jumping bones.
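
The simplest version of that smoothing is just a cap on how far the effector target may move per frame (a velocity bound rather than a true acceleration bound, but the same idea); a minimal sketch with illustrative names:

#include <cmath>

struct Vec3 { float x, y, z; };

// Limit how far the IK target may move in one frame, so a flip between two very
// different solutions becomes a smooth transition instead of a teleport.
Vec3 LimitTargetStep(Vec3 prevTarget, Vec3 newTarget, float maxStep)
{
    Vec3 d = { newTarget.x - prevTarget.x, newTarget.y - prevTarget.y, newTarget.z - prevTarget.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len <= maxStep)
        return newTarget;            // close enough, follow directly
    float s = maxStep / len;         // otherwise move only maxStep towards it
    return { prevTarget.x + d.x * s, prevTarget.y + d.y * s, prevTarget.z + d.z * s };
}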

Though, since I work on robotic ragdoll simulation, the IK topic is surely more complex for me than with the current industry standard of relying on animation for everything.
I have looked at Unreal's IK solver tools for example, and they felt very basic and not really impressive. Likely good enough to adjust foot positions, but not much more. So I guess devs often come up with custom solutions here, similar to how they often make their own custom character controllers.
I guess there are people who specialize in such things, but unlike gfx research, it's rarely discussed in public. Related blog posts or GDC talks which go into detail seem quite rare, sadly.

Character animation surely is interesting and a good skill to have.
But personally I think the days of animation dominating game development are numbered. It is just way too expensive, and thus imo the true culprit of the current stagnation in innovation. It's not greedy publishers or lazy devs out of ideas; it's the intense cost and effort of content creation that makes everything so hard.

So I guess it won't take long until our NPCs can do basic locomotion procedurally, without the need to capture, clean up, and finally blend hundreds of animation clips.
And it looks like the solution won't be the classic approach of solving physics control problems, like Boston Dynamics does or I try to do. I rather think they will use machine learning. This still needs costly training data and may lack some control, but it seems easier to go that route.
Those AI characters can already talk, so making them walk as well should not be that difficult.

But well, I guess we will see if AI can really help us, or if it will only make everything worse… ;)

