Tuesday, November 21, 2017

Euler Angles and Their Unexpected Keyframe Interpolation in Character Animation

Have you ever tried to animate a character's bone with a simple rotation around one of the world axes, only to see a weird interpolation? An ugly interpolation across two or three axes instead of one! That surely wasn't what you were expecting. Let's check this GIF:




You rotate the box around X and set a keyframe for it, but when you play the animation you see the box rotating around both X and Y! In this post I want to explain why this happens.

Before going further, let's talk a little about Euler angles. Everyone in animation knows the term because it's the most intuitive method for rotating objects in 3D space, and rotation accounts for more clicks than anything else in the process of making an animated character. So what are Euler angles? Euler proved that every unique rotation in 3D space can be described by three rotations around three orthogonal axes. These three rotations are called pitch, yaw and roll. Euler angles are popular in 3D animation because they are easy to understand and imagine, but all that flexibility and ease of use comes at a cost.

As I wrote earlier, Euler proved that every rotation can be defined uniquely in 3D space, but he never promised a nice interpolation between two different Euler rotations. To find out why this happens, we need to dig a bit into the underlying math of Euler angles.

A rotation defined with Euler angles is composed of three values, each describing a rotation around one of the orthogonal axes. Rotation around the X axis has its own rotation matrix, as do rotations around Y and Z, so when we want to rotate an object around X, then Y, then Z, we just multiply these matrices:

Z_RotationMatrix * Y_RotationMatrix * X_RotationMatrix

This Euler rotation order is called Euler XYZ: you first rotate the object around X, then Y, then Z. The important point here is that the order of Euler rotations matters. Euler XZY gives a different result than Euler XYZ, even with the same pitch, yaw and roll values, because matrix multiplication is not commutative. Euler XZY is calculated like this:

Euler_XZY Rotation = Y_RotationMatrix * Z_RotationMatrix * X_RotationMatrix

And of course the two products differ, because the order of matrix multiplication matters.
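
To make this order-dependence concrete, here is a small, self-contained C++ sketch (my own illustration, not code from any engine or DCC tool) that builds the three axis rotation matrices and composes them in both orders; running it shows that the two products are different matrices:

 #include <cmath>
 #include <cstdio>

 //A 3x3 rotation matrix, enough to compare the two Euler orders numerically.
 struct Mat3 { float m[3][3]; };

 Mat3 Mul(const Mat3& a, const Mat3& b)
 {
     Mat3 r = {};
     for (int i = 0; i < 3; ++i)
         for (int j = 0; j < 3; ++j)
             for (int k = 0; k < 3; ++k)
                 r.m[i][j] += a.m[i][k] * b.m[k][j];
     return r;
 }

 Mat3 RotX(float a)
 {
     return {{{1.f, 0.f, 0.f},
              {0.f, std::cos(a), -std::sin(a)},
              {0.f, std::sin(a),  std::cos(a)}}};
 }

 Mat3 RotY(float a)
 {
     return {{{ std::cos(a), 0.f, std::sin(a)},
              { 0.f, 1.f, 0.f},
              {-std::sin(a), 0.f, std::cos(a)}}};
 }

 Mat3 RotZ(float a)
 {
     return {{{std::cos(a), -std::sin(a), 0.f},
              {std::sin(a),  std::cos(a), 0.f},
              {0.f, 0.f, 1.f}}};
 }

 int main()
 {
     const float d2r = 3.14159265f / 180.f;
     const float x = 35.f * d2r, y = 45.f * d2r, z = 50.f * d2r;

     //Euler XYZ: X is applied first, so its matrix sits rightmost in the product.
     const Mat3 xyz = Mul(RotZ(z), Mul(RotY(y), RotX(x)));
     //Euler XZY: the same three angles, composed in a different order.
     const Mat3 xzy = Mul(RotY(y), Mul(RotZ(z), RotX(x)));

     //Printing one row is enough to see that the two rotations differ.
     std::printf("XYZ row 0: %f %f %f\n", xyz.m[0][0], xyz.m[0][1], xyz.m[0][2]);
     std::printf("XZY row 0: %f %f %f\n", xzy.m[0][0], xzy.m[0][1], xzy.m[0][2]);
     return 0;
 }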

So let's consider Euler XYZ. When the Y rotation matrix is applied after the X rotation matrix, the X rotation always lives inside the Y rotation's space; in effect, the first rotation is the child of the second. Why? Because we first rotate the coordinate system with the X rotation matrix and then rotate the result with the Y rotation matrix, so the whole Y rotation is applied on top of the X rotation. Wherever the Y matrix takes the coordinate system, the X transform goes along, just like a parent-child relationship. This coordinate system built from three consecutive rotation matrices is called the Gimbal, so Euler angles are always measured in Gimbal coordinates.

Here are some screenshots showing how the Gimbal axes react when you rotate an object. Imagine our Euler rotation controller's order is XYZ, which means that with the values X=35, Y=45 and Z=50 (all in degrees), it rotates the object first 35 degrees around X, then 45 around Y, and finally 50 around Z. Below you can see the rotation applied step by step and how the Gimbal axes change:


X=0, Y=0, Z=0: Check out the three rotation axes; everything looks normal, just like the world axes:




X=35, Y=0, Z=0: We rotated the object 35 degrees around X. Everything still looks normal, similar to the default world axes, because X is the first rotation to be applied:




X=35, Y=45, Z=0: Y is rotated, and you can see the X axis has also rotated 45 degrees. Remember the parent-child relationship I mentioned a few lines above; here you can see it.




X=35, Y=45, Z=50: Z rotates 50 degrees, and both X and Y rotate 50 degrees with it, because in Euler XYZ, Z is the parent of all and all axes follow it:




So these are the axes you end up with after this rotation. Now imagine you need to rotate the object around the world Y axis again. Since Euler angles are defined in the Gimbal coordinate system, you cannot achieve a rotation around world Y by changing a single axis value: two or three Gimbal axes have to rotate to reach the specified orientation. So YES! Here is the ugly, unexpected interpolation. You rotated the object around one world axis, but the object rotates itself around two or three different axes to reach the desired rotation:

The trajectory above shows how the object moves along an inefficient arc; it is not just a rotation around Y. In the GIF you can see two Gimbal axes moving during the second rotation we applied (80 degrees around the world Y axis).

So how can you avoid these unexpected interpolations while rotating bones or objects? One way is to always keep an eye on the Gimbal coordinate system. Don't let the World or Local reference coordinate systems trick you: they are just there to give the user better intuition, while your mathematical reference coordinate system here is the Gimbal. You can switch to Gimbal mode regularly to see how to rotate the bones with fewer changes across axes. However, the Gimbal coordinate system suffers from an issue called Gimbal lock, where you lose one degree of freedom; it happens when the second axis is rotated 90 degrees. For instance, with the order Euler XYZ, if you rotate Y by 90 degrees, X becomes aligned with Z: in Gimbal coordinates X is the child of Y, so rotating Y by 90 degrees also rotates X by 90 degrees around Y, aligning it with Z. Check out the GIF below:





Now if you want to rotate the object around X for the next keyframe, there is no X anymore: Z and X are aligned, and you have lost one degree of freedom, as you can see in the GIF above. So to get out of Gimbal lock, you need to rotate the object around more than one axis again!

There is also another way to avoid bad interpolation: using unit quaternion rotations instead of Euler angles. Quaternion SLERP interpolation is soft and smooth, and it always acts as expected, since it follows the shortest possible arc from one rotation to another. The problem with quaternions is that they are not intuitive enough to be presented with Bezier curves, the curves animators like so much and have great control over. Quaternions follow a different kind of algebra, and artists are not really enthusiastic about learning the math behind them; without that math, a raw quaternion value can be very confusing. Still, quaternion rotation controllers are used in DCC tools like 3ds Max, where they come with interpolators like TCB (tension/continuity/bias) that provide very smooth interpolation between rotations. You rarely need to change the TCB values, and you can simulate ease in/outs just by adding one or two extra keyframes. The only limitation is that you don't have control over the rotation of individual axes: you only control the SLERP interpolation factor through a TCB curve, and this factor is just a normalized value giving the interpolation percentage between two quaternions.
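
For reference, here is a minimal C++ sketch of the SLERP function itself (my own illustration, not code from 3ds Max or any other DCC tool); the factor t is exactly the normalized value a TCB curve would drive:

 #include <cmath>

 struct Quat { float x, y, z, w; };

 //Spherical linear interpolation between two unit quaternions.
 //t is the normalized interpolation factor in [0, 1].
 Quat Slerp(Quat a, Quat b, float t)
 {
     float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;

     //q and -q encode the same rotation; negate one operand when needed
     //so we interpolate along the shortest arc.
     if (dot < 0.f) { b = {-b.x, -b.y, -b.z, -b.w}; dot = -dot; }

     //For nearly parallel quaternions, fall back to normalized linear
     //interpolation to avoid dividing by a tiny sin(theta).
     if (dot > 0.9995f)
     {
         Quat r = { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y),
                    a.z + t*(b.z - a.z), a.w + t*(b.w - a.w) };
         const float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z + r.w*r.w);
         return { r.x/len, r.y/len, r.z/len, r.w/len };
     }

     const float theta = std::acos(dot);
     const float s0 = std::sin((1.f - t) * theta) / std::sin(theta);
     const float s1 = std::sin(t * theta) / std::sin(theta);
     return { s0*a.x + s1*b.x, s0*a.y + s1*b.y,
              s0*a.z + s1*b.z, s0*a.w + s1*b.w };
 }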

When I was teaching game animation, I encouraged my students to always use quaternion controllers with TCB interpolators. They give you less control over the interpolation, but quaternion interpolation always uses SLERP, which takes the shortest arc from the source rotation to the destination. This means you can shape your desired curve just by adding a small number of extra keyframes, instead of constantly fixing unexpected changes in curve values the way Euler angle interpolation forces you to. In future posts I will try to show how you can get a better interpolation by using quaternions instead of Euler angles, with just one or two extra keyframes.

In the end, the case we studied above turns into something like this when we switch the rotation controller from Euler XYZ to TCB quaternion SLERP. As you can see, the trajectory is very smooth and simply follows the Y rotation in the world. Compare the trajectories in the two GIFs to see the difference!

Sunday, August 21, 2016

Some Combat Animations

During the previous years, I spent most of my time on the technical side of animation, as I like it more than the artistic side. It's more appealing to me.

Previously, this blog focused more on the technical side of animation, but in upcoming posts it will try to address the art of animation as well, so I'll come up with some tutorials on the basics of 3D animation.

So to begin with, I've attached some fully keyframed animations I did over the previous years. You can check them here:

Friday, March 25, 2016

The Role of Animations in Hit Effects

This shall be my final post on the technical side of animation for a while! I'll write again, but it might be more about the artistic side of animation.

Video games are all about entertainment, and game developers always try to maximize the entertaining experience. One aspect is letting players feel exactly what they did and receive a fair result for their chosen action. This can be considered from different perspectives, like game design, risk/reward or aesthetics: players should receive suitable feedback based on what they do. Hitting and attacking in action games follow this rule as well. When hitting or being hit, the player should feel the impact. Several techniques can be used to show the impact of hits.
This article addresses some of these techniques, which were used effectively in Dead Mage's latest released game, Epic of Kings. EoK is a hack and slash game designed for touch-based devices; it's released on iOS and will be released on Android very soon. Here you can see the trailer of the game:



In the rest of the article, I’m going to mention the techniques we used to show the hit impacts in Epic of Kings.

Controlling the Hit Impacts
Before reading this section, note that all the cases mentioned here concern animation, which targets the player's eyesight. Audio obviously has a huge impact on hit effects as well, but this article won't cover it, as I'm not a professional in the audio field.
So here are some of the animation techniques we used in EoK to control and improve the hit impacts:

1- Animations:
Surely the most important element in showing hit effects is the animations themselves. Hit animations should not be floaty. They have to start from a damaged pose, because the incoming attack carries high kinetic energy that accelerates the victim in the direction of the attack, and the animation should show this: it should start very fast and end slowly, showing after-effects of the attack like dizziness. Note that the timing of hit animations in combat is very important, so the slower part showing the dizziness should have a reasonable length, with safe intervals providing good poses for blending to other animations (in case it needs to be canceled into another animation). Here is one example of a light hit animation:




2- Uninterruptible Animations:
In hack and slash games, the enemies often have slower animations than the player. One reason is responsiveness: playable characters interact directly with the player and must respond well to game input, which calls for faster animations. Enemies are usually slower because the player needs time to read the enemy's current action and make the correct decision; if enemy animations are too fast, there isn't enough time to decide. That said, the timing of enemy animations can be tuned based on enemy type, attack type and the player's progression through the game.
In many situations, these slower enemy animations can't be cancelled by the player's attacks, meaning the enemy's animation continues while the player is hitting him: although the player attacks, the enemy shows no reaction, because his animation is not being canceled. Here we can use additive animations to show some shakes on the enemy's body. Here is a video showing the role of additive animations in this scenario:



And here is one additive animation in UE4 editor:





The additive animation shown is animated from the reference pose, so it can generally be added on top of different animation poses.
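
Conceptually, an additive pose stores the difference between the additive clip's sample and its reference pose, and that difference is layered on top of whatever base pose is currently playing. Here is a minimal UE4-style sketch of the idea (my own illustration, using one common convention for the delta, not the engine's internal code):

 //Build the additive delta against the reference pose, then apply it
 //on top of an arbitrary base pose.
 FTransform ApplyAdditive(const FTransform& BasePose,
                          const FTransform& AdditiveSample,
                          const FTransform& ReferencePose)
 {
      //delta = what the additive clip does relative to the reference pose
      const FQuat lDeltaRot = ReferencePose.GetRotation().Inverse() * AdditiveSample.GetRotation();
      const FVector lDeltaLoc = AdditiveSample.GetLocation() - ReferencePose.GetLocation();

      //result = the base pose with the delta layered on top
      FTransform lResult;
      lResult.SetRotation(BasePose.GetRotation() * lDeltaRot);
      lResult.SetLocation(BasePose.GetLocation() + lDeltaLoc);
      lResult.SetScale3D(BasePose.GetScale3D());
      return lResult;
 }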

3- No Cross Fade Time (Transitional Blending):
To avoid floaty animations and to show the kinetic energy transferred by a hit, crossfade times should be zero when transitioning to hit animations.

4- Specific Hit Animations:
This is an obvious point: specific hit animations for different attacks make the hits feel much better.
For example, directional hit animations help sell hit impacts; based on the incoming attack's direction, an animation showing the correct hit direction can be played.
Another example is specific hit animations based on the current animation state: for instance, if the character gets hit during an attack animation while its time is between t1 and t2, animations other than the normal hits are played.
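
As an illustration, selecting the direction can be as simple as comparing the attack's travel direction with the victim's facing. This is a hypothetical sketch, not the EoK code; the vectors are assumed normalized and in the ground plane:

 enum class EHitDir { Front, Back, Left, Right };

 //AttackDir is the direction the attack travels (attacker towards victim).
 EHitDir SelectHitDirection(float FwdX, float FwdY, float AttackDirX, float AttackDirY)
 {
      const float lDot   = FwdX * AttackDirX + FwdY * AttackDirY;
      const float lCross = FwdX * AttackDirY - FwdY * AttackDirX;

      if (lDot >  0.7071f) return EHitDir::Back;   //attack travels with the facing: hit from behind
      if (lDot < -0.7071f) return EHitDir::Front;  //attack travels against the facing: hit from the front

      //A positive cross product means the attack travels toward the
      //victim's left, so it struck the right side of the body.
      return (lCross > 0.f) ? EHitDir::Right : EHitDir::Left;
 }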

5- Body IK as a Post Process:
In Epic of Kings, an IK chain is defined on the characters' spine. It acts as a post-process on the poses of the block and block-hit animations. Post-process here means that the original animation generates the base pose and the IK adds some custom adjustment on top of it, so we preserve fidelity to the animations created by the artists.
By moving the end effector within a reasonable range and blending the IK solution with FK, the spine constantly changes position and creates non-repetitive poses, which improves the look of the motion.

6- Camera Effects:
As mentioned in the first section, we want the player to feel the impact of the hits. Engaging the player's eyesight is essential, and all the cases above did so through animation techniques; camera movement can help transfer this feeling as well.
One common way is camera shake. In EoK, plenty of different camera shakes with different properties were defined: frequency and amplitude for position, rotation and FOV, plus fade in/out values that let the shake be added on top of the current camera movement. For example, heavy attacks have higher amplitude and frequency, light attacks have lower, and the beasts' footsteps have lower frequency but higher amplitude.
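
As a sketch, the kind of per-shake tuning data described above could look like this (the field names and values are mine, purely illustrative, not UE4's camera shake API):

 //Per-shake tuning data.
 struct FHitCameraShake
 {
      //Oscillation of camera position, rotation and field of view.
      float PositionFrequency;   //Hz
      float PositionAmplitude;   //world units
      float RotationFrequency;   //Hz
      float RotationAmplitude;   //degrees
      float FOVFrequency;        //Hz
      float FOVAmplitude;        //degrees

      //Fade in/out so the shake is added smoothly on top of the
      //current camera movement instead of popping.
      float FadeInTime;          //seconds
      float FadeOutTime;         //seconds
 };

 //Example presets: heavy attacks shake harder and faster than light
 //ones, while beast footsteps are slow but strong.
 static const FHitCameraShake kHeavyAttackShake = { 18.f, 6.f, 18.f, 1.5f, 10.f, 2.f, 0.05f, 0.2f };
 static const FHitCameraShake kLightAttackShake = { 10.f, 2.f, 10.f, 0.5f,  6.f, 1.f, 0.05f, 0.1f };
 static const FHitCameraShake kBeastFootstep    = {  4.f, 8.f,  4.f, 2.0f,  0.f, 0.f, 0.1f,  0.3f };
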
Another important aspect is animating the camera FOV. In some cases, animating the FOV on enemy attacks makes sense. Some years ago I watched a documentary about self-defense; it showed that when the brain senses danger, the eyes' field of view narrows, letting them focus on the threat. We used this phenomenon in EoK, reducing the FOV during some enemy attacks to let the player feel the danger more. The video here shows this in action:



Just note that I suggest animating the FOV only in situations where you're fighting a single enemy, which is our case in Epic of Kings. In situations where you fight several characters simultaneously, the FOV should not be changed: the player needs to focus on all the events and actions of the surrounding enemies to react appropriately, and changing the FOV there can be distracting.

7- Hit Pauses:
One other thing you can find in many games, like Street Fighter or God of War, is the hit pause: whenever an attack lands, time stops for a short period to show the impact of the attack. We added a slight hit pause in Epic of Kings as well.
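
A minimal sketch of such a hit pause, driven by a gameplay time scale (names and values are illustrative, not the EoK code):

 //When an attack lands, gameplay time drops to near zero for a few
 //hundredths of a second, then snaps back to normal speed.
 struct HitPause
 {
      float Remaining = 0.f;   //seconds of pause left, in real time

      void Trigger(float Duration) { Remaining = Duration; }

      //Returns the time scale to apply to gameplay updates this frame.
      float Update(float RealDeltaTime)
      {
           if (Remaining <= 0.f)
                return 1.f;     //normal speed
           Remaining -= RealDeltaTime;
           return 0.05f;        //almost frozen during the pause
      }
 };

 //Usage: on a landed heavy attack, call Trigger(0.08f), and each frame
 //advance gameplay with RealDeltaTime * Update(RealDeltaTime).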

8- Physically Based Animation:
Blending between physically based animation and keyframe animation has been used in many games so far. It can bring dynamic action scenes with non-repetitive animations to the game environment. One common approach is to make the ragdoll follow the animation while responding to external perturbations: the ragdoll keeps the overall shape of the animation while reacting physically to external forces. This result can also be blended with the keyframe animation to create better, more natural poses.
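
The per-bone blend itself can be expressed as a simple interpolation between the animated pose and the simulated pose; choosing the blend weight per bone and per situation is where the real work of such a system lies. A UE4-style sketch of the blend (my own illustration):

 //Blend one bone between keyframe animation and ragdoll simulation.
 //PhysicsWeight = 0 keeps pure animation, 1 keeps pure physics.
 FTransform BlendAnimWithRagdoll(const FTransform& AnimPose,
                                 const FTransform& RagdollPose,
                                 float PhysicsWeight)
 {
      FTransform lResult;
      lResult.SetRotation(FQuat::Slerp(AnimPose.GetRotation(),
                                       RagdollPose.GetRotation(), PhysicsWeight));
      lResult.SetLocation(FMath::Lerp(AnimPose.GetLocation(),
                                      RagdollPose.GetLocation(), PhysicsWeight));
      lResult.SetScale3D(AnimPose.GetScale3D());
      return lResult;
 }
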
We developed a system on top of UE4 to demonstrate this feature. However, we didn't integrate it into the final game, mostly because of limited development time and also because action scenes in a game like Epic of Kings are not that dynamic, unlike a third-person hack and slash or shooter game. It wasn't a priority, so we dropped the integration.
This video shows this feature in action:


In the video above, a random force is applied to random physical bodies, and the ragdoll tries to follow the animation while responding to the applied external force; the result is also blended with the keyframe animation. If you want to know more about this kind of system, I've written a post about blending between ragdoll and keyframe animation on this blog.

9- Particle Effects:
There is no doubt that particles do a great job in terms of aesthetics. Particles like sparks and blasts can make the hits feel better.

Conclusion
We covered several techniques that help control and improve hit effects in action games, all used effectively in Epic of Kings. Together, they help the player feel the action more and get more involved in the game.

Saturday, February 20, 2016

Epic of Kings: The Game

This post is not directly related to animation techniques; I just wanted to introduce "Epic of Kings", a game I worked on, which was recently released on the App Store. You may check its trailer here:




And a gameplay video here:



Epic of Kings was developed with Unreal Engine 4. 820 animations are used and organized in the game, and Unreal's animation optimization tools helped us a lot in organizing them. We didn't let the resident animations in memory exceed 7 MB.

The characters average more than 70 bones, which is a high number for mobile games (though not for PC/console). More bones means more memory consumption and more processing when calculating the skeleton and skin matrices.

UE4's animation montage system and animation graph features also helped us a lot in avoiding high dimensionality and spaghetti effects while creating animation graphs.

FABRIK, an IK solver that is very lightweight but great in action, is also used at some points on the characters' bodies. FABRIK ships with UE4's animation system.

Hope you enjoy playing the game and seeing the animations within.

Saturday, November 14, 2015

Mirroring 3D Character Animations

Introduction


Video games have resources: raw data that needs to be manipulated, baked and made ready for use in the game. Textures, meshes, animations and sometimes metadata all count as resources, and they consume a significant amount of memory, so re-using and manipulating resources is essential for a game engine.

In terms of animation, there are plenty of techniques for managing animations as resources, and one of them is motion retargeting.

With motion retargeting, one can use a specific animation on different skeletons with different reference (binding) poses, joint sizes and heights. For example, say you have just one walk animation and want to use it for 5 characters with different physical shapes. A motion retargeting system can do this nicely, so you don't need five different walks for those 5 characters: one walk animation serves them all. That means fewer animations and therefore fewer resources.

Motion retargeting systems apply modifications on top of the animation data to make it suitable for different skeletons. These modifications include:

1- Defining a generic but modifiable skeleton template for bipeds or quadrupeds
2- Reasonable scaling of root motion
3- The ability to edit the skeleton reference pose
4- Joint movement limits
5- Animation mirroring
6- Adding a run-time rig on top of the skeleton template

Creating a motion retargeting system takes a vast amount of work, and it's a huge topic; in this post I just want to show you how to mirror character animations. Motion retargeting systems usually support animation mirroring, and it's useful for different purposes. Mirrored animations can be used to avoid foot-skating and to achieve responsiveness, and since mirroring works on the input pose, you avoid authoring new mirrored animations: the same animation data is reused, and no new animation is needed. You can then select an animation or its mirror based on the foot phase.

In the next post I will show you how to use mirrored animations in action; this post concentrates on mirroring an input pose from an animation.

For this post I used Unreal Engine 4. Unreal has a very robust, flexible and optimized animation system, but its motion retargeting is still immature; at the time of writing, it can't be compared with Unity3D's or Havok Animation's motion retargeting.

Mirror Animations

To mirror animations, two types of bones should be considered. First, the bones that have a mirrored counterpart in the skeleton hierarchy, like hands, arms, legs, feet and facial bones; let's call these mirrored bones twins. Second, the bones which have no twin, like the pelvis, spine, neck and head.

So to create a mirroring system, we have to define some metadata about the skeleton, recording each bone's twin, if it has one. For this reason, I define a class named AnimationMirrorData which saves and manipulates the required data, such as the mirror-mapped bones, the rotation mirror axis and the position negation direction.
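
For example, initializing the mirror data for a typical biped might look like this (a sketch; the bone names are hypothetical, and the class itself is shown later in this post):

 //Initialization sketch, e.g. at spawn time:
 UAnimationMirrorData* lMirrorData = NewObject<UAnimationMirrorData>(this);
 lMirrorData->MirrorAxis_Rot = MirrorDir::X_Axis;
 lMirrorData->RightAxis = MirrorDir::Y_Axis;

 //Twin bones: each call registers one mirror-mapped pair.
 lMirrorData->SetMirrorMappedBone(FName("upperarm_l"), FName("upperarm_r"));
 lMirrorData->SetMirrorMappedBone(FName("hand_l"), FName("hand_r"));
 lMirrorData->SetMirrorMappedBone(FName("thigh_l"), FName("thigh_r"));
 lMirrorData->SetMirrorMappedBone(FName("foot_l"), FName("foot_r"));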

To mirror animations, I defined a custom animation node which can be used in the Unreal Engine animation graph. It receives a pose in local space and mirrors it. It has two input pins: one for an animation mirror data object, which should be initialized by the user, and one for a boolean which turns the node on or off. As you can see in the picture, no extra animation is needed here; the node just accepts the current pose and mirrors it, and you can toggle it based on the game or animation circumstances.




Here I discuss how to mirror each type of bone:

1- Mirroring bones which have a twin in the hierarchy

Bones like hands and legs have a twin in the hierarchy. To mirror them, we need to swap the transforms of the two bones: for example, the left upper arm transform is pasted onto the right upper arm, and the right upper arm transform onto the left upper arm. To do this, we have to deduct the binding pose from the bone's current transform at the current frame. In Unreal Engine 4, the local poses are calculated in their parent's space, as are the binding poses. We don't want to mirror the binding poses of the bones themselves, just the deducted transform; by doing this, we make sure the character stays on the same spot and doesn't rotate 180 degrees. Remember, this only works if the binding poses of the twin bones are already mirrored in the skeleton, meaning the rigger must have mirrored the twin bones when rigging the mesh.

2- Mirroring bones with no twin

These are bones like the root, pelvis or spine, which have no twin in the hierarchy. For these, again we deduct the binding pose from the current bone transform; now the deducted transform itself should be mirrored. This time we need a mirror axis, selected by the user, which is usually X, Y or Z in the bone's binding pose space. For rotations, if you select X as the mirror axis, you negate the Y and Z components of the quaternion. For translations, things are a little different, because we never want to mirror the up and forward directions of the movement: mirroring the animation shouldn't make the character move upside down or backward. We just want the side movement negated, so for translations we only negate one component of the translation vector, which is not a true mirror, mathematically.

Below I've placed some parts of the code I wrote for the mirror animation node.

Here is the AnimationMirrorData header file:

 #pragma once  
   
 #include "Object.h"  
 #include "AnimationMirrorData.generated.h"  
   
 /**  
  *   
  */  
 UENUM(BlueprintType)  
 enum class MirrorDir : uint8  
 {  
      None = 0,  
      X_Axis = 1,  
      Y_Axis = 2,  
      Z_Axis = 3  
 };  
   
   
 UCLASS(BlueprintType)  
 class ANIMATIONMIRRORING_API UAnimationMirrorData : public UObject  
 {  
 GENERATED_BODY()  
 public:  
   
      UAnimationMirrorData();  
   
      //Shows mirror axis. 0 = None, 1 = X, 2 = Y, 3 = Z   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir MirrorAxis_Rot;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir RightAxis;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir PelvisMirrorAxis_Rot;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir PelvisRightAxis;  
   
      //Functions  
      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")  
      void SetMirrorMappedBone(const FName bone_name, const FName mirror_bone_name);  
   
      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")  
      FName GetMirroMappedBone(const FName bone_name) const;  
   
      TArray<FName> GetBoneMirrorDataStructure() const;  
   
 protected:  
      //Flat list of twin pairs: [bone, its mirror, bone, its mirror, ...]  
      TArray<FName> mMirrorData;  
 };  


And here are two functions which are mainly responsible to mirror animations:


/***********************************************/  
 void FAnimMirror::Evaluate(FPoseContext& Output)  
 {  
      mBasePose.Evaluate(Output);  
   
   
      if (!mAnimMirrorData)  
      {  
           return;  
      }  
   
      if (Output.AnimInstance)  
      {  
           TArray<FCompactPoseBoneIndex> lAr;  
           int32 lCurrentMirroredBoneInd = 0;  
           int32 lMirBoneCount = mAnimMirrorData->GetBoneMirrorDataStructure().Num();  
   
           //Mirror Mapped Bones  
           for (int32 i = 0; i < lMirBoneCount; i += 2)  
           {  
                FCompactPoseBoneIndex lInd1 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i]));  
                FCompactPoseBoneIndex lInd2 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i + 1]));  
   
                FTransform lT1 = Output.Pose[lInd1];  
                FTransform lT2 = Output.Pose[lInd2];  
   
                Output.Pose[lInd1].SetRotation(Output.Pose.GetRefPose(lInd1).GetRotation() * Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation());  
                Output.Pose[lInd2].SetRotation(Output.Pose.GetRefPose(lInd2).GetRotation() * Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation());  
   
                Output.Pose[lInd1].SetLocation((Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation() * (lT2.GetLocation() - Output.Pose.GetRefPose(lInd2).GetLocation()))   
                     + Output.Pose.GetRefPose(lInd1).GetLocation());  
                  
                Output.Pose[lInd2].SetLocation((Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation() * (lT1.GetLocation() - Output.Pose.GetRefPose(lInd1).GetLocation()))   
                     + Output.Pose.GetRefPose(lInd2).GetLocation());  
   
                lAr.Add(lInd1);  
                lAr.Add(lInd2);  
   
           }  
   
   
           //Mirror Unmapped Bones  
           FCompactPoseBoneIndex lPoseBoneCount = FCompactPoseBoneIndex(Output.Pose.GetNumBones());  
   
           for (FCompactPoseBoneIndex i = FCompactPoseBoneIndex(0); i < lPoseBoneCount;)  
           {  
                if (!lAr.Contains(i))  
                {  
                     if (!i.IsRootBone())  
                     {  
                          FTransform lT = Output.Pose[i];  
                          lT.SetRotation(Output.Pose.GetRefPose(i).GetRotation().Inverse() * Output.Pose[i].GetRotation());  
                          lT.SetLocation(Output.Pose[i].GetLocation() - Output.Pose.GetRefPose(i).GetLocation());  
                            
                          //Bone index 1 is assumed to be the pelvis here  
                          if (i.GetInt() != 1)  
                          {  
                               MirrorPose(lT, (uint8)mAnimMirrorData->MirrorAxis_Rot, (uint8)mAnimMirrorData->RightAxis);  
                               Output.Pose[i].SetRotation(Output.Pose.GetRefPose(i).GetRotation() * lT.GetRotation());  
                               Output.Pose[i].SetLocation(Output.Pose.GetRefPose(i).GetLocation() + lT.GetLocation());  
                          }  
                          else  
                          {  
                               MirrorPose(lT, (uint8)mAnimMirrorData->PelvisMirrorAxis_Rot, (uint8)mAnimMirrorData ->PelvisRightAxis);  
                               Output.Pose[i].SetRotation(Output.Pose.GetRefPose(i).GetRotation() * lT.GetRotation());  
                               Output.Pose[i].SetLocation(Output.Pose.GetRefPose(i).GetLocation() + lT.GetLocation());  
                          }  
                     }  
                }  
                ++i;  
           }  
      }  
 };  
   
 void FAnimMirror::MirrorPose(FTransform& input_pose, const uint8 mirror_axis, const uint8 pos_fwd_mirror)  
 {  
   
      FVector lMirroredLoc = input_pose.GetLocation();  
  
      //Negate the chosen translation component (1 = X, 2 = Y, 3 = Z)  
      switch (pos_fwd_mirror)  
      {  
           case 1:  
                lMirroredLoc.X = -lMirroredLoc.X;  
                break;  
  
           case 2:  
                lMirroredLoc.Y = -lMirroredLoc.Y;  
                break;  
  
           case 3:  
                lMirroredLoc.Z = -lMirroredLoc.Z;  
                break;  
      }  
  
      input_pose.SetLocation(lMirroredLoc);  
   
   
      //Negate the two quaternion components orthogonal to the mirror axis  
      switch (mirror_axis)  
      {  
           case 1:  
           {  
                const float lY = -input_pose.GetRotation().Y;  
                const float lZ = -input_pose.GetRotation().Z;  
                input_pose.SetRotation(FQuat(input_pose.GetRotation().X, lY, lZ, input_pose.GetRotation().W));  
                break;  
           }  
   
           case 2:  
           {  
                const  float lX = -input_pose.GetRotation().X;  
                const float lZ = -input_pose.GetRotation().Z;  
                input_pose.SetRotation(FQuat(lX, input_pose.GetRotation().Y, lZ, input_pose.GetRotation().W));  
                break;  
           }  
   
           case 3:  
           {  
                const float lX = -input_pose.GetRotation().X;  
                const float lY = -input_pose.GetRotation().Y;  
                input_pose.SetRotation(FQuat(lX, lY, input_pose.GetRotation().Z, input_pose.GetRotation().W));  
                break;  
           }  
      }  
 };  


I haven't placed the whole source code here. If you need it, just contact me and I will send it to you.

Monday, September 21, 2015

Creating Non-Repetitive Randomized Idle Using Animation Blending

You might have noticed that the standing idle animations in video games are some kind of magical movement: they never get repetitive. The character looks in different directions in a non-repetitive pattern, shows different facial animations, shifts his/her weight randomly, and does many other things people usually do while standing idle.

These kinds of animations can be implemented using an animation blend tree and a component which manipulates the animation weights. This post shows how a non-repetitive idle animation can be created.

Defining Animation Blend Tree for Idle Animation

In this section, I'm going to define an animation blend tree which can produce a range of possible idle motions. Before creating the blend tree, the animations used within it are described here:

1- A simple breathing idle animation, which is just 70 frames (2.33 seconds) long.

2- A left weight shift animation, similar to the original idle animation but with the pelvis shifted to the left and a more curved torso. "Similar" here means the animations have the same timing and almost the same poses, differing only in the main poses; this difference is the weight-shift-left pose. I created the weight shift animation just by adding an additive keyframe to different bones on top of the original idle animation in the DCC tool.

3- A right weight shift animation, again similar to the original idle animation but with the pelvis shifted to the right and a more curved torso.

4- Four different look animations: look left, right, up and down. These four are all one-frame additive animations; their transforms are subtracted from the first frame of the original idle animation.

5- Two different facial and simple body movement animations. These two are additive as well: they add some facial animation, plus some movement of the torso and hands, on top of the original idle animation.

Now that the required animations are described, let's define a scenario for the blend tree in three steps before creating it:

1- We want the character to stand using an idle animation while occasionally shifting his/her weight. So first we create a blend node which can blend between the left weight shift, the basic idle and the right weight shift.

2- The character should look around from time to time, and we have four additive look animations for this. So we create a blend node which blends between the four additive look animations. It works with two parameters: one is mapped to blend between look left and look right, the other between look up and look down. The output of this node is added on top of the node defined in step 1.

3- After adding the head look animations, the two additive facial animations are added to the result. These two switch randomly whenever they reach their final frame.

A blend tree capable of supporting this scenario is shown here:



Idle Animation Controller to Manipulate Blend Weights

So far we have an animation blend tree which can create continuous motion from some simple additive and idle animations. Now we have to manipulate the blend weights to create a non-repetitive idle animation. This is an easy task; I'll define it in four steps to obtain a non-repetitive weight shift, and the same steps can be used for the facial and look animations as well:

1- First, we randomly select a target weight for the weight shift. It should be within the range of the weight shift parameter defined in the blend tree.

2- We pick a random blend speed, which makes the character shift weight over time until it reaches the target weight selected in step 1. The blend speed is randomly selected from a reasonable numeric range.

3- When we reach the target blend weight, the character should hold it for a while. That's exactly what humans do in reality: when standing, we shift our weight to one side and stay in that pose for a while, since shifting weight helps the body relax the spine muscles. So we select a random hold time from a reasonable range to set how long the weight shift remains.

4- When the selected weight shifting time ends, we go back to step 1, and this loop repeats as long as the character is in the idle state.

The same four steps apply to the directional look and facial animations as well.

This random selection of times, speeds and target weights creates a non-repetitive idle animation. The character always looks in different directions at different times while shifting his weight left or right and playing different facial and body movement animations, all with different, random timing, speed and poses.


You can check the result here in this video:




Here is the source code I wrote for the idle animation controller. The system is implemented in Unreal Engine 4. This component calculates the blend weights and passes them to the animation blend tree:


The header file:

 
   
 #pragma once  
   
 #include "Components/ActorComponent.h"  
 #include "ComponenetIdleRandomizer.generated.h"  
   
   
 UCLASS( ClassGroup=(Custom), meta=(BlueprintSpawnableComponent) )  
 class RANDOMIZEDIDLE_API UComponenetIdleRandomizer : public UActorComponent  
 {  
      GENERATED_BODY()  
   
 public:       
      UComponenetIdleRandomizer();  
   
      // Called every frame  
      virtual void TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction ) override;  
   
   
 public:  
      /*Value to be used for weight shift blend*/  
      UPROPERTY(BlueprintReadOnly)  
      float mCurrentWeightShift;  
   
      /*Value to be used for idle look blend*/  
      UPROPERTY(BlueprintReadOnly)  
      FVector2D mCurrentHeadDir;  
   
      /*Value to be used for idle facial blend*/  
      UPROPERTY(BlueprintReadOnly)  
      float mCurrentFacial;  
   
      FVector2D mTargetHeadDir;  
   
      float mTargetWeightShift;  
   
      float mTargetFacial;  
   
 protected:  
   
      float mWSTransitionTime;  
   
      float mWSTime;  
   
      float mWSCurrentTime;  
   
      float mLookTransitionTime;  
   
      float mLookTime;  
   
      float mLookCurrentTime;  
   
      float mFacialTransitionTime;  
   
      float mFacialTime;  
   
      float mFacialCurrentTime;  
   
 private:  
      float mLookTransitionSpeed;  
   
      float mWSTransitionSpeed;  
   
      float mFacialTransitionSpeed;  
   
        
 };  
   


And The CPP Here:


 #include "RandomizedIdle.h"  
 #include "ComponenetIdleRandomizer.h"  
   
   
 /******************************************************/  
 UComponenetIdleRandomizer::UComponenetIdleRandomizer()  
 {  
      // Set this component to be initialized when the game starts, and to be ticked every frame. You can turn these features  
      // off to improve performance if you don't need them.  
      bWantsBeginPlay = true;  
      PrimaryComponentTick.bCanEverTick = true;  
   
      // ...  
      //weight shift initialization  
      mTargetWeightShift = FMath::RandRange(-100, 100) * 0.01f;  
      mCurrentWeightShift = 0;  
      mWSTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mWSTime = FMath::RandRange(20, 50) * 0.1f;  
      mWSCurrentTime = 0;  
      mWSTransitionSpeed = mTargetWeightShift / mWSTransitionTime;  
   
      //look initialization  
      mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
      mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
      mCurrentHeadDir = FVector2D::ZeroVector;  
      mLookTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mLookTime = FMath::RandRange(20, 40) * 0.1f;  
      mLookCurrentTime = 0.f;  
      mLookTransitionSpeed = mTargetHeadDir.Size() / mLookTransitionTime;  
   
      //facial initialization  
      mTargetFacial = FMath::RandRange(0.f, 100.f) * 0.01f;  
      mCurrentFacial = 0.f;  
      mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
      mFacialTime = FMath::RandRange(20.f, 40.f) * 0.1f;  
      mFacialCurrentTime = 0.f;  
      mFacialTransitionSpeed = mTargetFacial / mFacialTransitionTime;  
 }  
   
   
 /**********************************************************************************************************************************/  
 void UComponenetIdleRandomizer::TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction )  
 {  
      Super::TickComponent( DeltaTime, TickType, ThisTickFunction );  
   
      /*look weight calculations*/  
      if (mLookCurrentTime > mLookTransitionTime + mLookTime)  
      {  
           mLookTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookTransitionTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookCurrentTime = 0;  
           mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
           mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
           mLookTransitionSpeed = (mTargetHeadDir - mCurrentHeadDir).Size() / mLookTransitionTime;  
      }  
   
      mCurrentHeadDir += mLookTransitionSpeed * (mTargetHeadDir - mCurrentHeadDir).GetSafeNormal() * GetWorld()->DeltaTimeSeconds;  
   
      if (mLookCurrentTime > mLookTransitionTime)  
      {  
           /*Damping*/  
           float lTransitionSpeedSign = FMath::Sign(mLookTransitionSpeed);  
           mLookTransitionSpeed = mLookTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
   
           if (lTransitionSpeedSign * FMath::Sign(mLookTransitionSpeed) == -1)  
           {  
                mLookTransitionSpeed = 0.f;  
           }  
   
           if (FMath::Abs(mCurrentHeadDir.X) > 0.9f)  
           {  
                mCurrentHeadDir.X = FMath::Sign(mCurrentHeadDir.X) * 0.9f;  
           }  
   
           if (FMath::Abs(mCurrentHeadDir.Y) > 0.2f)  
           {  
                mCurrentHeadDir.Y = FMath::Sign(mCurrentHeadDir.Y) * 0.2f;  
           }  
      }  
   
      mLookCurrentTime += DeltaTime;  
   
   
      /*weight shift calculations*/  
      if (mWSCurrentTime > mWSTransitionTime + mWSTime)  
      {  
           mWSTime = FMath::RandRange(20.f, 50.f) * 0.1f;  
           mWSTransitionTime = FMath::RandRange(30.f, 50.f) * 0.1f;  
           mWSCurrentTime = 0;  
           mTargetWeightShift = FMath::RandRange(-80.f, 80.f) * 0.01f;  
           mWSTransitionSpeed = (mTargetWeightShift - mCurrentWeightShift) / mWSTransitionTime;  
      }  
   
      mCurrentWeightShift += mWSTransitionSpeed * DeltaTime;  
   
      if (mWSCurrentTime > mWSTransitionTime)  
      {  
           /*Damping*/  
           float lTransitionSpeedSign = FMath::Sign(mWSTransitionSpeed);  
           mWSTransitionSpeed = mWSTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
   
           if (lTransitionSpeedSign * FMath::Sign(mWSTransitionSpeed) == -1.0f)  
           {  
                mWSTransitionSpeed = 0.f;  
           }  
   
           if (FMath::Abs(mCurrentWeightShift) > 1.0f)  
           {  
                mCurrentWeightShift = FMath::Sign(mCurrentWeightShift);  
           }  
      }  
   
      mWSCurrentTime += GetWorld()->DeltaTimeSeconds;  
   
      /*facial calculations*/  
      if (mFacialCurrentTime > mFacialTransitionTime + mFacialTime)  
      {  
           mFacialTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialCurrentTime = 0;  
           mTargetFacial = FMath::RandRange(0, 100) * 0.01f;  
           mFacialTransitionSpeed = (mTargetFacial - mCurrentFacial) / mFacialTransitionTime;  
      }  
   
      mCurrentFacial += mFacialTransitionSpeed * GetWorld()->DeltaTimeSeconds;  
   
      if (mFacialCurrentTime > mFacialTransitionTime)  
      {  
           mCurrentFacial = mTargetFacial;  
      }  
   
      mFacialCurrentTime += DeltaTime;  
 }  
   
   

Monday, August 10, 2015

The Challenge of Having Responsiveness and Naturalness in Game Animation

Video games, as software, need to meet functional requirements, and obviously the most important functional requirement of a video game is to provide entertainment. Players want to have interesting moments while playing, and there are many factors which can bring this entertainment to them.

One of the important factors is the game's animation. Animation is important because it affects the game from different aspects: beauty, controls, narration and even driving the logic of the game.

This post considers game animation in terms of responsiveness, while discussing some techniques to retain naturalness as well.

Here I'm going to share some tips we used in the animations of "Shadow Blade: Reload", a 3D action-platforming side-scroller. The PC version of SB:R was released August 10th, 2015 via Steam, and the console versions are on the way. So before going further, let's have a look at some parts of the gameplay here:





You may want to check the Steam page here too.

So here is the problem. First, consider a simple example in the real world: you punch a punching bag. You rotate your hip, torso and shoulder in order, spending energy to rotate and move your different limbs. You feel the momentum in your limbs and muscles, and you hear the punch land just after it hits the bag. You sense the momentum through touch, hear the sounds of your action, and see the desired motion of your body. Everything is synchronized! You experience the whole process through several senses at once. Everything here is ordinary, and this is what our mind recognizes as natural.

Now consider another example, in a virtual world like a video game. This time you have a controller: you press a button and you expect a desired motion. That motion can be any animation, like a jump or a punch. But this punch is different from the real-world example, because the player just moves a thumb on the controller while the virtual character must move his whole body in response. Each time the player presses a button, the character should perform an appropriate move. If every button press yields the desired motion with good visuals and sound, the player gets pulled into the game, much like the punch in the real world: he uses his tactile sense on the controller, his eyesight to see the desired motion, and his hearing to take in the audio. Having all of these land synchronously at the right moment brings both responsiveness and naturalness, which is what we want to see in our games.

The problem is that gaining responsiveness usually means killing some naturalness in the animations. In a game like Shadow Blade: Reload, responsiveness is very important, because any extra movement can make the player fall off an edge or get killed by enemies; yet we need good-looking animations as well. So here I'm going to list some tips we used to bring both responsiveness and naturalness to our playable character, Kuro:

1- Using Additive Animations: Additive animations can show asynchronous motion on top of the current animations. We used them in different situations to convey momentum through the body without interrupting the player with separate animations. An example is the land animation. After a fall ends and the character reaches the ground, the player can continue running, attacking or throwing shurikens without any interruption or dedicated land animation, so we blend the fall directly into the other animations, like run. But blending directly between fall and run doesn't produce an acceptable motion on its own, so we add an additive land animation on top of the run (or whatever else is playing) to show the momentum through the upper body. The additive animation serves purely visual purposes, and the player can continue running or doing other actions without any interruption.


We also used some other additive animations there, for example a windmill additive animation on the spine and hands. It's played when the character stops and starts running consecutively, and it conveys momentum through the hands and spine.

These additive animations are simply added on top of the main animations without interrupting them, while the main animations, like run and jump, keep providing good responsiveness.


2- Specific Turn Animations: You see turn animations in many games. For instance, pressing the movement button in the opposite direction while running makes the character slide and turn back. While this animation looks very good in many games and adds nice feeling to the motion, it is not suitable for an action-platformer like SB:R: you are constantly moving back and forth on narrow platforms, so such extra movement can make you fall unintentionally, and it also kills responsiveness. So for turning, we simply rotate the character 180 degrees in one frame. But rotating the character 180 degrees in just one frame doesn't look good either, so we used two dedicated turn animations. They show the character turning: they start facing opposite the character's forward vector and end aligned with it. When we snap the character around in one frame, we play this animation, and it sells the turn completely. It has the same speed as the run animation, so nothing changes in terms of responsiveness; you just see a turn animation showing the momentum of a turning body, which brings good visuals to the game.

One thing to consider here is that the turn animation starts facing opposite the character's forward vector, so we turned off transitional blending for it, because blending would produce jerky motion on the root bone.

To avoid frame mismatches and foot-skating, we used two different turn animations, played based on the foot phase of the run animation; see the sketch after the GIF below. You can check out the turn animation here:



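Here is the turn-selection sketch mentioned above (hypothetical names, not the actual SB:R code):

 #include <cstdio>

 struct TurnController
 {
      float Yaw = 0.f;    //character facing, in degrees

      //Stand-in for the engine's "play animation with no cross fade".
      void PlayAnimation(const char* Clip) { std::printf("play %s\n", Clip); }

      //Snap the facing 180 degrees in a single frame, then play whichever
      //turn clip matches the current foot phase of the run cycle, so the
      //feet stay planted and the run speed is preserved.
      void DoInstantTurn(float RunCyclePhase /* 0..1 */)
      {
           Yaw += 180.f;
           if (RunCyclePhase < 0.5f)
                PlayAnimation("Turn_LeftFootPlanted");
           else
                PlayAnimation("Turn_RightFootPlanted");
      }
 };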

3- Slower Enemies: While the main character is very agile, the enemies are not! Their animations have many more frames. This helps us pull the player's focus away from the main character in many situations. The human eye has a great ability to focus on individual objects: when you look at one enemy, you see it clearly and not the others. Slower enemy animations with more frames help draw the focus away from the player character at many points.

As a side note, I watched a science show about human eyes a while ago, and it claimed that women have a wider field of view while men are better at focusing. You might want to look up that research if you're interested in the topic.

4- Safe Blending Intervals to Cancel Animations: Consider a grappling animation. It starts from the idle pose and ends in the idle pose again, but it does its actual job within the first 50% of its length; the rest is just there to get the character back to idle safely and smoothly. Most of the time, players don't want to watch animations through to their ending point; they prefer to do other actions. In our game, players usually tend to cancel the attack and grappling animations right after they kill an enemy: they want to run, jump or dash and continue navigating. So for each animation which can be cancelled, we set a safe blending interval, used as the earliest time to start canceling the current animation(s). This interval provides poses which blend well into run, jump, dash or other attacks, giving less foot-skating, fewer frame mismatches and good velocity blending between animations.
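
A minimal sketch of such a cancel window check (names and values are illustrative):

 //An attack can only be canceled into run/jump/dash once its normalized
 //playback time enters the interval authored for that clip.
 struct CancelWindow
 {
      float Start;   //normalized time where canceling becomes safe
      float End;     //normalized time where the clip ends anyway
 };

 bool CanCancel(const CancelWindow& Window, float NormalizedTime)
 {
      return NormalizedTime >= Window.Start && NormalizedTime <= Window.End;
 }

 //Example: a grapple that has done its job by 50% of its length
 //could use a window of { 0.5f, 1.0f }.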


5- Continuous Animations: In SB:R, most animations are animated with respect to the animation(s) most likely to be playing before them.

For example, we have run attacks for the player. When animating them, the animators concatenated one loop of run in front and created the run attack right after it. This gives a good speed blend between the source and destination animations, because the run attack was created with respect to the original run animation; it also carries the speed and responsiveness of the previous animation into the current one.

Another example is the edge climb, which starts from the wall run animation.


6- Context Based Combat: In SB:R we have context-based combat, which lets us use different animations based on the current state of the player (moving, standing, jumping, distance and/or direction to enemies).

Attacking from each state selects different animations, all of which preserve almost the same speed and momentum as the player's current state (moving, standing, diving and so on).

For instance, we have run attacks, dash attacks, dive attacks, backstabs, kusarigama grapples and many other animations. All start from their respective animations, like run, jump, dash and stand, and all try to preserve the previous motion's speed and responsiveness.


7- Physically Simulated Cloth as Secondary Motion: Although responsiveness can lower naturalness, adding secondary motion like cloth simulation helps offset the issue. In SB:R, the main character Kuro wears a scarf, which helps us show more acceptable motion.


8- Tense Ragdolls and Lower Crossfade Times on Contact: Removing crossfade transition times on hits and applying more force to the ragdolls produces better hit effects. This is useful in many games, not just in our case.



Conclusion


Responsiveness versus naturalness is always a huge challenge in video games, and there are ways to achieve both; most of the time, you have to trade between the two to reach a decent result.

For those eager to learn more about this topic, I can recommend this good paper from the Motion in Games conference:

Aline Normoyle and Sophie Jörg, "Trade-offs between Responsiveness and Naturalness for Player Characters", Motion in Games, 2014.

It presents interesting results about players' responses to animations with different amounts of responsiveness and naturalness.