Sunday, August 21, 2016

Some Combat Animations

Over the past few years, I have spent most of my time on the technical side of animation, as I find it more appealing than the artistic side.

Previously, this blog focused mostly on the technical side of animation, but the next posts will address the art of animation, so I'll try to come up with some tutorials on the basics of 3D animation.

To start, I've attached some fully key-framed animations I did over the past few years. You can check them here:

















Friday, March 25, 2016

The Role of Animations in Hit Effects

This will be my final post on the technical side of animation for a while! I'll try to write again soon, but it will likely be more about the artistic side of animation.

Video games are all about entertainment, and game developers always try to maximize that entertaining experience. One aspect of this is letting players feel exactly what they did and receive a fair result for their chosen action. This can be considered from different perspectives, such as game design, risk/reward, or aesthetics. Players should receive suitable feedback based on what they do. Hitting and attacking in action games follow this rule as well: when hitting or being hit, the player should feel the impact. Several techniques can be used to convey the impact of hits.
This article addresses some of these techniques, which were used effectively in Dead Mage's latest release, Epic of Kings (EOK), a hack and slash game designed for touch-based devices. It's out now on iOS and will be released on Android very soon. Here you can see the trailer of the game:



In the rest of the article, I'll describe the techniques we used to show hit impacts in Epic of Kings.

Controlling the Hit Impacts
Before reading this section, note that all the cases mentioned here relate to animation, which appeals to the player's sight. Audio obviously has a huge impact on hit effects as well, but this article won't cover it, as I'm not a professional in the audio field.
So here are some of the animation techniques we used in EOK to control and improve the hit impacts:

1- Animations:
Surely the most important element in showing hit effects is the animation itself. Hit animations should not be floaty. They should start from a damaged pose, because the incoming attack carries high kinetic energy that accelerates the victim in the direction of the attack, and the animation should show this. It should start very quickly but end slowly to convey aftereffects of the attack such as dizziness. Note that the timing of hit animations in combat is very important: the slower part showing the dizziness should have a reasonable length, and it should provide safe intervals with good poses for blending to other animations (if it needs to be canceled into another animation). Here is one example of a light hit animation:




2- Uninterruptible Animations:
In hack and slash games, enemies often have slower animations than the player. One reason is responsiveness: playable characters interact directly with the player and should respond well to game input, which calls for faster animations. Enemies are usually slower because the player needs time to see the enemy's current action and make the correct decision; if enemy animations are too fast, there isn't enough time to react. That said, the timing of enemy animations can be adjusted based on enemy type, attack type, and the player's progression through the game.
In many situations, these slower enemy animations can't be canceled by the player's attacks. That means the enemy's animation continues while the player is hitting him: although the player attacks, the enemy shows no reaction because his animation is not being canceled. Here we can use additive animations to show some shakes on the enemy's body. Here is a video showing the role of additive animations in this scenario:



And here is one additive animation in the UE4 editor:





The additive animation shown is animated from the reference pose, so it can generally be added on top of different animation poses.

3- No Cross Fade Time (Transitional Blending):
To avoid floaty animations and to convey the kinetic energy transferred by a hit, the cross fade times should be zero when transitioning to hit animations.

4- Specific Hit Animations:
This is an obvious point: if you have specific hit animations for different attacks, the hits will feel much better.
For example, directional hit animations can strengthen the feeling of impact. Based on the incoming attack's direction, an animation showing the correct hit direction can be played.
Another example is specific hit animations based on the current animation state. For example, during an attack animation, if the character gets hit between times t1 and t2, an animation other than the normal hit is played.

5- Body IK as A Post Process:
In Epic of Kings, an IK chain is defined on each character's spine. It acts as a post-process on the poses of the block and block-hit animations. Post-process here means that the original animation generates the base pose and the IK adds some custom adjustment on top of it, so we preserve the fidelity of the animations created by the artists.
By moving the end-effector within a reasonable range and blending the IK solution with FK, the spine constantly changes position and creates non-repetitive poses, which improves the visual quality of the motion.

6- Camera Effects:
As mentioned in the first section, we want the player to feel the impact of the hits. Engaging the player's sight is essential; all the cases above did so through animation techniques, and camera movement can do a good job of transferring this feeling as well.
One common technique is camera shake. In EOK, we defined plenty of camera shakes with different properties, including frequency and amplitude for position, rotation, and FOV, plus fade in/out values that let the shake be added on top of the current camera movement. For example, heavy attacks have higher amplitude and frequency, light attacks have lower, and the beasts' footsteps have lower frequency but higher amplitude.
Another important aspect is animating the camera FOV. In some cases, animating the FOV on enemy attacks can make sense. Some years ago I watched a documentary about self-defense. It showed that when the brain senses danger, the field of view narrows, letting the eyes focus on the threat. We used this phenomenon in EOK by reducing the FOV during some enemy attacks to make the player feel the danger more. This video shows it in action:



Note that I suggest animating the FOV only in situations where you're fighting a single enemy, which is our case in Epic of Kings. In situations where you fight several characters simultaneously, the FOV should not be changed, because the player needs to watch all the surrounding enemies' actions in order to react appropriately, and changing the FOV there can be distracting.

7- Hit Pauses
Another technique you can find in many games, like Street Fighter or God of War, is the hit pause. Whenever an attack lands, time stops for a short period to emphasize the impact of the attack. We added a slight hit pause in Epic of Kings as well.

8- Physically Based Animation
Blending between physically based animation and keyframe animation has been used in many games. It can bring dynamic action scenes with non-repetitive animations to the game environment. One common approach is to make the ragdoll follow the animation while responding to external perturbations: the ragdoll keeps the overall shape of the animation and reacts physically to external forces. The result can also be blended with the keyframe animation to create better, more natural poses.
We developed a system on top of UE4 to demonstrate this feature. However, we didn't integrate it into the final game, mostly because of limited development time, and also because action scenes in a game like Epic of Kings are not that dynamic, unlike a third-person hack and slash or shooter. So it was not a priority, and we dropped the idea of integrating it into the game.
This video shows this feature in action:


In the video above, a random force is applied to random physical bodies, and the ragdoll tries to follow the animation while responding to the applied external force. The result is also blended with the keyframe animation. If you want to know more about this kind of system, I've written a post about blending between ragdoll and keyframe animation on my blog here.

9- Particle Effects:
There is no doubt that particles do a great job aesthetically. Particles such as sparks and blasts can make hits feel better.

Conclusion
I've covered several techniques that can help control and improve hit effects in action games, all of which were used effectively in Epic of Kings. Together they help players feel the action more and become more involved in the game.

Saturday, February 20, 2016

Epic of Kings: The Game

This post is not directly related to animation techniques. I just wanted to introduce Epic of Kings, a game I worked on that was recently released on the App Store. You can check its trailer here:




And a gameplay video here:



Epic of Kings was developed with Unreal Engine 4. The game uses 820 animations, and Unreal Engine's animation optimization tools helped us a lot in organizing them. We kept the total resident animation memory under 7 MB.

The characters average more than 70 bones, which is a high number for mobile games (though not for PC/console games). More bones mean more memory consumption and more processing when calculating the skeleton and skinning matrices.

UE4's animation montage system and animation graph features also helped us a lot in avoiding high dimensionality and spaghetti effects when creating animation graphs.

At some points we also used FABRIK on the characters' bodies, an IK solver that is lightweight yet great in action. FABRIK is provided by the UE4 animation system.

Hope you enjoy playing the game and seeing the animations within.

Saturday, November 14, 2015

Mirroring 3D Character Animations

Introduction


Video games have resources: raw data that need to be manipulated, baked, and made ready for use in the game. Textures, meshes, animations, and sometimes metadata all count as resources, and they consume significant amounts of memory. Reusing and manipulating resources is essential for a game engine.

In terms of animation, plenty of techniques exist for managing animations as resources, and one of them is motion retargeting.

With motion retargeting, one can use a specific animation on different skeletons with different reference (binding) poses, joint sizes, and heights. For example, say you have just one walk animation and want to use it for five characters with different physical shapes. A motion retargeting system handles this nicely, so you don't need five different walk animations; the single walk works for all the characters. That means fewer animations and therefore fewer resources.

Motion retargeting systems apply some modifications on top of animation data to make them suitable for different skeletons. These modifications include:

1- Defining a generic but modifiable skeleton template for bipeds or quadrupeds
2- Root motion reasonable scaling
3- Ability to edit skeleton reference pose
4- Joint movement limitations
5- Animation mirroring
6- Adding a run-time rig on top of the skeleton template.

Creating a motion retargeting system requires a vast amount of work and is a huge topic, so in this post I just want to show you how to mirror character animations. Motion retargeting systems usually support animation mirroring, and it's useful for several purposes: mirrored animations can be used to avoid foot-skating and to achieve responsiveness, and by mirroring an input pose you avoid authoring new mirrored animations; the same animation data is reused, with no new animation needed. You can then select the animation or its mirror based on the foot phase.

The next post will show how to use mirrored animations in action; this one concentrates solely on mirroring an input pose from an animation.

For this post I used Unreal Engine 4. Unreal Engine has a very robust, flexible, and optimized animation system, but its motion retargeting is still immature; at this time it can't compete with the motion retargeting in Unity3D or Havok Animation.

Mirror Animations

To mirror animations, two types of bones should be considered: first, bones that have a mirrored counterpart in the skeleton hierarchy, such as the hands, arms, legs, feet, and facial bones; let's call these counterparts twins. Second, bones that have no twin, such as the pelvis, spine bones, neck, and head.

So to create a mirroring system, we have to define some metadata about the skeleton, recording each bone's twin, if it has one. For this I define a class named AnimationMirrorData, which stores and manipulates the required data such as the mirror-mapped bones, the rotation mirror axis, and the position negation direction.

To mirror animations, I defined a custom animation node that can be used in the Unreal Engine animation graph. It receives a pose in local space and mirrors it. It has two input pins: one for an AnimationMirrorData object, which should be initialized by the user, and a boolean that turns the node on or off. As you can see in the picture, no extra animation is needed; the node simply accepts the current pose and mirrors it, and you can toggle it based on the game or animation circumstances.




Here I discuss how to mirror each type of bone:

1- Mirroring bones which have a twin in the hierarchy

Bones like the hands and legs have a twin in the hierarchy. To mirror them, we need to swap the transforms of the two bones: for example, the left upper arm transform should be pasted onto the right upper arm, and the right upper arm transform onto the left upper arm. To do this, we have to subtract the binding pose from the bone's transform at the current frame. In Unreal Engine 4, local poses, like binding poses, are expressed in parent space. We don't want to mirror the binding poses themselves, only the subtracted transform. By doing this, we ensure the character stays on the same spot and doesn't rotate 180 degrees. Remember, this only works if the binding poses of the twin bones are already mirrored in the skeleton, which means the rigger should have mirrored the twin bones when rigging the mesh.

2- Mirroring bones with no twin

These are bones like the root, pelvis, or spine, which have no twin in the hierarchy. For these bones, we again subtract the binding pose from the current bone transform, and then mirror the result. This time we need a mirror axis, selected by the user; it's usually X, Y, or Z in the bone's binding pose space. For rotations, if you select X as the mirror axis, you negate the Y and Z components of the quaternion. For translations, things are a little different, because we never want to flip the up or forward direction of the motion: mirroring the animation shouldn't make the character move upside down or backward. We only want the sideways movement negated, so for translations we just negate one component of the translation vector. Mathematically, that is not a true mirror.

Below are some parts of the code I wrote for the mirror animation node:

Here is the AnimationMirrorData header file:

 #pragma once  
   
 #include "Object.h"  
 #include "AnimationMirrorData.generated.h"  
   
 /**  
  *   
  */  
 UENUM(BlueprintType)  
 enum class MirrorDir : uint8  
 {  
      None = 0,  
      X_Axis = 1,  
      Y_Axis = 2,  
      Z_Axis = 3  
 };  
   
   
 UCLASS(BlueprintType)  
 class ANIMATIONMIRRORING_API UAnimationMirrorData : public UObject  
 {  
 GENERATED_BODY()  
 public:  
   
      UAnimationMirrorData();  
   
      //Shows mirror axis. 0 = None, 1 = X, 2 = Y, 3 = Z   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir MirrorAxis_Rot;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir RightAxis;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir PelvisMirrorAxis_Rot;  
   
      UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Mirror Animation")  
      MirrorDir PelvisRightAxis;  
   
      //Functions  
      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")  
      void SetMirrorMappedBone(const FName bone_name, const FName mirror_bone_name);  
   
      UFUNCTION(BlueprintCallable, Category = "Mirror Animation")  
     FName GetMirrorMappedBone(const FName bone_name) const;  
   
      TArray<FName> GetBoneMirrorDataStructure() const;  
   
 protected:  
     TArray<FName> mMirrorData;  
};  


And here are the two functions which are mainly responsible for mirroring animations:


/***********************************************/  
 void FAnimMirror::Evaluate(FPoseContext& Output)  
 {  
      mBasePose.Evaluate(Output);  
   
   
      if (!mAnimMirrorData)  
      {  
           return;  
      }  
   
      if (Output.AnimInstance)  
      {  
           TArray<FCompactPoseBoneIndex> lAr;  
           int32 lCurrentMirroredBoneInd = 0;  
           int32 lMirBoneCount = mAnimMirrorData->GetBoneMirrorDataStructure().Num();  
   
           //Mirror Mapped Bones  
          for (int32 i = 0; i < lMirBoneCount; i += 2)  
           {  
                FCompactPoseBoneIndex lInd1 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i]));  
                FCompactPoseBoneIndex lInd2 = FCompactPoseBoneIndex(Output.AnimInstance->GetSkelMeshComponent()->GetBoneIndex(mAnimMirrorData->GetBoneMirrorDataStructure()[i + 1]));  
   
                FTransform lT1 = Output.Pose[lInd1];  
                FTransform lT2 = Output.Pose[lInd2];  
   
                Output.Pose[lInd1].SetRotation(Output.Pose.GetRefPose(lInd1).GetRotation() * Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation());  
                Output.Pose[lInd2].SetRotation(Output.Pose.GetRefPose(lInd2).GetRotation() * Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation());  
   
                Output.Pose[lInd1].SetLocation((Output.Pose.GetRefPose(lInd2).GetRotation().Inverse() * lT2.GetRotation() * (lT2.GetLocation() - Output.Pose.GetRefPose(lInd2).GetLocation()))   
                     + Output.Pose.GetRefPose(lInd1).GetLocation());  
                  
                Output.Pose[lInd2].SetLocation((Output.Pose.GetRefPose(lInd1).GetRotation().Inverse() * lT1.GetRotation() * (lT1.GetLocation() - Output.Pose.GetRefPose(lInd1).GetLocation()))   
                     + Output.Pose.GetRefPose(lInd2).GetLocation());  
   
                lAr.Add(lInd1);  
                lAr.Add(lInd2);  
   
           }  
   
   
           //Mirror Unmapped Bones  
           FCompactPoseBoneIndex lPoseBoneCount = FCompactPoseBoneIndex(Output.Pose.GetNumBones());  
   
           for (FCompactPoseBoneIndex i = FCompactPoseBoneIndex(0); i < lPoseBoneCount;)  
           {  
                if (!lAr.Contains(i))  
                {  
                     if (!i.IsRootBone())  
                     {  
                          FTransform lT = Output.Pose[i];  
                          lT.SetRotation(Output.Pose.GetRefPose(i).GetRotation().Inverse() * Output.Pose[i].GetRotation());  
                          lT.SetLocation(Output.Pose[i].GetLocation() - Output.Pose.GetRefPose(i).GetLocation());  
                            
                          if (i.GetInt() != 1)  
                          {  
                               MirrorPose(lT, (uint8)mAnimMirrorData->MirrorAxis_Rot, (uint8)mAnimMirrorData->RightAxis);  
                               Output.Pose[i].SetRotation(Output.Pose.GetRefPose(i).GetRotation() * lT.GetRotation());  
                               Output.Pose[i].SetLocation(Output.Pose.GetRefPose(i).GetLocation() + lT.GetLocation());  
                          }  
                          else  
                          {  
                              MirrorPose(lT, (uint8)mAnimMirrorData->PelvisMirrorAxis_Rot, (uint8)mAnimMirrorData->PelvisRightAxis);  
                               Output.Pose[i].SetRotation(Output.Pose.GetRefPose(i).GetRotation() * lT.GetRotation());  
                               Output.Pose[i].SetLocation(Output.Pose.GetRefPose(i).GetLocation() + lT.GetLocation());  
                          }  
                     }  
                }  
                ++i;  
           }  
      }  
 };  
   
 void FAnimMirror::MirrorPose(FTransform& input_pose, const uint8 mirror_axis, const uint8 pos_fwd_mirror)  
 {  
   
      FVector lMirroredLoc = input_pose.GetLocation();  
   
     switch (pos_fwd_mirror)  
     {  
          case 1:  
               lMirroredLoc.X = -lMirroredLoc.X;  
               break;  
    
          case 2:  
               lMirroredLoc.Y = -lMirroredLoc.Y;  
               break;  
    
          case 3:  
               lMirroredLoc.Z = -lMirroredLoc.Z;  
               break;  
     }  
   
      input_pose.SetLocation(lMirroredLoc);  
   
   
      switch (mirror_axis)  
      {  
           case 1:  
           {  
                float lY = -input_pose.GetRotation().Y;  
                float lZ = -input_pose.GetRotation().Z;  
                input_pose.SetRotation(FQuat(input_pose.GetRotation().X, lY, lZ, input_pose.GetRotation().W));  
                break;  
           }  
   
           case 2:  
           {  
                float lX = -input_pose.GetRotation().X;  
                float lZ = -input_pose.GetRotation().Z;  
                input_pose.SetRotation(FQuat(lX, input_pose.GetRotation().Y, lZ, input_pose.GetRotation().W));  
                break;  
           }  
   
           case 3:  
           {  
                float lX = -input_pose.GetRotation().X;  
                float lY = -input_pose.GetRotation().Y;  
                input_pose.SetRotation(FQuat(lX, lY, input_pose.GetRotation().Z, input_pose.GetRotation().W));  
                break;  
           }  
      }  
 };  


I haven't included the whole source code here. If you need it, just contact me and I will send it to you.

Monday, September 21, 2015

Creating Non-Repetitive Randomized Idle Using Animation Blending

You might have noticed that standing idle animations in video games are a kind of magical movement: they never become repetitive. The character looks in different directions in a non-repetitive pattern, shows different facial animations, shifts his/her weight randomly, and does many of the other things people do while standing idle.

These kinds of animations can be implemented using an animation blend tree and a component that manipulates animation weights. This post shows how a non-repetitive idle animation can be created.

Defining Animation Blend Tree for Idle Animation

In this section, I'm going to define an animation blend tree that can produce a range of possible idle animations. Before creating the blend tree, here are the animations used within it:

1- A simple breathing idle animation, just 70 frames (2.33 seconds).

2- A left weight-shift animation similar to the original idle animation, but with the pelvis shifted to the left and a more curved torso. "Similar" here means the animations have the same timing and almost the same poses, differing only in the main poses; this difference shows the left weight-shift pose. I created the weight-shift animation simply by adding an additive keyframe to different bones on top of the original idle animation in the DCC tool.

3- A right weight-shift animation, similar to the original idle animation but with the pelvis shifted to the right and a more curved torso.

4- Four different look animations: look left, right, up, and down. These are all one-frame additive animations; their transforms are subtracted from the first frame of the original idle animation.

5- Two different facial and simple body movement animations. These two are additive as well; they add some facial animation, plus some movement over the torso and hands, to the original idle animation.

Now that the required animations are described, let's define a scenario for the blend tree in three steps before creating it:

1- We want the character to stand using an idle animation while occasionally shifting his/her weight. So first we create a blend node that blends between the left weight shift, the basic idle, and the right weight shift.

2- The character should look around occasionally, and we have four additive look animations for this. We create a blend node that blends between the four additive looks, driven by two parameters: one mapped to blend between look left and look right, the other between look up and look down. This blend node's output is added to the node defined in step 1.

3- After adding the head look animations, the two additive facial animations are added to the result. These two animations switch randomly whenever they reach their final frame.

A blend tree capable of supporting this scenario is shown here:




Idle Animation Controller to Manipulate Blend Weights

So far we have an animation blend tree that can create continuous motion from a few simple additive and idle animations. Now we have to manipulate the blend weights to create a non-repetitive idle. This is an easy task; I'll define it in four steps to obtain a non-repetitive weight-shift animation. The same steps can be used for the facial and look animations as well:

1- First, we randomly select a target weight for the weight shift. It should be within the range of the weight-shift parameter defined in the blend tree.

2- We define a random blend speed that makes the character shift weight over time until it reaches the target weight selected in step 1. The blend speed is picked randomly from a reasonable numeric range.

3- When the target blend weight is reached, the character should hold it for a while. That's exactly what humans do in reality: when standing, we shift our weight to one side and stay in that pose for a while, which helps relax the spine muscles. So we select a random hold time from a reasonable range.

4- After the hold time ends, we go back to step 1, and this loop repeats while the character is in the idle state.

The same four steps apply to the directional look and facial animations as well.

This random selection of times, speeds, and target weights creates a non-repetitive idle animation. The character keeps looking in different directions at different moments while shifting his weight left or right and playing different facial and body movement animations, all with random timing, speed, and poses.


You can check the result here in this video:




Here is the source code I wrote for the idle animation controller. The system is implemented in Unreal Engine 4. This component calculates the blend weights and passes them to the animation blend tree:


The header file:

 
   
 #pragma once  
   
 #include "Components/ActorComponent.h"  
 #include "ComponenetIdleRandomizer.generated.h"  
   
   
 UCLASS( ClassGroup=(Custom), meta=(BlueprintSpawnableComponent) )  
 class RANDOMIZEDIDLE_API UComponenetIdleRandomizer : public UActorComponent  
 {  
      GENERATED_BODY()  
   
 public:       
      UComponenetIdleRandomizer();  
   
      // Called every frame  
      virtual void TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction ) override;  
   
   
 public:  
      /*Value to be used for weight shift blend*/  
     UPROPERTY(BlueprintReadOnly)  
      float mCurrentWeightShift;  
   
      /*Value to be used for idle look blend*/  
     UPROPERTY(BlueprintReadOnly)  
      FVector2D mCurrentHeadDir;  
   
      /*Value to be used for idle facial blend*/  
     UPROPERTY(BlueprintReadOnly)  
      float mCurrentFacial;  
   
      FVector2D mTargetHeadDir;  
   
      float mTargetWeightShift;  
   
      float mTargetFacial;  
   
 protected:  
   
      float mWSTransitionTime;  
   
      float mWSTime;  
   
      float mWSCurrentTime;  
   
      float mLookTransitionTime;  
   
      float mLookTime;  
   
      float mLookCurrentTime;  
   
      float mFacialTransitionTime;  
   
      float mFacialTime;  
   
      float mFacialCurrentTime;  
   
 private:  
      float mLookTransitionSpeed;  
   
      float mWSTransitionSpeed;  
   
      float mFacialTransitionSpeed;  
   
        
 };  
   


And here is the CPP file:


 #include "RandomizedIdle.h"  
 #include "ComponenetIdleRandomizer.h"  
   
   
 /******************************************************/  
 UComponenetIdleRandomizer::UComponenetIdleRandomizer()  
 {  
      // Set this component to be initialized when the game starts, and to be ticked every frame. You can turn these features  
      // off to improve performance if you don't need them.  
      bWantsBeginPlay = true;  
      PrimaryComponentTick.bCanEverTick = true;  
   
      // ...  
      //weight shift initialization  
      mTargetWeightShift = FMath::RandRange(-100, 100) * 0.01f;  
      mCurrentWeightShift = 0;  
      mWSTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mWSTime = FMath::RandRange(20, 50) * 0.1f;  
      mWSCurrentTime = 0;  
      mWSTransitionSpeed = mTargetWeightShift / mWSTransitionTime;  
   
      //look initialization  
      mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
      mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
      mCurrentHeadDir = FVector2D::ZeroVector;  
      mLookTransitionTime = FMath::RandRange(10, 20) * 0.1f;  
      mLookTime = FMath::RandRange(20, 40) * 0.1f;  
      mLookCurrentTime = 0;  
      mLookTransitionSpeed = mTargetHeadDir.Size() / mLookTransitionTime;  
   
      //facial initialization  
      mTargetFacial = FMath::RandRange(0, 100) * 0.01f;  
      mCurrentFacial = 0;  
      mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
      mFacialTime = FMath::RandRange(20, 40) * 0.1f;  
      mFacialCurrentTime = 0;  
      mFacialTransitionSpeed = mTargetFacial / mFacialTransitionTime;  
 }  
   
   
 /**********************************************************************************************************************************/  
 void UComponenetIdleRandomizer::TickComponent( float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction )  
 {  
      Super::TickComponent( DeltaTime, TickType, ThisTickFunction );  
   
      /*look weight calculations*/  
      if (mLookCurrentTime > mLookTransitionTime + mLookTime)  
      {  
           mLookTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookTransitionTime = FMath::RandRange(20, 40) * 0.1f;  
           mLookCurrentTime = 0;  
           mTargetHeadDir.X = FMath::RandRange(-80, 80) * 0.01f;  
           mTargetHeadDir.Y = FMath::RandRange(-15, 15) * 0.01f;  
           mLookTransitionSpeed = (mTargetHeadDir - mCurrentHeadDir).Size() / mLookTransitionTime;  
      }  
   
      mCurrentHeadDir += mLookTransitionSpeed * (mTargetHeadDir - mCurrentHeadDir).GetSafeNormal() * GetWorld()->DeltaTimeSeconds;  
   
      if (mLookCurrentTime > mLookTransitionTime)  
      {  
           /*Damping*/  
           float lTransitionSpeedSign = FMath::Sign(mLookTransitionSpeed);  
           mLookTransitionSpeed = mLookTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
   
           if (lTransitionSpeedSign * FMath::Sign(mLookTransitionSpeed) == -1)  
           {  
                mLookTransitionSpeed = 0;  
           }  
   
           if (FMath::Abs(mCurrentHeadDir.X) > 0.9f)  
           {  
                mCurrentHeadDir.X = FMath::Sign(mCurrentHeadDir.X) * 0.9f;  
           }  
   
           if (FMath::Abs(mCurrentHeadDir.Y) > 0.2f)  
           {  
                mCurrentHeadDir.Y = FMath::Sign(mCurrentHeadDir.Y) * 0.2f;  
           }  
      }  
   
      mLookCurrentTime += GetWorld()->DeltaTimeSeconds;  
   
   
      /*weight shift calculations*/  
      if (mWSCurrentTime > mWSTransitionTime + mWSTime)  
      {  
           mWSTime = FMath::RandRange(20, 50) * 0.1f;  
           mWSTransitionTime = FMath::RandRange(30, 50) * 0.1f;  
           mWSCurrentTime = 0;  
           mTargetWeightShift = FMath::RandRange(-80, 80) * 0.01f;  
           mWSTransitionSpeed = (mTargetWeightShift - mCurrentWeightShift) / mWSTransitionTime;  
      }  
   
      mCurrentWeightShift += mWSTransitionSpeed * GetWorld()->DeltaTimeSeconds;  
   
      if (mWSCurrentTime > mWSTransitionTime)  
      {  
           /*Damping*/  
           float lTransitionSpeedSign = FMath::Sign(mWSTransitionSpeed);  
           mWSTransitionSpeed = mWSTransitionSpeed - lTransitionSpeedSign * 2.0f * GetWorld()->DeltaTimeSeconds;  
   
           if (lTransitionSpeedSign * FMath::Sign(mWSTransitionSpeed) == -1)  
           {  
                mWSTransitionSpeed = 0;  
           }  
   
           if (FMath::Abs(mCurrentWeightShift) > 1)  
           {  
                mCurrentWeightShift = FMath::Sign(mCurrentWeightShift) * 1;  
           }  
      }  
   
      mWSCurrentTime += GetWorld()->DeltaTimeSeconds;  
   
      /*facial calculations*/  
      if (mFacialCurrentTime > mFacialTransitionTime + mFacialTime)  
      {  
           mFacialTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialTransitionTime = FMath::RandRange(20, 50) * 0.1f;  
           mFacialCurrentTime = 0;  
           mTargetFacial = FMath::RandRange(0, 100) * 0.01f;  
           mFacialTransitionSpeed = (mTargetFacial - mCurrentFacial) / mFacialTransitionTime;  
      }  
   
      mCurrentFacial += mFacialTransitionSpeed * GetWorld()->DeltaTimeSeconds;  
   
      if (mFacialCurrentTime > mFacialTransitionTime)  
      {  
           mCurrentFacial = mTargetFacial;  
      }  
   
      mFacialCurrentTime += GetWorld()->DeltaTimeSeconds;  
 }  
   
   

Monday, August 10, 2015

The Challenge of Having Responsiveness and Naturalness in Game Animation

Video games, as software, need to meet functional requirements, and the most important functional requirement of a video game is to provide entertainment. Players want to have interesting moments while playing, and many factors contribute to bringing that entertainment to them.

One of the important factors is the game's animation. Animation matters because it affects the game in several ways: aesthetics, controls, narration and even driving the game's logic.

This post considers animations in terms of responsiveness and discusses some techniques to retain their naturalness at the same time.

Here I'm going to share some tips we used in the animations of the 3D action-platforming side-scroller "Shadow Blade: Reload". The PC version of SB:R was released on August 10th, 2015 via Steam, and the console versions are on the way. Before going further, let's have a look at some parts of the gameplay here:





You may want to check the Steam page here too.

So here we can state the problem. First, consider a simple example in the real world: you punch a punching bag. You rotate your hip, torso and shoulder in order, spending energy to move your limbs. You feel the momentum in your limbs and muscles, and you hear the punch just as it lands on the bag. So you sense the momentum through touch, hear the sounds of your action and see the desired motion of your body. Everything is synchronized: you experience the whole process through several senses at once. This is what our mind recognizes as natural.

Now consider another example in a virtual world like a video game. This time you have a controller, you press a button and you expect to see a desired motion. That motion can be any animation, like a jump or a punch. But this punch differs from the real-world example because the player only moves a thumb on the controller, while the virtual character must move its whole body in response. Each time the player presses a button, the character should perform an appropriate move. If each button press produces a desired motion with good visuals and sound, the player becomes immersed in the game, because it feels almost like the real-world punch. The synchronous response of animation, controls and audio helps the player feel present in the game: touch while interacting with the controller, sight to see the desired motion, hearing for the audio. Having all of these synchronized at the right moment brings both responsiveness and naturalness, which is what we want in our games.

The problem is that when you want responsiveness, you have to sacrifice some naturalness in the animations. In a game like Shadow Blade: Reload, responsiveness is critical because any extra movement can make the player fall off an edge or be killed by enemies. However, we need good-looking animations as well. So here I'm going to list some tips we used to bring both responsiveness and naturalness to our playable character, Kuro:

1- Using Additive Animations: Additive animations can be used to show asynchronous motion on top of the current animations. We used them in different situations to show momentum over the body without interrupting the player with separate animations. An example is the land animation. After the fall ends and the character reaches the ground, the player can continue running, attacking or throwing shurikens without any interruption or dedicated land animation. So we blend the fall directly into other animations like run. But blending directly between fall and run doesn't produce an acceptable motion, so we add an additive land animation on top of the run (or whatever else is playing) to show the momentum over the upper body. The additive animation has a purely visual purpose, and the player can continue running or performing other actions without interruption.
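The layering itself is simple. Here is a minimal, engine-agnostic sketch of adding an additive pose on top of a base pose with a fading weight (the three-joint pose, the names and the linear fade are invented for illustration, not taken from our actual code):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Hypothetical pose: a few upper-body joint angles in degrees.
struct Pose
{
    std::array<float, 3> jointAngles; // e.g. spine, shoulder, neck
};

// Layer the additive deltas on top of the base pose, scaled by a weight,
// so the base animation (run, attack, ...) is never interrupted.
Pose ApplyAdditive(const Pose& base, const Pose& additiveDelta, float weight)
{
    Pose out = base;
    for (std::size_t i = 0; i < out.jointAngles.size(); ++i)
    {
        out.jointAngles[i] += additiveDelta.jointAngles[i] * weight;
    }
    return out;
}

// A simple linear fade of the additive weight over the landing recovery time.
float AdditiveWeight(float timeSinceLand, float fadeTime)
{
    float w = 1.0f - timeSinceLand / fadeTime;
    return w < 0.0f ? 0.0f : w;
}
```

Once the weight reaches zero, the additive layer contributes nothing and only the base animation remains.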


We also used some other additive animations. For example, a windmill additive animation on the spine and hands, played when the character stops and starts running in quick succession; it adds momentum to the hands and spine.

These additive animations are simply added on top of the main animations without interrupting them, while main animations like run and jump already provide good responsiveness.


2- Specific Turn Animations: You see turn animations in many games. For instance, pressing the movement button in the opposite direction while running makes the character slide and turn back. While this animation works well for many games and gives the motion a good feeling, it is not suitable for an action-platformer like SB:R: you are constantly moving back and forth on narrow platforms, and such an extra movement can make you fall unintentionally, and it also kills responsiveness. So for turning, we simply rotate the character 180 degrees in one frame. But rotating the character 180 degrees in a single frame does not look good on its own. So we used two dedicated turn animations. They show the character turning: they start facing opposite to the character's forward vector and end aligned with it. When we turn the character in one frame, we play this animation, and it sells the turn completely. It has the same speed as the run animation, so nothing changes in terms of responsiveness; you simply see a turn animation that shows the momentum of a turning motion over the body and improves the game's visuals.

One thing to consider here is that the turn animation starts in a direction opposite to the character's forward vector, so we turned off transitional blending for it, because blending could produce jerky motion on the root bone.

To avoid frame mismatches and foot skating, we used two different turn animations and played them based on the foot phases of the run animation. You may check out the turn animation here:
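Conceptually, picking between the two turn animations is just a phase check on the looping run cycle. A tiny hypothetical sketch (the 0.5 phase split and the names are assumptions, not our actual data):

```cpp
#include <cassert>

// Two mirrored 180-degree turn animations, one per planted foot.
enum class TurnAnim { TurnOnLeftFoot, TurnOnRightFoot };

// runPhase is the normalized time [0, 1) of the looping run animation.
// Here we assume the left foot is planted in the first half of the cycle
// and the right foot in the second half, so the chosen turn animation
// matches the feet and avoids foot skating.
TurnAnim SelectTurnAnim(float runPhase)
{
    return runPhase < 0.5f ? TurnAnim::TurnOnLeftFoot
                           : TurnAnim::TurnOnRightFoot;
}
```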




3- Slower Enemies: While the main character is very agile, the enemies are not! Their animations have many more frames. This helps us draw the player's focus away from the main character in many situations. The human eye has a great ability to focus on individual objects: when you are looking at one enemy, you see it clearly while the others blur out. Slower enemy animations with more frames help pull the focus away from the player character at many points.

As a side note, I was watching a scientific show about human vision a while ago, and it claimed that women's eyes tend to have a wider field of view while men's eyes are better at focusing. You might want to look into that research if the topic interests you.

4- Safe Blending Intervals to Cancel Animations: Assume a grappling animation. It starts from the idle pose and ends in the idle pose again, but it does its actual job in the first 50% of its length; the rest only exists to return the character to idle safely and smoothly. Most of the time, players don't want to watch animations to their end point; they prefer to do other actions. In our game, players usually want to cancel attack and grappling animations after they kill an enemy: they want to run, jump or dash and continue navigating. So for each animation that can be cancelled, we set a safe blending interval, which marks the time from which the current animation(s) may start being cancelled. This interval provides poses that blend well with run, jump, dash and other attacks, giving less foot skating, fewer frame mismatches and better velocity blending.
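A sketch of such a cancel window, with hypothetical values (our actual intervals are tuned per animation):

```cpp
#include <cassert>

// A per-animation "safe cancel interval": the attack may only be cancelled
// into run/jump/dash once its normalized time enters this window.
struct CancelWindow
{
    float start; // normalized time where cancelling becomes safe
    float end;   // usually 1.0f (the end of the animation)
};

bool CanCancel(const CancelWindow& w, float normalizedTime)
{
    return normalizedTime >= w.start && normalizedTime <= w.end;
}
```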


5- Continuous Animations: In SB:R, most animations are authored with respect to the animation(s) most likely to play right before them.

For example, we have run attacks for the player. When animating them, the animators concatenated one loop of run before the attack and created the run attack right after it. This gives a good speed blend between the source and destination animations, because the run attack was created with respect to the original run animation; the speed and responsiveness of the previous animation carry over into the current one.

Another example is the edge climb, which starts from the wall run animation.


6- Context Based Combat: In SB:R we have context-based combat, which lets us pick different animations based on the player's current state (moving, standing, jumping, distance and/or direction to enemies).

Attacking from each state selects different animations, all of which preserve almost the same speed and momentum as the player's current state (moving, standing, diving and so on).

For instance, we have run attacks, dash attacks, dive attacks, back stabs, Kusarigama grapples and many other animations. All start from their respective animations, like run, jump, dash and stand, and all try to preserve the previous motion's speed and responsiveness.
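As a rough sketch, the selection logic boils down to a mapping from the current state (plus some context, like an enemy behind the player) to an attack animation. The states and animation names below are illustrative, not our actual data:

```cpp
#include <cassert>
#include <string>

enum class MoveState { Standing, Running, Dashing, Jumping };

// Pick an attack animation that preserves the momentum of the current state.
std::string SelectAttack(MoveState state, bool enemyBehind)
{
    if (enemyBehind && state == MoveState::Standing)
        return "BackStab";
    switch (state)
    {
        case MoveState::Running: return "RunAttack";
        case MoveState::Dashing: return "DashAttack";
        case MoveState::Jumping: return "DiveAttack";
        default:                 return "StandAttack";
    }
}
```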


7- Physically Simulated Cloth as Secondary Motion: Although responsiveness can reduce naturalness, adding secondary motion like cloth simulation helps compensate. In SB:R, the main character Kuro has a scarf, which helps us show more acceptable motion.


8- Tense Ragdolls and Lower Crossfade Time on Contacts: Removing cross-fade transition times on hits and applying more force to the ragdolls produces better hit effects. This is useful in many games, not just ours.



Conclusion


Responsiveness vs. naturalness is always a huge challenge in video games, and there are ways to achieve both. Most of the time you have to make trade-offs between the two to reach a decent result.

For those who are eager to find more about this topic, I can recommend this good paper from Motion in Games conference:

Aline Normoyle and Sophie Jörg, "Trade-offs Between Responsiveness and Naturalness for Player Characters", Motion in Games (MIG), 2014.

It shows interesting results about players' responses to animations with different amounts of responsiveness and naturalness.

Tuesday, June 30, 2015

Foot Placement Using Foot IK

Inverse kinematics has found its way into character animation very well. It has become a major part of animation content creation tools; animators can hardly animate characters without IK. Different solvers exist for inverse kinematics: the analytical solution for IK chains with two bones, and Cyclic Coordinate Descent (CCD) for chains with more than two bones, are the most famous and are widely used in animation content creation tools. IK has found its way into real-time animation systems as well, including game engines and the animation middleware widely used in games; using IK has almost become a standard for games with good visuals. With IK in real-time animation, characters can cope with the variety of environments they move through. Assume a character with a walk animation. The animator animated it on an even surface, so it looks fine on even ground, but on uneven ground the feet are not placed correctly. Here you can use foot IK to place the character's feet on the ground. IK is not restricted to feet; it can be used for hands as well, for example in a rock-climbing feature for both hands and feet. Foot IK is also used to avoid foot skating.

IK in real-time animation systems acts as a post-process on animation. The original animation(s) are always calculated in their normal way, and IK is applied afterwards to correct the character's pose in response to changes in the environment. It is also used to avoid foot skating while the root position/rotation is moved procedurally or semi-procedurally.

Using IK in games can go beyond this: some games have integrated full-body IK into their engines. Full-body IK has not become a standard in the games industry yet and not many games use it, but IK for hands and feet has almost become a standard for games that care about their visuals.

This post shows how a foot placement system can be built to place a character's feet on uneven surfaces dynamically, or to plant feet on the ground to avoid foot skating. It is based on the document I provided with a foot placement system named "Mec Foot Placer", which I implemented in my free time. You can get it from the Unity Asset Store. I've shared the useful parts of the document here for those who want to use or implement this kind of system in their games.

Before going further, I recommend checking out these Unity Web Player builds to see how the system affects the character's feet:

Mec Foot Placer with plant foot feature
Mec Foot Placer without plant foot feature

And here is the link to the asset store:

Mec Foot Placer on Unity Asset Store

The system uses Unity 5 Mecanim, so you might see some Mecanim-specific notes in the post. If you are not a Unity3D developer, you can skip the Unity-specific topics; otherwise they should be helpful as well. The technique described here is not restricted to Unity: you can implement it on any platform that offers IK, FK and physics. So I have tried to describe the system generally for those who want to implement it on other platforms (not just Unity), with some Unity-specific notes at the end.

Mec Foot Placer


1- Introduction


Mec Foot Placer provides an automatic workflow for placing the character's feet on grounds and uneven terrains. This document provides the details of the system and shows how it can be set up. Mec Foot Placer acts as a post-process on animation, so while it places the feet on the ground automatically, it preserves the overall pose of the feet determined by the active animation(s).


2- Work Flow


Mec Foot Placer finds the appropriate foot position on the ground using raycasts. The system uses three raycasts to find the foot, toe and heel-corner positions. The toe position is used for the foot pitch rotation and the heel-corner position for the foot roll. The foot yaw rotation is taken from the animation itself, to make sure the original animation pose is not wrongly affected. The system always checks ground availability based on the foot position from the current active animation(s). If it detects ground, it sets the foot to an appropriate position and rotation on it.

When the system is active, it places the foot on the ground in the following steps:


1- First it gets the foot position from the current active animation(s).

2- It sets a ray origin above that foot position, offset along the up vector by a custom offset distance.

3- The ray points down from that origin, opposite to the up vector, with a length equal to the same offset distance plus the foot height plus a custom extra ray distance value. Figure 1 shows how the ray is cast in steps 1 to 3.

Figure 2 shows the final foot position after detecting a contact point; the white sphere in figure 2 marks the detected contact point.

The detected contact point itself is not suitable for placing the foot, because it ignores the foot height: the leg would be stretched and the foot would penetrate the ground. So a vector equal to UpVector * FootHeight is added to the detected contact point (white sphere) to get the final foot position. The up vector is normalized automatically within the system. The blue sphere in Figure 2 shows the final foot position.


Figure 1- Ray casting for finding foot contact point


Figure 2- Final foot position after detecting a contact point
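To make steps 1 to 3 concrete, here is a minimal sketch of the ray math, with the engine raycast replaced by a hypothetical flat ground-height function. All names and values are illustrative, not the actual Mec Foot Placer code:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical ground model: a flat plane at height 0.25 (stands in for the
// physics raycast against real geometry).
float GroundHeight(float /*x*/, float /*z*/) { return 0.25f; }

// footPosAnim: the foot position from the active animation (step 1).
// The ray starts offsetDist above it (step 2) and extends downward by
// offsetDist + footHeight + extraRayDist (step 3). On a hit, the contact is
// lifted by footHeight along the up vector to get the final foot position.
Vec3 PlaceFoot(const Vec3& footPosAnim, float offsetDist,
               float footHeight, float extraRayDist)
{
    float rayStartY = footPosAnim.y + offsetDist;
    float rayLength = offsetDist + footHeight + extraRayDist;
    float groundY   = GroundHeight(footPosAnim.x, footPosAnim.z);

    if (rayStartY - groundY > rayLength)
        return footPosAnim; // ray misses: keep the FK (animation) position

    return { footPosAnim.x, groundY + footHeight, footPosAnim.z };
}
```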



4- From the detected foot position, another ray is cast based on the foot forward vector and the current foot rotation from the FK pose (the foot's animation pose). This ray finds the toe position, which is used to determine the foot pitch rotation. Figure 3 shows how the toe position is found using raycasts.


Figure 3- Raycast for finding Toe position


The toe vector in figure 3 is the foot yaw rotation from the animation, multiplied by the normalized forward vector, multiplied by the foot length: Toe Vector = FootYawRotation * normalize(ForwardVector) * FootLength.


Applying the foot yaw rotation from the animation lets the system preserve the original foot direction authored by the artist/animator. Figure 4 shows the detected toe position leading the foot to be placed on the surface correctly; the blue sphere shows the detected toe position.


Figure 4- Detected toe position and the according foot rotation

5- From the foot position detected in steps 1 to 3, another ray is cast to find the heel-corner position. This ray is used to find the foot roll rotation. Figure 5 shows how this step works.

The heel vector in figure 5 is the foot yaw rotation from the animation, multiplied by the normalized right vector, multiplied by the foot half width, where the right vector is the forward vector rotated 90 degrees around the up vector: Heel Vector = FootYawRotation * normalize(RightVector) * FootHalfWidth.




Figure 5- Raycast for finding Heel corner position


The blue sphere in Figure 6 shows the detected heel-corner position and the resulting roll rotation of the IK foot.


Figure 6- Detected heel corner position and the according foot roll rotation
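To make the toe and heel steps concrete, here is a simplified sketch of deriving pitch and roll from the contact heights. This is a flat-foot approximation I am assuming for illustration, not the actual Mec Foot Placer code:

```cpp
#include <cassert>
#include <cmath>

// Pitch (radians) from the height difference between the toe contact and
// the foot contact, over the foot length (step 4).
float FootPitch(float footContactY, float toeContactY, float footLength)
{
    return std::atan2(toeContactY - footContactY, footLength);
}

// Roll (radians) from the height difference between the heel-corner contact
// and the foot contact, over the foot half width (step 5).
float FootRoll(float footContactY, float heelCornerY, float footHalfWidth)
{
    return std::atan2(heelCornerY - footContactY, footHalfWidth);
}
```

On flat ground both angles are zero and the FK pose is unchanged; on a slope the foot pitches and rolls to match the detected contacts.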


6- The system also adjusts the IK hints of the legs automatically to achieve a natural knee shape. IK hints are also known as swivel angles; they determine the plane in which the IK chain is solved.

Figure 7 shows how the detected IK hint position is set. The blue sphere shows the final IK hint position and the white sphere shows the detected toe position. The calf vector is a vector in the direction of the calf bone (lower leg), with magnitude equal to the calf bone length.


7- The system can also adjust the character's pelvis to avoid unrealistic foot stretching. If the pelvis adjustment feature is enabled, the system compares the distance between the detected ground contact position and the upper leg position. If that distance is larger than the specified leg length, the leg is stretched and looks unrealistic. So the pelvis is adjusted based on the error between the detected contact distance and the leg length: this error is multiplied by the current foot up vector and added to the pelvis over time, moving the pelvis along the up vector. Figure 8 shows the situation.



Figure 7- Final IKHint position (swivel angle)




Note that when the system is in IK mode and the foot-position ray fails to hit any ground, the foot placement system transitions to FK mode smoothly over time. The same applies when switching back from FK to IK.


                       Figure 8-1- Character without pelvis adjustment: the leg is stretched and floats in the air





Figure 8-2- Character with pelvis adjustment: the leg is not stretched and is placed exactly on the ground
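A minimal sketch of the pelvis adjustment idea, as I read the behaviour described in step 7: the pelvis moves down by the stretch error, applied with a simple damped step over time. Function names and the constant-speed damping are assumptions for illustration:

```cpp
#include <cassert>
#include <cmath>

// If the distance from the upper leg to the detected ground contact exceeds
// the leg length, the leg would stretch; the pelvis target offset is the
// negated error (moving the pelvis down along the up vector).
float PelvisOffset(float upperLegToContactDist, float maxLegLength)
{
    float error = upperLegToContactDist - maxLegLength;
    return error > 0.0f ? -error : 0.0f;
}

// Damped application over time: step the current offset toward the target
// at `speed` units per second, without overshooting.
float DampPelvis(float current, float target, float speed, float dt)
{
    float step = speed * dt;
    if (current > target)
        return (current - step < target) ? target : current - step;
    return (current + step > target) ? target : current + step;
}
```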




3- Foot Placement Input Data


Each foot needs input data to work correctly, so a FootPlacementData component is provided to hold each foot's data. The component's input variables should be filled in by the user. Each input variable is described here:
  • Foot ID: Selects the foot or hand ID, i.e. which foot or hand this component's data belongs to.

  • Plant Foot: If this check box is checked, the system enables the foot planting feature. The character's foot is planted after a contact point is detected and remains at the detected position and rotation until the system automatically returns to FK mode. The system also tracks ground height changes, so feet stay on the ground while planted. This feature is a good solution for avoiding foot skating. If the check box is not checked, the foot always takes its position and rotation from the animation, and foot placement then uses raycasts to place it on the ground. If it is checked, the foot is placed at the first detected contact point and does not follow the animation while in IK mode. While the plant feature is active, the system can blend between the planted foot position/rotation and the unplanted position/rotation; either way, feet are always placed on uneven terrains and grounds.

The foot plant feature has functions for manipulating its blend weights; check section 4 for details. It is recommended to enable this feature in states prone to foot skating, like locomotion, and disable it in states that cannot exhibit foot skating, like standing idle. Please check "IdleUpdate.cs" and "LocomotionUpdate.cs" to see how to enable and disable the feature safely.
  • Forward Vector: This vector should show the initial direction of the character's foot. What you see in the character's foot reference pose (the Mecanim T-pose) is what you should use here. Often the character's initial forward vector equals the foot forward vector.

  • IK Hint Offset: The IK hint position is calculated automatically, as stated in section 2. The IK Hint Offset is added to the calculated IK hint position to fine-tune it.

  • Up Vector: The character's up vector. This should equal the world up vector (Vector3(0, 1, 0)) if the character moves on the ground; in rare situations like running on walls it should be changed accordingly.

  • Foot Offset Dist: The distance used for raycasting. Figures 1, 3 and 5 show the parameter in action.

  • Foot Length: Foot length is used to find the toe position to set the foot pitch rotation. It should be equal to character’s foot length. Figure 3 shows the details.

  • Foot Half Width: This parameter should be equal to half width of the foot. It is used to find foot roll rotation. Figure 5 shows the details.

  • Foot Height: Used to set the correct heel position on the ground. It should equal the distance from the heel center to the lower leg joint. Figures 1 and 2 show the details.

  • Foot Rotation Limit: The IK foot rotation will not exceed this value; it is the foot's rotation limit, in degrees.

  • Transition Time: When no contact point is detected, the system switches to FK mode smoothly over time; when it is in FK mode and finds a contact point, it switches back to IK mode smoothly over time. The "Transition Time" parameter is the duration of the smooth transition in either direction.

  • Extra Ray Distance Check: Used to find the correct foot, toe or heel-corner position on the ground. Figures 1, 3 and 5 show the parameter in action. This parameter can be changed dynamically for better visual results; check the "IdleUpdate.cs" and "LocomotionUpdate.cs" scripts in the project for details. These scripts change the "Extra Ray Distance Check" value based on the foot position in the animations and the current animation state; both use Unity 5 animator behaviour callbacks.

  • Set Extra Ray Distance Check Automatically: As mentioned above, it is usually better to change the Extra Ray Distance Check parameter manually, based on the animation frames in which the foot is stable or unstable on the ground. At frames where the foot is planted, like standing idle or the support phases of run or walk, the value should be increased to make sure the ray hits the ground. That requires some extra scripting, like what the IdleUpdate and LocomotionUpdate scripts do. If you don't want to do extra scripting, you can turn this feature on and the FootPlacementData component will set the ExtraRayDistanceCheck value automatically: it detects the frames in which the foot is stable on the ground and sets ExtraRayDistanceCheck to a user-specified maximum value there, and to a user-specified minimum value elsewhere. Manual setup is still recommended if you can do animation scripting; this feature mainly targets users who don't want to script against the animation system.

  • Error Threshold: This value only applies if "Set Extra Ray Distance Check Automatically" is checked. The system checks the foot displacement every 1/30 of a second; if the displacement is higher than this value, ExtraRayDistanceCheck is set to the user-specified minimum, otherwise to the maximum (the foot is stable and can safely hit the ground). It is recommended to set this value to half of the "Foot Half Width" value: for example, if Foot Half Width is 4 cm, set this to 2 cm.

  • Extra Ray Distance Check Min: The minimum value used to set “Extra Ray Distance Check”. Only works if the “Set Extra Ray Distance Check Automatically” is checked. Please check out “Error Threshold” and “Set Extra Ray Distance Check Automatically” sections for more info.

  • Extra Ray Distance Check Max: The maximum value used to set “Extra Ray Distance Check”. Only works if the “Set Extra Ray Distance Check Automatically” is checked. Please check out “Error Threshold” and “Set Extra Ray Distance Check Automatically” sections for more info.
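The automatic mode described by the last three parameters reduces to a threshold test on the sampled foot displacement. A hypothetical sketch (the function name and sampling are my reading of the description above, not the shipped code):

```cpp
#include <cassert>

// Sampled every 1/30 s: a small displacement means the foot is stable
// (planted), so the longer ray is used to guarantee a ground hit; a large
// displacement means the foot is in flight, so the shorter ray is used.
float AutoExtraRayDistance(float footDisplacement, float errorThreshold,
                           float minDist, float maxDist)
{
    return (footDisplacement < errorThreshold) ? maxDist : minDist;
}
```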

4- Mec Foot Placer Component


The Mec Foot Placer component is responsible for setting the correct position and rotation of the feet on the ground and for switching each foot between IK and FK automatically. It can also adjust the character's pelvis to avoid the unrealistic foot stretching often seen with foot IK.

Mec Foot Placer exposes some functions to the user. They are listed here:

  • void SetActive(AvatarIKGoal foot_id, bool active): Turns the system on or off safely for each foot. In some states you don't need the system to be active: for example, when the character is falling, there is no need to check for foot placement. The system would still work correctly in that state, but the user can disable it to skip the component's calculations. Each foot can be activated or deactivated separately.
  • bool IsActive(AvatarIKGoal foot_id): Returns true if the given foot is active, otherwise false.

  • void SetLayerMask(LayerMask layer_mask): Sets the layer mask used by the raycasts within the system. The default value is Everything (all bits set, i.e. ~0), meaning the raycasts can collide with every object in the world.

      Previously the foot placement data component had a check box named "Ignore Character Controller". That option has been removed in favour of settable layer masks. To avoid collisions between the owner game object's character controller and the system's raycasts, the layer mask should be set accordingly. For example, if the owner game object is on layer 8 and you want the rays to collide with the whole world except the owner, you can use this sample code:


       SetLayerMask(~0 & ~(1 << 8));
  
       or
       SetLayerMask(~0 & ~(1 << LayerMask.NameToLayer("LayerName")));

  • LayerMask GetLayerMask():  Returns the layer mask set for the raycast.

  • void EnablePlant(AvatarIKGoal foot_id, float blend_speed): Ramps the plant foot weight from its current value to 1 over time, based on the blend speed parameter. It is useful in states that need foot planting, like locomotion: the plant foot feature is activated smoothly over time. For the feature to have any effect, the plant foot weight must be higher than 0.

  • void DisablePlant(AvatarIKGoal foot_id, float blend_speed): Blends the plant foot weight from its current value to 0 over time, based on the blend speed parameter. It is useful in states that don't need foot planting, like standing idles, where it deactivates the plant foot feature smoothly over time.
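A typical place to call these functions is a behaviour attached to a locomotion state: plant the feet smoothly on entry and release them on exit. A minimal sketch, where the blend speed value is just an example:

```csharp
using UnityEngine;

// Sketch: enable foot planting when entering a locomotion state,
// disable it again when the state is left.
public class LocomotionPlantSketch : StateMachineBehaviour
{
    public float blendSpeed = 2f; // example value; tune per character

    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        MecFootPlacer placer = animator.GetComponent<MecFootPlacer>();
        placer.EnablePlant(AvatarIKGoal.LeftFoot, blendSpeed);
        placer.EnablePlant(AvatarIKGoal.RightFoot, blendSpeed);
    }

    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        MecFootPlacer placer = animator.GetComponent<MecFootPlacer>();
        placer.DisablePlant(AvatarIKGoal.LeftFoot, blendSpeed);
        placer.DisablePlant(AvatarIKGoal.RightFoot, blendSpeed);
    }
}
```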

  • void SetPlantBlendWeight(AvatarIKGoal foot_id, float weight): Sometimes you need to change the plant blend value manually rather than using the EnablePlant or DisablePlant functions. This function sets the blend weight between the planted foot position and rotation and the non-planted foot position and rotation.

  • float GetPlantBlendWeight(AvatarIKGoal foot_id): Returns the current blend weight of the foot planting feature.
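For cases where a script needs direct control over the blend, the weight can be written every frame instead of relying on the timed blends. A minimal sketch, assuming the script lives on the same game object as the Mec Foot Placer component:

```csharp
using UnityEngine;

// Sketch: drive the plant blend weight manually from an inspector slider.
public class ManualPlantBlend : MonoBehaviour
{
    [Range(0f, 1f)] public float targetWeight = 0.5f;

    void Update()
    {
        MecFootPlacer placer = GetComponent<MecFootPlacer>();
        placer.SetPlantBlendWeight(AvatarIKGoal.LeftFoot, targetWeight);
        placer.SetPlantBlendWeight(AvatarIKGoal.RightFoot, targetWeight);
    }
}
```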
Mec Foot Placer also has some public member variables related to the pelvis adjustment feature. They are listed here:

  • Adjust Pelvis Vertically: It is highly recommended to turn this feature on only in stationary situations like idle animations. Turning it on in situations like fighting and locomotion can distort the original motion: a stretched leg is common in running and in many combat animations, and the pelvis should not be adjusted to compensate for it.


     Turning the pelvis adjustment on or off can be done easily via behaviour scripts in Unity 5. For example, in "SimpleLocomotionController" the pelvis adjustment is turned on when entering the IdleLeft and IdleRight states and turned off when entering the Locomotion state. Check out the PelvisSet.cs and PelvisUnset.cs scripts in the "Extra Scripts" folder to find out more.

  • Damp Pelvis: If this option is checked, the pelvis is slightly damped while being adjusted.


  • Max Leg Length: The leg length; it should equal the distance between the character's upper leg and foot. The pelvis adjusts itself based on the error between the max leg length and the currently detected ground position, to avoid leg stretching.


  • Min Leg Length: The minimum distance between the character's upper leg and foot. The pelvis adjusts itself so that the distance between each upper leg and its corresponding foot never goes below this value.


  • Pelvis Adjustment Speed: The translation speed of the pelvis adjustment.

  • Layers To Ignore: Layers that need to be ignored by the raycast collisions can be added to this list. Layers can also be excluded using the SetLayerMask function. It is always recommended to add the character's own layer to this list so the system does not suffer from self-collisions.
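As a sketch of how the pelvis adjustment can be toggled from an animator state, a PelvisSet-style behaviour might look like the following. The field name adjustPelvisVertically is an assumption; check PelvisSet.cs in the package for the real member name:

```csharp
using UnityEngine;

// Sketch of a PelvisSet-style behaviour: enables pelvis adjustment when an
// idle state is entered. The field name below is assumed, not confirmed.
public class PelvisSetSketch : StateMachineBehaviour
{
    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        MecFootPlacer placer = animator.GetComponent<MecFootPlacer>();
        placer.adjustPelvisVertically = true; // assumed member name
    }
}
```

A matching PelvisUnset-style behaviour would set the flag back to false when a moving state such as Locomotion is entered.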



5- Quick Setup Guide


To set up the system you have to add a MecFootPlacer component. The foot placement component needs at least one FootPlacementData component, otherwise it will not work. If both feet need to be handled, set up two foot placement data components, one for the right foot and one for the left, so the system can manipulate both feet.
After setting up the components, the Mec Foot Placer system should work correctly. Check out the example scenes for more info.
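The setup above is normally done in the editor, but it can be sketched in code as well. The footID property on FootPlacementData is an assumption about the component's API; use whatever field the inspector exposes for choosing the foot:

```csharp
using UnityEngine;

// Sketch: add the Mec Foot Placer components at runtime.
// In practice this is usually done in the Unity editor instead.
public class FootPlacerSetup : MonoBehaviour
{
    void Awake()
    {
        // One placer component plus one data component per foot.
        gameObject.AddComponent<MecFootPlacer>();
        FootPlacementData left = gameObject.AddComponent<FootPlacementData>();
        FootPlacementData right = gameObject.AddComponent<FootPlacementData>();

        left.footID = AvatarIKGoal.LeftFoot;   // assumed property name
        right.footID = AvatarIKGoal.RightFoot; // assumed property name
    }
}
```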


5-1- Important Notes on Setting Up the System


Some important notes should be considered before setting up the system:


  • Exposing Necessary Bones: If you checked "Optimize Game Objects" in the avatar rig, some bones have to be exposed, since Mec Foot Placer needs them to work correctly. The bones are listed here:
           1- The corresponding bones for left and right feet.
           2- The corresponding bones for left and right lower legs.

          Check out the “Robot Kyle” avatar in the project to find how it can be done.


  • Setting up the data correctly: Don't forget that setting up the foot placement data needs precision, and in some states the data needs to be changed dynamically to achieve the best effect. For example, the "Extra Ray Distance Check" parameter should be increased or decreased in different states, or at different animation times, for better visual results. Fortunately, this can be done easily in Unity 5 using animator behaviour scripts. Check out "IdleUpdate.cs" and "LocomotionUpdate.cs" in the project for the details; both scripts are called from the "SimpleLocomotion" animation controller. As the scripts show, the "Extra Ray Distance Check" parameter is increased when the character enters the idle state. In the locomotion state, the parameter is increased at the times the character puts a foot on the ground, to make sure the foot touches the ground, and decreased while the foot is in the air, so the foot can move freely while still looking for ground contacts.
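The idea can be sketched as a behaviour that raises and lowers the extra ray distance over the walk cycle. The field name extraRayDistanceCheck and the normalized-time contact windows below are assumptions for illustration; see the real LocomotionUpdate.cs for the exact implementation:

```csharp
using UnityEngine;

// Sketch of a LocomotionUpdate-style behaviour. Contact phases and the
// extraRayDistanceCheck field name are assumed, not taken from the package.
public class LocomotionUpdateSketch : StateMachineBehaviour
{
    public float contactDistance = 0.5f; // used while a foot should be grounded
    public float flightDistance = 0.1f;  // used while the foot is in the air

    public override void OnStateUpdate(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        float t = stateInfo.normalizedTime % 1f;
        // Assumed: foot contacts fall in these quarters of the walk cycle.
        bool contactPhase = (t < 0.25f) || (t >= 0.5f && t < 0.75f);

        foreach (FootPlacementData foot in animator.GetComponents<FootPlacementData>())
            foot.extraRayDistanceCheck = contactPhase ? contactDistance : flightDistance;
    }
}
```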

  • Setting Extra Ray Distance Check Automatically: From version 1.5 it is possible to set the extra ray distance check automatically, based on the frames in which the feet are stable. The component detects the stable foot frames and changes the extra ray distance check to a maximum value when the foot is stable and to a minimum value when it is unstable (stable meaning the foot is not moving much, unstable meaning it is moving frequently). Please check out "Set Extra Ray Distance Check Automatically" for more info.

  • Checking the IK Pass Check Box: On any layer in which you need IK, you have to check the IK Pass check box in the animator controller so the IK callbacks are called by Mecanim. If you don't check this box, Mec Foot Placer will not work.

  • Mecanim-Specific Features: Mecanim humanoid rigs provide a leg stretching feature which can help foot IK look better. The leg stretching feature can avoid knee popping while setting foot IK, but its value should be set accurately: high values make the character look cartoony, while low values increase the chance of knee popping. To set up the Leg Stretch feature, select your character asset (it should be a humanoid rig). In the Rig tab, select Configure, then select the Muscles tab. In the "Additional Settings" group you can find a parameter named "Leg Stretch". Check out the "Robot Kyle" avatar in the project to find out more.
  • Using the Pelvis Adjustment Feature: It is highly recommended to turn this feature on only in stationary situations like idle animations. Turning it on in situations like fighting and locomotion can distort the original motion, since a stretched leg is unavoidable in running and in many combat animations and the pelvis should not be adjusted to compensate for it.


  • Using Mec Foot Placer for Quadrupeds: From version 1.4 the hands can also be placed on the ground using the same mechanism as the feet, so Mec Foot Placer can be used for quadrupeds as well. Note that it only supports legs with 3 joints: the Mecanim humanoid rig provides a two-bone IK solver (plus control over the third bone's rotation), so legs with more than 3 joints cannot be handled correctly, and using Mec Foot Placer on them might cause unexpected behavior.


  • Using Mec Foot Placer for Climbing and Cliff Hanging: From version 1.4 Mec Foot Placer can be used in climbing situations like ladders and cliff hanging. Just note that in cliff-hanging or ladder-climbing animations you need to change the up vector in the foot placement data components so it points perpendicular to the surface the character is climbing. The up vector values can easily be changed at run-time.
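Reorienting the up vector at run-time might look like the following sketch. The field name upVector is an assumption; use whatever up vector field FootPlacementData exposes in the inspector:

```csharp
using UnityEngine;

// Sketch: when a climb begins, point each foot's probe "up" vector away
// from the climbing surface. The upVector field name is assumed.
public class ClimbSetup : MonoBehaviour
{
    public Transform climbSurface; // e.g. the ladder or wall transform

    public void BeginClimb()
    {
        foreach (FootPlacementData foot in GetComponents<FootPlacementData>())
            foot.upVector = -climbSurface.forward; // perpendicular to the wall
    }
}
```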

  • Avoiding Self-Collision: It is always good to add the character's layer to the Layers To Ignore list of the Mec Foot Placer component, to make sure there are no self-collisions between the character's physics objects and the rays cast by the component.


If you want to use the pelvis adjustment feature, you have to turn off "Optimize Game Objects" in the Mecanim rig. When the game objects are optimized, the skeleton is only updated inside Mecanim's avatar update loop and cannot be adjusted manually. So it is recommended to use the pelvis adjustment feature only if you don't need "Optimize Game Objects".