Saturday, January 27, 2018

How To Implement Active Ragdoll

You might have seen video game characters get hit by a bullet or an explosion, turn into a ragdoll, and still end up in a pose close enough to the animated pose that they can blend back smoothly into the animation. In this post I show how you can implement an active ragdoll that drives the joint motors toward the animation pose to achieve this. I've already written another post about how to use these kinds of systems once they are implemented, and you may want to check it out too:

Combining Ragdoll and Keyframe Animation to Achieve Dynamic Poses

So this post focuses on implementing such a system and comes with the required source code. The code here is written for Unreal Engine 4. Before going forward, note that UE4 already supports active ragdoll as an engine feature, so you don't really need to reimplement it; this post takes an academic approach to show developers how these kinds of systems are implemented in general. To find out how UE4's physically based animation works, you can watch this nice video tutorial:

How To Make An Active Ragdoll / Unreal Engine 4

Physically based animation is also known as active ragdoll or animation-driven ragdoll; if you have used Havok Animation before, it's called powered ragdoll there. In this article I call it active ragdoll, meaning a ragdoll-simulated character trying to follow animation pose(s).


How Does An Active Ragdoll Work?

So imagine you have set up your character with joint constraints and motors and it can simulate as a ragdoll on collisions. Your goal is to make the ragdoll follow an animation pose while the body still reacts to external physical forces and avoids falling into a loose physical pose.

To solve this problem, let's consider the simple case of just one bone. When you set up your character for ragdoll simulation, a rigid body is attached to the bone to represent its physical properties like volume, mass, friction, etc. The movement of this physical body is controlled by a physical constraint and a motor attached to it, so the angular and linear velocity of the bone is controlled and limited by the constraints it's assigned to. Now imagine you just run a normal ragdoll simulation without following any animation: the bone, as a rigid body, gains some angular and linear velocity, and based on these velocities the rigid body moves through space.

Since bones have angular and linear velocities, we can control these velocities to make the physical body follow the animation. To move the ragdoll toward the animation, we have access to the rotation of the rigid body attached to the bone (its last-frame rotation), the rotation of the bone from the animation pose, and the frame's delta time. The difference between the animated rotation and the rigid body's simulated rotation from the last frame tells us how much we should rotate the bone this frame. We take the rotation axis of this difference rotation (as a quaternion) and scale it by the quaternion's angle divided by delta time. We scale it this way because angular velocity is a vector whose direction is the rotation axis and whose length is the angular speed.

This angular velocity rotates the bone from its current rotation to the rotation coming from the animation pose within the current frame's delta time. Just note that the animation must already have been updated at this point, so we are reading this frame's valid animated pose, and physics should be updated afterwards to avoid a one-frame delay. Because we read the animation pose, set the physics velocities, and then run the simulation, these calculations should happen after the animation update (or after a specific pose update such as an animation node) and before the physics step.

The same calculation can be done for the bone's linear velocity. We just use the bone's translation instead of its rotation, working with vectors instead of quaternions. Applying linear velocity helps the character follow the animated bone positions as well, which is useful for bones carrying translation keyframes, like the pelvis.
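To make the math above concrete, here is a minimal sketch of the per-bone calculation using UE4's FQuat and FVector types. The function names and arguments are placeholders I chose for illustration; the full anim-node implementation this post is about follows further below.

#include "CoreMinimal.h"

// Sketch only: the angular velocity (in rad/s) that rotates a rigid body from its
// simulated rotation to the animated rotation within one frame.
FVector ComputeTargetAngularVelocity( const FQuat& AnimatedRotation, const FQuat& SimulatedRotation, float DeltaTime )
{
 // World-space rotation that takes the simulated orientation onto the animated one.
 FQuat Delta = AnimatedRotation * SimulatedRotation.Inverse();

 // Take the shortest arc (a quaternion and its negation represent the same rotation).
 if( Delta.W < 0.f )
 {
  Delta = FQuat( -Delta.X, -Delta.Y, -Delta.Z, -Delta.W );
 }

 // Axis-angle form of the difference rotation.
 FVector Axis;
 float Angle;
 Delta.ToAxisAndAngle( Axis, Angle );

 // Direction = rotation axis, length = angle / dt, so the rotation completes in one frame.
 return Axis * ( Angle / DeltaTime );
}

// Sketch only: the same idea for linear velocity, using translations instead of rotations.
FVector ComputeTargetLinearVelocity( const FVector& AnimatedPosition, const FVector& SimulatedPosition, float DeltaTime )
{
 return ( AnimatedPosition - SimulatedPosition ) / DeltaTime;
}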

If we do this procedure for all the bones in the skeleton hierarchy, all the rigid bodies will follow the animation pose exactly. The bone velocities can still be changed by external forces applied from the physics world, so the bones keep reacting to external forces while trying to follow the pose from the animation. You can find all the code I've written for this case below; I provide an animation node handling the subject. Note that the skeletal mesh must be simulating physics to see the effect:

Header here:


#pragma once

#include "Engine.h"
#include "AnimGraphNode_Base.h"
#include "Runtime/Engine/Classes/Animation/AnimNodeBase.h"
#include "ActiveRagdollAnimNode.generated.h"

USTRUCT(BlueprintType)
struct FAnimNode_ActiveRagdoll : public FAnimNode_Base
{
 GENERATED_USTRUCT_BODY()

 UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Links)
 FComponentSpacePoseLink mBasePose;

 UPROPERTY( EditAnywhere, BlueprintReadWrite, Category = Links, meta = (PinShownByDefault) )
 FRotator mInitialRelativeRotation;

 UPROPERTY( EditAnywhere, BlueprintReadWrite, Category = Links, meta = (PinShownByDefault) )
 FVector mInitialRelativeTranslation;

 UPROPERTY( BlueprintReadWrite, EditAnywhere, Category = Links, meta = (PinShownByDefault) )
 float mTranslationWeight;

 UPROPERTY( BlueprintReadWrite, EditAnywhere, Category = Links, meta = (PinShownByDefault) )
 float mRotationWeight;

protected:
 float mDeltaTime;

public:
 /************/
 FAnimNode_ActiveRagdoll();

 /****************************************************************/
 void Initialize_AnyThread( const FAnimationInitializeContext& Context ) override;

 /****************************************************************/
 void Update_AnyThread( const FAnimationUpdateContext& Context ) override;
/****************************************************************/
void CacheBones_AnyThread( const FAnimationCacheBonesContext& Context ) override;

 /*********************************************************/
 void EvaluateComponentSpace_AnyThread( FComponentSpacePoseContext& Output ) override;
};


UCLASS(BlueprintType)
class ANIMBASEDRAGDOLL_API UAnimGraphNode_ActiveRagdoll : public UAnimGraphNode_Base
{
 GENERATED_BODY()
 
public:
 UPROPERTY(EditAnywhere, Category = Links)
 FAnimNode_ActiveRagdoll mAnimPose;

 virtual FText GetNodeTitle(ENodeTitleType::Type TitleType) const override;
 virtual FLinearColor GetNodeTitleColor() const override;
 virtual FString GetNodeCategory() const override;
 virtual void CreateOutputPins() override;
};



CPP here:


#include "AnimBasedRagdoll.h"
#include "ActiveRagdollAnimNode.h"
#include "AnimationGraphSchema.h"
#include "AnimInstanceProxy.h"
#include "PhysicsEngine/PhysicsAsset.h"
#include "Components/SkeletalMeshComponent.h"

#define ANIM_MATH_PI 3.14159265f

//******* FAnimNode_ActiveRagdoll implementations *******************
FAnimNode_ActiveRagdoll::FAnimNode_ActiveRagdoll()
{
 mInitialRelativeRotation = FRotator( 0.f, 0.f, 0.f );
 mInitialRelativeTranslation = FVector::ZeroVector;
}


/*********************************************************/
void FAnimNode_ActiveRagdoll::Initialize_AnyThread( const FAnimationInitializeContext& Context )
{
 mBasePose.Initialize( Context );
}

/****************************************************************/
void FAnimNode_ActiveRagdoll::Update_AnyThread( const FAnimationUpdateContext& Context )
{
 mBasePose.Update( Context );
 mDeltaTime = Context.GetDeltaTime();
}
/****************************************************************/
void FAnimNode_ActiveRagdoll::CacheBones_AnyThread( const FAnimationCacheBonesContext& Context )
{
 mBasePose.CacheBones( Context );
}

/****************************************************************/
void FAnimNode_ActiveRagdoll::EvaluateComponentSpace_AnyThread( FComponentSpacePoseContext& Output )
{
 mBasePose.EvaluateComponentSpace( Output );
 
 const UAnimInstance* const lAnimInstance = Cast< UAnimInstance >( Output.AnimInstanceProxy->GetAnimInstanceObject() );
 USkeletalMeshComponent* lSkel = lAnimInstance ? lAnimInstance->GetOwningComponent() : nullptr;

 if( lSkel && lSkel->IsSimulatingPhysics() )
 {
  FQuat lBoneQuaternion;
  FQuat lOwnerRotation;
  
  FVector lBoneTranslation;
  FVector lOwnerTranslation;

  const AActor* const lCharOwner = lAnimInstance->GetOwningActor();
  const int32 lPhysicsBoneCount = lSkel->GetPhysicsAsset()->SkeletalBodySetups.Num();
  
  if( lCharOwner )
  {
   lOwnerRotation = lCharOwner->GetActorRotation().Quaternion();
   lOwnerTranslation = lCharOwner->GetActorLocation();
  }
  else
  {
   lOwnerRotation = FQuat::Identity;
   lOwnerTranslation = FVector::ZeroVector;
  }

  for( int32 i = 0; i < lPhysicsBoneCount; i++ )
  {
   const FName lBoneName = lSkel->GetPhysicsAsset()->SkeletalBodySetups[ i ]->BoneName;
   const FCompactPoseBoneIndex lInd( lSkel->GetBoneIndex( lBoneName ) );
   
   lBoneQuaternion = Output.Pose.GetComponentSpaceTransform( lInd ).GetRotation();
   lBoneTranslation = Output.Pose.GetComponentSpaceTransform( lInd ).GetLocation();

   //Setting up Linear Velocity

   // Updating bone positions in world space
   if( mTranslationWeight > 0.f )
   {
    const FVector lTargetPositionInWorld = lOwnerTranslation + lOwnerRotation *(
     (mInitialRelativeTranslation + mInitialRelativeRotation.Quaternion() * lBoneTranslation));

    FVector finalVelocity = (lTargetPositionInWorld - lSkel->GetBoneLocation( lBoneName )) / mDeltaTime;

    if( mTranslationWeight < 1.0f )
    {
     finalVelocity = FMath::Lerp( FVector::ZeroVector, finalVelocity, mTranslationWeight );
    }

    lSkel->SetPhysicsLinearVelocity( finalVelocity, false, lBoneName );
   }


   //Setting up angular velocity
   if( mRotationWeight > 0.f )
   {
    const FQuat lTargetRotationInWorld = lOwnerRotation * mInitialRelativeRotation.Quaternion() * lBoneQuaternion;
    const FQuat lRotDiff = lTargetRotationInWorld * lSkel->GetBoneQuaternion( lBoneName ).Inverse();
    const float lAngDiff = 2.0f * FMath::Acos( lRotDiff.W );
    FVector lAngVelocity;

    //Checking for shortest arc.
    if( lAngDiff < ANIM_MATH_PI )
    {
     lAngVelocity = lRotDiff.GetRotationAxis().GetSafeNormal() * lAngDiff / mDeltaTime;
    }
    else
    {
     lAngVelocity = -lRotDiff.GetRotationAxis().GetSafeNormal() * (2.0f * ANIM_MATH_PI - lAngDiff) / mDeltaTime;
    }

     if( mRotationWeight < 1.0f )
     {
      // A Lerp from zero simply scales the corrective angular speed by the weight.
      lAngVelocity *= mRotationWeight;
     }

    lSkel->SetPhysicsAngularVelocityInRadians( lAngVelocity, false, lBoneName );
   }
  }
 }
}

//******* AnimGraphNode implementations. The code below is only used by the Unreal Editor; it is not run-time code *******************


/******************Title Color!****************************/
FLinearColor UAnimGraphNode_ActiveRagdoll::GetNodeTitleColor() const
{
 return FLinearColor(0, 12.0f, 12.0f, 1.0f);
}

/***************Node Category***********************/
FString UAnimGraphNode_ActiveRagdoll::GetNodeCategory() const
{
 return FString("Active Ragdoll");
}

/*******************************Node Title************************************/
FText UAnimGraphNode_ActiveRagdoll::GetNodeTitle(ENodeTitleType::Type TitleType) const
{
 return FText::FromString("Move Physical Bones to Animation Pose");
}


/*******************************Exposing Output Pins************************************/
void UAnimGraphNode_ActiveRagdoll::CreateOutputPins()
{
 const UAnimationGraphSchema* Schema = GetDefault<UAnimationGraphSchema>();
 CreatePin( EGPD_Output, Schema->PC_Struct, TEXT( "" ), FComponentSpacePoseLink::StaticStruct(), /*bIsArray=*/ false, /*bIsReference=*/ false, TEXT( "Pose" ) );
}


As you can see, there are weight values for both angular and linear velocity. If these weights are set to 1.0, the physics will follow the animation pose 100%, so I suggest changing these two weights based on gameplay events. For example, if the character gets hit by an explosion, the weights can be set to zero to get a pure ragdoll simulation; after a while, based on the average linear speed of the skeletal mesh (or its pelvis velocity), you can increase the value until it reaches 1.0, and afterwards you can blend back to animation and turn off simulation. Simulation should be turned on only when it's needed because it can be expensive.
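As a rough illustration of that weight control, the snippet below ramps a single weight value back toward 1.0 as the pelvis slows down. The bone name, the speed threshold and the interpolation speed are my own placeholder choices; the resulting value would be fed into the node's mRotationWeight / mTranslationWeight pins, for example through an anim instance variable.

#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"

// Sketch only: ramp the active-ragdoll weight back toward 1.0 once the ragdoll settles.
// "pelvis", the 400 cm/s threshold and the interp speed of 2 are illustrative assumptions.
float UpdateActiveRagdollWeight( USkeletalMeshComponent* Mesh, float CurrentWeight, float DeltaSeconds )
{
 if( !Mesh || !Mesh->IsSimulatingPhysics() )
 {
  return CurrentWeight;
 }

 // Use the pelvis speed as a rough measure of how much the ragdoll has settled.
 const float PelvisSpeed = Mesh->GetPhysicsLinearVelocity( TEXT( "pelvis" ) ).Size();

 // Stay loose while the body is still flying around, then blend back toward full animation following.
 const float TargetWeight = ( PelvisSpeed > 400.f ) ? 0.f : 1.f;
 return FMath::FInterpTo( CurrentWeight, TargetWeight, DeltaSeconds, 2.f );
}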

And here are some GIFs out of the results:




The GIF above shows the character with 100% angular velocity adjustment applied and 0% linear velocity. As you can see, the character follows the joint rotations while responding to external forces.





The GIF above shows the character with 50% angular velocity adjustment applied and 0% linear velocity. As you can see, the character follows the joint rotations, but not completely.




The GIF above shows the character with 10% angular velocity adjustment applied and 0% linear velocity. As you can see, the character has a loose pose but still carries some percentage of the animation pose.




The GIF above shows the character with 100% angular and linear velocity applied. As shown, the character keeps its animated form completely, with both bone positions and rotations, but the response to external forces looks like a glitch.


The GIF above shows the character with 30% angular and linear velocity applied. As shown in the GIF, the character keeps its animated form only partially, but the response to external forces is much smoother.

The results above show that these weights should be controlled externally, based on gameplay events, so the character keeps a natural physical pose that fits the gameplay context.


Alternative Approach Using Spring Dampers

There is also an alternative approach to implementing active ragdoll. Instead of directly setting the angular and linear velocity of the bones, we can apply a torque or force that pushes them toward the animation pose. This approach uses a spring-damper equation, where the animation pose is treated as the equilibrium point of the system. First we measure how far the bone is from its equilibrium point and apply a torque that moves the bone toward that point (the spring). Then a resisting term proportional to the bone's angular velocity is applied in the opposite direction to smooth out the movement (the damper) and avoid oscillation as much as possible.

The torque applied to each bone is calculated with this equation:

Joint_Torque = k * ( joint_rotation_from_animation * Inverse( current_joint_rotation ) ).RotationAxis - m * joint_angular_velocity; // rotations are quaternions, in world space


As you can see, there are two gain values involved in this equation and you need to set them manually. These gains scale the strength of the spring and of the damping, and they should differ from joint to joint based on mass and momentum. The problem with this approach is that it takes a lot of trial and error to set these gains per bone and avoid wild oscillation, but it has pros as well: it can produce smoother reactions to external physics, because we're not force-setting the velocities directly, just applying torques and forces toward the target, so the bones keep their current momentum.
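For reference, a minimal sketch of that spring-damper drive in UE4 terms might look like the code below. SpringGain and DamperGain correspond to k and m above; scaling the spring axis by the error angle is my own addition for smoother behavior near the target (the equation above uses the raw rotation axis), and the function itself is an illustration, not code from a shipped system.

#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"

// Sketch only: drive one bone toward its animated world-space rotation with a torque
// instead of setting velocities directly. SpringGain (k) and DamperGain (m) are the
// hand-tuned, per-bone gains discussed above.
void ApplySpringDamperTorque( USkeletalMeshComponent* Mesh, FName BoneName, const FQuat& AnimatedRotationWorld, float SpringGain, float DamperGain )
{
 if( !Mesh || !Mesh->IsSimulatingPhysics() )
 {
  return;
 }

 // Rotation error between the animated pose and the simulated bone (world space).
 FQuat Error = AnimatedRotationWorld * Mesh->GetBoneQuaternion( BoneName ).Inverse();
 if( Error.W < 0.f )
 {
  // Take the shortest arc.
  Error = FQuat( -Error.X, -Error.Y, -Error.Z, -Error.W );
 }

 FVector Axis;
 float Angle;
 Error.ToAxisAndAngle( Axis, Angle );

 // Spring pulls toward the animation pose, damper resists the bone's current spin.
 const FVector Torque = SpringGain * Axis * Angle - DamperGain * Mesh->GetPhysicsAngularVelocityInRadians( BoneName );

 Mesh->AddTorqueInRadians( Torque, BoneName );
}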

However, the famous paper below suggests an inertia-scaled method for joints, where the bone gains are scaled automatically based on their parent bone's angular momentum. With this, specifying the gains takes less trial and error, but it still needs care.

ZORDAN, V. B., AND HODGINS, J. K. 2002. Motion capture-driven simulations that hit and react. In ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 89–96.

Sunday, December 24, 2017

Using Multiple Bones to Look At a Target in World Space.

This post explains the details of a plugin I created for Unity3D. The plugin makes characters look at a target in world space in a biomechanically correct way, using more than one bone. The plugin can be found on the Asset Store via the link below:

http://u3d.as/136L

The post is organized as follows. First, the workflow of the plugin is fully explained to help users understand the topic; the post also takes an academic approach, so anyone who wants to implement this feature on another platform can get an idea of how to do it. Second, the API and parameters of the plugin are explained so users know exactly how to use the system, and at the end, important notes are provided. Make sure to read the important notes on setting up even if you don't have time to read the whole documentation.

1- Introduction


Imagine you want to type something on your computer's keyboard. When you press the keys, you mostly use your fingers, and less movement comes from your arms or elbows. This illustrates a simple, basic rule in biomechanics: if you can use your small muscles to do something, you will use them and you won't involve your big muscles. Using bigger muscles means consuming more energy, which is avoided when unnecessary. Of course, using small muscles requires more focus and training; that's why a kid can start walking between ages one and two but can't tie their shoelaces until a later age.

Let's consider another example: a pull-up. When you want to pull your weight up on a pull-up bar, you first hang by your fingers; when their tendons, ligaments and muscles are stretched and can't hold your weight, you engage your elbows; when your elbow muscles are extended and the tendons are stretched, you add your arms, and the same continues with your shoulders, chest and abdominal muscles. As you can see, you tried to do the action with your small muscles first, and since they weren't powerful enough, you recruited your bigger muscles to finish the action.

Now let's extend this to one more example: looking at something. Imagine there is a picture in front of you. You can look at it without moving your head, just by moving your eyes. Now move the picture a few centimeters to your left; you can still look at it with your eyes, although you feel your eye muscles getting stretched and tired. Move it a bit further to the left, and you will find you can't look at the picture using only your eyes, because your eye muscles are fully stretched and the picture is out of your eyes' joint range, so you need to use your neck and head as well. Continue like this, moving the picture away from you and even toward your back: your head and neck joints and muscles get stretched, and you need to rotate your spine and chest joints to look at the target. In the end, your eyes, head, neck and spine are all in action to let you look at the target, just like the pull-up example where your finger muscles alone couldn't pull your weight up and many other muscles came into action.

Perfect Look At works based on this rule. In Perfect Look At, you define a chain of bones and their corresponding joint limits in degrees. If the first joint reaches its limit, the second one starts rotating to look at the target; when the second bone reaches its limit, the third one starts rotating, and this continues until the end of the bone chain. This way you can use a combination of bones to look at a target rather than simple head movement alone. Let's have a look at the results in these videos:







2- Technical Workflow


This section describes the look-at procedure in detail.

Every bone defined in the look-at bone chain has a forward vector, which represents the bone's current direction. A target in world space is defined as the point the character wants to look at. To look at that point, the system starts from the first bone in the chain: it takes the bone's forward vector and calculates the rotation that brings it onto the difference vector between the target point and the bone position in world space. The pic below shows the vectors.



The first bone rotates and is clamped to its joint limit range. If the first bone hits its joint limit, the second bone starts rotating to let the first bone reach the target. Please note that the second bone should be an ancestor of the first bone: not necessarily its parent, but a bone in the same hierarchy that rotates the first bone when it rotates. The same relation should hold for bones two, three and so on. For example, if the first bone is the head, the second bone can be the neck or the chest, because they are ancestors of the head, but it can't be an eye, because an eye is not an ancestor of the head.

To rotate the next bone in the chain, the system needs a forward vector and a target vector so it can find the rotation between them. The forward vector is calculated by adding the normalized rotated forward vector of the first bone to the position difference from the first bone of the chain to the current bone (all in world space).

The target vector is calculated by taking the position difference vector from the first bone of the chain to the current bone and adding the normalized position difference vector from the first bone's position to the target point. This way, by rotating the next bones in the look-at chain, we make sure the first bone in the chain aligns with the target even if the target is outside its joint range. One small note: if the first bone has a large translation offset from the next bones in the chain, the final look-at may have a small error and won't exactly meet the target, but in general the character will look at the target with good precision, which gives a convincing impression of where the character is looking.

The same workflow continues until the first bone can hit the target or the final bone in the chain meets its joint limit.

Each joint limit is measured as the angle between the bone's forward vector and its parent's forward vector; by parent I mean the exact parent in the skeleton hierarchy. The joint angle limit could be computed more simply as the difference between the current bone rotation and its corresponding reference-pose rotation, but unfortunately Unity Mecanim does not expose the reference pose to scripts, and currently there is no way to get it. Whenever Unity exposes the reference pose to scripts, both the bone forward vector and the parent forward vector settings will be removed and the reference-pose forward vector will be used instead, providing an easier setup for users.
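The plugin itself is written for Unity, but the per-bone step described above can be sketched with the same UE4 math types used earlier in this post, purely for illustration. None of the names below come from the plugin, and the proportional clamp is a simplification of the real limit handling.

#include "CoreMinimal.h"

// Sketch only: rotate one bone's forward vector toward the target point and clamp the
// result against a joint limit measured from the parent bone's forward vector.
FQuat ComputeClampedLookAtRotation( const FVector& BoneForwardWorld, const FVector& ParentForwardWorld, const FVector& BonePositionWorld, const FVector& TargetPositionWorld, float JointLimitDegrees )
{
 // Rotation that brings the bone's forward vector onto the bone-to-target direction.
 const FVector ToTarget = ( TargetPositionWorld - BonePositionWorld ).GetSafeNormal();
 const FQuat FullRotation = FQuat::FindBetweenNormals( BoneForwardWorld.GetSafeNormal(), ToTarget );

 // Angle the rotated bone would make with its parent's forward vector.
 const FVector RotatedForward = FullRotation.RotateVector( BoneForwardWorld.GetSafeNormal() );
 const float AngleToParentDeg = FMath::RadiansToDegrees( FMath::Acos( FVector::DotProduct( RotatedForward, ParentForwardWorld.GetSafeNormal() ) ) );

 if( AngleToParentDeg <= JointLimitDegrees )
 {
  // Within the joint limit: this bone can reach the target on its own.
  return FullRotation;
 }

 // Otherwise scale the rotation back so the bone stops at its limit; the remaining
 // error is what the next bone in the chain has to handle.
 const float Alpha = JointLimitDegrees / AngleToParentDeg;
 return FQuat::Slerp( FQuat::Identity, FullRotation, Alpha );
}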


3- Perfect Look At Component Properties


Up Vector:

Up Vector shows the character's world yaw axis, which is set to the engine's up axis by default. This vector is used as the axis for "Rotate Around Up Vector Weight". When "Rotate Around Up Vector Weight" is set to one, the bone rotates only around the up vector; for any value less than one, the axis blends between the original rotation axis and the up vector.
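As a tiny illustration of that blend (again using the UE4-style math types from earlier in this post, not the plugin's code), the axis could be computed like this:

// Sketch only: blend the look-at rotation axis toward the character's up vector.
// UpWeight = 0 keeps the original axis, UpWeight = 1 rotates purely around the up vector.
FVector BlendRotationAxisWithUp( const FVector& RotationAxis, const FVector& UpVector, float UpWeight )
{
 return FMath::Lerp( RotationAxis.GetSafeNormal(), UpVector.GetSafeNormal(), UpWeight ).GetSafeNormal();
}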

Target Object:

A game object used as the target object for the system. Characters with perfect look at component will look at this object.

Look At Blend Speed:

This value controls how fast the current look-at pose is blended in from the previous look-at pose. This smooth blending can be very helpful, especially when the look-at is applied on top of a dynamic animation with a lot of movement.

Draw Debug Look At:

If checked, the target and forward vectors for each bone are drawn in the scene viewport. Target vectors are drawn in red and forward vectors in green.

Look At Bones:

An array of look-at bone data. The size of the array should equal the number of bones you want involved in the look-at process. Make sure there is no missing bone in the array, otherwise the system will stop itself from working.

Bone:

The look at bone which is going to be rotated to look at the target.

Rotation Limit:

Joint limit in degrees. If the angle difference between the current bone and its parent in the skeleton hierarchy is higher than this value the next bone in the "Look At Bones" array starts rotating to help the first bone reach the target.



Rotate Around Up Vector Weight:

This value controls how much of the rotation should happen around the character's up vector. It can be very useful for bones higher up in the look-at chain hierarchy, for example spine bones. When Rotate Around Up Vector Weight is set to zero for a spine bone in a look-at chain, the spine bone can rotate in all three directions, while a human doesn't usually rotate the spine in three dimensions to look at a target. The two pics below show the differences:




The pic above shows the character with Rotate Around Up Vector Weight set to zero for all the bones. As you can see, the spine has some extra rotations which are not quite natural. When a humanoid wants to look at a target, it mostly rotates its spine around the yaw axis and less in pitch and roll. This limited rotation keeps the body balanced: the spine carries a large part of the upper body's mass, and rotating it shifts the center of mass so the character can lose balance. Setting this value for spine bones therefore helps achieve a more natural pose.


In the pic below, the values for Rotate Around Up Vector Weight are set as follows:

Eyes: 0
Head: 0
Neck: 0
Spine1: 0.1
Spine2: 0.2
Spine3: 0.7



As you can see in the results, the spine rotations are more natural now. For examples of this case, please check out the three provided prefabs in the project.

Forward Axis:

The forward vector of the bone. To find the forward axis of a bone, first switch the coordinate system to Local, then select the bone in the Hierarchy panel; afterwards you can find the forward vector. The picture below shows how to find the forward vector of the head bone of a character; as you can see, the forward vector of this bone is the Y axis:


Parent Bone Forward Axis:

The forward vector of the current bone's parent. To find the forward axis of the parent bone, first switch the coordinate system to Local, then select the bone's parent in the Hierarchy panel; afterwards you can find the forward vector. The picture below shows how to find the forward vector of the head's parent bone, which in this case is the neck; as you can see in the pic below, the neck's forward vector is the Y axis:


Reset To Default Rotation:

In Unity Mecanim, when an animation is retargeted onto a different rig, if that rig has more bones than the retargeted animation, those bones never get updated to any specific transform. This means they always use a cached value of the last valid transformation they received, and the pose buffer never gets flushed, which sometimes causes problems. To avoid this situation, make sure to check "Reset To Default Rotation". Check this box only when you are sure the look-at bones don't receive any pose from the current animation; otherwise leave it unchecked. Check out the two GIFs below to see the difference:


As you can see in the GIF above, the spine remains disconnected because it receives no pose from the animation and uses the last valid cached pose. By checking the Reset To Default Rotation box we create a valid rotation for the bones which don't have any animation but need to be in the look-at bone chain.




Linked Bones:

Linked bones are bones which should be rotated the same as the current bone. For example, a look-at bone chain can be created like this:

Lookat Bone 1 = Right Eye
Lookat Bone 2 = Head
Lookat Bone 3 = Neck
Lookat Bone 4 = Spine2
Lookat Bone 5 = Spine1
Lookat Bone 6 = Spine0

As you can see, there is no left eye here. So if you apply this look-at bone chain to the character, all the bones rotate based on their joint limits but the left eye remains still. Here we can define the left eye as a linked bone of the right eye, so wherever the right eye rotates, the left eye rotates with it, just like a linked transform. You can add as many linked bones as you want to the current bone.

To find examples of linked bones check out HumanCharacter and UnityCharacter prefabs in the project.

Linked Bones-Reset to Default Rotation:

This is exactly the same as Reset To Default Rotation in the Look At Bones. If you run into situations like the GIF below when you add linked bones, it means the linked bone doesn't carry any animation info and you need to check Reset To Default Rotation for that bone so the Mecanim pose buffer doesn't use the invalid poses.



4- Perfect Look At Component Public API:


GetLookAtWeight():

Returns the current weight of Perfect Look At. If the weight is zero, Perfect Look At is turned off; if it is one, Perfect Look At is applied 100%; any value in between blends between the animation and the procedural rotation provided by Perfect Look At.

SetLookAtWeight( float weight ):

Sets the current weight of Perfect Look At. Please note that if you use this function, any running transition will be cancelled, because Perfect Look At does not let external systems change the weight in two different ways at once (setting the look-at weight manually versus calling EnablePerfectLookAt/DisablePerfectLookAt).

This cancelling is provided to avoid having an error-prone pipeline. To find out more about transitions, check out EnablePerfectLookAt and DisablePerfectLookAt.

EnablePerfectLookAt( float time, bool cancelCurrentTransition = true ):

If this function is called, Perfect Look At's weight will blend to one within the specified time (blending in).

cancelCurrentTransition: If set to true and another call to this function or DisablePerfectLookAt is made while the system is still in a transition, the current transition time is reset to zero and the transition continues from the current weight to the new destination weight within the newly specified time.

If cancelCurrentTransition is false and the system is in a transition, any other call to DisablePerfectLookAt or EnablePerfectLookAt will be ignored.

DisablePerfectLookAt( float time, bool cancelCurrentTransition = true ):

If this function is called, Perfect Look At's weight will blend to zero within the specified time. All other details are the same as EnablePerfectLookAt; please refer to it to find out more about the function parameters.

IsInTransition():

Returns true if Perfect Look At is in a disabling or enabling transition, otherwise false.

GetTimeToFinishTransition():

If Perfect Look At is in a disabling or enabling transition, this returns the remaining time until the transition finishes.
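To make the transition semantics above concrete, here is a rough sketch of how such a weight transition could be managed. It is written in the same C++ style as the rest of this blog purely for illustration and is not the plugin's actual (C#) implementation.

#include "CoreMinimal.h"

// Sketch only: a weight transition with the Enable/Disable/cancel semantics described above.
struct FLookAtWeightTransition
{
 float CurrentWeight = 1.f;
 float TargetWeight = 1.f;
 float TimeRemaining = 0.f;

 bool IsInTransition() const { return TimeRemaining > 0.f; }

 // Mirrors EnablePerfectLookAt / DisablePerfectLookAt: blend toward a new target weight
 // over the given time, optionally refusing to interrupt a transition that is already running.
 void StartTransition( float NewTargetWeight, float Time, bool bCancelCurrentTransition = true )
 {
  if( IsInTransition() && !bCancelCurrentTransition )
  {
   return; // Ignore the request while another transition is in progress.
  }
  TargetWeight = NewTargetWeight;
  TimeRemaining = Time;
  if( Time <= 0.f )
  {
   CurrentWeight = NewTargetWeight; // Zero time means an instant change.
  }
 }

 // Call once per frame; moves the weight so it reaches the target exactly when the time runs out.
 void Update( float DeltaTime )
 {
  if( !IsInTransition() )
  {
   return;
  }
  const float Step = FMath::Min( DeltaTime, TimeRemaining );
  CurrentWeight = FMath::Lerp( CurrentWeight, TargetWeight, Step / TimeRemaining );
  TimeRemaining -= Step;
 }
};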

Important Notes on Setting up Perfect Look At:

Perfect Look At is a component; to use it, you just need to assign it to a game object. Some important points about using Perfect Look At are addressed here:

1- Turn Off Optimize Game Objects:

The only way to change bone transforms procedurally in Unity is in a component's LateUpdate. Unfortunately, Unity won't let you set bone transforms if the rig's "Optimize Game Objects" option is checked, so to make Perfect Look At work you need to be sure "Optimize Game Objects" is unchecked. There is no info in the Unity documentation on why it's impossible to transform bones in an optimized rig or how Unity optimizes the skeleton calculations.


2- Setting Reset To Default Rotation On Some Necessary Cases:

If you see any of the linked bones rotating constantly, make sure you turn on the linked bone's Reset To Default Rotation. To find out more about why this happens, please refer to the Reset To Default Rotation section of this documentation.

3- Defining the Forward Axis of The Bones and Their Corresponding Parents Correctly:

Make sure you always select the correct forward axis for both the bone and its parent. Switch the coordinate system to Local and check the bone's and its parent's forward axes in the local coordinate system. For more info please check out the "Forward Axis" and "Parent Bone Forward Axis" sections in this document.

4- Look At Bones Should Be In The Same Hierarchy But Not Necessarily Child and Parent:

The order of the bones in the "Look At Bones" array matters: it should follow the bone hierarchy. For example, suppose 4 bones (two eyes, the head and the chest) need to rotate using Perfect Look At. These bones should be specified in this order:

First Bone: Left eye ( its linked bone should be right eye )

Second Bone: Head

Third Bone: Chest

As you can see, the bones defined here are not necessarily parent and child, but they are in the same hierarchy. For example, the chest is the parent of the neck and the neck is the parent of the head, so when the chest rotates, the head also rotates.

5- Checking Character Prefabs as an Example:


Make sure to check the 3 character prefabs and their corresponding scenes as examples of Perfect Look At. All 3 have different rigs and they all use Perfect Look At. They can be found in Assets/Media/Prefabs.