The first Coneland executable I compiled animates the cone characters with my own wobble.cs script, written in C#, which wobbles each cone as it moves forward. Similar scripts turn them to look at the player, make them blink, and so on.
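The wobble itself is just a periodic tilt applied every frame. Since wobble.cs isn't shown here, this is only a guess at the sort of maths involved, sketched language-neutrally in Python (the amplitude and frequency values are invented for illustration):

```python
import math

def wobble_angle(t, amplitude_deg=15.0, frequency_hz=2.0):
    """Tilt angle (degrees) to apply to the cone at time t seconds.

    A simple sine wave: the cone leans left and right as it moves,
    peaking at +/- amplitude_deg, completing frequency_hz cycles per
    second. (Hypothetical parameters -- the real wobble.cs may differ.)
    """
    return amplitude_deg * math.sin(2.0 * math.pi * frequency_hz * t)

# At t = 0 the cone is upright; a quarter-cycle later it leans fully.
print(round(wobble_angle(0.0), 6))    # 0.0
print(round(wobble_angle(0.125), 6))  # 15.0 (quarter of a 0.5 s cycle)
```

In a Unity script the equivalent call would sit in an Update method, feeding the elapsed time in and applying the result to the character's rotation each frame.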
Below is a video showing them racing around...
I refined this a bit so that they are slower, smaller and the models are better drawn...
Anyhow, a question emerged for me: why bother learning Blender or 3DS Max to manually pose and record a tonne of set-piece animations? Why not just create arms, legs, wings, heads and so on, each with its own C# controller script that animates it relative to its origin? Then pin the limbs in Unity to follow a parent body (e.g. pin the arms at the shoulder to follow the torso), since Unity updates parent-child mesh relationships in exactly this way.
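The "pin the limb to a parent" idea is just transform composition: each frame, the child's local offset is rotated by the parent's orientation and translated by the parent's position. A minimal 2D sketch of that composition (Python, with names of my own invention rather than Unity's API):

```python
import math

def rotate(point, angle_deg):
    """Rotate a 2D point about the origin by angle_deg."""
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def limb_world_position(parent_pos, parent_angle_deg, local_offset):
    """World position of a child limb pinned at local_offset from its parent.

    This mirrors what a parent-child transform hierarchy does every frame:
    rotate the child's local offset by the parent's orientation, then
    translate by the parent's position.
    """
    ox, oy = rotate(local_offset, parent_angle_deg)
    return (parent_pos[0] + ox, parent_pos[1] + oy)

# An arm pinned 1 unit out from the shoulder: when the torso at (10, 0)
# turns 90 degrees, the arm swings around with it automatically.
x, y = limb_world_position((10.0, 0.0), 90.0, (1.0, 0.0))
print(round(x, 6), round(y, 6))  # 10.0 1.0
```

Unity chains this same composition down the whole hierarchy (torso to upper arm to forearm to hand), which is why pinning a limb to a parent is enough to keep it attached as the body moves.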
I think the reason that this is not generally done is two-fold:
1) Characters often require many animation sequences for independent body parts, such as turning the head independently of whatever animations are playing on the rest of the body. Unity provides an animation state machine to assist with this, so a 'tonne of set-piece animations' reduces to a more manageable set of clips per limb that can be easily combined into a tonne of permutations on the fly.
2) Animation involves a large amount of computation on the mesh points being deformed. If each parent-child limb were animated in managed code like C#, performance would be too slow, so it is done in unmanaged code like C++. Moreover, the relative mesh points are pre-calculated per frame in an animation file such as FBX, so that the C++ engine has less maths to perform on the fly.
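Point 1 is really a combinatorics argument: if each limb is driven independently, a small library of per-limb clips multiplies into a large number of whole-body states. A toy illustration (the clip names are invented, not from any real project):

```python
from itertools import product

# Hypothetical per-limb clip libraries.
head_clips = ["look-left", "look-right", "look-at-player", "blink"]
body_clips = ["idle", "walk", "run"]
arm_clips  = ["swing", "wave"]

# Because the limbs animate independently, the possible whole-body
# states are the Cartesian product of the per-limb clips.
combos = list(product(head_clips, body_clips, arm_clips))
print(len(combos))  # 24 whole-body permutations from only 9 authored clips
```

This is exactly the economy the animation state machine exploits: the animator authors clips per limb, and the combinations come for free at runtime.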
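Point 2 is the classic bake-versus-compute trade-off: baking an animation to a file trades offline computation and storage for cheap runtime playback, which becomes mostly a lookup. A purely illustrative sketch of the difference (Python; a real rig would be doing per-vertex matrix maths, not a single sine):

```python
import math

FPS = 30  # frames per second in the baked clip (arbitrary for this sketch)

def pose_angle(frame):
    """The 'expensive' per-frame computation. Here it is just a sine,
    but in a real rig it would be chains of matrix multiplies applied
    to every mesh point."""
    return 30.0 * math.sin(2.0 * math.pi * frame / FPS)

# Bake once, offline -- loosely what exporting an animation file does:
baked = [pose_angle(f) for f in range(FPS)]

# At runtime, playback is just an index lookup: no trig, no matrix maths.
def play(frame):
    return baked[frame % FPS]

assert play(7) == pose_angle(7)
```

The same principle scales up: the engine still has work to do at playback (blending, skinning), but far less than recomputing every pose from scratch each frame in managed code.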