Game Development Services

I've done quite a bit of modeling, UV mapping, rigging and animation on game projects over the last couple decades - but these days I've mainly been leading animation teams and developing tools and methods to strengthen the animation production pipeline.

It's been a while since I worked directly in the game industry, but I believe there is enough congruence between my unique skillset and current trends in the game industry for it to make sense to engage with game development again. Even just the emergence of real-time editing potential in Maya seems like enough to give my specific experience some value.

And I am very motivated to find a way to contribute to game evolution. That actually was my initial career intent back in the mid-80s, but I wanted to build storytelling skills first - to get a foundation - so I gravitated to Disney as an animator and then got side-tracked for the better part of the last 30 years (with some years of game development in the middle) designing and animating the best animatronics on the planet.

Time to get back on track.


Animation System & Pipeline Creative Direction

High-Efficiency Animation Workflow & UI Design

Gesture-Capture Animator Tool Development

Precision Tactile Input Tool Development

Real-time Animation Training

Transition Control Scripting System

Behavior-based Animation Datatype Development

Behavior-based Animation Workflow Development

Behavior-based Engine Control Strategies

Production Direction & Support

Concept Development Creative Support

Character Performance Profiling

Character Design and Rigging Strategy

Character Performance Strategy/Layout

Animation Team Recruitment & Leadership

Animation/Performance Direction

R&D Creative Support

General Asset Creation Support


Animation Pipeline Development Areas

I've been working with a few problem-sets pertinent to game development:

Animation Editing Efficiency - producing high-quality animation quickly to meet production requirements using project-tested techniques.

Performance Continuity Strategies - linking animation segments in a manner that maintains both responsiveness and performance quality beyond basic motion blending.

Behavioral Animation - Enhancing the animation process so that once a solid character performance is created, the elements of character behavior can be quantified and used as a baseline for parametric editing and in-game processing. The goal is to reduce production pressures as scale increases without losing performance quality.

Stepping Stones Toward Behavioral Animation - moving toward a more robust representation format and editing process, while ensuring that we maintain a working pipeline throughout. With so much on the line, there has to be a stable path to follow.

So what am I really talking about? I'll toss out a few thoughts to give you a better idea...

Development Area #1: The Animator's Toolkit



Designing effective tools for artists isn't a one-size-fits-all kind of thing - for any specific team and pipeline, the first order of business is listening and learning a lot about the specifics involved before diving into the design process. The notes below are thoughts based on my own experience as a character animator and pipeline director, with an approach to high-efficiency animation system design that has evolved over time.



What Boosts Productivity?

My own character animation skills were built up in an eclectic manner over the course of years, starting with life drawing for cel animation, then learning tactile and real-time methods while working with animatronics, getting some experience with live performance/puppeteering, and then leaning into CG keyframe animation as Maya, Softimage, 3ds Max and LightWave hit the market, followed by a number of years in game development before coming back to animatronics. Throughout that whole arc I was also actively evolving editing systems/pipelines and spearheading advancements in digital design, pre-vis tools and interactive system design. One thing all of that work had in common was intense pressure to increase productivity. I found myself consistently focused on the question above: "what boosts productivity?"

I don't presume to have all the answers, but I have a few...


Deep Focus

My impression is that the single biggest productivity factor for animators is achieving and maintaining deep focus - getting into and staying in 'the zone'. One design goal is to remove systemic impacts to that process.

One of the things we do consistently when animating is to keep track of a large amount of spatial and temporal information related to a specific set of performance moments. Analysis, deconstruction and reconstruction of spatial information is our bread and butter.

Any workflow element that consistently requires the use of the spatial processing part of our brain for other tasks is likely to impact deep focus. There is a distinct advantage to keeping that specific cognitive processing part of the animator's brain as clear from extraneous work as possible.

Learning the Ropes

The system I was using in the early 90's had no graph editor. I had a tactile UI with basically three modes to work with - keyframe, "detail" real-time data capture using one or two knobs at a time, or using a waldo to do multi-axis real-time data capture. These modes were blended into a single workflow so the artist could easily move from one to another fluidly without any real 'mode' change.

It took some time to get used to working with real-time capture, but once I gained facility and confidence I found that it was an effective way to produce organic motion and saved a huge amount of time over keyframing when laying in certain classes of movement.

In this same period I did some work with Henson's team and gained an appreciation for the serendipitous strengths of live performance. I felt that the puppeteers lacked a certain amount of discipline in working through fine detail, but their loose performance style brought performances to life in ways that sometimes elude character animators.

I guess I got greedy. I wanted the best of both worlds, and set out to develop systems that would allow animators to maintain the detail control that we crave, and still have access to live performance advantages, without losing the ability to deliver on a production schedule.

The console shown here was my first UI/UX design effort. The resulting system was in continuous service for 25 years as the primary show programming tool for Disney attractions.


Enemy #1 - the Mouse

Think about what happens when you use a mouse. One thing that is absolutely required is to visually track the cursor as you move it with the mouse, pen or trackpad. You can't do anything with a mouse without spatially tracking the cursor, which puts that operation directly at odds with the other spatial processing you are trying to do.

Of course, every major application that has been designed in the last 30 years is extremely mouse-centric, and Maya is no different. The one saving grace is that an animator can still achieve a level of deep focus when using curves for control selection directly in the viewport, making the impact of mouse operation less egregious than with most other operations, but the process is still not optimal for a few reasons.

I would submit that deep focus can be further improved with UI modifications, particularly when additional viewer UI is combined with tactile tools and real-time methods.

CG Real-time Editing Exploration

When I left WDI in 2002 the first thing I did was build a tactile unit of my own (with the help of a talented embedded systems engineer). I wanted to experiment with applying the same methods I'd come to appreciate in animatronic editing to CG character animation. One of the criteria I considered important was being able to pose a whole character without a lot of UI interaction. Although film rigs might have hundreds of controls, game rigs usually have to be more efficient, so I settled on 64 analog inputs.

Implementing a system was problematic though. Maya wasn't yet really ready to support real-time editing. I'd been using MotionBuilder since the early FilmBox versions and focused on taking advantage of its real-time recording and playback core. What I found was that the way MB exposed those features - designed to support mocap - imposed constraints that made it impossible to achieve an efficient workflow. Doing character animation with this type of system requires solid control over control selection and an efficient way to apply a sampled recording to the primary animation layer. After a number of plug-in development rounds, I simply couldn't streamline the workflow enough to make it worth choosing over hand-keying. To make tangible progress, I needed a solid content-creation platform to work with. That took a decade longer than seemed necessary, but CG content-creation software development is naturally driven by the large-market pipelines, which continue to be narrowly focused on the hand-key process. Only recently has the primary content-creation tool reached a point where it can really support directly capturing and manipulating real-time input. Autodesk's strategy to retire MotionBuilder is, IMO, an indication that they now view Maya as a viable real-time platform - finally!

The console pictured here looks like it was built in a garage because, well, it really was. I started with an old audio mixing board, put a wooden frame around it to get the rake I wanted without having to machine parts, and personally soldered hundreds of points to wire it up - and by some miracle it was 100% the first time I turned it on! This was intended as a learning tool, so I wasn't too concerned about aesthetics - it's a dinosaur, but still works 20 years later.


Enemy #2 - Reading Text

Another thing that tends to muck with deep focus is reading. I'm not precisely certain why - whether it's actually spatial processing or some other interference - but when I process text a good chunk of the 'deep' in deep focus disappears and I have to work back to it again.

There are some really useful tricks to reduce text processing, particularly with regard to elements that are used repeatedly. Numeric memory association for often-used lists is a great one. Any group of elements used often enough to commit associated numbers to memory can be recalled quickly without a big hit to deep focus. Add this to a 10-key and an operation command keypad - again to reduce mouse use - and the combination of numeric indexing and tactile muscle-memory can make a really big difference.

I do this for as many list-like structures as possible. When I sit down to animate, I print out pertinent numbered lists and tape them around my monitor. After a few days of using the reference sheets, I've generally memorized the numeric associations well enough to recall them without looking at the lists. Even when I have to use the reference sheet, I still use the command keypad and 10-key to reduce the impact on deep focus.
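To make that concrete, here's a rough sketch of the idea in Maya's Python (maya.cmds) - the control names and the number-to-list mapping are just placeholders for whatever a given rig and animator would actually memorize:

```python
# Minimal sketch: numeric indexing of control lists so a 10-key entry can
# replace mouse-driven selection. Assumes Maya's Python API (maya.cmds);
# the control names below are hypothetical and would come from the rig.
import maya.cmds as cmds

# Numbered lists the animator memorizes (the same lists taped to the monitor).
CONTROL_SETS = {
    1: ["head_ctrl"],
    2: ["neck_ctrl", "head_ctrl"],
    3: ["spine_01_ctrl", "spine_02_ctrl", "spine_03_ctrl"],
    4: ["l_clavicle_ctrl", "l_shoulder_ctrl", "l_elbow_ctrl", "l_wrist_ctrl"],
}

def select_by_index(index, add=False):
    """Select a memorized control set by its number - no cursor tracking needed."""
    controls = CONTROL_SETS.get(index)
    if not controls:
        cmds.warning("No control set mapped to %d" % index)
        return
    if add:
        cmds.select(controls, add=True)
    else:
        cmds.select(controls, replace=True)
```

Bound to a keypad, the lookup becomes pure muscle memory rather than a visual search.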

4 Interwoven Workflows

As valuable as real-time motion recording and a tactile UI are, the real power of the system is fluid access to different workflows, so the animator can move back and forth effortlessly between them and develop a personal style that works best for their talents and abilities. So the true value isn't so much defining one specific way of working that's better than all others, but creating a flexible set of tools that let the artist find the best way through.

I tend to think in terms of four main workflows:

[1] Pose-Testing - blocking in high-level performance with holds between poses to make it easy to quickly adjust broad timing and layout decisions.

[2] Long-form Real-time Capture - Laying in recorded movement either in multi-axis blocks with a Waldo, or one or two axes at a time with individual knobs or sliders. It takes some training to do this well, but for some types of work it can be extremely valuable. The challenge is determining whether the performance freshness and acquisition speed of the live capture outweighs the detail cleanup required.
I personally find strategies for this method to be very performance-specific - animating to music or audio with very strong 'beat' indicators is a good candidate for building up with real-time recording, but with more verbally nuanced dialog-driven segments I tend to do pose-testing first and then lay over long-form real-time if and when appropriate.

[3] Short-form Real-time Capture - This is mainly to 'punch in' little moments that have very specific organic timing requirements that would take a while to get right through keyframing. Grabbing a knob and doing a three-second patch a few times (to experiment with different timing details) and then quickly cleaning up the ends can often save hours of noodling.

[4] Curve Editing - Once the performance is close, the fine precision of curve editing is hard to beat.

A key feature in making this all work is having a way of managing the different keyframe environments of sampled and sparse-key hand animation. There are some different strategies for handling this - no room to get into the details here, but it's a critical part of the mix.
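As one illustration of what I mean - not a specific production tool, just a sketch of one possible strategy - a dense real-time recording can be thinned to sparse keys by keeping only the samples that a straight-line interpolation between kept keys can't predict:

```python
# Minimal sketch (an assumed strategy, not a production tool): reduce a densely
# sampled real-time recording to sparse keys by keeping only samples that
# deviate from a linear prediction by more than a tolerance.
def reduce_samples(times, values, tolerance=0.01):
    """Greedy key reduction for a single animation channel."""
    if len(times) < 3:
        return list(zip(times, values))
    keys = [(times[0], values[0])]
    anchor = 0
    for i in range(1, len(times) - 1):
        t0, v0 = times[anchor], values[anchor]
        t1, v1 = times[i + 1], values[i + 1]
        # Linear prediction of the current sample from the last kept key and the next sample.
        alpha = (times[i] - t0) / (t1 - t0)
        predicted = v0 + alpha * (v1 - v0)
        if abs(values[i] - predicted) > tolerance:
            keys.append((times[i], values[i]))
            anchor = i
    keys.append((times[-1], values[-1]))
    return keys
```

The tolerance becomes the knob that decides how much of the recorded 'life' survives into the hand-key layer.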

The system in this image was designed to support waldos for puppeteering appendages - arms, legs, trunk and head. The waldo shown was mainly designed for animatronic arm and body control - a different one would be used for torso and head motion. Note: CG waldos would have different requirements.


Embracing Real-time Recording

I hesitate to use the term 'motion capture' when speaking to animators, but that is really what the subject is. The difference between this and the common context is in providing the animator with localized - personal - methods of recording semi-complex motion with immediate and accurate control over what gets recorded and how it patches into the animation data. The goal is to give the animator some of the advantages of live performance within the context of a fairly standard character animation process - we don't want to lose anything, just broaden and strengthen our toolkit.

This image of Jim Henson is from a video of him showing audiences how the waldo device is used to control a CG Waldo character. Not long after this was shot I was fortunate enough to meet Jim and worked with a number of his performers and the Creature Shop team. I took my discussion with Jim about the strengths and weaknesses of our different systems to heart, and worked to integrate the principles he advocated into my work. It took a while, but I sincerely believe that resulted in raising my skills as an animator to a higher level.

Creating a Path Forward

Probably the hardest part of implementing the tools discussed in this section is gaining facility with processes that are new to character animators. Animators spend so much time mastering the technologies involved with CG animation, learning to train their brains to handle all of the deconstruction and abstraction inherent in the system to wring some life out of all those electrons. Adding new techniques that take time to get good with isn't an easy sell.

The best first step I know of to address this is to create a smooth on-ramp to the new highway, both for the artist and the production pipeline. We wouldn't be taking anything away from the existing toolkit, just adding some new tools to it. But there is still risk in how a system is implemented. Once fully adopted, at least some artists are likely to feel most comfortable with the biggest, most versatile tactile 'busy-box' they can get. But at first, and for some artists permanently, a version of the system that doesn't take up a lot of desktop space is likely to be better.

The design challenge this creates is that the system depends on custom hardware, and hardware fabrication follows the laws of physics - it doesn't have quite the flexibility of code. There are lots of ways to manage this, but it is definitely a consideration in the design process.

This UI and console design was done while I was trying to wrangle MotionBuilder. It included a number of interesting UI concepts and the hardware wasn't anything technically challenging - I didn't fabricate the console, just mocked up a prototype for testing, but as mentioned previously, getting the MotionBuilder SDK to meet the workflow requirements was the main obstacle.



A Mocap Editor Designed for the Animator

Mocap is an important part of the process for many game pipelines. There is a natural tension between mocap and traditional animation - a certain amount of methodological oil-and-water - which creates challenges in managing performance decision-making and keeping the animator feeling like an integral part of the creative process.

This class of tools can help by bridging some of the bigger gaps in the process, allowing the animator to step over the line with a bit more confidence, directly editing data in the same sampled format it comes in with a robust toolset. When translation into sparse-key format for hand-key editing is still necessary, the process should be more graceful and creatively manageable.

Defining Requirements

Most of the systems I've designed to date have needed to meet requirements for the theme park production environment. That includes things like portability and ruggedness, so the field animator can comfortably pack the system to and from a construction site every day and work on a ledge or a rock or a tiny, wobbly table in a soup of dirt, moisture and toxins. And although I've been aligning many of the design elements with Maya as the core technology - to support digital design - final deliverables for theme park applications have additional specific requirements as well.

A game production environment will have a different set of design parameters and so, naturally, an effective solution will be specialized as well. So, while I've included some images here to demonstrate the evolution of this class of tools, the actual tools that are most applicable to a specific game pipeline might be dramatically different.

I'm looking forward to figuring out what form that would take and how I can help your animation team reach a bit higher.




Development Area #2: Performance Strategy



This category focuses more on utilizing existing pipelines, or making small augmentations, to help hide common issues by applying motion-blending management techniques. Techniques and tools vary from studio to studio of course, and I have no doubt that there are many features and tools in use that I'm not aware of, having essentially kept up with game technology from the outside for the last few years. So my apologies in advance for any declarative statements that don't apply to a specific team or need updating.

Having said that, I feel pretty comfortable suggesting that game animation doesn't really maintain the illusion of life throughout a gameplay experience in the way developers would ideally want. Fairly early on in any given game characters stop being breathing, thinking beings and become more of an abstract avatar. The result is that engagement and storytelling are both impacted at a pretty basic level.

Motion-blending and transition strategies are fundamental areas to focus on, as a high percentage of what the player experiences is the result of very short animation segments stitched together using automation. There are certainly other aspects of performance design and strategy that are important, but transitions have a pretty big impact on presentation, and I'll specifically focus here on basic directional mobility - walking, running, jumping and climbing.


How Can Transition Strategy Be Improved?

Motion-blending tools have evolved, and some provide really helpful features to manage individual parts of the anatomy to reduce sliding errors. That's a great start. But I feel like there is more that can be done in this critical area of character performance...


Rough Around the Edges

When I play a game and see my character spin on a dime while running, slide around or move in an unnaturally clipped manner - or just appear generally soulless - I'm always disappointed. It's not that I'm surprised - I've been playing games since Pong first came out and I know the vocabulary. I grew up on it - it's part of my DNA. But at that point the character stops being a viable agent. Soon after that I tend to lose interest in the story, ignoring most of the carefully scripted narrative in favor of focusing on gameplay guided by level design and other cues. I can't say that most gamers have the same experience, but I feel that my own experiences have validity. In most other mediums I'm a story and character junkie, but with games there just isn't usually enough to keep me 'on the train'.

But I don't believe it really has to be that way.

(note: this is not as true of some limited-scope, visually-focused indie games - which I've found to be mesmerizing)

Mobility Robustness

Transition issues around mobility are largely an artifact of gameplay responsiveness requirements. We can't sacrifice the responsiveness of the player environment - that's a core requirement - but we can drill down and add some layers of detail to those brief, endlessly repetitive bits that add up to a large percentage of the game experience. Any given transition is maybe just a quarter to a half-second long - sometimes just a few frames. Based on the percentage of gameplay in which they are viewed, they may be the most important performance moments in the whole project, along with the cycled segments they link together.


Pseudo Animator-in-Training

One of my objectives with transition strategy is to, crudely perhaps, model the types of decisions animators make when keying similar movement. I don't expect the model to be perfect - that's a high bar to achieve with the relatively crude tools we have at our disposal. But we should still be able to reduce negative cues significantly with some care and a splash of transition-engine complexity. With a couple of extra layers of detail we can do quite a bit.

Anticipation and overlap are the concepts to focus on. Follow-through may be a bridge too far in this case, so let's stick a pin in that, but anticipation and overlap can generally be broken down into a fairly small number of objective rules and applied with existing parameters without muddying the waters of timing and complexity so much that we lose responsiveness. When applied with care, those rules may not always produce perfect animation, but they should produce better animation most of the time. They can fool most of the people most of the time.

In this case, we're mainly targeting autonomic issues - muscle physiology and low-level nervous system control. It takes longer to get big muscles up to speed than small ones. When doing big things with an appendage, our low-level brain tends to lead with the big muscles first to get them ramping up, then bring in smaller muscle groups in series to make a specific move. Overlapping motion has other practical purposes, such as managing force on both ends of an action so we don't tear our ligaments apart any faster than is necessary, which is also a driver for anticipation. Having some parts lag behind helps distribute force at the start of a big motion.
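A crude sketch of how that "big muscles lead, small muscles follow" rule might be encoded - the group names and frame offsets here are illustrative guesses, not measured values:

```python
# Sketch of per-group lead offsets so larger muscle groups ramp up first and
# smaller ones overlap in behind them. Values are illustrative assumptions.
GROUP_LEAD_FRAMES = {
    "hips": 0,        # largest mass starts ramping first
    "torso": 2,
    "shoulder": 3,
    "upper_arm": 4,
    "forearm": 5,
    "hand": 6,
    "fingers": 7,     # smallest mass trails the action
}

def offset_transition_start(base_start_frame, group):
    """Delay a group's blend start so big groups lead and small groups overlap in."""
    return base_start_frame + GROUP_LEAD_FRAMES.get(group, 0)
```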

Art & Science

One of the guidelines I've learned to trust is that when an animator has a really useful rule of thumb - traditional character animation axioms or personal observations - it invariably relates directly to actual biology. I've come to trust that correlation so much that I actively seek out ways to prove or disprove it.

For this type of effort, that correlation becomes particularly valuable. If we can thread the needle in how we design an animation system so those relationships between art and engineering are not only recognized but embedded in the DNA of the system, the long-term value is likely to be significantly greater as we learn how to better apply and manipulate automation without sacrificing expression.


Beautiful Curves

In some cases, just manipulating a B-spline might be enough to manage timing variation. It would be great if we could trust a parametric bezier solution in the general case, but given the short durations transitions operate within, this can get us into trouble and create nasty edges if we don't have tight control over how all the incoming and outgoing positions line up. And since imposing extra restrictions on the animator is counter to the purpose, other strategies will probably have to be employed.
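Here's a minimal sketch of the kind of check I mean - my framing, not any particular engine's API - evaluating a cubic bezier ease over a short blend and flagging overshoot as a sign that a different strategy is needed:

```python
# Sketch: evaluate a cubic Bezier timing curve on one channel and detect when
# it leaves the start/end range (one symptom of a blend "freaking out").
def cubic_bezier(p0, p1, p2, p3, t):
    """Standard cubic Bezier evaluation for a single channel value."""
    u = 1.0 - t
    return (u**3 * p0) + (3 * u**2 * t * p1) + (3 * u * t**2 * p2) + (t**3 * p3)

def transition_overshoots(start, end, out_tangent, in_tangent, samples=16):
    """True if the blend leaves the [start, end] range anywhere along the curve."""
    lo, hi = min(start, end), max(start, end)
    for i in range(samples + 1):
        t = i / float(samples)
        v = cubic_bezier(start, start + out_tangent, end - in_tangent, end, t)
        if v < lo - 1e-6 or v > hi + 1e-6:
            return True
    return False
```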

In Gratitude to Pierre Bezier

I don't want to 'dis' B-splines though. They are one of the animator's best friends, since they tend to mimic the natural world we are trying to model in so many ways - particularly in the area of muscle response we are talking about here. If managed well, bezier curves may be excellent predictors of the need to switch to another solution set - when they start to freak out.

Thanks Pierre.


Anatomical Hierarchy

A big ‘tell’ is having too many body parts moving in the same way at the same time. Breaking the skeleton up into nested anatomical groups is very helpful. A given transition can then have a profile with timing offsets per group including min and max durations to manage overlap and maybe a few other parameters.
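As a sketch of what such a profile might look like as data (field names and values are illustrative assumptions, not a spec):

```python
# Sketch of a per-group transition profile, assuming a nested anatomical
# grouping already exists on the rig.
from dataclasses import dataclass, field

@dataclass
class GroupTiming:
    offset_frames: int    # how long after the transition trigger this group starts blending
    min_duration: int     # shortest acceptable blend for this group
    max_duration: int     # longest blend before it reads as floaty

@dataclass
class TransitionProfile:
    name: str
    groups: dict = field(default_factory=dict)   # group name -> GroupTiming

run_to_stop = TransitionProfile(
    name="run_to_stop",
    groups={
        "legs": GroupTiming(offset_frames=0, min_duration=4, max_duration=8),
        "torso": GroupTiming(offset_frames=2, min_duration=6, max_duration=12),
        "head": GroupTiming(offset_frames=3, min_duration=6, max_duration=14),
    },
)
```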

Each mode of movement has specific physiologic and/or behavioral rules. For example, a character considering switching direction (while standing still) doesn't need to move their feet if the adjustment is +/- 60 degrees or less; they can just turn their head and maybe make a small torso adjustment. Starting a walk from that position may require a different intro segment to get the legs and weight shift correct, but the complexity isn't crazy. Beyond 60 degrees things get more complicated, as emotional state/alertness starts to impact behavior enough to matter even if the POV is a God-level 3rd person, but it is still thoroughly, objectively definable.

Getting those bits and pieces right is important. Players may not immediately notice the difference, but the character will feel more alive and when combined with other performance details, the whole will add up to a stronger performance.

To Jump or Not To Jump

Another challenge with segmented animation is determining whether it's okay to let a segment finish before initiating a transition or whether an additional layer of logic/control is needed to jump out in mid-playback. If transitions are required before a cycle is about half over, special consideration may be required. In some cases, adjusting the curve algorithm used on a few axes during the blend may be sufficient, or it may be necessary to add some procedural modifications to one or more anatomical groups to maintain appropriate arcs and secondary motion, or even change the targeted start position.
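A bare-bones sketch of that decision, using the rough half-cycle rule above - the threshold and return values are placeholders, not a defined behavior:

```python
# Sketch of the "let it finish or jump out" decision for a cycled segment.
def plan_exit(cycle_frame, cycle_length, early_exit_threshold=0.5):
    """Decide how to leave a cycled segment when a transition is requested."""
    progress = cycle_frame / float(cycle_length)
    if progress >= early_exit_threshold:
        return "blend_at_cycle_end"   # safe to let the segment resolve normally
    return "mid_cycle_exit"           # needs per-group curve or procedural help
```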


Procedural Animation

Some transition performance needs don't really fit well into motion-blending. A quick example might be having to plant a foot a bit wide to absorb force when making a quick turn. Another might be wanting a character to look up on the front end of a leap to grab a rope when the climbing cycle has the head looking forward most of the time. In both cases the data isn't within the interpolation window, but resolving with canned data is likely to lead to timing or additional performance issues. Triggering a brief procedural gesture script can take care of the look without mucking up other structures.

Scripting Tools

Procedural animation can be a really valuable tool to bridge canned animation with automated forms of data management. But figuring out how to integrate with both sides of the bridge isn't trivial. Historically procedural animation has been used with a fairly hard line between engineer and animator. I'm in favor of having a specialized class of scripting designed for the animator which essentially overrides the animation system, including motion-blending, so the artist can play around with parameters directly in-engine.

When changing direction, the head almost always leads the torso, with the torso leading the rest of the body. Whether the pupils lead or follow depends on the priority of focus, but they generally lead when someone is changing direction. This could pretty effectively be made procedural, following rules based on what the player is likely trying to look at. If the legs follow a different set of transition rules, still consistent with the amount of orientation change, the overall impression is likely to look pretty natural.
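A tiny sketch of what those staggered leads might look like as data plus a scheduling helper - the group delays are illustrative guesses, not measured timings:

```python
# Sketch: staggered lead times for a procedural direction change -
# eyes first, then head, then torso, then legs. Values are assumptions.
LOOK_LEAD_SECONDS = {
    "eyes": 0.00,
    "head": 0.08,
    "torso": 0.20,
    "legs": 0.35,
}

def schedule_turn(trigger_time, target_yaw):
    """Return (group, start_time, target_yaw) tuples for a procedural turn."""
    return [
        (group, trigger_time + delay, target_yaw)
        for group, delay in LOOK_LEAD_SECONDS.items()
    ]
```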

There are secondary advantages to this approach. For example, procedural head moves could be triggered when there is activity within peripheral perception - sights or sounds - adding to positive performance cues without interfering with overall player control.


Secondary Action

I'm not a great fan of the term 'keep-alive motion' as it tends to be used in the context of doing minimal movement until something important happens and that minimal movement often becomes repetitive quickly. I consider resting an integral part of a performance. Even if the character is active most of the time, they should still be fully alive when waiting. In many cases, a combination of canned segments and variable procedural gestures can be effective.

Scripted vs. Canned

Until a fully behavioral scripting system is developed, most complex character motion will by necessity be in the form of static data sets with either hard-end matching or motion-blending profiles, augmented when appropriate with procedural animation. This configuration can work effectively for most existing game types, but as performance complexity starts to exceed basic directional mobility with a small number of augmentations, the complexity of the transition strategy will increase dramatically, as well as the complexity of the player control UI.

I believe that there is a natural limit to how far we can push game development and stick with a canned data model. But as long as we stay within that limit, we can still bring a lot of life to game characters, even in the pressure-cooker of in-game action.




Development Area #3: Behavioral Animation



The intention here is to develop an animation system that supports both creative development and automated non-linear processing, whether that is for game playback or advanced robotic mobility. It's a big step, particularly for the artist, and one that is sure to raise concerns from animators. I believe it is important that it be figured out to help the medium evolve, and equally important that it be done in a way that aids creative expression and strengthens the artist's position in the development process.


Why Mess with the Character Animation process?

There are a number of reasons that developing a new baseline data type and generation process for character animation is worth the push-back this concept is likely to cause initially:

Creative Expression: Animators have to manage a lot of complexity, which over time takes its toll on their development as actors - there is a natural sacrifice of creative development to the technical demands of the process. I don't believe it has to be that way, at least not to the degree it is true today.

Game Development: Game evolution depends on innovation. The animation generation and playback tools in current use really haven't changed much in the last couple of decades, and they put constraints on what can be attempted. Character animation isn't the only category constraining the evolution of gaming, but it's the one I have the most experience in. I expect arguments that counter that assertion, but I don't consider 'because that's how it's always been done' to be enough of a reason by itself.

Economic Drivers: Character animation is expensive, there's no question about that, and although there are some technologies that have had an impact - mocap and animation re-targeting come to mind - it feels like there is an awful lot of tension and risk in the current pipeline, and not the kind of tension and risk that teams would choose. I'm not an advocate of replacing animators with bots, but I am an advocate of reducing risk so investors are willing to experiment more and give teams more freedom to do ground-breaking work.

Integration with Emerging Technologies: There's a bunch of amazing things going on in both the digital and the practical world that animators could be more involved with if we weren't locked into a static, machine-code-like data model.

Here's a quick overview of some of the baseline concepts that seem applicable...



Nested Complexity

My biggest pet peeve about the existing animation representation system is that it inherently contains absolutely no useful information about the meaning of the data. It makes historical sense, of course. The paradigm is exactly the same as it was in the medium it originated in - film - where the only purpose is to convey images to a screen. There is no intermediary processing or variation - the only purpose is to accurately move the original data set to another type of display.

Character animation is still problematic for film though. It's incredibly laborious, which has had a natural impact on its evolution, largely constraining it to a niche market. At least until the advent of CG - now animators get to branch out into mocap editing too.

Game development requires levels of dynamics that go way beyond film. There's a lot going on in a game environment, and the characters need to live in that chaotic space and still be believable. Film development economics could really use a system that captures meaning in the motion data and streamlines production, but game development truly needs it.

Machine/Assembly Language vs. OOP

So in the interest of full disclosure, some background...

After high school, before I realized that giving up art was a terrible idea for me, my major in college was computer science for a couple years. This was back when most computer work involved punch-cards. I did quite a bit of assembly language coding, along with early high-level languages. Then I changed paths, became an animator and watched the OOP revolution unfold from the 'outside'. Software engineers could suddenly develop ideas with a much stronger set of complexity-management tools. And it just kept evolving. It was an amazing thing to witness.

But I and my animator friends were all still doing animation with the same basic toolset, even after we made the jump to CG animation. Game projects pretty quickly became focused on mocap to address content productivity needs and performance requirements as engines and graphics systems improved. Now, a couple decades later, animators are still working with the equivalent of assembly and machine code. We accept it because it's all we have and we've spent so very much time learning to use it well.

I consider that pretty tragic. And unnecessary. The artists and the industry deserve better. Yes, the artist still needs to be able to be in full control, making sure everything moves just right, but we can still be control freaks with a more robust toolset. I doubt that many software engineers would choose to go back to coding search algorithms from scratch each and every time they need one. The concepts of encapsulation, abstraction, inheritance and polymorphism apply just as well to movement as they do to computational logic.


Character Integrity

The development of character can't be automated. Whether the source is an actor or an animator, the art is in figuring out how a specific character behaves uniquely from all others, and hopefully, in a compelling manner. And that's one place where the human brain has overwhelming advantage.

In my mind, the baseline character definition in a behavioral animation system should be a working in-engine equivalent of a character model sheet document, showing any animator who needs to work with a character how it should move. That baseline may be created from scratch or by applying deviations and details to an existing, more generic baseline character definition. Either way, the first step is to establish a solid artist-driven example.

Investment Casting

At first, this part of the process will probably take longer than it would to just generate an equivalent number of character animation segments from scratch using the existing keyframe process. In fact, initially that is precisely how behaviors will have to be defined - by taking keyframe animation and adding additional information to a profile.

In addition to other advantages, this can be compared to the difference between doing a quick one-off cast for a sculpture or doing investment casting when you know you will need to make many, many copies of a sculpt. In the case of a behavioral animation profile 'making copies' may mean handing the animation to other animators on the team or in other arenas (e.g. outside studios or for marketing efforts). The IP of the movement is baked right in, as much as the model and texture. A significant amount of IP control over character motion goes with the character.

For production, this should significantly lower the amount of low-level animation decision-making that has to be done as the content scales up, and for the equivalent of re-targeting to similar-but-different creature versions. It should also add to the advantages of retargeting and help keep the animation team focused more on high-level acting than low-level keyframe matching.


Parametric Definition

An underlying concept here is defining character in terms of deviations from a 'normal' profile. In general concept this is a variation of the Eigenface parametric modeling approach used for facial recognition, but using character movement instead of image elements. Easy to say, but more challenging to design, as we are working with time as well as space, in addition to a highly subjective topic. And, as previously discussed, there is no 'white space' in between motions the way there is between image edges or scripted dialog. Transitions are messy.

Normalization & Deviation

Normalization and deviation feel like a natural part of the process already - we just don't capture that information. When I analyze human and animal motion, I naturally do so comparatively, identifying how an actor moves differently than I expect, or differently than another actor, or how much variation there is in the actions of a herd of antelope. An understanding of basic gestural language is part of each animator's personal toolkit, and it builds with experience. We want to quantify some of that information, as it will be useful for in-game performance processing and playback.
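To make 'quantify' a little more concrete, here's a hedged sketch in the spirit of that Eigenface analogy - pose sequences as arrays, a mean 'normal' profile, and per-character deviations. Real data would need time alignment and a lot more care; this is only the shape of the idea:

```python
# Sketch: compute a mean "normal" motion profile across characters and the
# deviation of each character from it. Assumes pre-aligned clips of equal shape.
import numpy as np

def deviation_from_normal(character_clips):
    """character_clips: dict of name -> (frames x channels) arrays, same shape."""
    stacked = np.stack(list(character_clips.values()))   # (characters, frames, channels)
    normal = stacked.mean(axis=0)                        # the "normal" profile
    return {name: clip - normal for name, clip in character_clips.items()}
```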


Physiologic Control Hierarchy

When working with live CG or animatronic performance design, one of the first things I do is break up the skeleton into logical anatomical groups. Once I have a nice nested anatomical hierarchy, I can start to manipulate it in interesting ways - I just need an animation playback engine with that kind of support.

On some past projects I haven't been able to corral off-the-shelf engines to support this effectively and permit non-linear triggering and selective overriding and transition-blending of groups with overlapping joints, so I had to have a custom playback engine developed. That level of control becomes pretty critical when you start managing an entire performance with procedural animation. I don't believe either Unreal or Unity support that feature set natively, but I believe it will be needed.

Physics-Autonomics-Action-Expression

There is a hierarchy to the class of motion itself, related to how it is controlled. Physics elements like gravity, autonomic elements like breathing, balance and blinking, lower-level actions like walking or scratching, and high-level expressive elements like communicative gesturing and attitude adjustments each have different types of drivers and rulesets. They can overlap, of course. We can override breathing, blinking or balance (to a degree - before gravity kicks in) with expression decisions. And there is a wide gray line between action and expression, but structurally these four essentially have additive rule sets in the order listed.

Most of the time, the animator would be focused mainly on the higher two levels, with indications of the impacts of influence from the lower two levels and a means of overriding or allowing their direct influence.
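A rough sketch of that layered composition - which layers contribute additively and which override - with the layer contents and blend rules purely as assumptions:

```python
# Sketch: compose a pose from the four motion classes in order, where higher
# layers either add on top of lower ones or explicitly override them.
LAYER_ORDER = ["physics", "autonomics", "action", "expression"]

def compose_pose(layer_outputs):
    """layer_outputs: dict layer -> {channel: (value, override_flag)}."""
    pose = {}
    for layer in LAYER_ORDER:
        for channel, (value, override) in layer_outputs.get(layer, {}).items():
            if override or channel not in pose:
                pose[channel] = value
            else:
                pose[channel] += value   # additive contribution on top of lower layers
    return pose
```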


Representational Hierarchy

The architectural structure of the data type itself is really the most crucial part of the whole system. The structure needs to work in conjunction with a coherent language of organic motion that can classify any movement or set of movements in a manner that makes logical sense to both an artist and machine automation. No small part of that includes being able to define edges and jump-points to support effective transitions to and from other behaviors.

Scripting - Playback - Data

The primary use-cases would be to (1) initially define motion using existing low-level static data (time and position keyframe data), (2) define behavioral profiles based on deviation from existing profiles, procedural scripts and static data segments, (3) link behaviors together with a high-level scripting language that includes low-level procedural controls, and (4) package data and scripting into a playback format that is efficient for system processing.


Semi-structured Metadata

Depending on the application, some projects may need relatively little behavioral metadata and others may need very complicated logical structures to manage performances properly. As such, my guess is that this structure needs to support an arbitrary breadth and depth of hierarchy, incorporating a relatively small set of data types that can be linked and cross-linked together with a robust multi-layered indexing system.
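Just as a thought experiment, that might look something like this - an arbitrarily nested node with typed attributes and cross-links by id; the field set is a guess, not a spec:

```python
# Sketch of a semi-structured metadata node: arbitrary nesting via children,
# cross-linking via ids. All field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class BehaviorNode:
    node_id: str
    node_type: str                                 # e.g. "behavior", "gesture", "transition"
    attributes: dict = field(default_factory=dict) # free-form key/value metadata
    children: list = field(default_factory=list)   # nested BehaviorNodes
    links: list = field(default_factory=list)      # node_ids elsewhere in the index
```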

But to be honest, I'm just throwing up words - experimentation and more than a couple of brains will need to mix together to settle on a definition that actually makes sense. I just feel that the system will probably have to work for a pretty wide variety of representational models.

Embedding Meaning

The system needs to work for a wide variety of representational models, but it can't really be a total free-for-all. There will have to be some consistency in what we mean by 'meaning'. In some cases it may just be an indication of where appendages are and what type of motion is being applied - regions identified to help make more believable transitions with fewer odd changes in speed or position. Or identifying levels of character attention - contemplative vs. aware vs. attentive vs. highly attentive vs. panicking. Or identifying animation deviations due to weight being carried. Or maybe all of those and dozens of other classifications with or without inter-affective relationships.


Conservation of Energy

One of the big axioms for me is that humans and animals don't waste energy, or at least not arbitrarily. Everything is done with purpose, and when it's done, the character either rests or decides to purposefully do something else. Keeping that in mind often helps keep me from overanimating.

Aberrant behavior also exists of course, and some people and animals exhibit behavior that isn't strictly 'normal', which can be seen as wasting energy, but they don't really do that by choice, so I tend to put it back in the autonomic category.

Rest - Action - Rest

Adopting a behavioral structure that identifies rest as a start-end point may be helpful. Things should tend to want to stop at some point, giving us an equivalent of the 'white space' or silence between dialog lines to work with - even if the character is still breathing and doing what resting figures do. All action behavior-sets eventually end at a rest behavior of some sort. I'm not sure that helps us architecturally, but it's a pretty good performance guideline.

The downside is that when a character is active - like during most gameplay - there can be dozens to hundreds, maybe thousands, of independent behaviors strung together and overlapping each other before anything like a real rest occurs. So we still need a robust way to link everything. Still, the idea of keeping track of appendage action without rest may facilitate some interesting ways to handle exhaustion, strength, accuracy, etcetera during play.
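As a small illustration of that last idea - the gain and recovery rates are arbitrary placeholders - a per-appendage fatigue accumulator might look like this:

```python
# Sketch: track action-without-rest per appendage group so exhaustion, strength
# or accuracy modifiers can build up and recover during play.
def update_fatigue(fatigue, active_groups, dt, gain=0.2, recovery=0.5):
    """fatigue: dict group -> 0..1; active groups accumulate, idle groups recover."""
    for group in fatigue:
        if group in active_groups:
            fatigue[group] = min(1.0, fatigue[group] + gain * dt)
        else:
            fatigue[group] = max(0.0, fatigue[group] - recovery * dt)
    return fatigue
```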


Be the Baby

Building a behavioral language from scratch seems loosely analogous to learning to talk or walk. There are a whole bunch of ways to do it that don't work, and the only real way to get good is to fall down a lot first and learn what does work.

For all the animators that might use this type of system in the future, I - we - get to be the baby.

The Role of Low-Level Code

As excited as I am to figure out how to make a high-level behavioral structure work, I don't expect we'll be saying goodbye to animators generating keyframe data any time soon. For one thing, we need a solid bridge (next section). For another, there's really no immediate need to take anything away. Hopefully we are just adding tools to the existing toolbox.


Keyframes & Data Points

One of the big issues I have with CG keyframe editing is that it inherently treats all keyframes as data points and all data points as keyframes. I believe there should be some class distinction when it comes to animation data.

My best guess is that there just weren't many character animators in the room when software engineers built the early keyframe editing tools and appropriated the term. That implementation doesn't represent what keyframes really were historically.

Keyframe History

The word keyframe originally described a frame-drawing in animation that was more important than a number of those on either side of it. Each drawing was a data point, but some data points helped describe important transition inflection moments and allowed the lead animator to make timing notations describing how the keyframe should relate to the previous keyframe, how many data points (drawings) should exist between them, and the timing breakdown that should define how quickly those 'inbetweens' resolve to the previous or next keyframe.

Cel animation isn't exactly like CG animation, and much of that notation, done in a timing chart, was there to simulate things very much like a well-placed B-spline, which practically happens automatically in CG. But identifying meaningful moments that don't necessarily correlate to specific data points is likely to be really important to this effort.


A Markup Language for Motion

So since we're never going to change the current nomenclature, let's leave keyframes as what they are, agree that they are really just data points, and come up with different terminology - say, 'Behavioral Keyframes' - for the important moments near the beginning and end of a segment (but not usually right at the edge). And maybe add 'Gestural Keyframes' for high-activity moments in the middle, and 'Jump Window' start and end frame references for any point where it's acceptable to transition out of the middle.

Then each of those timing reference classes can be defined relative to any and all applicable anatomical groups, and in terms of motion durations before and after, for more robust transition processing. It's the beginning of a motion markup language that can be integrated into a high-level procedural scripting system, allowing us to start to blur the line between procedural and static data types.
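A first-pass sketch of those markup classes as plain data structures - the names follow the terminology above, and everything else is an assumption to be argued over:

```python
# Sketch of the motion markup terms: behavioral/gestural keyframes plus jump
# windows, each scoped to anatomical groups. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class MarkedMoment:
    frame: int
    kind: str                 # "behavioral_key" or "gestural_key"
    groups: list              # anatomical groups this moment applies to
    lead_frames: int = 0      # motion duration before the moment, for transition planning
    follow_frames: int = 0    # motion duration after the moment

@dataclass
class JumpWindow:
    start_frame: int
    end_frame: int
    groups: list              # groups that may be overridden inside this window

@dataclass
class SegmentMarkup:
    segment_name: str
    moments: list = field(default_factory=list)
    jump_windows: list = field(default_factory=list)
```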

Timing Chart Notation

For me, the important thing about the cel animation timing chart system was that it was primarily a communication tool between animators - a way to describe meaning. However, it was mostly focused on low-level timing, so the inbetween artists would know how to break up the drawings between keyframes. Mostly, it was there to help them simulate the right type of curve - slow-in, slow-out or even. And when necessary there would be multiple charts for individual anatomical groups, along with any special performance notes they might need.

In that situation, humans were communicating to other humans, so quite a lot could be done with shorthand notes. Here we are trying to communicate performance-critical information to a machine, so we need to be pretty explicit.






Development Area #4: Stepping Stones to Behavioral Animation



Building a Bridge

Maintaining a viable production pipeline throughout the process is high on the priority list. Fortunately, creating a non-destructive development path shouldn't be terribly complicated. Small, testable steps should be manageable, with recognition that some parts of the development path will be unknowable until we learn things from the initial steps. A quick high-level order of development might look like this:

1) Create Initial Motion Markup Language Data Type - Provide animators with a quick-and-dirty version of a metadata workflow that uses a starter set of elements, like basic transition parameters and a broad gesture/behavior classification structure. Maybe just using Maya attributes to carry the extra data (a minimal sketch follows the list below), although I expect most of the workflow will need to be within the engine. The artists would then provide development test data while generating content for a production project.

2) First Pass Engine Tool/Feature Development Suite:
- Baseline Engine Transition Feature Development - Core features to facilitate robust transition definition and execution using overlapping anatomical hierarchies and prioritized blending.
- Artist-Friendly Procedural Scripting Tool - Includes both traditional logic/timing pseudo-code style descriptors and manipulation of the Motion Markup Language to define robust procedural profiles.
- Behavioral Animation Playback Engine - Interprets or packages scripted and keyframe data from behavioral profiles for efficient playback.

3) Lather - Rinse - Repeat - After an initial round of development and production testing, a careful analysis will need to be done to define the evolving shape of the system and its workflow requirements. With some care, it should be possible to implement it into the production pipeline in stages. This will serve multiple purposes: [1] maintaining a working production system, [2] training Animators and TDs, and [3] creating a valuable feedback loop of design requirements and adjustments.
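And, as referenced in step 1 above, a minimal sketch of the quick-and-dirty Maya-attribute route using maya.cmds - the attribute name and payload shape are just illustrative, not a defined spec:

```python
# Sketch: carry behavioral markup on a Maya node as a string attribute so it
# travels with the scene/export. Attribute name and payload are assumptions.
import json
import maya.cmds as cmds

def tag_segment(node, markup):
    """Attach a behavioral-markup dictionary to a node as a string attribute."""
    attr = "behaviorMarkup"   # hypothetical attribute name
    if not cmds.attributeQuery(attr, node=node, exists=True):
        cmds.addAttr(node, longName=attr, dataType="string")
    cmds.setAttr(node + "." + attr, json.dumps(markup), type="string")

# Example usage on a hypothetical export root:
# tag_segment("run_cycle_root", {"class": "locomotion/run",
#                                "jump_windows": [[12, 28]],
#                                "transition_profile": "run_to_stop"})
```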