JALI Transforms Facial Animation Workflows by Combining Automation with Directorial Control through JFace

JALI’s complete solution for 3D lip synchronization and facial animation for animated TV series, film, and games.

We believe that a complete expressive facial performance can be automatically generated from an input audio stream and tagged text transcript.


JALI provides products and services for the complete automation of high-end lip sync and facial animation, with the option of ultimate animator directorial control. Our system delivers fast, simple animation curves, providing higher quality and greater efficiency.

With plugins for Autodesk Maya, Unreal Editor and Unity 3D, our modular solution can be purchased as a suite or a custom toolkit. It integrates directly into standard and proprietary pipelines and is compatible with most industry-typical rigs.



Standalone audio and tagged text transcript processing tool that outputs:

    • Audio-Transcript alignment.

    • Speech-style detection and audio features such as volume, pitch, shimmer, and formants

    • Multi-lingual support for 9 languages

    • Accurate & Editable

    • Fast: 2–5 seconds of processing per 10 seconds of audio

    • Any length (tested up to 12 mins)

    • Batch processing
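To make the first output concrete: audio-transcript alignment of this kind is typically a list of time-stamped phonemes tied back to the words of the transcript. The structure below is a hypothetical sketch for illustration, not JALI's actual file format:

```python
# Hypothetical phoneme-level alignment (illustrative only, not JALI's format):
# each entry maps a phoneme to its source word and start/end time in seconds.
alignment = [
    {"phoneme": "HH", "word": "hello", "start": 0.00, "end": 0.08},
    {"phoneme": "AH", "word": "hello", "start": 0.08, "end": 0.15},
    {"phoneme": "L",  "word": "hello", "start": 0.15, "end": 0.24},
    {"phoneme": "OW", "word": "hello", "start": 0.24, "end": 0.40},
]

def phoneme_at(t, alignment):
    """Return the phoneme being spoken at time t (seconds), or None."""
    for entry in alignment:
        if entry["start"] <= t < entry["end"]:
            return entry["phoneme"]
    return None
```

A downstream tool can sample such a track at any frame time to decide which mouth shape to drive, which is what makes the alignment both accurate and hand-editable.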



Fast, accurate, easy-to-edit animation curves

    • Multi-lingual

    • Quality on par with professional key-framing or performance capture

    • Output sensitive to speaking style

    • Sparse phonetic animation curves are easily edited

    • Capable of real-time application with TTS solutions, e.g., Amazon Polly
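For context on the TTS point: Amazon Polly can emit viseme "speech marks" alongside synthesized audio, as newline-delimited JSON objects with millisecond timestamps. A real-time lip-sync driver would parse them roughly as below; this is a generic sketch, independent of JALI's own integration:

```python
import json

def parse_viseme_marks(speech_marks: str):
    """Parse Amazon Polly speech-mark output (one JSON object per line),
    keeping only viseme events as (time_in_seconds, viseme) pairs."""
    events = []
    for line in speech_marks.splitlines():
        line = line.strip()
        if not line:
            continue
        mark = json.loads(line)
        if mark["type"] == "viseme":
            events.append((mark["time"] / 1000.0, mark["value"]))
    return events

# Speech-mark stream as Polly returns it (OutputFormat='json',
# SpeechMarkTypes=['viseme']) for a short word like "pat":
marks = (
    '{"time":0,"type":"viseme","value":"p"}\n'
    '{"time":62,"type":"viseme","value":"a"}\n'
    '{"time":187,"type":"viseme","value":"t"}'
)
```

Because the viseme events arrive with the audio, a runtime can schedule mouth shapes against playback time with no offline processing step.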



Integrated Rig Interface & Production Tool

    • In-view 3D control interface, with speech, speech style and expression controls driving FACS AUs

    • A connection interface to wire the 3D controls to any custom face rig

    • Language agnostic control over speech effort, style and expression

    • Default set of 6 morphable faces, wired to the control interface
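To make the FACS wiring concrete: driving a rig from the speech controls amounts to mapping viseme-level activations onto Action Unit weights. The mapping below is purely illustrative; the AU numbers are standard FACS, but the viseme set and weights are hypothetical, not JALI's:

```python
# Hypothetical viseme -> FACS Action Unit weights (illustrative only).
# AU18 = lip pucker, AU25 = lips part, AU26 = jaw drop.
VISEME_TO_AUS = {
    "p": {},                          # bilabial closure: lips stay together
    "a": {"AU25": 1.0, "AU26": 0.8},  # open vowel: lips part, jaw drops
    "u": {"AU18": 0.9, "AU25": 0.3},  # rounded vowel: pucker
}

def au_weights(viseme, intensity=1.0):
    """Scale a viseme's AU activations by an overall speech-effort intensity."""
    return {au: w * intensity for au, w in VISEME_TO_AUS.get(viseme, {}).items()}
```

A connection interface then only needs to route each AU weight to the corresponding blendshape or joint control on the custom rig.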



jAmbient is the upper-face counterpart to lower-face-centric speech. When activated within jSync, it synchronizes blinks and brows with aspects of the input speech and text, and also creates ambient gaze motion. Many new behaviours are in development for jAmbient; stay tuned.

    • Brings the entire face to life, even in the absence of dialogue

    • Parameters to control personality and directorial intent

    • Sparse animation curves are easily edited

    • Capable of real-time application with TTS
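Sparse curves like these are just a handful of keyframes an animator can nudge individually, with values in between recovered by interpolation. A minimal sketch, with an invented brow-raise curve for illustration:

```python
def evaluate(keys, t):
    """Linearly interpolate a sparse keyframe curve at time t.
    keys: time-sorted list of (time, value) pairs."""
    if t <= keys[0][0]:
        return keys[0][1]   # clamp before the first key
    if t >= keys[-1][0]:
        return keys[-1][1]  # clamp after the last key
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)

# A sparse brow-raise curve: only four keys over two seconds,
# so an editor can retime or rescale the whole gesture by moving one key.
brow_raise = [(0.0, 0.0), (0.4, 1.0), (1.2, 1.0), (2.0, 0.0)]
```

Sparseness is what keeps the output editable: shifting a single key reshapes the whole gesture, where dense per-frame data would have to be re-sculpted frame by frame.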

Pricing and Licensing

We’ve worked with a number of different customers, from mobile technology companies to animation and VFX studios and independent and AAA game studios. If we’ve learned one thing, it’s that one size does not fit all.

Downloadable plug-and-play solutions fall short of expectations because they fail to recognize this about their customers.

Whether you are looking to animate hours of run time character dialogue in multiple languages, humanize an interactive consumer avatar or streamline your character animation workflow in an animated series, we can help you select the license option and solutions to achieve your unique creative and technical goals.

Request Quote


Maya 2015 Plugin Mac/Windows

Maya 2017 Plugin Mac/Windows

Maya 2018 Plugin Mac/Windows

Native Unity3D Editor

Native Unreal Engine 4 Editor

C++ Libraries for Development


Request More Info


R&D Licensing

Build Custom Language Models

Develop Custom Features

Language Support


French / Français

Russian / Русский

Polish / Polski

Portuguese (Brazil) / Português (Brasil)

Japanese / 日本語

Mandarin / 普通话

German / Deutsch

Italian / Italiano

Spanish / Español


Our goal is to change the face of 3D facial animation. While many existing solutions automate, we’ve developed an innovative system that not only automates, but also allows the animator to retain artistic control.

Our team is not afraid to challenge conventional thinking about automated speech or facial animation. We strive to build solutions that not only work, but surpass current offerings. Our team members are both artists and scientists. Grounded in a deep understanding of anatomy, perception, language and craft, our process is unapologetically procedural at its core. We get results that work, automate the process and, most importantly, humanize it.

Over the last two years we’ve worked with many customers, from AAA and independent game studios to animation and VFX houses and mobile device companies. We’ve learned that our users are looking for more than just software: they want the means to completely transform the way their character animation teams work.

And we are happy to help.



Join our team

Want to join an exciting start-up team developing groundbreaking solutions for 3D facial animation problems? We are currently expanding our team and looking for self-motivated, team-oriented individuals with experience in character rigging and digital marketing, respectively. Contact info@jaliresearch.com for a contract description and application instructions.


Our research team members are leaders in speech animation whose recent contributions to the field include resurrecting the long-dormant area of procedural speech (SIGGRAPH 2016) and new work in deep-learning-based real-time speech (SIGGRAPH 2018). U of T PhD student Pif Edwards is continually breaking new ground with graduate research that will provide a disruptive speech-centric workflow for character animation. The facial animation tools in Autodesk Maya (technical Oscar in 2003), the industry standard for modelling and animation since 1998, were singularly designed and developed by JALI co-founders Karan Singh and Chris Landreth. Landreth’s award-winning films Bingo (Genie 1998), Ryan (Oscar 2005) and Spine (Genie nomination 2009) have been lauded for the expressive quality of their animated characters and faces. Eugene Fiume, a Fellow of the Royal Society of Canada, is likewise a pioneer in computer graphics and served as Director of Research at Alias|wavefront (now Autodesk).

ACM SIGGRAPH 2016. (US pat. pending).

JALI: An Animator-Centric Viseme Model for Expressive Lip-Synchronization


Visemenet: audio-driven animator-centric speech animation

The JALI team is continually driving innovation through collaborative research and development.

To propose a project contact info@jaliresearch.com



We are proud to live and work in the vibrant city of Toronto, which prospers doubly from the overlap of its growing tech, entertainment and interactive industries with the numerous colleges and universities that nurture a rich talent pool and relentless research and innovation.

We are grateful to U of T Departments of Computer Science (DGP Lab) and Innovation for their generous support.

Get in Touch

We would be thrilled to hear from you regarding potential partnerships, licensing, or general ideas you'd love to see implemented.


Discovery District

Toronto, Ontario, Canada