China’s state-run media outlet Xinhua News Agency and search engine company Sogou.com have jointly developed an “artificial intelligence” news anchor. The character made its debut this week at the fifth World Internet Conference in east China’s Zhejiang Province. Here’s the AI anchor introducing himself:

The photorealistic synthetic anchor was created with the goal of replicating realistic movement, facial expressions, and voice. Here’s another clip of the character in action:

According to Xinhua:

“He” learns from live broadcasting videos by himself and can read texts as naturally as a professional news anchor. “[H]e” has become a member of its reporting team and can work 24 hours a day on its official website and various social media platforms, reducing news production costs and improving efficiency.

The makers of the AI anchor haven’t revealed how they achieved the character, but a lot of the technology needed for this kind of effect is off-the-shelf at this point. A recent episode of the Netflix series Follow This explores the state of digital avatars and highlights the L.A.-based company Pinscreen, which generates a fully-rigged 3D face model from just a single input photo taken with an iPhone X. Plenty of other companies are exploring similar effects with the iPhone’s depth-sensing front camera.
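To give a rough sense of why a depth-sensing camera is such a useful starting point for this kind of work, here is a toy sketch (not Pinscreen’s or Apple’s actual pipeline; the focal lengths and depth values are invented for illustration) of how a single depth frame can be back-projected into simple 3D geometry:

```python
# Toy sketch: turn one HxW depth frame into a triangle mesh.
# Real face-reconstruction pipelines do far more (landmarking, fitting a
# face model, rigging), but the depth data is what gives them a head start.
import numpy as np

def depth_to_mesh(depth, fx=500.0, fy=500.0):
    """Back-project a depth map (meters) into vertices and triangle faces.

    fx/fy are assumed focal lengths in pixels; a real pipeline would read
    them from the camera's intrinsics.
    """
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole-camera back-projection: pixel coordinate + depth -> 3D point.
    xs = (us - cx) * depth / fx
    ys = (vs - cy) * depth / fy
    vertices = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)

    # Connect neighboring pixels into two triangles per grid cell.
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces

# Fake 8x8 depth frame standing in for a sensor capture.
depth = np.full((8, 8), 0.4) + 0.01 * np.random.rand(8, 8)
verts, faces = depth_to_mesh(depth)
print(verts.shape, faces.shape)  # (64, 3) (98, 3)
```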

From an animation perspective, the AI anchor raises intriguing questions. The character isn’t “animated” in the strict sense of the word, since, according to its creators, everything is synthetically generated, yet the effect the audience sees onscreen is indeed that of animation.

Until recently, the only way to see an artificial character perform on-screen was through some kind of animation process. Now, thanks to the combination of technologies like motion and facial capture, AI, real-time game engines, CG, and automated lip sync, we’ve reached the point where creators can generate fully fleshed-out characters using real-time rather than frame-by-frame production processes. It’s commonplace in video games, and the idea is being transferred to other applications such as this news anchor. What does one call a character that conveys an animation EFFECT without having been produced through an animation PROCESS?
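For a concrete, if toy, illustration of that distinction, the sketch below drives a character’s pose each frame from simulated facial-capture weights. The weight names and mouth shapes are invented for the example, but it captures the shape of a real-time pipeline in which no individual frame is ever authored by an animator:

```python
# Minimal sketch of real-time character driving: per-frame capture data
# (here, made-up "jaw_open" and "smile" weights like those a face tracker
# might report) is mapped straight onto a character pose.
import random
import time

MOUTH_SHAPES = ["closed", "slightly_open", "open", "wide_open"]

def capture_face_weights():
    """Stand-in for a live capture stream; returns weights in [0, 1]."""
    return {"jaw_open": random.random(), "smile": random.random()}

def drive_character(weights):
    """Map capture weights to a pose every frame: pick a mouth shape and
    pass the smile amount through."""
    i = min(int(weights["jaw_open"] * len(MOUTH_SHAPES)), len(MOUTH_SHAPES) - 1)
    return {"mouth": MOUTH_SHAPES[i], "smile": round(weights["smile"], 2)}

# Five "frames" of a live performance: the on-screen result reads as
# animation, but no frame was keyed or drawn individually.
for frame in range(5):
    pose = drive_character(capture_face_weights())
    print(f"frame {frame}: {pose}")
    time.sleep(1 / 30)  # roughly a 30 fps update loop
```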

The situation may seem cut-and-dried with photorealistic characters that aim to replicate reality, but it gets even stickier when one talks about traditional cartoon characters like The Simpsons, which can now be generated without actual animation processes. A couple of years ago, Simpsons producers experimented with a ‘live’ animated performance of Homer. The result of the real-time, auto-generated performance was the same as if a frame-by-frame technique had been used (not in aesthetic terms, but in the sense that both undeniably resulted in a product that audiences commonly understand to be animation).

China’s new AI Anchor might be viewed as an odd experiment by some, but it’s actually a clear example of the direction that animation is headed. The concepts of what animation is, how it is produced, and even what audiences perceive to be animation are all changing rapidly, and it’s worth paying attention to.