Omniverse Avatar Cloud Engine

Nvidia has kicked off early access to its Omniverse Avatar Cloud Engine (ACE), giving developers and teams access to a suite of cloud-native AI services designed to ease the process of building and deploying animated virtual assistants and digital humans at scale.

What does Omniverse ACE do? The suite of AI microservices can be used to build and deploy AI virtual assistants and avatars at scale. Developers and creators can then deploy their avatars on social media platforms, use them as video streaming characters, bring them into the metaverse, and more. The suite comes with the tools needed to upload avatars to any cloud and includes a plug-and-play component built on Nvidia's Unified Compute Framework, allowing interoperability between Nvidia AI and other solutions.

Who is this software for? Developers and teams building avatars and virtual assistants are the primary targets for the new software. Nvidia's press release also says that Omniverse ACE has been shared with select partners to capture early feedback, mentioning Ready Player Me and Epic Games by name (Omniverse ACE can be combined with Epic Games' MetaHuman tech). The release provides few explicit examples of its use beyond a holiday performance by Toy Jensen, an avatar of company CEO Jensen Huang, but suggests that the many creators experimenting with VTubing as a new form of live streaming could benefit from Omniverse ACE.

What is included with the early access program:

  • A 3D animation AI microservice for third-party avatars, which uses Omniverse Audio2Face generative AI to bring characters to life in Unreal Engine and other rendering tools by creating realistic facial animation from an audio file.
  • A 2D animation AI microservice, called Live Portrait, for easily animating 2D portraits or stylized human faces using live video feeds.
  • A text-to-speech microservice that uses Nvidia Riva TTS to synthesize natural-sounding speech from raw transcripts, without requiring additional information such as speech patterns or rhythms.
  • Access to tooling, sample reference applications, and supporting resources to help get started.
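To make the microservice model above concrete, here is a minimal sketch of how a client might assemble a request for a cloud TTS service like the one described. Nvidia has not published this API surface in the article, so the endpoint URL and field names (`ACE_TTS_ENDPOINT`, `voice_name`, `sample_rate_hz`) are illustrative assumptions, not Nvidia's actual interface; the point is only that the service takes a raw transcript with no prosody markup.

```python
import json

# Hypothetical endpoint: the real early-access URL is not public.
ACE_TTS_ENDPOINT = "https://example-ace-host/v1/tts/synthesize"  # assumed

def build_tts_request(text: str, voice: str = "English-US.Female-1") -> str:
    """Build a JSON payload asking a TTS microservice to synthesize speech.

    Per the article, the service (backed by Riva TTS) infers natural
    pacing on its own, so the request carries only the raw transcript.
    All field names here are assumptions for illustration.
    """
    payload = {
        "text": text,           # raw transcript; no prosody annotations
        "voice_name": voice,    # assumed parameter name
        "sample_rate_hz": 44100,
    }
    return json.dumps(payload)

# A client would POST this body to the endpoint, receive audio back,
# and could then feed that audio to the Audio2Face animation service.
body = build_tts_request("Hello, I'm your virtual assistant.")
```

In a pipeline like the one the list describes, the returned audio would become the input to the 3D animation microservice, chaining speech synthesis into facial animation.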

What they’re saying: Timmu Tõke, CEO and co-founder of Ready Player Me:

Digital avatars are becoming a significant part of our daily lives. People are using avatars in games, virtual events, and social apps, and even as a way to enter the metaverse. We spent seven years building the perfect avatar system, making it easy for developers to integrate in their apps and games and for users to create one avatar to explore various worlds — with Nvidia Omniverse ACE, teams can now more easily bring these characters to life.

Pictured at top: Avatars created with Omniverse ACE