Deep Brain AI’s 3D Virtual Human is a 3D rendering-based virtual character whose clothing, hair, language, and tone can be configured, and which can be viewed from different angles, unlike live AI humans optimized for frontal speech. Control points on the character’s face and body make it possible to create a lifelike three-dimensional character and place it in realistic spaces, such as sitting on a chair, the company said.

Existing AI humans required a separate video-synthesis step, but the Unreal Engine-based 3D virtual human can communicate in real time without it. The company explained that the character reacts in real time, expressing emotions and gestures that match the user’s facial expression, and maximizes the naturalness of communication by eliminating rendering time before output.
In particular, by preparing various speech forms in advance and subdividing the voice data, it achieves natural mouth shapes and pronunciation as well as delicate emotional expression and movement. It can also be placed in a range of three-dimensional spaces, such as virtual reality and the metaverse, so it can be applied to whatever fields customers want.
The 3D virtual human currently offers four characters: ▲ an Asian male model (Yuri), ▲ a Western male model (Peter), ▲ a Western female model (Sophia), and ▲ a Black female model (Amber), with additional characters to be added in future updates.
Jang Se-young, CEO of Deep Brain AI, said, “Following our existing photorealistic 2D AI human, we have implemented a highly polished 3D virtual human, expanding customers’ choices so it can be used in various fields. We will take the lead in delivering new business opportunities through innovative AI human services.”
editor@itworld.co.kr