
Zenbo is a voice-controlled companion that rolls around on two wheels and expresses its emotions on a touchscreen face. It can also entertain kids and control smart home devices. Zenbo is by far the most ambitious product ASUS has ever launched in pursuit of its vision of robotic computing for every household.

EMPLOYER
ASUSTek Computer Inc.

PRODUCT TYPE
Intelligent Robot

ROLES
Technical Director
Product Designer
3D Artist

TIME SPAN
2015~2017

TASK

My mission in this huge project was to deliver lively facial expressions on Zenbo’s touchscreen face to convey a variety of emotions. After thorough research and discussions with in-house engineers and designers, I devised a production pipeline.

ROLE

After the production pipeline was proposed, a task force was formed. My role was technical director, leading a team that consisted of a software engineer and a 3D artist, and collaborating with a team of visual designers focused on character design. I also served as the coordinator between the internal task force and the external engineering team.

PROTOTYPE

The goal of the prototype was to test the production pipeline. It began with a simple 3D character that I created to validate the process. Using real-time markerless facial performance capture software (Faceshift/Faceware) to drive the character’s facial features, we were able to produce a variety of facial expressions in a short amount of time.
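Conceptually, this kind of capture-driven prototype is a per-frame retargeting loop: the tracker emits blendshape weights, which are mapped onto the character’s morph targets. The sketch below illustrates the idea only; the channel names, morph target names, and gains are hypothetical, not the actual Faceshift/Faceware integration.

```python
# Illustrative retargeting loop: map per-frame tracker blendshape weights
# onto a character's morph targets. All names and gains are hypothetical.

# How strongly each tracker channel drives each character morph target.
RETARGET_MAP = {
    "browInnerUp": [("brow_raise", 1.0)],
    "jawOpen":     [("mouth_open", 0.9)],
    "mouthSmileL": [("smile", 0.5)],
    "mouthSmileR": [("smile", 0.5)],
}

def retarget_frame(tracker_weights):
    """Convert one frame of tracker weights (0..1) into morph target weights."""
    morphs = {}
    for channel, value in tracker_weights.items():
        for target, gain in RETARGET_MAP.get(channel, []):
            # Accumulate contributions, clamped to the valid 0..1 range.
            morphs[target] = min(1.0, morphs.get(target, 0.0) + value * gain)
    return morphs

# One captured frame: jaw partly open, asymmetric smile.
frame = {"jawOpen": 0.6, "mouthSmileL": 0.8, "mouthSmileR": 0.7}
print(retarget_frame(frame))
```

Running this mapping once per captured frame is what makes the character’s face follow the performer’s in real time.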

CHARACTER DESIGN

Meanwhile, the team of visual designers was busy exploring various facial styles to match Zenbo’s character. After many rounds of modification and refinement, the visual design of Zenbo’s face was finalized, along with many different facial expressions.

ANIMATION

With production underway, I was in charge of 3D modeling, animation, and the overall quality of Zenbo’s facial presentation. First, the 3D model was created and tested with all possible expressions. It was then rigged with the Faceware retargeting system so that real-time video of my own facial performance could drive the 3D character’s face and produce the animation. The captured data was imported into 3D software for retouching and dramatizing the performance.
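The retouching step amounts to editing the captured animation curves, typically smoothing out capture jitter and then exaggerating the motion for a more theatrical read. A minimal sketch of that idea, where the smoothing window and exaggeration factor are illustrative assumptions rather than production values:

```python
def dramatize(curve, rest=0.0, exaggerate=1.3, window=3):
    """Smooth a captured animation curve with a moving average,
    then push values away from the rest pose to exaggerate them."""
    smoothed = []
    n = len(curve)
    for i in range(n):
        # Average over a small neighborhood to suppress capture jitter.
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        smoothed.append(sum(curve[lo:hi]) / (hi - lo))
    # Scale the deviation from the rest value to dramatize the motion.
    return [rest + (v - rest) * exaggerate for v in smoothed]

# A short captured weight curve for one morph target:
raw = [0.0, 0.2, 0.8, 0.3, 0.0]
print(dramatize(raw))
```

In practice an animator would do this interactively in the 3D package’s curve editor; the code just makes the two operations (denoise, then exaggerate) explicit.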

VOICE

Once the facial expressions were taken care of, the next mission was giving Zenbo the ability to speak. A voice actor was hired to provide audio data for synthesizing Zenbo’s voice, and a text-to-speech engine was then embedded in Zenbo’s OS. It was also my task to be on-site to make sure the voice actor maintained vocal quality, so the character was not lost during the recordings.

ANNOUNCEMENT

After all these efforts, there was still a tremendous amount of refinement and performance tuning to be done. Finally, in 2016, Zenbo was announced at a special press event during Computex, and it became available to the public in 2017.

Zenbo official micro movie - 11 mins


HAPPY ENDING

Zenbo announcement press event at Computex 2016 - 3 mins



LIP SYNC

Lip synchronization was implemented to make it believable that Zenbo was actually speaking the words. A set of morph targets was produced to match the phoneme mouth shapes (visemes) needed for English and Mandarin Chinese. Zenbo’s mouth was animated with this set of morph targets, driven by the phonemes generated by the text-to-speech engine.
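At its core, this technique is a lookup from each phoneme the TTS engine emits to a viseme morph target, producing a timed track of mouth shapes. The sketch below shows the shape of that mapping; the phoneme symbols and viseme names are hypothetical stand-ins, not Zenbo’s actual tables.

```python
# Illustrative phoneme-to-viseme lookup for lip sync.
# Symbols and viseme names are hypothetical examples.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "IY": "wide",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_lip", "V": "teeth_lip",
}

def viseme_track(phonemes):
    """Turn a timed phoneme list [(phoneme, duration_s), ...] into
    a list of (viseme_morph_target, duration_s) keyframes."""
    return [(PHONEME_TO_VISEME.get(p, "neutral"), d) for p, d in phonemes]

# e.g. timed phonemes from the TTS engine for a short utterance:
print(viseme_track([("M", 0.08), ("AA", 0.15), ("UW", 0.12)]))
```

Many phonemes share one mouth shape, which is why a small viseme set can cover two languages; the animation system then blends between consecutive keyframes rather than snapping.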


FACE MANAGER

The team also developed a tool that let internal visual designers quickly create and modify new face designs, and offer those designs for users to choose from on Zenbo. Later, the team engineered the tool further into a user-friendly app named Face Manager, which provides easy-to-use functions for users to customize Zenbo’s face to their liking.

