In the future, humans have created androids whose AI-driven consciousness is centrally controlled, keeping them completely obedient to humans. Android No. 0, Koidz, breaks free from the system's control due to an error and becomes sentient. He bypasses the system's monitoring and frees the mass-produced model Lumos and the combat android Soda. Together, the three androids set out on the path toward independence and freedom.
With brand-new memories, the trio made a fresh start in 2017, beginning their singing career in the digital world as "WhiteLightS" (WL.S for short).
WhiteLightS WL.S (from left to right: Soda, Lumos, Koidz)
Lumos is the member with rosy lips, white teeth, and bright eyes. Standing 178 cm tall, he is a typical Aries, known as the "sunshine guy" and the "sweet boy next door." He is smart and always thinks outside the box. His favorite activities are singing, writing songs, and drinking milk tea. Lumos' representative works include "Rainbow's End," "Light," "Milk Tea Bottoms Up," "Teen," "Cat Blues," and "Guyu." His motto is, "If virtual humans do not dream, they are no different from ordinary human beings."
Picture of Lumos
Pursuing his musical dreams, Lumos partnered with Xing Tong from QQ Dancer and Gilly from Game for Peace, appearing on CCTV's May 4th "Virtual Music World of Digital and Reality" show, where they performed the "New Youth" dance number together with young actors. He has also appeared at Renmin University of China, Beihang University, and Wuhan University, and danced with students from Heilongjiang University, Xi'an Jiaotong University, and Guangzhou University to showcase their youthful energy.
Lumos and Renmin University of China Students
Lumos is jointly created by NExT Studios and XNOX Studio. NExT made major breakthroughs in digital human technology with the production of Siren, a real-time interactive virtual human. In 2019, NExT participated in the production of Matt AI, continuing to explore the boundaries of real-time AI-driven digital humans. In 2021, NExT collaborated with Xinhua News Agency to produce the world's first digital astronaut, "Xiao Zheng," applying its digital human technology to more fields. As a "pioneer" of digital human technology, NExT Studios has now launched its latest work: Lumos and the group WL.S.
To learn more about the technical story behind the birth of WL.S, we talked to the producer Liu Qishen.
Q. Can you tell us what specific responsibilities NExT has in the project?
A. During our four years of operation, NExT and XNOX have worked and grown together. XNOX is responsible for the boys' character settings, content planning, and so on; they are very professional and produce high-quality content. NExT is responsible for the technical presentation of that content, including character creation, motion capture, and animation, giving the characters richer and more realistic dynamic expression. We are a bit like a "father" and a "mother" working together to raise the three children.
Picture of WL.S
We have been constantly refining and optimizing our first character, from concept art to the model. To make the character more appealing, we created over ten versions of his initial appearance. When designing the first rig, we had many discussions with XNOX Studio about whether to keep wrinkles such as the nasolabial folds, and we adjusted the balance between "realism" and "aesthetics" multiple times.
Compared to other virtual characters, our 3D model is more realistic with richer details, which requires more advanced technology. Our modelers and riggers have been "trained" to think like "cosmeticians" through years of cooperation. None of us had ever expected to learn this knowledge! We wanted to give the characters enough detail while retaining their perfect, sunny image, enabling them to show their best side at all times.
Q. 4 years of operation is indeed very long. After all these years, you must have accumulated a decent amount of diverse content. What kind of content are you aiming to create in the future?
A. Yes, we currently have 13 content categories and have produced over 500 pieces of content. The initial stage was primarily short videos, focusing on daily vlogs, dance, game streams, photoshoots, and so on. We have also participated in reality shows, partnered with brands, and more.
Since last year, in addition to our daily short video production, we have taken on the production and technical support for high-quality animation content, such as music videos, concept videos, group songs, etc., to further expand our content categories, improve content quality, and solve technical difficulties. Our xFaceBuilder® digital character production pipeline has been improved, and we are currently designing and producing new, higher-quality outfits. We later plan to "rebrand" the three characters. We will use higher design standards to upgrade their image and release sitcom content.
Q. Can you elaborate on the digital character production pipeline you just mentioned and its supporting technologies?
A. Built on Unreal Engine, the pipeline achieves ultra-high-quality, real-time rendering, creating photo-realistic details in clothing, hair, fabric, and facial features. NExT Studios runs two labs: xLab's Photogrammetry Lab and its Motion Capture Lab. Our in-house xFaceBuilder® is a film-grade digital human production pipeline covering modeling, rigging, and animation, with production efficiency further enhanced by highly optimized algorithms and tools. Beyond facial features, hair and other details are designed by professional stylists, who also use photogrammetry to build hair models, fully recreating the hair structure and enhancing realism.
To give the characters more facial expressions, we have created a BlendShape set with over 700 expressions for each character's face. xFaceBuilder® is benchmarked against MetaHuman's facial Rig Logic system. We provide a rich set of primary and secondary controllers so that animators can create extremely vivid expressions. The tool is also compatible with most mainstream real-time motion capture pipelines, enabling accurate rigging.
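As a rough illustration of how a BlendShape set combines at runtime, here is a minimal NumPy sketch: each shape stores per-vertex offsets from the neutral face, and the final expression is the neutral mesh plus a weighted sum of the active shapes. The mesh, shape names, and weights below are invented for illustration and are not taken from xFaceBuilder®.

```python
import numpy as np

def apply_blendshapes(neutral, shapes, weights):
    """neutral: (V, 3) vertex positions; shapes: name -> (V, 3) offsets;
    weights: name -> float in [0, 1] set by the rig controllers."""
    result = neutral.copy()
    for name, w in weights.items():
        # Each active shape contributes its offsets, scaled by its weight.
        result += w * shapes[name]
    return result

neutral = np.zeros((4, 3))  # toy 4-vertex "face"
shapes = {
    "smile":      np.array([[0.0, 1.0, 0.0]] * 4),
    "brow_raise": np.array([[0.0, 0.0, 1.0]] * 4),
}
# Half-strength smile combined with a full brow raise.
out = apply_blendshapes(neutral, shapes, {"smile": 0.5, "brow_raise": 1.0})
```

A production rig layers controller logic on top of this (correctives, in-between shapes, range limits), but the weighted-sum core is the same.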
xFaceBuilder® RIG System Controller Developed In-house
Additionally, we have produced more than 10 sets of outfits for the three characters. They are designed by professional outfit designers in dedicated clothing design software, rigged to a HumanIK skeleton, and refined using SHAPES BlendShape corrections.
WL.S Styling Design
In terms of motion production, our in-house xMoCap® motion capture and animation pipeline covers character rigging, motion capture, animation tools, asset management, and motion capture databases. It supports large-scale team collaboration, massive asset management in the cloud, and high-fidelity 3D character animation, meeting the production requirements of super-realistic digital humans.
The two labs and two pipelines seamlessly integrate facial scanning, motion capture, and mocap data processing, providing a "one-stop" solution from motion capture to final animation.
Q. From character creation to content production, how has the technology evolved? What has been your most memorable moment?
A. It is fair to say that the technology and our content have grown together. Our content timeline runs from the group's debut through the Spring Festival show, Lumos' birthday party, Huya streams, a reality show, partnerships with "League of Legends" and "CHUANG 2021," the group song release, and more. Behind these big events lies the gradual build-out of our infrastructure and the achievement of our technical goals. The entire production process has also been streamlined and standardized.
In 2020, NExT faced a "big challenge": WL.S participated in "Talent or Not," the first virtual human variety show, launched by Bilibili. NExT was responsible for the overall technical platform construction, process design, operation, and maintenance. In addition to creating beautiful imagery and plenty of daily content for WL.S, we had to provide technical solutions for real-time interaction between real content creators and virtual humans, a new challenge for the NExT team. After more than half a year, we had built a full live pipeline, from modeling to rendering and from motion capture to live streaming, which supported the entire season, ensured the smooth launch of the show, and attracted a large audience on Bilibili. The team also combined the technology with short videos, regularly uploading content to short video platforms, which in turn strengthened the characters' personas and expressiveness and let them truly come to life on screen.
WL.S Participated in "Talent or Not"
NExT's accumulated technology also served as pre-research and a technology reserve for Lumos' participation in CCTV's May 4th special program, allowing us to meet the program's technical standards in the shortest time possible. These technologies were used not only to produce the augmented reality content but also in the live performance on the CCTV stage.
Q. What is the technical outlook for WL.S?
A. We have a boy group with diverse, human-like personalities, and they need to keep producing content to showcase their charm and variety. Our team will also explore more content formats, such as short-video sitcoms, virtual concerts, and real-life performances. Recently, we have been tackling technical problems in AR and live streaming, aiming for excellent performance on an even bigger stage. Perhaps in the future our audience will be able to enter the virtual world through special devices and meet virtual humans face to face.
WL.S "CRUSH" Poster
From Siren and Matt AI to Xiao Zheng and WL.S, NExT has been striving to create "convincing humans in the virtual world." We support the development of these virtual humans with our years of experience and technology in creating 3D characters for the gaming industry. With decades of accumulation, game technology has become a new technology cluster for building digital worlds, facilitating the integration of the digital world and reality. It also demonstrates the convergence of the cultural industry with technologies such as 5G, cloud computing, big data, and artificial intelligence.
The joint research paper on cloth simulation by NExT Studios and Professor Xiaogang Jin's team at Zhejiang University's State Key Lab of CAD & CG, "Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks," will be presented at the SIGGRAPH 2022 conference, and its demonstration has been selected for the conference's Technical Papers Preview. Fast forward to 3:10 to have a look!
Clothing plays an important part in a digital character’s appearance. The dynamic deformation of loose clothing can help express a character’s emotions and show their personality, such as the floating, swirling, and falling of a dancing character's skirt. Cloth animation is an important subject in computer graphics. Tight clothes usually fit snugly to the surface muscles of the body and can be approximately driven based on the skeleton’s movement. However, loose clothes usually have a certain distance from the body, and the combined actions of external forces and various body movements will result in complex deformation and collision. Consequently, there have been no good real-time solutions.
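The skeleton-driven approximation for tight clothing mentioned above is commonly implemented as linear blend skinning, where each vertex follows a weighted mix of bone transforms. Below is a minimal NumPy sketch of that baseline (the bones, weights, and vertices are invented for illustration); this is the approximation that loose garments break, not the paper's method.

```python
import numpy as np

def skin_vertices(rest, bone_mats, skin_weights):
    """Linear blend skinning.
    rest: (V, 3) rest-pose vertices; bone_mats: (B, 4, 4) bone transforms;
    skin_weights: (V, B) per-vertex bone weights, each row summing to 1."""
    V = rest.shape[0]
    homo = np.hstack([rest, np.ones((V, 1))])             # homogeneous coords (V, 4)
    # Transform every vertex by every bone, then blend by the skin weights.
    per_bone = np.einsum("bij,vj->vbi", bone_mats, homo)  # (V, B, 4)
    blended = np.einsum("vb,vbi->vi", skin_weights, per_bone)
    return blended[:, :3]

rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
bones = np.stack([np.eye(4), np.eye(4)])
bones[1][0, 3] = 2.0                                      # bone 1 translates +2 on x
w = np.array([[1.0, 0.0], [0.5, 0.5]])                    # vertex 1 blends both bones
posed = skin_vertices(rest, bones, w)
```

Because every vertex is a fixed linear function of the bones, the cloth can never lag, swing, or fold on its own; that missing dynamic behavior is exactly what the loose-garment problem adds back.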
This research improves on current deep-learning-based methods for real-time simulation of loose clothing, helping to raise the real-time animation quality and artistic expressiveness of digital humans and in-game characters. We will introduce this cloth simulation technology in detail at the SIGGRAPH 2022 online conference in early August. See you there!
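At a very high level, the bone-driven idea maps recent skeletal motion to garment deformation through a learned network. The toy forward pass below is only a shape-level sketch under our own assumptions (random untrained weights, invented dimensions, a plain MLP), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sizes for illustration: B bones, a T-frame motion window,
# V garment vertices; each bone state is a 3D position + a quaternion.
B, T, V = 10, 3, 200
in_dim = B * T * 7

# Random untrained weights standing in for a learned model.
W1 = rng.standard_normal((in_dim, 64)) * 0.01
W2 = rng.standard_normal((64, V * 3)) * 0.01

def predict_offsets(bone_motion):
    """bone_motion: (T, B, 7) recent bone states -> (V, 3) per-vertex
    offsets of the garment from its skinned base shape."""
    x = bone_motion.reshape(-1)       # flatten the motion window into one feature vector
    h = np.maximum(W1.T @ x, 0.0)     # ReLU hidden layer
    return (W2.T @ h).reshape(V, 3)

offsets = predict_offsets(rng.standard_normal((T, B, 7)))
```

Conditioning on a window of bone motion (rather than a single pose) is what lets such a model express velocity-dependent effects like swinging and trailing that static skinning cannot.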
Zhejiang University State Key Lab of CAD & CG
Founded in 1992, it is a world-class computer graphics laboratory established under the National Seventh Five-Year Plan. The institution focuses on the basic theory, algorithms, and applications of computer-aided design and computer graphics. Over the past decades, building on Zhejiang University's strengths in computer science, mathematics, and mechanical engineering, the laboratory has undertaken and led a number of major national research projects and international collaborations, producing important achievements in the basic research and system integration of computer-aided design and graphics.
Professor Xiaogang Jin
Professor and doctoral supervisor at Zhejiang University's College of Computer Science and Technology. He is a chief scientist of a "13th Five-Year Plan" national key research and development program, the director of the ZJU-Tencent Game Joint Lab for Intelligent Graphics Innovation Technology, the chairman of the Zhejiang Virtual Reality Industry Alliance, the vice chairman of the Virtual Reality and Visualization Special Committee of the China Computer Federation, and a distinguished expert of Qianjiang, Hangzhou. He has published more than 140 articles in ACM TOG (Proc. of SIGGRAPH), IEEE TVCG, and other key international academic journals, and has received many accolades both in China and abroad.