AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control

1The Hong Kong Polytechnic University, 2City University of Hong Kong, 3Google, 4Netflix, 5Microsoft Cloud AI. *Corresponding Author

AvatarCraft turns text into neural human avatars with parameterized shape and pose control.



Neural implicit fields are powerful for representing 3D scenes and generating high-quality novel views, but it remains challenging to use such implicit representations for creating a 3D human avatar with a specific identity and artistic style that can be easily animated. Our proposed method, AvatarCraft, addresses this challenge by using diffusion models to guide the learning of geometry and texture for a neural avatar based on a single text prompt. We carefully design the optimization framework of neural implicit fields, including a coarse-to-fine multi-bounding-box training strategy, shape regularization, and diffusion-based constraints, to produce high-quality geometry and texture. Additionally, we make the human avatar animatable by deforming the neural implicit field with an explicit warping field that maps the target human mesh to a template human mesh, both represented using parametric human models. This simplifies animating and reshaping the generated avatar by controlling pose and shape parameters. Extensive experiments on various text descriptions show that AvatarCraft is effective and robust in creating human avatars and rendering novel views, poses, and shapes.

Avatar Creation

AvatarCraft turns a natural language prompt into a 3D human avatar. We show canonical neural avatars in various styles.

Avatar Articulation

Shape Control

We can reshape the generated neural avatar by controlling the SMPL shape parameters, without the need for re-training.
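To illustrate the idea, here is a minimal numpy sketch of how SMPL-style shape control works: the template mesh vertices are displaced by a linear combination of shape blend shapes weighted by the shape parameters β. The mesh data below is random toy data, not the actual SMPL model; `reshape_avatar` is a hypothetical helper, not part of the AvatarCraft code.

```python
import numpy as np

# Toy stand-ins for SMPL data (the real model ships learned values):
rng = np.random.default_rng(0)
n_verts, n_betas = 6890, 10                  # SMPL: 6890 vertices, 10 shape params
template = rng.standard_normal((n_verts, 3))            # T-pose template vertices
shape_dirs = rng.standard_normal((n_verts, 3, n_betas)) # shape blend-shape basis

def reshape_avatar(beta):
    """Displace template vertices by blend shapes weighted by beta (n_betas,)."""
    return template + shape_dirs @ beta      # -> (n_verts, 3)

# Zero betas leave the template unchanged; varying beta reshapes the body.
reshaped = reshape_avatar(rng.standard_normal(n_betas))
```

Because the displacement is linear in β, reshaping needs no re-optimization of the neural field, only a re-evaluation of the warping.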

Pose Control

AvatarCraft also provides explicit control of the pose of the generated neural avatar, enabling various applications such as animation.
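Pose control in parametric human models is commonly realized with linear blend skinning (LBS), where each vertex moves with a weighted sum of per-joint rigid transforms. The sketch below uses random toy geometry and weights; `pose_avatar` is a hypothetical helper for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_verts, n_joints = 100, 24                  # SMPL has 24 joints
verts = rng.standard_normal((n_verts, 3))
weights = rng.random((n_verts, n_joints))
weights /= weights.sum(axis=1, keepdims=True)  # skinning weights sum to 1 per vertex

def pose_avatar(transforms):
    """Apply LBS given per-joint rigid transforms of shape (n_joints, 4, 4)."""
    homo = np.concatenate([verts, np.ones((n_verts, 1))], axis=1)   # (V, 4)
    per_joint = np.einsum('jab,vb->vja', transforms, homo)          # (V, J, 4)
    blended = np.einsum('vj,vja->va', weights, per_joint)           # (V, 4)
    return blended[:, :3]

identity = np.tile(np.eye(4), (n_joints, 1, 1))
posed = pose_avatar(identity)                # identity pose leaves verts unchanged
```

In AvatarCraft the same deformation drives the implicit field: points around the posed avatar are warped back to the canonical template before querying the field, so any SMPL pose can be rendered without re-training.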

Composite Rendering

AvatarCraft encodes the avatar as a neural implicit field, which allows us to place the avatar into a realistic neural scene and achieve occlusion-aware rendering.
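A much-simplified sketch of occlusion-aware compositing: render color and depth separately from the avatar field and the scene field, then keep whichever surface is closer along each ray. Real radiance-field compositing blends densities along each ray rather than doing a hard per-pixel z-test; the images below are random toy data and `composite` is a hypothetical helper.

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 4, 4
avatar_rgb, scene_rgb = rng.random((h, w, 3)), rng.random((h, w, 3))
avatar_depth, scene_depth = rng.random((h, w)), rng.random((h, w))

def composite(a_rgb, a_depth, s_rgb, s_depth):
    """Per-pixel z-test: show the avatar only where it is nearer than the scene."""
    avatar_in_front = (a_depth < s_depth)[..., None]   # broadcast over RGB
    return np.where(avatar_in_front, a_rgb, s_rgb)

out = composite(avatar_rgb, avatar_depth, scene_rgb, scene_depth)
```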


@article{jiang2023avatarcraft,
      title={AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control},
      author={Ruixiang Jiang and Can Wang and Jingbo Zhang and Menglei Chai and Mingming He and Dongdong Chen and Jing Liao},
}