AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models
CVPR, 2025
AnyMoLe performs motion in-betweening for arbitrary characters using only the given inputs.
I am a PhD student at VISUAL MEDIA LAB at KAIST, where I develop and fine‑tune generative models for human‑and‑character animation and editing. I focus on identifying and addressing the actual needs of end users.
Mask-based 3D face editing using a customized layout, trained with few-shot learning.
Selecting the diffusion features and diffusion conditions of Stable Diffusion to effectively extract sketches.
A stylized and animatable face mesh created from a single example.
Utilizing generative features to extract and edit face sketches.