AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models
CVPR, 2025
AnyMoLe performs motion in-betweening for arbitrary characters using only the input motion, leveraging video diffusion models.
I am a research scientist at the Visual Media Lab at KAIST, where I work on applications of generative models.
Mask-based 3D face editing using a customized layout, trained with few-shot learning.
Selecting the diffusion features and diffusion conditions of Stable Diffusion to effectively extract sketches.
Stylized and animatable face mesh from a single example.
Utilizing generative features to extract and edit face sketches.
Directing the user's gaze toward motion singularity points for VR experiences.
Overlaying optical-flow differences onto the scene for VR experiences.