Kwan Yun

I am a PhD student in the Visual Media Lab at KAIST, where I develop and fine‑tune generative models for human and character animation and editing. I focus on identifying and addressing the actual needs of end users.


Publications

AvatarTalk: Speech Animation for Arbitrary Avatars Using a Video Generation Model

In submission

AvatarTalk performs speech animation for arbitrary avatars using a video generation model, without requiring audio-visual data.

AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models

CVPR, 2025

AnyMoLe performs motion in-betweening for arbitrary characters using only the given inputs.

FFaceNeRF: Few-shot Face Editing in Neural Radiance Fields

CVPR, 2025

Mask-based 3D face editing using a customized layout, trained with few-shot learning.

Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example

arXiv

Selecting the diffusion features and conditioning of Stable Diffusion to effectively extract sketches from a single example.