Kwan Yun

I am a research scientist at VISUAL MEDIA LAB at KAIST, where I work on applications of generative models.


Publications

AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models

CVPR, 2025

AnyMoLe performs motion in-betweening for arbitrary characters using only the given inputs.

FFaceNeRF: Few-shot Face Editing in Neural Radiance Fields

CVPR, 2025

Mask-based 3D face editing using a customized layout, trained with few-shot learning.

Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example

arXiv

Selecting diffusion features and diffusion conditions of Stable Diffusion to effectively extract sketches.

Reducing VR Sickness by Directing User Gaze to Motion Singularity Point/Region as Effective Rest Frame

IEEE Access, 2023

Directing the user's gaze to a motion singularity point to reduce sickness during VR experiences.

Adding Difference Flow between Virtual and Actual Motion to Reduce Sensory Mismatch and VR Sickness while Moving

IEEE VRW, 2022

Overlaying the difference in optical flow onto the scene to reduce sensory mismatch during VR experiences.