Kwan Yun

I am a research scientist in the Visual Media Lab at KAIST, where I work on applications of generative models.


Publications

Audio-Driven Emotional Talking-Head Generation

Under review

Uses a generative prior for identity-agnostic, audio-driven talking-head generation with emotion manipulation, despite being trained on a single-identity dataset.

Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example

Under review

Selects representative diffusion features and the denoising condition of Stable Diffusion to effectively extract sketches from a single example.

Reducing VR Sickness by Directing User Gaze to Motion Singularity Point/Region as Effective Rest Frame

IEEE Access, 2023

Directs the user's gaze to a motion singularity point or region, which acts as an effective rest frame, to reduce VR sickness.

Adding Difference Flow between Virtual and Actual Motion to Reduce Sensory Mismatch and VR Sickness while Moving

IEEE VRW, 2022

Overlays the optical-flow difference between virtual and actual motion onto the scene to reduce sensory mismatch and VR sickness while moving.