StyleID: A Perception-Aware Dataset and Metric for Stylization-Agnostic Facial Identity Recognition

Kwan Yun, Changmin Lee, Ayeong Jeong, Youngseo Kim, Seungmi Lee, Junyong Noh

Visual Media Lab, KAIST

SIGGRAPH 2026 / ACM TOG Journal Track
StyleID

An Open-Source Identity Metric for Stylized Characters
(dataset also released)




Abstract

Creative face stylization aims to render portraits in diverse visual idioms such as cartoons, sketches, and paintings while retaining recognizable identity. However, current identity encoders, which are typically trained and calibrated on natural photographs, exhibit severe brittleness under stylization. They often mistake changes in texture or color palette for identity drift, or fail to detect geometric exaggerations. This reveals the lack of a style-agnostic framework to evaluate and supervise identity consistency across varying styles and strengths. To address this gap, we introduce StyleID, a human perception-aware dataset and evaluation framework for facial identity under stylization. StyleID comprises two datasets: (i) StyleBench-H, a benchmark that captures human same–different verification judgments across diffusion- and flow-matching-based stylization at multiple style strengths, and (ii) StyleBench-S, a supervision set derived from psychometric recognition–strength curves obtained through controlled two-alternative forced-choice (2AFC) experiments. Leveraging StyleBench-S, we fine-tune existing semantic encoders to align their similarity orderings with human perception across styles and strengths. Experiments demonstrate that our calibrated models yield significantly higher correlation with human judgments and enhanced robustness on out-of-domain, artist-drawn portraits. All of our datasets, code, and pretrained models will be publicly available.
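A psychometric recognition–strength curve like those underlying StyleBench-S can be summarized by the style strength at which 2AFC accuracy drops to a chosen level (e.g. 75% correct, midway between chance and ceiling). A minimal NumPy sketch with synthetic placeholder accuracies (not data from the paper):

```python
import numpy as np

# Synthetic 2AFC accuracies at increasing style strengths:
# recognition degrades from near-perfect toward chance (0.5).
strengths = np.linspace(0.0, 1.0, 9)
accuracy = np.array([0.98, 0.97, 0.95, 0.90, 0.80, 0.68, 0.58, 0.53, 0.51])

def threshold_strength(s, acc, level=0.75):
    """Style strength at which 2AFC accuracy falls to `level`,
    by linear interpolation on the monotone recognition curve."""
    # np.interp needs increasing x-values, so reverse the decreasing curve.
    return float(np.interp(level, acc[::-1], s[::-1]))

print(round(threshold_strength(strengths, accuracy), 3))  # -> 0.552
```

With this synthetic curve, identity remains reliably recognizable up to a style strength of roughly 0.55; real thresholds depend on the stylization method and the observer pool.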


Quick Start

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("kwanY/styleid")
processor = CLIPProcessor.from_pretrained("kwanY/styleid")

img = Image.open("portrait.png")  # any photographic or stylized face image
inputs = processor(images=img, return_tensors="pt")
with torch.no_grad():
    emb = model.get_image_features(**inputs)  # style-agnostic identity embedding
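To compare two faces, the standard scoring for embeddings like those above is cosine similarity of the L2-normalized vectors. A minimal sketch; the random tensors below are placeholders standing in for real `get_image_features` outputs:

```python
import torch
import torch.nn.functional as F

def identity_score(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    """Cosine similarity between two face embeddings.

    Scores near 1.0 suggest the same identity; lower scores suggest
    identity drift under stylization.
    """
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    return (a * b).sum(dim=-1).mean().item()

# Placeholder embeddings (stand-ins for model.get_image_features outputs):
e1 = torch.randn(1, 512)
print(round(identity_score(e1, e1), 4))  # identical embeddings -> 1.0
```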

Evaluation results of StyleID
