Paper Title
LEED: Label-Free Expression Editing via Disentanglement
Paper Authors
Paper Abstract
Recent studies on facial expression editing have made very promising progress. However, existing methods face the constraint of requiring a large amount of expression labels, which are often expensive and time-consuming to collect. This paper presents an innovative label-free expression editing via disentanglement (LEED) framework that is capable of editing the expression of both frontal and profile facial images without requiring any expression label. The idea is to disentangle the identity and expression of a facial image in the expression manifold, where the neutral face captures the identity attribute and the displacement between the neutral image and the expressive image captures the expression attribute. Two novel losses are designed for optimal expression disentanglement and consistent synthesis: a mutual expression information loss that aims to extract pure expression-related features, and a Siamese loss that aims to enhance the expression similarity between the synthesized image and the reference image. Extensive experiments over two public facial expression datasets show that LEED achieves superior facial expression editing both qualitatively and quantitatively.
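The core disentanglement idea in the abstract can be illustrated with a minimal sketch: the neutral face's embedding stands in for identity, and the displacement between the expressive embedding and the neutral embedding stands in for expression. The `encode` function below is a hypothetical stand-in (a fixed toy linear projection), not the paper's actual encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    """Hypothetical encoder mapping a face image to a point on the
    expression manifold (stand-in: a fixed averaging projection)."""
    W = np.ones((4, image.size)) / image.size  # toy projection matrix
    return W @ image.ravel()

# Toy "images": a neutral face and an expressive face of the same person.
neutral = rng.random((8, 8))
expressive = neutral + 0.3  # the expression perturbs the neutral face

# Disentanglement as described in the abstract: the neutral face captures
# the identity attribute, while the displacement between the expressive
# and neutral embeddings captures the expression attribute.
identity_code = encode(neutral)
expression_code = encode(expressive) - encode(neutral)
```

Under this toy encoder, `expression_code` depends only on the expressive-minus-neutral difference, so the same displacement could be transferred onto a different identity's neutral embedding — the mechanism LEED exploits for label-free editing.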