Paper Title

Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations

Paper Authors

Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein

Paper Abstract

Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images. In this work, we introduce Plug-In Inversion, which relies on a simple set of augmentations and does not require excessive hyper-parameter tuning. Under our proposed augmentation-based scheme, the same set of augmentation hyper-parameters can be used for inverting a wide range of image classification models, regardless of input dimensions or the architecture. We illustrate the practicality of our approach by inverting Vision Transformers (ViTs) and Multi-Layer Perceptrons (MLPs) trained on the ImageNet dataset, tasks which to the best of our knowledge have not been successfully accomplished by any previous works.
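To make the idea concrete, below is a minimal PyTorch sketch of augmentation-based, class-conditional model inversion: a randomly initialized input is optimized so that a frozen classifier assigns it to a chosen class, while random augmentations (jitter via rolling, horizontal flips, a per-channel color shift) are re-drawn at every step. The specific augmentations, hyper-parameters, and the helper name `invert_class` are illustrative assumptions, not the paper's exact Plug-In Inversion recipe.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def invert_class(model, target_class, steps=1000, lr=0.05, image_size=224, device="cuda"):
    """Optimize a random image so the frozen model classifies it as target_class."""
    model.eval()
    x = torch.randn(1, 3, image_size, image_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    label = torch.tensor([target_class], device=device)

    for _ in range(steps):
        # Random jitter: shift the image by a few pixels in each direction.
        dx, dy = torch.randint(-8, 9, (2,)).tolist()
        aug = torch.roll(x, shifts=(dx, dy), dims=(2, 3))

        # Random horizontal flip.
        if torch.rand(1).item() < 0.5:
            aug = TF.hflip(aug)

        # Simple per-channel color shift (random scale and offset), re-drawn each step.
        scale = 1.0 + 0.2 * torch.randn(1, 3, 1, 1, device=device)
        shift = 0.2 * torch.randn(1, 3, 1, 1, device=device)
        aug = scale * aug + shift

        # Classification loss on the augmented image drives the update of x itself.
        loss = F.cross_entropy(model(aug), label)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return x.detach()
```

As a usage sketch, calling `invert_class` with, say, a pretrained `torchvision.models.resnet50` and a chosen ImageNet class index would attempt to reconstruct an image the model assigns to that class; the appeal of the augmentation-based scheme described in the abstract is that the same loop and hyper-parameters are meant to carry over to ViTs and MLP-based classifiers without per-network tuning.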
