Paper Title


TripleE: Easy Domain Generalization via Episodic Replay

Authors

Xiaomeng Li, Hongyu Ren, Huifeng Yao, Ziwei Liu

Abstract

Learning how to generalize the model to unseen domains is an important area of research. In this paper, we propose TripleE, and the main idea is to encourage the network to focus on training on subsets (learning with replay) and enlarge the data space in learning on subsets. Learning with replay contains two core designs, EReplayB and EReplayD, which conduct the replay schema on batch and dataset, respectively. Through this, the network can focus on learning with subsets instead of visiting the global set at a glance, enlarging the model diversity in ensembling. To enlarge the data space in learning on subsets, we verify that an exhaustive and singular augmentation (ESAug) performs surprisingly well on expanding the data space in subsets during replays. Our model dubbed TripleE is frustratingly easy, based on simple augmentation and ensembling. Without bells and whistles, our TripleE method surpasses prior arts on six domain generalization benchmarks, showing that this approach could serve as a stepping stone for future research in domain generalization.
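The abstract only outlines the idea: train on subsets rather than the global set ("learning with replay"), expand each subset's data space with a single randomly chosen augmentation (ESAug), and ensemble the results. A minimal toy sketch of that general recipe is below; the function and helper names (`esaug`, `train_on_subset`, `triplee_like_ensemble`) are hypothetical, and the per-subset "model" is a deliberately trivial stand-in, since EReplayB/EReplayD details are not given in this abstract.

```python
import random

def esaug(x, ops):
    """ESAug-style 'singular' step: apply ONE randomly chosen augmentation."""
    return random.choice(ops)(x)

def train_on_subset(subset, ops):
    """Toy per-subset 'model': the mean of augmented scalar samples.
    A real implementation would train a network on this subset instead."""
    augmented = [esaug(x, ops) for x in subset]
    return sum(augmented) / len(augmented)

def triplee_like_ensemble(data, n_subsets, ops, seed=0):
    """Split the data into subsets, fit one toy model per subset,
    and average the models (the ensembling step)."""
    random.seed(seed)
    data = list(data)
    random.shuffle(data)
    size = len(data) // n_subsets
    models = [train_on_subset(data[i * size:(i + 1) * size], ops)
              for i in range(n_subsets)]
    return sum(models) / len(models)
```

With an identity-only augmentation list and equal-size subsets, the ensemble average reduces to the global mean, which makes the sketch easy to sanity-check; the intended benefit in the paper is diversity across subset-trained models, which this scalar toy cannot capture.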
