Paper Title
The Larger The Fairer? Small Neural Networks Can Achieve Fairness for Edge Devices
Paper Authors
Paper Abstract
Along with the progress of AI democratization, neural networks are being deployed more frequently on edge devices for a wide range of applications. Fairness concerns are gradually emerging in many of these applications, such as face recognition and mobile healthcare. A fundamental question arises: what is the fairest neural architecture for edge devices? By examining existing neural networks, we observe that larger networks are typically fairer. However, edge devices call for smaller neural architectures to meet hardware specifications. To address this challenge, this work proposes a novel Fairness- and Hardware-aware Neural architecture search framework, namely FaHaNa. Coupled with a model-freezing approach, FaHaNa can efficiently search for neural networks that balance fairness and accuracy while guaranteeing that hardware specifications are met. Results show that FaHaNa identifies a series of neural networks with higher fairness and accuracy on a dermatology dataset. Targeting edge devices, FaHaNa finds a neural architecture with slightly higher accuracy, a 5.28x smaller size, and a 15.14% higher fairness score compared with MobileNetV2; meanwhile, it achieves 5.75x and 5.79x speedups on Raspberry Pi and Odroid XU-4, respectively.
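To give intuition for what a "fairness score" can measure in a setting like this, the sketch below computes a simple accuracy-parity-style group fairness metric. This is a hypothetical illustration only: the paper's actual fairness definition is not stated in the abstract, and the function names, toy data, and the specific metric (1 minus the largest per-group accuracy gap) are assumptions for the example.

```python
def group_accuracies(y_true, y_pred, groups):
    """Per-group accuracy for predictions split by a demographic attribute."""
    acc = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        acc[g] = correct / len(idx)
    return acc


def fairness_score(y_true, y_pred, groups):
    """1 minus the largest accuracy gap between groups (1.0 = perfectly fair).

    This accuracy-parity metric is an assumed stand-in, not the paper's
    actual score.
    """
    acc = group_accuracies(y_true, y_pred, groups)
    return 1.0 - (max(acc.values()) - min(acc.values()))


# Toy example with two demographic groups "A" and "B" (made-up data):
# group A is classified with 0.75 accuracy, group B with 0.5,
# so the score is 1.0 - (0.75 - 0.5) = 0.75.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(fairness_score(y_true, y_pred, groups))  # → 0.75
```

Under a metric of this shape, a model that performs equally well on all groups scores 1.0 regardless of overall accuracy, which is why fairness and accuracy must be balanced jointly, as the search framework described above aims to do.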