Paper Title
Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models
Paper Authors
Paper Abstract
In the past few years, neural character animation has emerged and offered an automatic method for animating virtual characters, whose motion is synthesized by a neural network. Controlling this motion in real time with a user-defined control signal is also an important task, in video games for example. Solutions based on fully-connected layers (MLPs) and Mixture of Experts (MoE) have given impressive results in generating and controlling various movements with close-range interactions between the environment and the virtual character. However, a major shortcoming of fully-connected layers is their computational and memory cost, which may lead to sub-optimal solutions. In this work, we apply pruning algorithms to compress an MLP-MoE neural network in the context of interactive character animation, which reduces its number of parameters and accelerates its computation time, with a trade-off between this acceleration and the quality of the synthesized motion. This work demonstrates that, with the same number of experts and parameters, the pruned model produces fewer motion artifacts than the dense model, and the learned high-level motion features are similar for both.
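For readers unfamiliar with weight pruning, the short Python sketch below illustrates the general idea on a toy fully-connected "expert" using PyTorch's torch.nn.utils.prune utilities. It is a minimal sketch, not the authors' implementation: the layer sizes, the 80% sparsity level, and the layer-wise L1 magnitude criterion are illustrative assumptions and do not reproduce the pruning algorithm or the animation model studied in the paper.

# Minimal sketch (assumption: not the authors' code): layer-wise magnitude
# pruning of a toy fully-connected expert with torch.nn.utils.prune.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy expert network; real animation experts are sized for the motion data.
expert = nn.Sequential(
    nn.Linear(256, 512), nn.ELU(),
    nn.Linear(512, 512), nn.ELU(),
    nn.Linear(512, 256),
)

for module in expert:
    if isinstance(module, nn.Linear):
        # Zero out the 80% of weights with the smallest L1 magnitude.
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # make the sparsity permanent

# Count the surviving (non-zero) parameters.
nonzero = sum(int((p != 0).sum()) for p in expert.parameters())
total = sum(p.numel() for p in expert.parameters())
print(f"non-zero parameters: {nonzero}/{total}")

Varying the pruning amount is one simple way to probe the trade-off the abstract describes between acceleration (fewer effective parameters) and the quality of the synthesized motion.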