Paper Title

deepSELF: An Open Source Deep Self End-to-End Learning Framework

Paper Authors

Tomoya Koike, Kun Qian, Björn W. Schuller, Yoshiharu Yamamoto

Paper Abstract

We introduce an open-source toolkit, the deep Self End-to-end Learning Framework (deepSELF), for deep self end-to-end learning on multi-modal signals. To the best of our knowledge, it is the first public toolkit to assemble a series of state-of-the-art deep learning technologies. Highlights of the proposed deepSELF toolkit include: First, it can be used to analyse a variety of multi-modal signals, including images, audio, and single- or multi-channel sensor data. Second, we provide multiple options for pre-processing, e.g., filtering, or spectrum image generation by Fourier or wavelet transformation. Third, plenty of topologies in terms of NN, 1D/2D/3D CNN, and RNN/LSTM/GRU can be customised, and a series of pretrained 2D CNN models, e.g., AlexNet, VGGNet, and ResNet, can be used easily. Last but not least, on top of these features, deepSELF can be used flexibly not only as a single model but also as a fusion of such.
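The abstract outlines a pipeline of signal pre-processing (e.g., spectrum image generation by Fourier transformation) followed by a pretrained 2D CNN such as ResNet. The following is a minimal sketch of that kind of workflow, written with generic scipy and torchvision calls rather than deepSELF's own API (which is not shown here); the sampling rate, placeholder signal, and number of classes are illustrative assumptions.

# Illustrative sketch only: not deepSELF's API. It mirrors the workflow the
# abstract describes: spectrogram pre-processing of a 1-D signal, then
# fine-tuning a pretrained 2-D CNN on the resulting "image".
import numpy as np
from scipy import signal
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# 1) Pre-processing: turn a single-channel audio/sensor signal into a log-spectrogram.
fs = 16000                                   # assumed sampling rate (Hz)
x = np.random.randn(fs * 2)                  # placeholder 2-second signal
f_bins, t_bins, Sxx = signal.spectrogram(x, fs=fs, nperseg=512, noverlap=256)
spec = np.log1p(Sxx)                         # log-compress for a more image-like input

# 2) Model: adapt a pretrained 2-D CNN (ResNet-18 here) to a hypothetical task.
num_classes = 4                              # hypothetical number of target classes
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)   # torchvision >= 0.13
net.fc = nn.Linear(net.fc.in_features, num_classes)

# Replicate the single-channel spectrogram to 3 channels and resize to the CNN input size.
img = torch.tensor(spec, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)
img = F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)
logits = net(img)                            # forward pass; the training loop is omitted

Late fusion of several such models, as mentioned in the abstract, could then be approximated by averaging the per-model logits or class probabilities, though deepSELF's actual fusion mechanism is not detailed here.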
