Paper Title

Analysis of function approximation and stability of general DNNs in directed acyclic graphs using un-rectifying analysis

Paper Authors

Hwang, Wen-Liang; Tung, Shih-Shuo

Paper Abstract

A general lack of understanding of deep feedforward neural networks (DNNs) can be attributed partly to a lack of tools with which to analyze the composition of non-linear functions, and partly to a lack of mathematical models applicable to the diversity of DNN architectures. In this paper, we make a number of basic assumptions pertaining to activation functions, non-linear transformations, and DNN architectures in order to use the un-rectifying method to analyze DNNs via directed acyclic graphs (DAGs). DNNs that satisfy these assumptions are referred to as general DNNs. Our construction of an analytic graph is based on an axiomatic method in which DAGs are built from the bottom up by applying atomic operations to basic elements in accordance with regulatory rules. This approach allows us to derive the properties of general DNNs via mathematical induction. We show that, using the proposed approach, certain properties that hold for general DNNs can be derived. This analysis advances our understanding of network functions and could promote further theoretical insight if the host of analytical tools available for graphs can be leveraged.
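The core idea behind un-rectifying, as used in this line of work, is to rewrite a pointwise ReLU activation as multiplication by a data-dependent 0/1 diagonal matrix, turning the non-linear layer into a (locally) linear operator that is easier to compose and analyze. A minimal sketch of this view (the function name is illustrative, not from the paper):

```python
import numpy as np

def unrectify_relu(x):
    """Express ReLU(x) as D(x) @ x, where D(x) is a data-dependent
    0/1 diagonal matrix -- the 'un-rectifying' view of the activation."""
    d = (x > 0).astype(float)   # diagonal entries: 1 where x_i > 0, else 0
    D = np.diag(d)              # D depends on the input x, not on weights
    return D, D @ x

x = np.array([1.5, -2.0, 0.3, -0.1])
D, y = unrectify_relu(x)
# On each region where the sign pattern of x is fixed, D is constant,
# so the layer acts as a plain linear map there.
assert np.allclose(y, np.maximum(x, 0))
```

For a whole layer, ReLU(Wx + b) becomes D(Wx + b)(Wx + b), so a network of such layers is piecewise affine, with one affine map per activation pattern; this is what makes induction over a DAG of such operations tractable.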
