Paper Title

FastRE: Towards Fast Relation Extraction with Convolutional Encoder and Improved Cascade Binary Tagging Framework

Authors

Guozheng Li, Xu Chen, Peng Wang, Jiafeng Xie, Qiqing Luo

Abstract

Recent work for extracting relations from texts has achieved excellent performance. However, most existing methods pay less attention to the efficiency, making it still challenging to quickly extract relations from massive or streaming text data in realistic scenarios. The main efficiency bottleneck is that these methods use a Transformer-based pre-trained language model for encoding, which heavily affects the training speed and inference speed. To address this issue, we propose a fast relation extraction model (FastRE) based on convolutional encoder and improved cascade binary tagging framework. Compared to previous work, FastRE employs several innovations to improve efficiency while also keeping promising performance. Concretely, FastRE adopts a novel convolutional encoder architecture combined with dilated convolution, gated unit and residual connection, which significantly reduces the computation cost of training and inference, while maintaining the satisfactory performance. Moreover, to improve the cascade binary tagging framework, FastRE first introduces a type-relation mapping mechanism to accelerate tagging efficiency and alleviate relation redundancy, and then utilizes a position-dependent adaptive thresholding strategy to obtain higher tagging accuracy and better model generalization. Experimental results demonstrate that FastRE is well balanced between efficiency and performance, and achieves 3-10x training speed, 7-15x inference speed faster, and 1/100 parameters compared to the state-of-the-art models, while the performance is still competitive.
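As a rough illustration of the encoder design the abstract describes (dilated convolution, a gated unit, and a residual connection in place of a Transformer-based pre-trained encoder), here is a minimal PyTorch sketch. The hidden size, kernel size, dilation schedule, and GLU-style gating below are assumptions made for illustration, not the authors' exact configuration.

```python
# A minimal sketch (not the released FastRE code) of an encoder built from
# dilated 1-D convolutions, gated units, and residual connections.
# Hidden size, kernel size, dilation rates, and the gating form are assumptions.
import torch
import torch.nn as nn


class GatedDilatedConvBlock(nn.Module):
    """One residual block: dilated Conv1d -> gated (GLU-style) unit -> residual add."""

    def __init__(self, hidden_size: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        padding = (kernel_size - 1) // 2 * dilation  # keep sequence length unchanged
        # Produce 2 * hidden channels so one half can gate the other.
        self.conv = nn.Conv1d(
            hidden_size, 2 * hidden_size, kernel_size,
            padding=padding, dilation=dilation,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden); Conv1d expects (batch, hidden, seq_len)
        h = self.conv(x.transpose(1, 2))
        value, gate = h.chunk(2, dim=1)      # split channels for the gated unit
        h = value * torch.sigmoid(gate)      # gated linear unit
        return x + h.transpose(1, 2)         # residual connection


class ConvEncoder(nn.Module):
    """Stack of blocks with growing dilation to enlarge the receptive field."""

    def __init__(self, hidden_size: int, dilations=(1, 2, 4, 1, 2, 4)):
        super().__init__()
        self.blocks = nn.ModuleList(
            GatedDilatedConvBlock(hidden_size, dilation=d) for d in dilations
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


if __name__ == "__main__":
    encoder = ConvEncoder(hidden_size=256)
    tokens = torch.randn(2, 50, 256)   # (batch, seq_len, hidden) token embeddings
    print(encoder(tokens).shape)       # torch.Size([2, 50, 256])
```

Stacking dilated convolutions grows the receptive field without recurrence or self-attention, which is consistent with the abstract's claim that the convolutional encoder cuts training and inference cost relative to Transformer-based pre-trained encoders.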
