Paper Title
LyS_ACoruña at SemEval-2022 Task 10: Repurposing Off-the-Shelf Tools for Sentiment Analysis as Semantic Dependency Parsing
Paper Authors
Paper Abstract
This paper addresses the problem of structured sentiment analysis using a bi-affine semantic dependency parser, large pre-trained language models, and publicly available translation models. For the monolingual setup, we considered: (i) training on a single treebank, and (ii) relaxing the setup by training on treebanks coming from different languages that can be adequately processed by cross-lingual language models. For the zero-shot setup and a given target treebank, we relied on: (i) word-level translation of treebanks available in other languages to obtain noisy, likely ungrammatical, but annotated data (we release as much of it as licenses allow), and (ii) merging those translated treebanks to obtain training data. In the post-evaluation phase, we also trained cross-lingual models that simply merged all the English treebanks and did not use word-level translations, and yet obtained better results. According to the official results, we ranked 8th and 9th in the monolingual and cross-lingual setups, respectively.
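The abstract mentions a bi-affine semantic dependency parser built on top of large pre-trained language models. The sketch below is a minimal illustration of the biaffine arc-scoring operation such parsers typically use, assuming contextual token embeddings from a pre-trained encoder; the class and parameter names (BiaffineArcScorer, hidden_dim, arc_dim) are illustrative assumptions and not the authors' actual implementation.

```python
# Minimal sketch of a biaffine arc scorer (illustrative, not the authors' code).
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    def __init__(self, hidden_dim: int = 768, arc_dim: int = 512):
        super().__init__()
        # Separate projections for each token acting as head vs. dependent.
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        # Biaffine weight; the extra row models a head-only bias term.
        self.W = nn.Parameter(torch.empty(arc_dim + 1, arc_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, seq_len, hidden_dim), e.g. from a pre-trained LM.
        head = self.head_mlp(encoder_states)                 # (B, N, arc_dim)
        dep = self.dep_mlp(encoder_states)                   # (B, N, arc_dim)
        ones = head.new_ones(*head.shape[:-1], 1)
        head = torch.cat([head, ones], dim=-1)               # (B, N, arc_dim + 1)
        # Score every (head i, dependent j) pair: scores[b, i, j] = head_i^T W dep_j.
        return torch.einsum("bim,mn,bjn->bij", head, self.W, dep)

# Usage: score head-dependent arcs over a batch of contextual embeddings.
scorer = BiaffineArcScorer()
states = torch.randn(2, 10, 768)        # 2 sentences, 10 tokens each
arc_scores = scorer(states)             # shape (2, 10, 10)
```

For semantic dependency parsing, where a token may have several heads (as in this task's sentiment graphs), each cell of the score matrix is typically decided independently (e.g. with a sigmoid) rather than normalized over a single head per token.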