Paper Title
A Benchmark and Dataset for Post-OCR text correction in Sanskrit
Paper Authors
Paper Abstract
Sanskrit is a classical language with about 30 million extant manuscripts fit for digitisation, available in written, printed or scanned-image forms. However, it is still considered a low-resource language in terms of available digital resources. In this work, we release a post-OCR text correction dataset containing around 218,000 sentences, with 1.5 million words, drawn from 30 different books. Texts in Sanskrit are known to be diverse in their linguistic and stylistic usage, since Sanskrit was the 'lingua franca' of discourse in the Indian subcontinent for about three millennia. Keeping this in mind, we release a multi-domain dataset spanning areas as diverse as astronomy, medicine and mathematics, with some texts as old as 18 centuries. Further, we release multiple strong baselines as benchmarks for the task, based on pre-trained Seq2Seq language models. We find that our best-performing model, which combines byte-level tokenization with phonetic encoding (Byt5+SLP1), yields a 23 percentage point improvement over the OCR output in terms of word and character error rates. Moreover, we perform extensive experiments evaluating these models and analyse common causes of mispredictions at both the graphemic and lexical levels. Our code and dataset are publicly available at https://github.com/ayushbits/pe-ocr-sanskrit.
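To make the Byt5+SLP1 setup concrete, here is a minimal sketch, assuming the `indic-transliteration` and `transformers` Python packages. The base `google/byt5-small` checkpoint stands in for the fine-tuned post-OCR model released in the repository above, and the sample OCR line is hypothetical.

```python
# Minimal sketch of the Byt5+SLP1 pipeline: Devanagari OCR output is
# transliterated into the phonetic SLP1 scheme (roughly one ASCII character
# per Sanskrit sound), corrected with a byte-level seq2seq model, and
# mapped back to Devanagari.
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Hypothetical noisy OCR line in Devanagari.
ocr_line = "धर्मक्षेत्रे कुरुक्षेत्रे"

# SLP1 keeps each phoneme to a single ASCII byte, a natural fit for ByT5's
# byte-level tokenization.
slp1_line = transliterate(ocr_line, sanscript.DEVANAGARI, sanscript.SLP1)

# The base checkpoint is a stand-in; in practice one would load the
# fine-tuned post-OCR correction model from the repository linked above.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

inputs = tokenizer(slp1_line, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
corrected_slp1 = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Transliterate the corrected SLP1 text back to Devanagari for final output.
print(transliterate(corrected_slp1, sanscript.SLP1, sanscript.DEVANAGARI))
```

Word and character error rates against a ground-truth line can then be computed with, for example, the `wer` and `cer` functions of the `jiwer` package.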