Paper Title
ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing
Paper Authors
Paper Abstract
Computational biology and bioinformatics provide vast data gold mines from protein sequences, ideal for Language Models (LMs) taken from NLP. These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive models (Transformer-XL, XLNet) and four auto-encoder models (BERT, ALBERT, ELECTRA, T5) on data from UniRef and BFD containing up to 393 billion amino acids. The LMs were trained on the Summit supercomputer using 5616 GPUs and a TPU Pod with up to 1024 cores. Dimensionality reduction revealed that the raw protein LM-embeddings from unlabeled data captured some biophysical features of protein sequences. We validated the advantage of using the embeddings as exclusive input for several subsequent tasks. The first was a per-residue prediction of protein secondary structure (3-state accuracy Q3=81%-87%); the second was a set of per-protein predictions of sub-cellular localization (ten-state accuracy: Q10=81%) and membrane vs. water-soluble (2-state accuracy Q2=91%). For the per-residue predictions, the transfer of the most informative embeddings (ProtT5) for the first time outperformed the state-of-the-art without using evolutionary information, thereby bypassing expensive database searches. Taken together, the results implied that protein LMs learned some of the grammar of the language of life. To facilitate future work, we released our models at https://github.com/agemagician/ProtTrans.
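As a pointer for reuse, below is a minimal sketch (not taken from the paper) of how the released ProtT5 encoder could be queried for per-residue and per-protein embeddings through the HuggingFace `transformers` API. The checkpoint name `Rostlab/prot_t5_xl_uniref50`, the residue preprocessing, and the mean-pooling step are assumptions based on the examples in the ProtTrans repository, not the paper's exact pipeline.

```python
# Hedged sketch: extract ProtT5 embeddings for downstream per-residue / per-protein tasks.
# Assumptions: checkpoint name, preprocessing, and mean-pooling follow the ProtTrans repo examples.
import re
import torch
from transformers import T5Tokenizer, T5EncoderModel

model_name = "Rostlab/prot_t5_xl_uniref50"  # assumed checkpoint name from the ProtTrans repository
tokenizer = T5Tokenizer.from_pretrained(model_name, do_lower_case=False)
model = T5EncoderModel.from_pretrained(model_name).eval()

sequence = "MSKGEELFTGVVPILVELDGDVNGHKF"          # example protein sequence (hypothetical input)
seq = " ".join(re.sub(r"[UZOB]", "X", sequence))  # space-separate residues, map rare amino acids to X

inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    # last_hidden_state: (1, seq_len + 1, 1024), including the trailing </s> token
    residue_emb = model(**inputs).last_hidden_state

per_residue = residue_emb[0, : len(sequence)]   # input for per-residue tasks, e.g. secondary structure
per_protein = per_residue.mean(dim=0)           # mean-pooled vector for per-protein tasks, e.g. localization
```

In this setup the per-residue matrix would feed a lightweight supervised head (e.g., for Q3 secondary-structure prediction), while the pooled vector would feed per-protein classifiers such as localization, mirroring the "embeddings as exclusive input" protocol described in the abstract.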