Prompt Tuning in Biomedical Relation Extraction

Jianping He1, Fang Li2,1, Jianfu Li2,1, Xinyue Hu1,2, Yi Nian1, Yang Xiang1, Jingqi Wang1, Qiang Wei1, Yiming Li1, Hua Xu3, Cui Tao2,1
1McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, USA
2Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, USA
3Department of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, USA

Abstract

Biomedical relation extraction (RE) is critical for constructing high-quality knowledge graphs and databases and for supporting many downstream text-mining applications. This paper explores prompt tuning for biomedical RE, including its few-shot scenarios, aiming to propose a simple yet effective model for this specific task. Prompt tuning reformulates natural language processing (NLP) downstream tasks as masked-language problems by embedding task-specific text prompts into the original input, facilitating the adaptation of pre-trained language models (PLMs) to these tasks. This study presents a prompt tuning model designed specifically for biomedical RE, including its applicability in few-shot learning contexts. The model's performance was rigorously assessed on the chemical-protein relation (CHEMPROT) dataset from BioCreative VI and the drug-drug interaction (DDI) dataset from SemEval-2013, where it outperformed conventionally fine-tuned PLMs on both datasets, including in few-shot scenarios. This observation underscores the effectiveness of prompt tuning in enhancing the capabilities of conventional PLMs, though the extent of the improvement may vary by model. The model also strikes a balance between simplicity and efficiency, matching state-of-the-art performance without requiring external knowledge or extra computational resources. The pivotal contribution of our study is a suitably designed prompt tuning model that demonstrates the effectiveness of prompt tuning for biomedical RE. It offers a robust, efficient approach to the field's challenges and represents a significant advance in extracting complex relations from biomedical texts.
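As a concrete illustration of the cloze-style reformulation described above, the minimal sketch below wraps an input sentence in a prompt template containing a [MASK] token and lets a masked language model score a small set of label words (a verbalizer) to choose a relation class. The checkpoint name, template wording, verbalizer words, and entity pair are illustrative assumptions, not the paper's actual design; in a full prompt tuning setup the template and verbalizer would also be optimized on the training data.

# A minimal sketch of cloze-style relation classification with a prompt,
# assuming a BERT-family biomedical PLM; the checkpoint, template, and
# verbalizer below are illustrative, not the paper's actual design.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "dmis-lab/biobert-base-cased-v1.1"  # any BERT-style PLM works here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

# Hypothetical verbalizer: each relation label maps to one label word.
# In practice each label word should be a single token in the PLM's
# vocabulary (out-of-vocabulary words fall back to [UNK] here).
verbalizer = {"inhibitor": "inhibition", "activator": "activation", "none": "none"}

sentence = "Aspirin inhibits cyclooxygenase-1 in platelets."
chemical, protein = "aspirin", "cyclooxygenase-1"

# Hypothetical template: the original input plus a cloze prompt with [MASK].
prompt = f"{sentence} The relation between {chemical} and {protein} is {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]  # vocabulary scores at [MASK]

# Restrict the prediction to the verbalizer's label words.
label_ids = {lbl: tokenizer.convert_tokens_to_ids(w) for lbl, w in verbalizer.items()}
predicted = max(label_ids, key=lambda lbl: logits[label_ids[lbl]].item())
print(predicted)

A trained prompt tuning model would further fine-tune the PLM (and, in some variants, continuous prompt embeddings) so that the mask-position distribution concentrates on the correct label word; the lines above show only the inference-time reformulation.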

Keywords

