Master's/Doctoral Thesis 111423060: Detailed Record




Name 李育慈 (Yu-Tzu Lee)    Department Information Management
Thesis Title
Enhancing Medication Recommendation with LLM Text Representation
Related Theses
★ An Empirical Study of Multi-Label Text Classification: Comparing Word Embedding with Traditional Techniques
★ Network Protocol Correlation Analysis Based on Graph Neural Networks
★ Learning Shared Representations Across and Within Modalities
★ Hierarchical Classification and Regression with Feature Selection
★ Sentiment Analysis of Patient-Authored Diaries Using Symptoms
★ An Attention-Based Open-Domain Dialogue System
★ Applying Commonsense-Based BERT Models to Domain-Specific Tasks
★ Analyzing Text Sentiment Intensity Based on Hardware Device Differences Among Social Media Users
★ On the Effectiveness of Machine Learning and Feature Engineering for Monitoring Anomalous Cryptocurrency Transactions
★ Applying LSTM Networks and Machine Learning to Metro Switch Machines for Optimal Maintenance-Time Reminders
★ Network Traffic Classification Based on Semi-Supervised Learning
★ ERP Log Analysis: A Case Study of Company A
★ Enterprise Information Security: An Exploratory Study of Network Packet Collection, Analysis, and Network Behavior
★ Applying Data Mining Techniques to Customer Relationship Management: A Case Study of Digital Deposits at Bank C
★ On the Usability and Efficiency of Face Image Generation and Augmentation
★ Data Augmentation with Synthetic Text for Imbalanced Text Classification
Files    View the thesis in the system (available after 2026-8-1)
Abstract (Chinese) Most existing medication recommendation models make predictions using only structured data such as medical codes, while the large volume of unstructured or semi-structured data remains underutilized. To use these data more effectively, we propose a method that enhances medication recommendation with Large Language Model (LLM) text representation. With strong language understanding and generation capabilities, an LLM can extract information from complex and lengthy unstructured data, such as clinical notes containing specialized terminology. The approach can be applied to several previously proposed models that we selected, and the combined representation of text and medical codes improves medication recommendation performance in experiments on two different datasets: the well-known medical dataset MIMIC-III and inpatient data from Ditmanson Medical Foundation Chia-Yi Christian Hospital (CYCH).
The experimental results show that LLM text representation improves the performance of most of the selected base models on both datasets. Using LLM text representation alone can even achieve ability comparable to using the medical code representation alone. Overall, this is a general method that can be applied to other models to improve medication recommendation. By using the LLM, we reduce the redundant text processing of traditional approaches; by combining structured and unstructured data, we optimize the utilization of EMR data and address its underutilization.
Abstract (English) Most existing medication recommendation models make predictions using only structured data such as medical codes, leaving the remaining large amount of unstructured or semi-structured data underutilized. To increase this utilization effectively, we propose a method for enhancing medication recommendation with Large Language Model (LLM) text representation. LLMs harness powerful language understanding and generation capabilities, enabling the extraction of information from complex and lengthy unstructured data such as clinical notes, which contain specialized terminology. The method can be applied to several existing medication recommendation models we selected, and it improves medication recommendation performance through the combined representation of text and medical codes in experiments on two different datasets: the well-known medical dataset MIMIC-III and hospitalization data from Ditmanson Medical Foundation Chia-Yi Christian Hospital (CYCH).
The experimental results show that LLM text representation improves most of the base models we selected on both datasets. LLM text representation alone can even demonstrate ability comparable to the medical code representation alone. Overall, this is a general method that can be applied to other models and datasets to improve prediction performance. With the LLM, we reduce the redundant text processing of traditional approaches. By combining structured and unstructured data, we optimize the utilization of EMR data, addressing the issue of underutilization.
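The abstract describes extracting an LLM text representation from clinical notes and combining it with the medical-code representation of a base recommendation model. The following minimal sketch is an illustration only, not the thesis's implementation: it assumes a Hugging Face checkpoint (Mistral-7B, one of the LLMs cited in the references) whose last hidden states are mean-pooled into one vector per note, plus a hypothetical fusion head (CombinedMedRecHead) that concatenates this vector with a code representation for multi-label medication prediction; the model choice, pooling strategy, and all dimensions are assumptions.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint for illustration; the thesis does not prescribe this exact model.
MODEL_NAME = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
llm = AutoModel.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
llm.eval()

@torch.no_grad()
def note_embedding(clinical_note: str) -> torch.Tensor:
    """Mean-pool the LLM's last hidden states into a fixed-size note vector."""
    inputs = tokenizer(clinical_note, truncation=True, max_length=2048, return_tensors="pt")
    hidden = llm(**inputs).last_hidden_state      # (1, seq_len, d_text)
    return hidden.mean(dim=1).squeeze(0).float()  # (d_text,)

class CombinedMedRecHead(torch.nn.Module):
    """Hypothetical fusion head: project the note vector, concatenate it with the
    medical-code representation produced by a base model, and score medications
    as a multi-label problem."""
    def __init__(self, d_text: int, d_code: int, n_medications: int, d_proj: int = 256):
        super().__init__()
        self.text_proj = torch.nn.Linear(d_text, d_proj)
        self.classifier = torch.nn.Linear(d_proj + d_code, n_medications)

    def forward(self, text_repr: torch.Tensor, code_repr: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([torch.relu(self.text_proj(text_repr)), code_repr], dim=-1)
        return torch.sigmoid(self.classifier(fused))  # per-medication probabilities

In the thesis, the combined representation is plugged into the selected base models (G-BERT, GAMENet, SafeDrug, COGNet, SHAPE, StratMed) rather than the simple classifier sketched here.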
Keywords (Chinese) ★ 藥物推薦 (Medication Recommendation)
★ 電子醫療病例 (Electronic Medical Records)
★ 臨床筆記 (Clinical Notes)
★ 大型語言模型 (Large Language Model)
★ 知識提取 (Knowledge Extraction)
Keywords (English) ★ Medication Recommendation
★ EMR/EHR
★ Clinical Notes
★ Large Language Model
★ Knowledge Extraction
Table of Contents
摘要 (Abstract in Chinese)
Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables
1. Introduction
1.1. Background
1.2. Motivation
1.3. Objectives
1.4. Thesis Organization
2. Related Works
2.1. Medication Recommendation with Medical Structured Data
2.2. Analysis of Unstructured Medical Data
2.3. Large Language Models (LLMs)
2.4. LLM Application to Medical Unstructured Data
2.5. Knowledge in LLMs
2.5.1. Applications for Extracted Knowledge from LLMs
2.5.2. Edit Knowledge in LLMs
2.6. Discussion
3. Methodology
3.1. Proposed Method
3.1.1. LLM Text Representation Extraction
3.1.2. Combination Representation of Text and Medical Codes
3.2. Experimental Setup
3.2.1. Dataset Description
3.2.2. Data Preprocessing
3.2.3. Base Models
3.2.4. Implementation Setting
3.2.5. Evaluation Metrics
3.3. Experimental Design
3.3.1. Experiment 1 - Effectiveness of LLM Text Representation
3.3.2. Experiment 2 - LLM Text Representation Performance in ICU of MIMIC-III and Department of CYCH
4. Experimental Results
4.1. Experiment 1 Results - Effectiveness of LLM Text Representation
4.1.1. Medication Recommendation with LLM Text Representation Only
4.1.2. Combination Representation of Text and Medical Codes
4.1.3. Experiment 1 Results Analysis
4.2. Experiment 2 Results - LLM Text Representation Performance in each ICU Unit of MIMIC-III and Department of CYCH
5. Conclusion
5.1. Overall Summary
5.2. Contributions
5.3. Limitations
5.4. Future Research
References
Appendix
A. Combination representation on each base model
A.1. G-BERT
A.2. GAMENet
A.3. SafeDrug
A.4. COGNet
A.5. SHAPE
A.6. StratMed
B. Experiment 2 Results - LLM Text Representation Performance in each ICU of MIMIC-III and Department of CYCH
References
Adnan, K., Akbar, R., Khor, S.W., Ali, A.B.A., 2020a. Role and Challenges of Unstructured Big Data in Healthcare, in: Sharma, N., Chakrabarti, A., Balas, V.E. (Eds.), Data Management, Analytics and Innovation. Springer, Singapore, pp. 301–323. https://doi.org/10.1007/978-981-32-9949-8_22
Adnan, K., Akbar, R., Khor, S.W., Ali, A.B.A., 2020b. Role and Challenges of Unstructured Big Data in Healthcare, in: Sharma, N., Chakrabarti, A., Balas, V.E. (Eds.), Data Management, Analytics and Innovation. Springer Singapore, Singapore, pp. 301–323.
Agrawal, M., Hegselmann, S., Lang, H., Kim, Y., Sontag, D., 2022. Large language models are few-shot clinical information extractors, in: Goldberg, Y., Kozareva, Z., Zhang, Y. (Eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2022, Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, pp. 1998–2022. https://doi.org/10.18653/v1/2022.emnlp-main.130
Aronson, A.R., 2001. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. Proc AMIA Symp 17–21.
Azaria, A., Mitchell, T., 2023. The Internal State of an LLM Knows When It’s Lying, in: Bouamor, H., Pino, J., Bali, K. (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2023. Presented at the Findings 2023, Association for Computational Linguistics, Singapore, pp. 967–976. https://doi.org/10.18653/v1/2023.findings-emnlp.68
Bhoi, S., Lee, M.L., Hsu, W., Fang, H.S.A., Tan, N.C., 2021. Personalizing Medication Recommendation with a Graph-Based Approach. ACM Trans. Inf. Syst. 40, 55:1-55:23. https://doi.org/10.1145/3488668
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., Amodei, D., 2020. Language Models are Few-Shot Learners, in: Advances in Neural Information Processing Systems. Curran Associates, Inc., pp. 1877–1901.
Buckland, R.S., Hogan, J.W., Chen, E.S., 2021. Selection of Clinical Text Features for Classifying Suicide Attempts. AMIA Annu Symp Proc 2020, 273–282.
Chang, Y., Lo, K., Goyal, T., Iyyer, M., 2023. BooookScore: A systematic exploration of book-length summarization in the era of LLMs. Presented at The Twelfth International Conference on Learning Representations.
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H.W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., Reif, E., Du, N., Hutchinson, B., Pope, R., Bradbury, J., Austin, J., Isard, M., Gur-Ari, G., Yin, P., Duke, T., Levskaya, A., Ghemawat, S., Dev, S., Michalewski, H., Garcia, X., Misra, V., Robinson, K., Fedus, L., Zhou, D., Ippolito, D., Luan, D., Lim, H., Zoph, B., Spiridonov, A., Sepassi, R., Dohan, D., Agrawal, S., Omernick, M., Dai, A.M., Pillai, T.S., Pellat, M., Lewkowycz, A., Moreira, E., Child, R., Polozov, O., Lee, K., Zhou, Z., Wang, X., Saeta, B., Diaz, M., Firat, O., Catasta, M., Wei, J., Meier-Hellstern, K., Eck, D., Dean, J., Petrov, S., Fiedel, N., 2023. PaLM: Scaling Language Modeling with Pathways. Journal of Machine Learning Research 24, 1–113.
Chuang, Y.-N., Tang, R., Jiang, X., Hu, X., 2024. SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization. Journal of Biomedical Informatics 151, 104606. https://doi.org/10.1016/j.jbi.2024.104606
Fan, L., Li, L., Ma, Z., Lee, S., Yu, H., Hemphill, L., 2023. A Bibliometric Review of Large Language Models Research from 2017 to 2023. arXiv.org. https://doi.org/10.48550/arXiv.2304.02020
Gao, S., Alawad, M., Young, M.T., Gounley, J., Schaefferkoetter, N., Yoon, H.J., Wu, X.-C., Durbin, E.B., Doherty, J., Stroup, A., Coyle, L., Tourassi, G., 2021. Limitations of Transformers on Clinical Text Classification. IEEE J Biomed Health Inform 25, 3596–3607. https://doi.org/10.1109/JBHI.2021.3062322
Geva, M., Schuster, R., Berant, J., Levy, O., 2021. Transformer Feed-Forward Layers Are Key-Value Memories, in: Moens, M.-F., Huang, X., Specia, L., Yih, S.W. (Eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Presented at the EMNLP 2021, Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, pp. 5484–5495. https://doi.org/10.18653/v1/2021.emnlp-main.446
Goel, A., Gueta, A., Gilon, O., Liu, C., Erell, S., Nguyen, L.H., Hao, X., Jaber, B., Reddy, S., Kartha, R., Steiner, J., Laish, I., Feder, A., 2023. LLMs Accelerate Annotation for Medical Information Extraction, in: Proceedings of the 3rd Machine Learning for Health Symposium. Presented at the Machine Learning for Health (ML4H), PMLR, pp. 82–100.
Golmaei, S.N., Luo, X., 2021. DeepNote-GNN: predicting hospital readmission using clinical notes and patient network, in: Proceedings of the 12th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics. Presented at the BCB ’21: 12th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, ACM, Gainesville Florida, pp. 1–9. https://doi.org/10.1145/3459930.3469547
Hadi, M.U., Tashi, Q.A., Qureshi, R., Shah, A., Muneer, A., Irfan, M., Zafar, A., Shaikh, M.B., Akhtar, N., Wu, J., Mirjalili, S., 2023. Large Language Models: A Comprehensive Survey of its Applications, Challenges, Limitations, and Future Prospects (preprint). https://doi.org/10.36227/techrxiv.23589741.v4
Hao, C., Runfeng, X., Xiangyang, C., Zhou, Y., Xin, W., Zhanwei, X., Kai, Z., 2023. LKPNR: LLM and KG for Personalized News Recommendation Framework.
He, Y., Wang, C., Li, N., Zeng, Z., 2020. Attention and Memory-Augmented Networks for Dual-View Sequential Learning, in: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Presented at the KDD ’20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, ACM, Virtual Event CA USA, pp. 125–134. https://doi.org/10.1145/3394486.3403055
Heo, T.-S., Yoo, Y., Park, Y., Jo, B., Lee, K., Kim, K., 2021. Medical Code Prediction from Discharge Summary: Document to Sequence BERT using Sequence Attention, in: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA). pp. 1239–1244. https://doi.org/10.1109/ICMLA52953.2021.00201
Hernandez, E., Li, B.Z., Andreas, J., 2023. Inspecting and Editing Knowledge Representations in Language Models.
Hossain, E., Rana, R., Higgins, N., Soar, J., Barua, P.D., Pisani, A.R., Turner, K., 2023. Natural Language Processing in Electronic Health Records in relation to healthcare decision-making: A systematic review. Comput Biol Med 155, 106649. https://doi.org/10.1016/j.compbiomed.2023.106649
Hu, Y., Chen, Q., Du, J., Peng, X., Keloth, V.K., Zuo, X., Zhou, Y., Li, Z., Jiang, X., Lu, Z., Roberts, K., Xu, H., 2024. Improving large language models for clinical named entity recognition via prompt engineering. Journal of the American Medical Informatics Association ocad259. https://doi.org/10.1093/jamia/ocad259
Huang, K., Altosaar, J., Ranganath, R., 2020. ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission.
Jiang, A.Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D.S., Casas, D. de las, Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L.R., Lachaux, M.-A., Stock, P., Scao, T.L., Lavril, T., Wang, T., Lacroix, T., Sayed, W.E., 2023. Mistral 7B. https://doi.org/10.48550/arXiv.2310.06825
Johnson, A.E., Pollard, T.J., Shen, L., Lehman, L.H., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Anthony Celi, L., Mark, R.G., 2016. MIMIC-III, a freely accessible critical care database. Scientific data 3, 1–9.
Koh, J.Y., Fried, D., Salakhutdinov, R., 2023a. Generating Images with Multimodal Language Models. Presented at the Thirty-seventh Conference on Neural Information Processing Systems.
Koh, J.Y., Salakhutdinov, R., Fried, D., 2023b. Grounding Language Models to Images for Multimodal Inputs and Outputs, in: Proceedings of the 40th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 17283–17300.
Landolsi, M.Y., Hlaoua, L., Ben Romdhane, L., 2023. Information extraction from electronic medical documents: state of the art and future research directions. Knowl Inf Syst 65, 463–516. https://doi.org/10.1007/s10115-022-01779-1
Le, H., Tran, T., Venkatesh, S., 2018. Dual Control Memory Augmented Neural Networks for Treatment Recommendations, in: Phung, D., Tseng, V.S., Webb, G.I., Ho, B., Ganji, M., Rashidi, L. (Eds.), Advances in Knowledge Discovery and Data Mining. Springer International Publishing, Cham, pp. 273–284. https://doi.org/10.1007/978-3-319-93040-4_22
Lee, J., Yoon, W., Kim, Sungdong, Kim, D., Kim, Sunkyu, So, C.H., Kang, J., 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36, 1234–1240. https://doi.org/10.1093/bioinformatics/btz682
Li, Xiaopeng, Li, S., Song, S., Yang, J., Ma, J., Yu, J., 2024. PMET: Precise Model Editing in a Transformer. Proceedings of the AAAI Conference on Artificial Intelligence 38, 18564–18572. https://doi.org/10.1609/aaai.v38i17.29818
Li, Xiang, Liang, S., Hou, Y., Ma, T., 2024. StratMed: Relevance stratification between biomedical entities for sparsity on medication recommendation. Knowledge-Based Systems 284, 111239. https://doi.org/10.1016/j.knosys.2023.111239
Liao, C.-F., 2023. Graph-based Similar Visits Enhanced Representation for Medication Recommendation.
Liu, Q., Wu, X., Zhao, X., Zhu, Y., Zhang, Z., Tian, F., Zheng, Y., 2024. Large Language Model Distilling Medication Recommendation Model. https://doi.org/10.48550/arXiv.2402.02803
Liu, S., Wang, X., Du, J., Hou, Y., Zhao, X., Xu, H., Wang, H., Xiang, Y., Tang, B., 2023. SHAPE: A Sample-Adaptive Hierarchical Prediction Network for Medication Recommendation. IEEE J. Biomed. Health Inform. 27, 6018–6028. https://doi.org/10.1109/JBHI.2023.3320139
Lu, Q., Nguyen, T.H., Dou, D., 2021. Predicting Patient Readmission Risk from Medical Text via Knowledge Graph Enhanced Multiview Graph Convolution, in: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. Presented at the SIGIR ’21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, Virtual Event Canada, pp. 1990–1994. https://doi.org/10.1145/3404835.3463062
Martinez-Rodriguez, J.L., Hogan, A., Lopez-Arevalo, I., 2020. Information extraction meets the Semantic Web: A survey. Semantic Web 11, 255–335. https://doi.org/10.3233/SW-180333
Meng, K., Bau, D., Andonian, A., Belinkov, Y., 2022a. Locating and Editing Factual Associations in GPT. Advances in Neural Information Processing Systems 35, 17359–17372.
Meng, K., Sharma, A.S., Andonian, A.J., Belinkov, Y., Bau, D., 2022b. Mass-Editing Memory in a Transformer. Presented at the The Eleventh International Conference on Learning Representations.
Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., Gao, J., 2024. Large Language Models: A Survey.
Mitchell, E., Lin, C., Bosselut, A., Finn, C., Manning, C.D., 2021. Fast Model Editing at Scale. Presented at the International Conference on Learning Representations.
Mitchell, E., Lin, C., Bosselut, A., Manning, C.D., Finn, C., 2022. Memory-Based Model Editing at Scale, in: Proceedings of the 39th International Conference on Machine Learning. Presented at the International Conference on Machine Learning, PMLR, pp. 15817–15831.
Mulyadi, A.W., Suk, H.-I., 2023. KindMed: Knowledge-Induced Medicine Prescribing Network for Medication Recommendation.
Nuthakki, S., Neela, S., Gichoya, J.W., Purkayastha, S., 2019. Natural language processing of MIMIC-III clinical notes for identifying diagnosis and procedures with neural networks. https://doi.org/10.48550/arXiv.1912.12397
OpenAI, 2023. GPT-4 Technical Report. https://doi.org/10.48550/ARXIV.2303.08774
Pal, K., Sun, J., Yuan, A., Wallace, B., Bau, D., 2023. Future Lens: Anticipating Subsequent Tokens from a Single Hidden State, in: Jiang, J., Reitter, D., Deng, S. (Eds.), Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL). Presented at the CoNLL 2023, Association for Computational Linguistics, Singapore, pp. 548–560. https://doi.org/10.18653/v1/2023.conll-1.37
Peng, B., Alcaide, E., Anthony, Q., Albalak, A., Arcadinho, S., Biderman, S., Cao, H., Cheng, X., Chung, M., Derczynski, L., Du, X., Grella, M., Gv, K., He, X., Hou, H., Kazienko, P., Kocon, J., Kong, J., Koptyra, B., Lau, H., Lin, J., Mantri, K.S.I., Mom, F., Saito, A., Song, G., Tang, X., Wind, J., Woźniak, S., Zhang, Z., Zhou, Q., Zhu, J., Zhu, R.-J., 2023. RWKV: Reinventing RNNs for the Transformer Era, in: Bouamor, H., Pino, J., Bali, K. (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2023. Presented at the Findings 2023, Association for Computational Linguistics, Singapore, pp. 14048–14077. https://doi.org/10.18653/v1/2023.findings-emnlp.936
Portet, F., Reiter, E., Gatt, A., Hunter, J., Sripada, S., Freer, Y., Sykes, C., 2009. Automatic generation of textual summaries from neonatal intensive care data. Artificial Intelligence 173, 789–816. https://doi.org/10.1016/j.artint.2008.12.002
Shang, J., Ma, T., Xiao, C., Sun, J., 2019a. Pre-training of Graph Augmented Transformers for Medication Recommendation, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Presented at the Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}, International Joint Conferences on Artificial Intelligence Organization, Macao, China, pp. 5953–5959. https://doi.org/10.24963/ijcai.2019/825
Shang, J., Xiao, C., Ma, T., Li, H., Sun, J., 2019b. GAMENet: graph augmented memory networks for recommending medication combination, in: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI’19/IAAI’19/EAAI’19. AAAI Press, Honolulu, Hawaii, USA, pp. 1126–1133. https://doi.org/10.1609/aaai.v33i01.33011126
Sheikhalishahi, S., Miotto, R., Dudley, J.T., Lavelli, A., Rinaldi, F., Osmani, V., 2019. Natural Language Processing of Clinical Notes on Chronic Diseases: Systematic Review. JMIR Med Inform 7, e12239. https://doi.org/10.2196/12239
Singhal, K., Azizi, S., Tu, T., Mahdavi, S.S., Wei, J., Chung, H.W., Scales, N., Tanwani, A., Cole-Lewis, H., Pfohl, S., Payne, P., Seneviratne, M., Gamble, P., Kelly, C., Babiker, A., Schärli, N., Chowdhery, A., Mansfield, P., Demner-Fushman, D., Agüera y Arcas, B., Webster, D., Corrado, G.S., Matias, Y., Chou, K., Gottweis, J., Tomasev, N., Liu, Y., Rajkomar, A., Barral, J., Semturs, C., Karthikesalingam, A., Natarajan, V., 2023. Large language models encode clinical knowledge. Nature 620, 172–180. https://doi.org/10.1038/s41586-023-06291-2
Sondhi, P., Sun, J., Tong, H., Zhai, C., 2012. SympGraph: a framework for mining clinical notes through symptom relation graphs, in: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Presented at the KDD ’12: The 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Beijing China, pp. 1167–1175. https://doi.org/10.1145/2339530.2339712
Sun, H., Xie, S., Li, S., Chen, Y., Wen, J.-R., Yan, R., 2022. Debiased, Longitudinal and Coordinated Drug Recommendation through Multi-Visit Clinic Records. Advances in Neural Information Processing Systems 35, 27837–27849.
Tahabi, F.M., Storey, S., Luo, X., 2023a. SymptomGraph: Identifying Symptom Clusters from Narrative Clinical Notes using Graph Clustering, in: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing. Presented at the SAC ’23: 38th ACM/SIGAPP Symposium on Applied Computing, ACM, Tallinn Estonia, pp. 518–527. https://doi.org/10.1145/3555776.3577685
Tahabi, F.M., Storey, S., Luo, X., 2023b. SymptomGraph: Identifying Symptom Clusters from Narrative Clinical Notes using Graph Clustering, in: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing, SAC ’23. Association for Computing Machinery, New York, NY, USA, pp. 518–527. https://doi.org/10.1145/3555776.3577685
Tan, Y., Kong, C., Yu, L., Li, P., Chen, C., Zheng, X., Hertzberg, V.S., Yang, C., 2022. 4SDrug: Symptom-based Set-to-set Small and Safe Drug Recommendation, in: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Presented at the KDD ’22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, ACM, Washington DC USA, pp. 3970–3980. https://doi.org/10.1145/3534678.3539089
Tatonetti, N.P., Ye, P.P., Daneshjou, R., Altman, R.B., 2012. Data-Driven Prediction of Drug Effects and Interactions. Sci Transl Med 4, 125ra31. https://doi.org/10.1126/scitranslmed.3003377
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., Lample, G., 2023. LLaMA: Open and Efficient Foundation Language Models.
van Aken, B., Papaioannou, J.-M., Mayrdorfer, M., Budde, K., Gers, F., Loeser, A., 2021. Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration, in: Merlo, P., Tiedemann, J., Tsarfaty, R. (Eds.), Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Presented at the EACL 2021, Association for Computational Linguistics, Online, pp. 881–893. https://doi.org/10.18653/v1/2021.eacl-main.75
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I., 2017. Attention is All you Need, in: Advances in Neural Information Processing Systems. Curran Associates, Inc.
Wang, Y., Chen, W., Pi, D., Yue, L., 2021. Adversarially regularized medication recommendation model with multi-hop memory network. Knowl. Inf. Syst. 63, 125–142. https://doi.org/10.1007/s10115-020-01513-9
Wang, Y., Wang, L., Rastegar-Mojarad, M., Moon, S., Shen, F., Afzal, N., Liu, S., Zeng, Y., Mehrabi, S., Sohn, S., Liu, H., 2018. Clinical information extraction applications: A literature review. J Biomed Inform 77, 34–49. https://doi.org/10.1016/j.jbi.2017.11.011
Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E.H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., Fedus, W., 2022. Emergent Abilities of Large Language Models. Transactions on Machine Learning Research.
Wu, J., Dong, Y., Gao, Z., Gong, T., Li, C., 2023a. Dual Attention and Patient Similarity Network for drug recommendation. Bioinformatics 39, btad003. https://doi.org/10.1093/bioinformatics/btad003
Wu, J., He, K., Mao, R., Li, C., Cambria, E., 2023b. MEGACare: Knowledge-guided multi-view hypergraph predictive framework for healthcare. Information Fusion 100, 101939. https://doi.org/10.1016/j.inffus.2023.101939
Wu, R., Qiu, Z., Jiang, J., Qi, G., Wu, X., 2022. Conditional Generation Net for Medication Recommendation, in: Proceedings of the ACM Web Conference 2022. Presented at the WWW ’22: The ACM Web Conference 2022, ACM, Virtual Event, Lyon France, pp. 935–945. https://doi.org/10.1145/3485447.3511936
Yang, C., Xiao, C., Glass, L., Sun, J., 2021a. Change Matters: Medication Change Prediction with Recurrent Residual Networks. Presented at the Twenty-Ninth International Joint Conference on Artificial Intelligence, pp. 3728–3734. https://doi.org/10.24963/ijcai.2021/513
Yang, C., Xiao, C., Ma, F., Glass, L., Sun, J., 2021b. SafeDrug: Dual Molecular Graph Encoders for Recommending Effective and Safe Drug Combinations, in: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence. Presented at the Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}, International Joint Conferences on Artificial Intelligence Organization, Montreal, Canada, pp. 3735–3741. https://doi.org/10.24963/ijcai.2021/514
Yang, N., Zeng, K., Wu, Q., Yan, J., 2023. MoleRec: Combinatorial Drug Recommendation with Substructure-Aware Molecular Representation Learning, in: Proceedings of the ACM Web Conference 2023. Presented at the WWW ’23: The ACM Web Conference 2023, ACM, Austin TX USA, pp. 4075–4085. https://doi.org/10.1145/3543507.3583872
Yang, X., Chen, A., PourNejatian, N., Shin, H.C., Smith, K.E., Parisien, C., Compas, C., Martin, C., Costa, A.B., Flores, M.G., Zhang, Y., Magoc, T., Harle, C.A., Lipori, G., Mitchell, D.A., Hogan, W.R., Shenkman, E.A., Bian, J., Wu, Y., 2022. A large language model for electronic health records. NPJ Digit Med 5, 194. https://doi.org/10.1038/s41746-022-00742-2
Zeng, A., Liu, X., Du, Z., Wang, Z., Lai, H., Ding, M., Yang, Z., Xu, Y., Zheng, W., Xia, X., Tam, W.L., Ma, Z., Xue, Y., Zhai, J., Chen, W., Liu, Z., Zhang, P., Dong, Y., Tang, J., 2023. GLM-130B: An Open Bilingual Pre-trained Model.
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X.V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P.S., Sridhar, A., Wang, T., Zettlemoyer, L., 2022. OPT: Open Pre-trained Transformer Language Models.
Zhang, Y., Chen, R., Tang, J., Stewart, W.F., Sun, J., 2017. LEAP: Learning to Prescribe Effective and Safe Treatment Combinations for Multimorbidity, in: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17. Association for Computing Machinery, New York, NY, USA, pp. 1315–1324. https://doi.org/10.1145/3097983.3098109
Zhao, W.X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., Liu, P., Nie, J.-Y., Wen, J.-R., 2023. A Survey of Large Language Models.
Zhu, F., Dai, D., Sui, Z., 2024. Language Models Understand Numbers, at Least Partially. arXiv e-prints. https://doi.org/10.48550/arXiv.2401.03735
Advisor 柯士文 (Shih-Wen Ke)    Approval Date 2024-7-26