Leveraging Large Language Models for Mental Health Prediction via Online Text Data
Authors
Xu, Xuhai; Yao, Bingsheng; Dong, Yuanzhe; Yu, Hong; Hendler, James A.; Dey, Anind K.; Wang, Dakuo
Date Issued
2023-07-26
Terms of Use
Attribution-NonCommercial-NoDerivs 3.0 United States
Abstract
The recent technology boost of large language models (LLMs) has empowered a variety of applications. However, there is very little research on understanding and improving LLMs' capability in the mental health domain. In this work, we present the first comprehensive evaluation of multiple LLMs, including Alpaca, Alpaca-LoRA, and GPT-3.5, on various mental health prediction tasks via online text data. We conduct a wide range of experiments, covering zero-shot prompting, few-shot prompting, and instruction finetuning. The results indicate the promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for mental health tasks. More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best-finetuned model, Mental-Alpaca, outperforms GPT-3.5 (25 times bigger) by 16.7% on balanced accuracy and performs on par with the state-of-the-art task-specific model. We summarize our findings into a set of action guidelines for future researchers, engineers, and practitioners on how to empower LLMs with better mental health domain knowledge and make them experts in mental health prediction tasks.
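The zero-shot and few-shot prompting strategies the abstract evaluates can be illustrated with a minimal sketch. The templates, labels, and example posts below are hypothetical illustrations of the general technique, not the paper's actual prompt designs.

```python
# Sketch of zero-shot vs. few-shot prompt construction for a binary
# mental-health classification task over online text posts.
# All wording here is illustrative, not the paper's templates.

ZERO_SHOT_TEMPLATE = (
    'Post: "{post}"\n'
    "Question: Does this post indicate a risk of depression? "
    "Answer yes or no.\n"
    "Answer:"
)

def build_zero_shot_prompt(post: str) -> str:
    """Zero-shot: the model sees only the task description and the post."""
    return ZERO_SHOT_TEMPLATE.format(post=post)

def build_few_shot_prompt(examples: list[tuple[str, str]], post: str) -> str:
    """Few-shot: prepend labeled (post, answer) demonstrations
    before the unlabeled query post."""
    demos = "\n\n".join(
        ZERO_SHOT_TEMPLATE.format(post=p) + " " + label
        for p, label in examples
    )
    return demos + "\n\n" + ZERO_SHOT_TEMPLATE.format(post=post)

# Example usage with made-up posts:
demo_examples = [
    ("I can't get out of bed anymore and nothing feels worth it.", "yes"),
    ("Had a great hike with friends this weekend!", "no"),
]
prompt = build_few_shot_prompt(demo_examples, "I feel so alone lately.")
```

The resulting string would be sent to an LLM completion endpoint; instruction finetuning, by contrast, updates the model's weights on many such labeled (prompt, answer) pairs rather than supplying them at inference time.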
Publisher
ACM