Leveraging Large Language Models for Mental Health Prediction via Online Text Data

Authors
Xu, Xuhai
Yao, Bingsheng
Dong, Yuanzhe
Yu, Hong
Hendler, James A.
Dey, Anind K.
Wang, Dakuo
Issue Date
2023-07-26
Type
Article
Abstract
The recent technology boost of large language models (LLMs) has empowered a variety of applications. However, there is very little research on understanding and improving LLMs' capabilities in the mental health domain. In this work, we present the first comprehensive evaluation of multiple LLMs, including Alpaca, Alpaca-LoRA, and GPT-3.5, on various mental health prediction tasks via online text data. We conduct a wide range of experiments covering zero-shot prompting, few-shot prompting, and instruction finetuning. The results indicate promising yet limited performance of LLMs with zero-shot and few-shot prompt designs on mental health tasks. More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs on all tasks simultaneously. Our best-finetuned model, Mental-Alpaca, outperforms GPT-3.5 (25 times larger) by 16.7% on balanced accuracy and performs on par with the state-of-the-art task-specific model. We summarize our findings into a set of action guidelines for future researchers, engineers, and practitioners on how to equip LLMs with better mental health domain knowledge and turn them into experts in mental health prediction tasks.
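The zero-shot and few-shot prompting setups mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the task wording, example posts, and function names are assumptions for clarity, not the paper's actual prompt templates.

```python
# Illustrative sketch (not the paper's templates): a zero-shot prompt states
# only the task, while a few-shot prompt prepends labeled demonstrations
# before the post to be classified.

TASK = ("Does the author of the following post show signs of depression? "
        "Answer Yes or No.")

def zero_shot_prompt(post: str) -> str:
    """Zero-shot: task description plus the target post."""
    return f"{TASK}\nPost: {post}\nAnswer:"

def few_shot_prompt(post: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: labeled (post, answer) demonstrations precede the target post."""
    demos = "\n".join(f"Post: {p}\nAnswer: {a}" for p, a in examples)
    return f"{TASK}\n{demos}\nPost: {post}\nAnswer:"

# Hypothetical demonstrations for the few-shot setting.
demos = [
    ("I can't get out of bed and nothing feels worth doing.", "Yes"),
    ("Had a great hike with friends this weekend!", "No"),
]
print(few_shot_prompt("I feel empty all the time.", demos))
```

Instruction finetuning, by contrast, updates the model weights on many such task-formatted examples rather than placing them in the prompt, which is how the abstract's Mental-Alpaca is produced from Alpaca.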
Publisher
ACM