Authors: Inhwa Song, Sachin R. Pendse, Neha Kumar, Munmun De Choudhury
Published on: January 25, 2024
Impact Score: 7.8
arXiv code: arXiv:2401.14362
Summary
- What is new: Introduces the concept of therapeutic alignment, focusing on aligning AI with therapeutic values for mental health contexts.
- Why this is important: General-purpose LLM chatbots pose risks to users’ welfare when not designed responsibly for mental health support.
- What the research proposes: Recommendations for designing LLM chatbots and AI tools that embed therapeutic values and support mental health effectively (see the illustrative sketch after this list).
- Results: Interviews with 21 individuals show how users carve out unique support roles for LLM chatbots and work around the chatbots' limitations.
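To make the idea of therapeutic alignment more concrete for developers, here is a minimal sketch, not taken from the paper, of how a general-purpose LLM chatbot might be wrapped with therapeutic values such as empathy, non-judgment, and crisis escalation. The system prompt wording, the keyword-based crisis check, and the `call_llm` stub are all illustrative assumptions rather than the authors' method.

```python
# Illustrative sketch only: the system prompt, crisis keywords, and the
# call_llm() stub are hypothetical and not the paper's recommendations.

CRISIS_RESOURCES = (
    "It sounds like you may be in crisis. Please consider contacting a "
    "local emergency number or a crisis hotline right away."
)

# Hypothetical system prompt encoding therapeutic values: empathy,
# non-judgment, and clear limits on the chatbot's role.
THERAPEUTIC_SYSTEM_PROMPT = (
    "You are a supportive listener, not a licensed therapist. "
    "Respond with empathy and without judgment, avoid diagnoses or "
    "medical advice, and encourage professional help when appropriate."
)

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}


def call_llm(messages):
    """Placeholder for a real chat-completion call; replace with your
    provider's client. The canned reply below is for demonstration."""
    return "I'm here to listen. Can you tell me more about how you're feeling?"


def respond(user_message, history=None):
    """Route crisis language to resources before querying the model."""
    history = history or []
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return CRISIS_RESOURCES
    messages = (
        [{"role": "system", "content": THERAPEUTIC_SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}]
    )
    return call_llm(messages)


if __name__ == "__main__":
    print(respond("I've been feeling really isolated lately."))
```

In practice, keyword matching is far too crude for crisis detection; the sketch only illustrates the general pattern of placing value-aligned guardrails around a general-purpose model rather than relying on the model alone.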
Technical Details
Technological frameworks used: Not specified
Models used: Large Language Models (LLMs)
Data used: Interviews with 21 individuals from diverse backgrounds
Potential Impact
This work stands to benefit mental health care providers, AI chatbot developers, and companies in the mental wellness app market.
Want to implement this idea in a business?
We have generated a startup concept here: TheraBotics.