Matilde Alves | 31 October 2023

Explainable AI in Personalized Mental Healthcare

As artificial intelligence (AI) and machine learning (ML) systems become central to decision-making, transparency is a growing concern. The increasing complexity of AI algorithms has led to the creation of “black box” models, which are often so intricate that even experts struggle to understand their inner workings. This opacity can hinder trust and raise ethical concerns about accountability for decisions made by AI systems.

This white paper presents the use of explainable AI (XAI) and incremental learning in decision support for mental health therapists. The AI model helps prioritize clients who could benefit from outreach between therapy sessions, a decision often made under time pressure because therapists carry large caseloads. The model is designed with a human-centered approach, involving end users in selecting the specific decision to be supported, developing the underlying recommendation engine, and designing the user interface.
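To make the prioritization step concrete, here is a minimal sketch of how ranking a caseload for outreach could look. Everything in it is an assumption for illustration: the feature names (days_since_last_contact, mood_trend, missed_exercises), the hand-set weights, and the linear score are stand-ins, not the recommendation engine described in the white paper.

```python
from dataclasses import dataclass

@dataclass
class ClientSnapshot:
    """Hypothetical between-session signals for one client."""
    client_id: str
    days_since_last_contact: int
    mood_trend: float       # negative = self-reported mood is worsening
    missed_exercises: int   # skipped homework assignments this week

# Illustrative hand-set weights; in the white paper these would come
# from the learned recommendation engine, not fixed constants.
WEIGHTS = {
    "days_since_last_contact": 0.4,
    "mood_trend": -1.5,
    "missed_exercises": 0.8,
}

def outreach_priority(c: ClientSnapshot) -> float:
    """Higher score = stronger candidate for contact between sessions."""
    return (WEIGHTS["days_since_last_contact"] * c.days_since_last_contact
            + WEIGHTS["mood_trend"] * c.mood_trend
            + WEIGHTS["missed_exercises"] * c.missed_exercises)

def rank_caseload(caseload: list[ClientSnapshot], top_k: int = 5) -> list[ClientSnapshot]:
    """Return the top-k clients a therapist might check on first."""
    return sorted(caseload, key=outreach_priority, reverse=True)[:top_k]

caseload = [
    ClientSnapshot("c1", days_since_last_contact=6, mood_trend=-0.8, missed_exercises=2),
    ClientSnapshot("c2", days_since_last_contact=2, mood_trend=0.3, missed_exercises=0),
]
for client in rank_caseload(caseload):
    print(client.client_id, round(outreach_priority(client), 2))
```

A linear score is used here only because it keeps the explanation step shown later trivially readable; the paper's actual model may be considerably more sophisticated.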

How AI transformed care for patients

The paper describes the development of an AI system for NiceDay using the BAIT method, which combines expert input with observational-level choice data. This approach was chosen because no historical data linking patient cases to therapist choices was available. Custom explainers, natural-language expressions of the three most influential input criteria, were developed to help therapists understand the system's recommendations. Feedback from these domain experts is used to iteratively improve the system, increasing both its performance and therapists' trust over time.
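Building on the hypothetical ClientSnapshot and WEIGHTS sketch above, the snippet below illustrates the two mechanisms this paragraph describes: an explainer that renders the three most influential criteria as natural language, and a toy feedback update standing in for incremental learning. Neither is the paper's BAIT implementation; both are minimal stand-ins for the pattern.

```python
def explain_top_three(c: ClientSnapshot) -> str:
    """Render the three most influential criteria as natural language.

    For the toy linear score above, each criterion's contribution is
    simply weight * value, so "most influential" means largest
    absolute contribution.
    """
    contributions = {name: WEIGHTS[name] * getattr(c, name) for name in WEIGHTS}
    top3 = sorted(contributions, key=lambda n: abs(contributions[n]), reverse=True)[:3]
    reasons = [f"{name.replace('_', ' ')} ({contributions[name]:+.1f})" for name in top3]
    return f"Client {c.client_id} is recommended mainly because of: " + ", ".join(reasons)

def apply_feedback(feature: str, agreed: bool, lr: float = 0.05) -> None:
    """Toy incremental update: strengthen a criterion's weight when the
    therapist confirms a recommendation it drove, weaken it when they
    override the recommendation."""
    WEIGHTS[feature] *= (1 + lr) if agreed else (1 - lr)

print(explain_top_three(caseload[0]))
apply_feedback("missed_exercises", agreed=False)  # therapist overrode this one
```

In a production system the update would go through a proper incremental learner rather than multiplicative weight nudges, but the loop is the same: explain the recommendation, collect the therapist's verdict, and fold it back into the model.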