May 2, 2026

Wellness Sync

Start the Day with a Smile, Finish with Health

Guest Essay: Bridging the gap between AI and student mental health is possible

Arush Chandna is the co-founder of Juris Education, an edtech startup committed to supporting students pursuing careers in law.

A survey of 248 pre-law students conducted by Juris Education, a law school admissions counseling firm, revealed a disturbing trend: Almost 53% of respondents said they weren’t comfortable sharing sensitive information about their mental health with artificial intelligence chatbots, yet nearly 13% had done so anyway. 

AI’s prevalent use for mental health support isn’t just limited to pre-law students. A November study of over 1,000 U.S. adolescents and young adults showed that 13.1% used AI chatbots for mental health advice, with use being most common among those aged 18 to 21.

The ease of access to AI makes this possible. However, the reluctance revealed by Juris Education’s survey shows that many students hesitate to divulge sensitive information to chatbots when seeking mental health advice, and not without reason. Research shows that large language models, or LLMs, routinely violate the ethical standards of the American Psychological Association by their very design. The risks include a lack of contextual adaptation to each user’s unique experiences when recommending interventions, as well as flawed back-and-forth communication that can reinforce a user’s false beliefs. The real danger arises when these chatbots claim to truly understand a user’s problems and to be qualified to solve them, a phenomenon known as deceptive empathy, which creates a misguided sense of trust and understanding between the user and the bot.

We’re not advocating a ban on AI for mental health support. However, it must be built with intention, allowing for human oversight and safety measures that maximize its promises while minimizing the pitfalls. 

Public policy can play a crucial role in regulating how AI companies build and market such chatbots, especially those offering emotional support. Some states, like New York, are ahead on this. In November 2025, New York passed an “AI companion” law, which requires creators of conversational AI to implement strict safety, transparency and user-protection protocols. These include clear disclosures that the user is communicating with an AI rather than a human, as well as mechanisms to respond to any indication of self-harm. 

University health institutions and counseling centers can partner with AI developers to offer customized, accessible and safe tools that help students track mental health distress early on. 

One study, although undertaken to test distress in healthcare workers, shows that natural language processing — a branch of AI that helps machines process and comprehend human language — has potential to detect symptoms of anxiety and depression, both of which are common mental health concerns among college students.

Blending technology with human supervision to track and act on serious mental health concerns can help universities address the lack of timely access to mental health resources on campus. 

Forty percent of college students say they’ve found it challenging to access mental health services, despite knowing about them. Challenges include a shortage of clinicians on campuses, limited resources and long wait times. Universities and colleges must educate students on the responsible use of AI for mental health support, highlighting both its rewards and its risks. By first adopting an open mindset toward the ubiquitous nature of AI, educational institutions can show that it is a tool that can be safely used to navigate the world’s complex challenges. 

Further, by offering courses and literacy sessions on the threats that come with the premature adoption of technology, they can help students and mental health professionals understand when to pivot from AI-led support to human intervention. For example, NYU’s Silver School of Social Work offers an online post-master’s certificate program that teaches mental health practitioners how to use AI tools to improve mental health outcomes.

Students have shown that they don’t fully trust AI for mental health advice, and universities like NYU have demonstrated that they understand the urgency of the situation by dedicating more resources to research and implementation. To bridge the gap, NYU must offer student-centered psychological support that combines technology with the experience of its counselors and mental health professionals to reduce barriers to access and create up-to-date systems that meet students where they are. This, combined with sufficient education on the risks of relying on AI chatbots for mental health counseling, can instill in students the confidence that they are being supported by systems that care about and prioritize their well-being.

WSN’s Opinion desk strives to publish ideas worth discussing. The views presented in the Opinion desk are solely the views of the writer.

Contact WSN’s Opinion desk at [email protected].
