Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower than in some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youth with MDE climbs nationally each year, including last year, when it rose by almost 2% to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to place a behavioral health expert in every school building. Additionally, the DC government’s website maintains a list of available mental health services programs.

In conjunction with these mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young people. In response to the widespread youth mental health crisis, K-12 schools are turning to artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems developed by companies like Bark, Gaggle, and GoGuardian aim to monitor students’ digital footprints, drawing on data inputs such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals to use AI models to enhance mental health surveillance in school settings by deploying chatbots that interact with students are being introduced. The chatbot conversation logs serve as the raw data for machine learning. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models, and supervised learning can then be deployed to classify specific behaviors and mental health patterns. However, there are concerns about how these programs work and what safeguards the companies have in place to protect youths’ data from being sold to other platforms. There are also concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
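For readers who want a concrete picture of that workflow, the following is a minimal Python sketch of supervised text classification validated against an expert-labeled test set. It is not the system described in the paper; the file names, column names, and labels are hypothetical, and the model choice is illustrative only.

```python
# Minimal sketch: train a text classifier on labeled chatbot conversation
# logs, then validate it against a held-out, expert-labeled survey dataset.
# All file and label names below are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline

# Hypothetical training data: chatbot messages with labels assigned during
# an earlier annotation pass (columns: "text", "label").
train = pd.read_csv("chat_logs_labeled.csv")
# Hypothetical validation data: survey responses already evaluated by
# health experts, used only to test the trained model.
expert_test = pd.read_csv("expert_survey_labels.csv")

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
model.fit(train["text"], train["label"])

# Validate against the expert-labeled test set, as the paper suggests.
predictions = model.predict(expert_test["text"])
print(classification_report(expert_test["label"], predictions))
```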

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, such as posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications like Breathhh are just one example of personalized interventions driven by monitoring user interactions.
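To make the underlying technique more tangible, here is a small sentiment-analysis example using NLTK’s off-the-shelf VADER analyzer. Breathhh’s actual methods are proprietary and not public; the sample posts and the flagging threshold below are assumptions for illustration only.

```python
# Illustrative lexicon-based sentiment analysis over user posts, in the
# spirit of the NLP techniques described above. The cutoff for flagging
# a post is an arbitrary assumption, not a clinical standard.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

posts = [
    "Had a great time at practice today!",
    "I can't handle any of this anymore.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # "compound" ranges from -1 (most negative) to +1 (most positive).
    flagged = scores["compound"] <= -0.6  # assumed cutoff for follow-up
    print(f"{scores['compound']:+.2f}  flag={flagged}  {post}")
```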

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and all other stakeholders before deploying such AI models. The use of AI models is a recurring concern for policymakers because of the privacy risks it carries. To deploy AI models safely, privacy protection policies need to be in place to safeguard sensitive information from improper use. No comprehensive legislation currently addresses those concerns at either the national or local level.
Second, developers also need to identify and account for any bias ingrained in their algorithms through data testing and regular monitoring of outputs before they reach the user; a simple per-group audit of flag rates, sketched below, illustrates one such check. AI has the ability to detect early signs of mental health challenges, but without proper safeguards in place, we risk failing to protect students from being disproportionately impacted. When the collected data reflects biases, it can lead to unfair treatment of certain groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Effective policies should encourage the use of AI models that provide interpretable results, and policymakers need to understand how these decisions are made. Policies should also outline how schools will respond to alerts generated by the system. A standard of care needs to be universally recognized, whether through policy or the companies’ internal safeguards, and it should include guidelines for situations in which AI output conflicts with human judgment.
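One concrete form such bias monitoring could take is a periodic audit of how often the system flags students in different demographic groups before results reach users. The sketch below is illustrative only; the sample data, column names, and four-fifths threshold are assumptions rather than a prescribed standard.

```python
# Minimal sketch of a per-group bias audit: compare flag rates across
# demographic groups in a hypothetical audit log of model outputs.
import pandas as pd

# Hypothetical audit log: one row per student, with the model's output and
# a demographic attribute used only for fairness auditing.
audit = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1,   0,   0,   1,   1,   1,   0],
})

# Share of students flagged within each group.
rates = audit.groupby("group")["flagged"].mean()
print(rates)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
ratio = rates.min() / rates.max()
if ratio < 0.8:  # commonly cited "four-fifths" rule of thumb
    print(f"Potential disparity (ratio={ratio:.2f}); review before release.")
```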

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to weigh the benefits and risks of AI-based mental health monitoring programs. Realizing the intended benefits while mitigating potential harms is crucial for student well-being.

© 2024 ArentFox Schiff LLP
by: David P. Grosso and Starshine S. Chun of ArentFox Schiff LLP


Difficult Situation Know-How: What To Do If an Employee Seems Suicidal

Steptoe & Johnson PLLC

We all face difficult situations from time to time.  If someone seems sad or depressed, we may want to help but not know how.  When it’s your employee who is going through tough times, you may have legal concerns to worry about too, so it’s good to be as prepared as possible beforehand.  For example, imagine that one of your employees seems depressed and starts making comments around the workplace about hurting himself or herself.

A condition causing an employee to become suicidal may be covered under the Americans with Disabilities Act (“ADA”).  In that case, it would be an unlawful discriminatory practice to take adverse employment actions based on the employee’s condition, and the employee may be entitled to a reasonable accommodation.  If an employee makes a statement or does something that causes you to think that he or she may be suicidal, it is best to initially address the situation under the assumption that the employee has a condition covered under the ADA.

The first thing to do is to have a private conversation with the employee.  Do not ask if the employee has a medical condition.  Rather, ask the employee if there is anything you or the company can do to help.  You can also ask if anything at work is causing or contributing to the employee’s problem and ask if the employee has any ideas for what could change at work to help.  If the employee has reasonable requests for accommodation, then accommodate the employee. Later, follow up with the employee to ensure that the accommodation helped the problem.  If not, it may be time to seek advice from your attorney to determine whether the employee is suffering from a condition covered by the ADA.

Be sure to document this entire process: keep written documentation of (1) the employee’s complaint(s), (2) that you asked how you could help, (3) that you did not ask whether the employee has any medical conditions, (4) that the employee suggested a certain accommodation, (5) that you provided the accommodation, and (6) that you followed up with the employee to see if the accommodation worked.  Keep this documentation confidential.

Although you generally do not want to ask about whether the employee has a medical condition (such as depression), you can listen if the employee brings personal problems up and wishes to talk about them.  It’s better not to offer advice, but you can offer hope that the employee will find a solution to his or her problems.  You can also let the employee know that counseling is available, for instance, through an Employee Assistance Program, a crisis intervention or suicide prevention resource in your community, or a suicide-prevention hotline. Be careful not to pressure the employee or to imply that counseling is required or in any way a penalty.  Again, keep your conversation confidential.

As a final note, the only time it may be alright to ask your employee whether they have a medical condition is when asking is job-related and consistent with business necessity.  For example, this may be the case when the employee’s ability to perform essential job functions is impaired because of the condition or when the employee poses a direct threat.  However, it is a good idea to consult your attorney before making such an inquiry as it can be fraught with legal perils.
