Doctors Warn Against Using ‘My AI’ for Health Support

Snapchat is among the first popular social media apps to incorporate emerging artificial intelligence (AI) software directly into its user interface. The feature, My AI, acts as a chatbot that users can ask questions and turn to for advice, trip planning, recipes, and trivia games. However, as the feature gains popularity and scope with new users, an alarming number of teens have begun turning to the chatbot for mental health support, a use doctors caution against.

Many people have questions about the validity, safety, and scope of artificial intelligence as it enters day-to-day life. The technology has been rapidly incorporated into mainstream media, workplace tasks, and daily routines over the last decade. As artificial intelligence continues to develop, it also raises questions about its utility and user safety. Doctors are beginning to dissect its full potential and are currently exploring the associated benefits and risks.

What is Artificial Intelligence?

At its most basic, AI is the science and engineering of making intelligent machines, work that pushes beyond a computer's fixed ability to follow instructions toward bridging the gap between machine computation and human intelligence. Artificial intelligence has been a topic of interest since the 1950s and has been continually refined and implemented over the past 70 years.

In its simplest form, AI combines large amounts of data with algorithms that let a program solve problems without human intervention. These algorithms analyze datasets using machine learning and deep learning techniques and output predictions and classifications; the difference between the two approaches, summarized in the lists below, is illustrated in the short sketch that follows them.

Deep Learning

  • Built from multi-layered neural networks

  • Feature extraction from datasets is largely automated

  • Often described as “scalable machine learning”

  • The model itself determines the relationship between data inputs and outputs

Machine Learning

  • Requires human intervention to extract features from datasets

  • Typically restricted to smaller datasets

  • Human observers determine the relationship between data inputs and outputs
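
To make that distinction concrete, the minimal sketch below (an illustration added for clarity, not drawn from any product discussed in this article) trains two classifiers on a small bundled dataset: a simple linear model standing in for classic machine learning and a small multi-layer neural network standing in for deep learning. The use of Python, the scikit-learn library, the digits dataset, and the specific model settings are all assumptions made for the example.

    # Illustrative sketch only: contrasts classic machine learning with a small
    # neural network ("deep learning") on scikit-learn's bundled digits dataset.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # 8x8 grayscale images of handwritten digits, flattened to 64 pixel values.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Classic machine learning: a human picks the model and the input features;
    # here the raw pixel values feed a simple linear classifier.
    classic_model = LogisticRegression(max_iter=1000)
    classic_model.fit(X_train, y_train)
    print("machine learning accuracy:", classic_model.score(X_test, y_test))

    # Deep learning (scaled down): a neural network with hidden layers learns its
    # own internal representation of the inputs, with less manual feature work.
    deep_model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    deep_model.fit(X_train, y_train)
    print("neural network accuracy:", deep_model.score(X_test, y_test))

In practice the neural network handles raw, unstructured inputs with less hand-tuning, while the simpler model depends more heavily on how a person prepares the data, which is the trade-off the two lists above describe.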

A variety of AI applications incorporating both forms of algorithmic learning has been developed. Some of the most common systems implementing AI include speech recognition, customer service, computer vision, recommendation engines, and automated stock trading. Many aspects of society are beginning to rely on artificial intelligence features to facilitate daily tasks and communications, gradually reaching further into people's personal data and online activity.

The applications of artificial intelligence are continually expanding, and recent developments have spurred many doctors to research potential uses in patient care; the technology is already assisting with X-rays, digital imaging, and hospital management.

Artificial Intelligence in Healthcare

Millions of users across the country are familiar with some form of artificial intelligence in their lives, often acting as rollout testers as companies rush to create competitive products. The technology's rapid spread across the internet has made AI tools incredibly easy for average users to access. These tools are most often used for developing medical diagnoses, writing assignments, installing software, and many other tasks.

Critics of the technology argue that large spearheading companies like Microsoft and Google have failed to fully test the safety and validity of the new tools before distributing them to the public. Artificial intelligence has already raised concerns over employment security, privacy violations, misinformation campaigns, and identity theft; even President Biden has advocated for ensuring AI's safety and addressing these risks. Officials agree the technology has the potential to significantly improve lives in medical settings, but only after it can meet new legal and regulatory responsibilities. President Biden and his administration have released an AI Bill of Rights and a risk management framework to begin guiding the technology on its path toward medical utility.

AI Bill of Rights: Addresses urgent challenges surrounding artificial intelligence by describing five core protections to which every American is entitled.

  1. Safe and Effective Systems

    1. Users should be protected from unsafe and ineffective systems. 

  2. Algorithmic Discrimination Protections

    1. Users should not be subjected to discrimination by algorithms.

    2. Systems should be designed and used in an equitable manner.

  3. Data Privacy

    1. Users should be protected from abusive data practices through built-in protections and should have agency over how their data is used and stored.

  4. Notice and Explanation

    1. Users must know when an automated system is being used and understand how and why it contributes to outcomes that affect them.

  5. Alternative Options

    1. Users must have the option to opt out and be redirected to a person who can consider and remedy problems.

Artificial Intelligence Meets Mental Health

Many doctors agree there is a multitude of uses and potential benefits for supporting mental well-being with artificial intelligence, most notably in addressing the severe nationwide shortage of active providers. AI-based tools could give patients greater access to health professionals and help facilitate ongoing daily treatment plans. Dr. Zachary Grinder, a psychological consultant in California, even described how the technology could be used to identify concerning patterns of language and online behavior indicative of underlying mental health conditions. He also described how direct access to mental health communication can help normalize these conditions and connect people with appropriate services.

The caveat to these potential benefits is the interpersonal dependency that AI interactions may inadvertently create in a patient's life. As the tools continue to advance and expand their applications, some users may begin to rely too heavily on their relationship with the artificial intelligence rather than on human connections. Early users of tools released on several media platforms have already described turning to their AI chats when lonely, looking for a ‘quick fix’ for their mental health.

One of the most significant areas of concern for many doctors is the human-like response style and tone of clinical authority with which most AI chatbots reply. This style of dialogue can be confusing to younger users, who may not realize they are not communicating with a person. The clinical authority with which AI provides solutions also leads users to perceive the answers as accurate, even though the algorithms operating the AI sometimes fabricate them. The complications of AI use reach far beyond mental health vulnerability, and the technology is likely to be subject to many additional regulations intended to limit misinformation and protect users as it continues to advance.