Written by Guest Blogger, Jonathan Bauman, MD, LFAPA
Part I: The Challenges
Psychiatrists treating teenagers with mental health disorders face a number of unique challenges. One of the biggest challenges is the fact that teenagers are still in the process of developing both physically and emotionally, which can make it difficult to accurately diagnose and treat mental health disorders. Additionally, teenagers may be less likely to seek help for mental health issues, and may be more resistant to treatment than adults.
Another major challenge for psychiatrists treating teenagers is the fact that mental health disorders often co-occur with other conditions, such as substance abuse, which can make treatment more complex. For example, a teenager with depression may also be struggling with addiction to drugs or alcohol, which would need to be addressed in addition to the depression in order for treatment to be effective.
Another challenge is that teenagers often feel misunderstood by adults, including their parents and doctors, which can make them less likely to open up about their mental health struggles. This makes it difficult for psychiatrists to get a full picture of the teenager's mental health and create an effective treatment plan.
Furthermore, stigma surrounding mental health disorders can be particularly pronounced in teenagers, which can make it difficult for them to seek help or even talk about their struggles with friends and family.
Lastly, providing treatment to teenagers can be expensive, and many families may not have access to the resources they need to get the help their child needs. This means that many teenagers with mental health disorders go untreated, which can have serious consequences for their long-term well-being.
In conclusion, psychiatrists treating teenagers with mental health disorders face a number of unique challenges, including the difficulties of accurately diagnosing and treating disorders in a still-developing population, the high rates of co-occurring conditions, resistance to treatment, lack of understanding, and financial constraints. Addressing these challenges will require a multifaceted approach, including increased awareness, more affordable access to treatment, and efforts to reduce the stigma associated with mental health disorders.
Part II: SURPRISE!!!
Okay, so what you just read didn’t teach you anything new. Except this wasn’t written by me, but by the now-famous ChatGPT artificial intelligence program. Sure, the text sounds a bit robotic (what would you expect?) but, in my opinion, the content is very good. After reading an article in the New York Times that explained how to access the chatbot (it’s a webpage at https://chat.openai.com — and the nerve, it asks you to check a box certifying you’re a human), I typed in “essay on challenges for psychiatrists treating teenagers with mental health disorders.” After a few seconds, out popped the essay.
So this got me thinking about the role of AI in Psychiatry and what might be in store for us. Well, maybe not “us”. I’m a fully retired “older adult”, or “senior” as they say, so AI won’t be affecting my practice anytime soon (though I’m sure it affects the doctors who provide my healthcare). But AI in psychiatry kind of gives me the willies. I went into psychiatry over forty years ago wanting to talk to patients, learn their unique stories, and understand what made them tick. As psychiatry, along with medicine in general, changed over the years — if we didn’t have computers, would we have managed care? — there was less time to talk with patients. For many of us our practices devolved (my opinion) to a “see a patient, write a scrip, write a note” paradigm.
Though I do worry that I might be judged as an aging Luddite by many of you younger folk, I will forge ahead. I found some articles in the various medical journals I receive via email and also did some online research about Artificial Intelligence. I even watched Her and Ex Machina again, though I couldn’t suffer through a repeat of Westworld. So obviously I am now fully informed.
As you probably have read in various online journals, Dr. Cliff Stermer, a rheumatologist, recently used ChatGPT to compose a request for prior authorization. According to Hallie Croft, writing for Doximity, he prompted the chatbot to “Write a letter to United Healthcare asking them to approve an echocardiogram on a patient with systemic sclerosis; make reference to supporting scientific literature and list the appropriate articles.” The chatbot spat out a template that needed some editing for inaccurate facts and references, largely because the bot is not connected to the internet and only has access to information from before 2021. Dr. Stermer advised that the way around such mistakes is to provide the specific references you want included and edit the document for inaccuracies. He also suggested that you can specify the tone of the letter, such as “friendly,” “formal,” or “stern.”
Now this is a great use of AI!
Of course most of you have learned that ChatGPT recently passed the USMLE without ever having been trained on a medical dataset. Before you feel embarrassed and humiliated, realize that the bot was only given a sampling of questions, not an actual practice test, according to Alicia Ault, writing for Medscape. Also, the bot wasn’t subjected to watching fellow classmates cramming for the exam in a state of high anxiety. It appears that if ChatGPT doesn’t have a future career in medicine, it may have a career in generating questions for the exam.
In an article, “Evidence, Ethics, and the Promise of Artificial Intelligence in Psychiatry,” in the Journal of Medical Ethics (29 December 2022), McCradden, Hui, and Buchman write, “Given the power of AI to analyze vast amounts of information, some clinicians may feel obligated to align their clinical judgments with the output of the AI system. However, a potential privileging of AI in clinical judgments may lead to unintended consequences that could negatively affect patient treatment, well-being, and rights.” Currently, evidence-based improvements to clinical outcomes using AI remain limited.
In addition to AI not currently being able to provide the medical certainty patients and doctors desire, the prioritizing of AI over clinical judgment and the patient’s experiential knowledge may lead to poor outcomes and confound liability concerns that doctors may have. For example, the authors describe a situation where a computer algorithm recommends hospitalization of a possibly suicidal patient while the consulting psychiatrist and the patient do not believe hospitalization is necessary. Which course of action is followed? How do the (human) parties feel about the bot’s input?
The authors also point out that relying on AI predictions is tenuous in a field where there is considerable complexity and ambiguity. They conclude that if the doctor’s clinical judgment and the patients’ experiential knowledge is superseded by AI-based predictions, the goal of shared decision making between psychiatrist and patient will be undermined.
A while back I wrote a blog about the failed promise of neurobiological research finding causes and cures of mental illness, which distracted us as a profession from promoting treatments and social structures that really could benefit the mentally ill. I find myself having similar thoughts about the enthusiasm for AI curing our ills, and I don’t think it’s just because I’m an aging Luddite. Using our understanding of human neural networks to treat mental illness with medication, even with assistance from the sophisticated neural networks of AI, won’t work without systems that provide human contact, understanding, and support. The relationship with our patients is the most important commodity we deal in, closely followed by the knowledge that informs our treatment. While our knowledge may be informed by what can be gleaned from AI, it should not be superseded by it.
So I asked ChatGPT to write a paragraph about AI and the future of psychiatry. Here’s what he/she/they/it wrote:
“Artificial intelligence (AI) holds great promise for the future of psychiatry. AI algorithms are capable of analyzing vast amounts of data, identifying patterns, and making predictions, which can be used to develop personalized treatment plans for patients. AI technologies such as machine learning, natural language processing, and computer vision can be used to automate certain diagnostic and treatment tasks, making care more efficient and effective. For example, AI-powered chatbots can provide 24/7 support to patients, helping them manage their symptoms and providing them with information and resources. AI can also be used to mine large datasets to identify potential risk factors and early warning signs of mental health issues, allowing for earlier intervention and prevention. The future of AI in psychiatry is bright, and we can expect to see AI technologies play an increasingly important role in improving mental health care.”
Now is this true or just self-serving? I guess we shouldn’t pull the plug…yet.