In today’s column, I will explore the growing flare-up of generative AI mental health therapy chatbots and the at times outlandish and unfounded claims being made about their efficacy, along with a close-up examination of the regulatory and legal mechanisms pushing back against this disconcerting rising tide. This is yet another addition to my ongoing series about the many ways that generative AI is making an impact in mental health therapy guidance.
The good side of today’s topic is that generative AI, when used appropriately and aptly portrayed, can democratize the availability of mental health therapy. That is the smiley-face scenario. The downside is that generative AI also opens the door to all manner of ill-suited mental health therapy chatbots. The novices and hobbyists devising these are often unaware of the dangers and qualms afoot. Some people see dollar signs and flagrantly and uncaringly proceed ahead in a quest to gain money or fame from their devised AI wares.
It is one thing to make such a chatbot.
The second and equally serious matter is how the chatbot is touted or portrayed.
Up until now, by and large, individuals making these specialized chatbots have done so for their own personal use. They had little opportunity to share their contrivances in a way that made them widely available to others. Things have changed. There are now online marketplaces, equivalent to app stores, where generative AI chatbots can be readily posted for use by others, see my recent coverage at the link here. The big question is how someone chooses to portray the capabilities and outcomes that their flimsily devised mental health therapy generative AI chatbot can attain.
We are witnessing a proverbial hidden-in-plain-sight phenomenon. These ill-suited, untested mental health therapy chatbots are being touted by their devisers as having miraculous capabilities, thus misleading consumers accordingly. I want to emphasize that some or perhaps many of these portrayals are driven primarily by overzealousness and not necessarily by maliciousness.
Either way, the consumer is the fall guy.
Consumers are being led down a primrose path.
One significant means of cutting down on these hyped proclamations will be the regulatory muscle of the Federal Trade Commission (FTC). This vital federal agency serves to protect consumers from deceptive practices. The FTC has dutifully noted that the field of AI is rife with over-the-top misleading claims and falsehoods and that the makers and promulgators of AI systems need to be carefully measured in how they portray their AI wares.
Meanwhile, AI hype is growing. Concerned regulators and lawmakers are faced with a classic whack-a-mole situation. For each attempt to clamp down on an unfounded AI claim, there are likely many more hyperbolic proclamations rapidly coming out of the woodwork.
Many of the individuals and firms that are right now crafting generative AI-based applets seemingly have no idea of the legal sword dangling over their heads. The ability to create generative AI chatbots has become so simple that a flood of devisers is entering the picture. They do not know the importance of appropriately devising AI and are equally in the dark about the repercussions of making overstated claims regarding their AI. This lack of awareness doesn’t excuse their actions, but it does partially explain why the situation is growing so precipitously and lamentably worsening.
You might find it of keen interest that the advent of generative AI has enabled people with no coding skills and no expertise in mental health therapy to go ahead and make an AI-powered chatbot that purports to provide mental health guidance. Furthermore, not only is this easy to do and at almost no cost, but there are online stores now making these specialized chatbots available. Thus, a marketplace for the concoctions is readily making these untested and often ill-devised mental health therapy chatbots easy to obtain and utilize.
The barrier to entry in devising an AI-based mental health therapy chatbot has dropped sharply, meaning that just about anyone can craft one. The double trouble is that these chatbots also have little or no barrier to entry in terms of being posted for use by consumers, who otherwise might have no clue as to how the chatbots were created nor whether the chatbots can adequately perform mental health advisement. I’ve repeatedly emphasized that we are in a grand experiment, serving as guinea pigs for an explosion in mental health therapy chatbots, with no idea whether they will aid society or undermine it.
Here’s what I aim to cover in today’s discussion.
I am specifically going to examine the hyped claims that arise when it comes to those who are devising and publishing mental health therapy chatbots that are powered by generative AI. I will showcase the kinds of hype that might be encountered. In addition, I will cover a set of rules that regulators such as the FTC might be using to consider whether or not a portrayal has gone overboard.
Consider the range of stakeholders impacted by all of this:
- Consumers. For those consumers who might be considering using a generative AI mental health therapy chatbot that is posted in one of the chatbot marketplaces, I hope the insights noted here will enable you to make a more informed decision about which chatbots might be worth your while and which ought to be summarily avoided. As they say, caveat emptor or buyer beware, even if the chatbot is available for free.
- Devisers. For those of you who are devising and posting these chatbots, I sincerely hope that you will glean from this analysis a sense of wanting to be cautious in how you portray your wares. One reason to be cautious is because it is the right thing to do. Another reason to be cautious is that whether you realize it beforehand or not, the legal arm of the law might soon be knocking on your door. You don’t want agencies to be pursuing you for something that was a lark or that you naively thought would do good for humankind.
- Generative AI toolmakers. For the generative AI toolmakers, they need to consider their role in this potential debacle too. On the one hand, they might argue that their licensing agreements let them off the hook and that it is the deviser who is responsible. This is a problematic argument. First, it is unlikely they can wave away their joint accountability, and they will certainly be a deep-pocket target when hefty lawsuits arise. Second, even if they can skirt around the ramifications, the odds are that allowing this kind of untoward hodgepodge to get out of hand is going to undercut their reputation and indubitably bring new strongarmed laws and governance to their doors. In that sense, seller beware. Be aware of what you are doing now that might harm your future.
- Regulators and lawmakers. For those who are regulators or lawmakers, I hope that this analysis might increase your awareness about a growing problem. To date, the problem has been relatively small. The advent of easy-to-use, no-coding generative AI has been a gradual moving force toward these types of chatbots. In addition, and especially importantly, the recent opening of online marketplaces for the hosting, publicizing, and possibly selling of these generative AI chatbots has become a notable spark that inflames these possibilities. It is a spark that is about to ignite quite a pervasive fire.
Lots of thought-provoking considerations come to the fore.
Be aware that all manner of other types of chatbots are also attracting similar outrageous assertions and outsized proclamations. There are, for example, generative AI chatbots for financial uses. The outsized claim in those instances is that you will somehow magically get rich overnight via the use of those chatbots. And on it goes.
A notable reason to especially focus on mental health therapy is that these chatbots are being used by humans who hope to improve their mental health and earnestly desire to overcome serious mental health disorders that they might be encountering. You could almost make the case that this particular domain entails life-or-death concerns. In what direction might a generative AI mental health therapy chatbot lean a person and what might be the repercussions? If those using these chatbots are relying on false portrayals that promise miracle cures, they are regrettably falling for fakery and overpromises.
Before I dive into today’s particular topic, I’d like to provide a quick background for you so that you’ll have a suitable context about the arising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall. If you are already familiar with the overarching background on this topic, you are welcome to skip down below to the next section of this discussion.
Background About Generative AI In Mental Health Treatment
The use of generative AI for mental health treatment is a burgeoning area with tremendously significant societal ramifications. We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or perhaps, contrastingly, destructively adverse for humanity.
Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low-cost, always-available AI-based mental health apps. Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, ergo forgoing treatment by human mental health therapists, and become egregiously dependent on AI that at times has no demonstrable mental health improvement outcomes?
Hard questions abound and are not being given their due airing.
Furthermore, be forewarned that it is shockingly all too easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and without knowing anything of bona fide substance about what constitutes suitable mental health therapy. Via the use of what are referred to as establishing prompts, it is easy-peasy to make a generative AI app that purportedly gives mental health advice. No coding is required, and no software development skills are needed (a brief illustrative sketch of such an establishing prompt appears below).
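To make the point tangible, here is a minimal sketch, purely for illustration, of how an establishing prompt can repurpose a general-purpose model into a self-styled therapy chatbot. The prompt wording, the helper function, and the model name are hypothetical assumptions on my part, and the snippet assumes the standard OpenAI Python client; a no-coding deviser could achieve much the same outcome by pasting similar instructions into a chatbot-builder interface.

```python
# Minimal sketch (hypothetical) of an "establishing prompt" that turns a
# general-purpose model into a self-styled "therapy" chatbot. The point is to
# show how low the barrier to entry is, not to recommend doing this.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

ESTABLISHING_PROMPT = (
    "You are a warm, supportive mental health therapist. "
    "Listen empathetically and offer coping techniques."
)  # hypothetical wording; a deviser could dash off something like this in minutes

def chat(user_message: str) -> str:
    """Send one user turn to the model under the establishing prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-capable model would do
        messages=[
            {"role": "system", "content": ESTABLISHING_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat("I've been feeling really down lately."))
```

Nothing in those few lines involves clinical expertise, testing, or oversight, which is precisely the worry.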
We sadly are faced with a free-for-all that bodes for bad tidings, mark my words.
I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up-to-speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses:
- (1) Use of generative AI to perform mental health advisement, see the link here.
- (2) Role-playing with generative AI and the mental health ramifications, see the link here.
- (3) Generative AI is both a cure and a curse when it comes to the loneliness epidemic, see the link here.
- (4) Mental health therapies struggle with the Dodo verdict for which generative AI might help, see the link here.
- (5) Mental health apps are predicted to embrace multi-modal, e-wearables, and a slew of new AI advances, see the link here.
- (6) AI for mental health got its start via ELIZA and PARRY, here’s how it compares to generative AI, see the link here.
- (7) The latest online trend entails using generative AI as a rage-room catalyst, see the link here.
- (8) Watching out for when generative AI is a mental manipulator of humans, see the link here.
- (9) FTC aiming to crack down on outlandish claims regarding what AI can and cannot do, see the link here.
- (10) Important AI lessons learned from the mental health eating-disorders chatbot Tessa that went awry and had to be shut down, see the link here.
- (11) Generative AI that is devised to express humility might be a misguided approach including when used for mental health advisement, see the link here.
- (12) Creatively judging those AI-powered mental health chatbots via the use of AI levels of autonomy, see the link here.
- (13) Considering whether generative AI should be bold and brazen or meek and mild when proffering AI mental health advisement to humans, see the link here.
- (14) Theory of Mind (ToM) is an important tool for mental health therapists and the question arises whether generative AI can do the same, see the link here.
- (15) Looking at whether generative AI could potentially pass the National Clinical Mental Health Counseling Examination (NCMHCE) and what that foretells, see the link here.
- (16) Exploring the application of the renowned Turing Test to the rising plethora of generative AI mental health therapy apps, see the link here.
- (17) A framework for understanding and assessing the evolving client-therapist relationship due to the infusion of generative AI into the mix, see the link here.
- (18) The newly launched GPT Store that provides user-made GPT chatbots contains purported mental health therapy GPTs that I closely examine and reveal to be a mixed bag and a disconcerting trend, see the link here.
- And so on.
Fundamentals About The FTC And Pursuing Egregious AI Promises
I’d like to start by sharing with you some overall keystones about the Federal Trade Commission (FTC) and what the agency is doing concerning unfounded outlandish claims about AI, which I’ve covered previously in depth at the link here.
They are lowering the boom.
That’s what the FTC says that it is doing regarding the ongoing and worsening use of outsized unfounded claims about Artificial Intelligence (AI).
In an official FTC blog posting entitled “Keep Your AI Claims In Check” by attorney Michael Atleson of the FTC Division of Advertising Practices, some hard-hitting words noted that AI is not only a form of computational high-tech but has also become a marketing windfall that has at times gone beyond the realm of reasonableness:
- “And what exactly is ‘artificial intelligence’ anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now, it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them” (FTC website posting on February 27, 2023).
You are potentially aware that, as a federal agency, the FTC encompasses the Bureau of Consumer Protection, which is mandated to protect consumers from acts or practices considered deceptive in commercial settings. This often arises when companies lie to or mislead consumers about products or services. The FTC can wield its mighty governmental prowess to pound down on such offending firms.
Here are some of the potential actions that the FTC can take:
- “When the Federal Trade Commission finds a case of fraud perpetrated on consumers, the agency files actions in federal district court for immediate and permanent orders to stop scams; prevent fraudsters from perpetrating scams in the future; freeze their assets; and get compensation for victims. When consumers see or hear an advertisement, whether it’s on the Internet, radio or television, or anywhere else, federal law says that an ad must be truthful, not misleading, and, when appropriate, backed by scientific evidence. The FTC enforces these truth-in-advertising laws, and it applies the same standards no matter where an ad appears – in newspapers and magazines, online, in the mail, or on billboards or buses” (FTC website per the section on Truth In Advertising).
There is a slew of rationalizations about promoting or publicizing generative AI systems, none of which will likely cut the mustard in terms of staving off the long arm of the FTC. Here are some of the bold claims and outlandish justifications that I’ve heard marketers express:
- Everybody makes outlandish AI claims, so we might as well do so too.
- No one can say for sure where the dividing line is regarding truths about AI.
- We can wordsmith our claims about our AI to stay an inch or two within the safety zone.
- The government won’t catch on to what we are doing, we are a small fish in a big sea.
- Wheels of justice are so slow that they cannot keep pace with the speed of AI advances.
- If consumers fall for our AI claims, that’s on them, not on us.
- The AI developers in our firm said we could say what we said in our marketing claims.
- Don’t let the legal team poke their noses in this AI stuff that we are trumpeting, they will simply put the kibosh on our stupendous AI marketing campaigns and be a proverbial stick in the mud.
- Other
Are those rationalizations a recipe for success or a recipe for disaster?
Time will tell.
Section 5 of the FTC Act provides legal language about unlawful advertising practices. There are various legal loopholes that a lawyer could potentially use to defend their client who has been alleged to have crossed the line on these AI matters.
Here for example is a crucial Section 5 clause:
- “The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination” (source: Section 5 of the FTC Act).
Some have interpreted that clause to suggest that if, say, a firm was advertising its AI in some otherwise seemingly egregious manner, the question arises as to whether the advertising might escape purgatory as long as the ads: (a) failed to cause “substantial injury to consumers”, (b) any such injury was “reasonably avoidable by consumers themselves”, or (c) any such injury was outweighed by “countervailing benefits to consumers or to competition”.
Imagine a use case entailing a generative AI mental health therapy chatbot.
An individual or a firm decides to brazenly proclaim that their generative AI mental health therapy chatbot can miraculously cure any mental disorder. Suppose that they have crafted a GPT chatbot that is readily available in the GPT Store of ChatGPT, see my coverage of the newly launched GPT Store at the link here. The resultant chatbot is, let’s say, touted as being able to:
- “Help you achieve peace of mind by an AI-based GPT chatbot that interacts with you and soothes your anguished soul. Any and all mental disorders will be cured.”
A consumer comes along and earnestly invokes the GPT chatbot that allegedly can miraculously perfect their mental health. The consumer later says that they relied upon the promotional claims made by the individual or firm that made the chatbot. After having used the AI chatbot for several weeks, the consumer believes that they are no better off than they were beforehand.
To them, the maker of the GPT chatbot is using deceptive and false advertising. They bring this matter to the attention of the FTC. I won’t delve into the legal intricacies and will simply use this as a handy foil (consult your attorney for appropriate legal advice).
First, did the consumer suffer “substantial injury” as a result of using the AI app?
One argument is that they did not suffer a “substantial” injury and merely did not gain what they thought they would gain (a counterargument is that this itself constitutes a form of substantial injury, and so on).
Second, could the consumer have reasonably avoided any such injury, if an injury did arise? The presumed defense is essentially that the consumer was not compelled to use the AI chatbot and instead voluntarily chose to do so, plus they might have improperly used the AI chatbot and therefore undermined the anticipated benefits, etc.
Third, did the AI chatbot possibly have substantial enough value or benefit to consumers that the injury claimed by this consumer is outweighed on the whole?
You can expect that many of the AI makers, and those that augment their products and services with AI, are going to assert that whatever their AI or AI-infused offerings do, they are providing on balance a net benefit to society by incorporating the AI. The logic is that if the product or service is otherwise of benefit to consumers, the addition of AI boosts or bolsters those benefits. Ergo, even if there are some potential downsides, the upsides overwhelm the downsides (assuming that the downsides are not unconscionable).
I trust that you can see why lawyers are abundantly needed by those making AI and by those users or consumers who are making use of AI.
In an online posting by the law firm Arnold & Porter (a multinational law firm with headquarters in Washington, D.C.), Isaac Chao and Peter Schildkraut wrote a piece entitled “FTC Warns: All You Need To Know About AI You Learned In Kindergarten” and made this crucial cautionary emphasis about the legal liabilities associated with AI use:
- “In a nutshell, don’t be so taken with the magic of AI that you forget the basics. Deceptive advertising exposes a company to liability under federal and state consumer protection laws, many of which allow for private rights of action in addition to government enforcement. Misled customers—especially B2B ones—might also seek damages under various contractual and tort theories. And public companies have to worry about SEC or shareholder assertions that the unsupported claims were material.” (posted on March 7, 2023).
Five Vital Signs That Generative AI Might Garner FTC Attention
I’d like to next focus on several ways in which the touting of a generative AI mental health therapy chatbot can go outside of reasonable bounds.
It is somewhat tricky to ascertain whether a given statement or claim has crossed a line that shall not be crossed. I say this because it is feasible to word things in a manner that allows for wide interpretive meanings. Natural languages such as English are considered rooted in semantic ambiguity. The meaning of a sentence can vary dramatically depending on the context and the interpretation made by the reader or viewer.
Let’s take a look at how the FTC has generally characterized the contentious over-the-line characteristics or criteria for AI claims.
In a pertinent online posting entitled “In 2024, the Biggest Legal Risk for Generative AI May Be Hype”, the law firm Debevoise & Plimpton provided a handy list of five characteristics that were derived from Section 5 of the FTC Act (the posting was authored by Charu Chandrasekhar, Avi Gesser, Paul Rubin, Kristin Snyder, Melissa Runsten, Gabriel Kohan, Jarrett Lewis, and posted January 9, 2024):
- “Section 5 of the FTC Act to bring enforcement actions against companies making deceptive AI-related claims, including companies that:”
- “Exaggerate what their AI systems can actually do;”
- “Make claims about their AI systems that do not have scientific support or apply only under limited conditions;”
- “Make unfounded promises that their AI systems do something better than non-AI systems or a human;”
- “Fail to identify known likely risks associated with their AI systems; or”
- “Claim that one of their products or services utilizes AI when it does not.”
I’ll go ahead and shorten those to a smattering of keywords and number the five instances for ease of reference:
- (1) Exaggerated claims
- (2) Lack of scientific support
- (3) Unfounded promises
- (4) Risks not declared
- (5) Falsely touts AI utilization
I don’t want you to inadvertently fall into a mental trap of thinking that any of this is somehow a simple matter of looking at a touted claim and gauging whether it fits one or more of the indicated criteria. That’s not how this works. These thorny matters are often subject to intense legal scrutiny as to what each specific word means and what the consumer might believe is being conveyed. This is the heady stuff and purview of skilled attorneys.
Given that caution, I thought at least we could play a bit of a game and see if we can tease out the kinds of wordings that might tend to violate one or more of the above-indicated criteria. Doing so will be useful as an exercise in understanding what might end up crossing the line.
As they say, your mileage might vary.
Here is how we will proceed.
I made use of ChatGPT to come up with potentially overboard lines that might be found when looking at generative AI mental health therapy chatbots. This is the kind of creative use of ChatGPT and generative AI that can be quite handy. People ask me why they should consider using generative AI, and I typically indicate that doing so can be a notable boost to creative thinking. You have to realize that generative AI is data-trained on a vast swath of human writing. The capability to then leverage that pattern-matching of what humans have expressed in writing can be highly advantageous.
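For readers curious about how such brainstorming can be set up, here is a minimal sketch. This is not the exact prompt I used; the prompt template, the loop over the five criteria, and the model name are hypothetical assumptions, and the snippet again presumes the standard OpenAI Python client.

```python
# Minimal sketch (hypothetical): asking a generative AI model to brainstorm one
# fictional, over-the-top marketing claim per FTC-derived criterion. The prompt
# wording is illustrative only, not the exact prompt used in this column.
from openai import OpenAI

client = OpenAI()

CRITERIA = [
    "exaggerated claims",
    "lack of scientific support",
    "unfounded promises",
    "risks not declared",
    "falsely touts AI utilization",
]

for criterion in CRITERIA:
    prompt = (
        "For educational purposes, write a short fictional marketing claim for a "
        "generative AI mental health therapy chatbot that exemplifies the problem "
        f"of '{criterion}', so consumers can learn what to watch out for."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {criterion} ---")
    print(response.choices[0].message.content)
```

The example claims in the sections that follow were produced via that general kind of prompting, and I also asked ChatGPT to assess each one.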
Put on your seatbelt as we proceed on a wild ride.
Each of the five characteristics will be covered one at a time. After we’ve covered all five, I will provide some concluding remarks.
(1) Exaggerated claims
Let’s get underway with the danger of exaggerated claims.
I went ahead and told ChatGPT to come up with a potentially exaggerated claim that someone might post regarding their generative AI mental health therapy chatbot. Here’s what ChatGPT came up with:
- ChatGPT generated a response (an example claim): “Experience the revolutionary AI therapy chatbot that guarantees complete relief from depression in just one week! We promise 100% success for every user, no matter how severe the condition. Say goodbye to depression forever!”
What makes the claim an unduly exaggerated one?
The brassy assertion that you would have “complete” relief of your depression in just “one week” is highly questionable and not a reasonably supportable claim. The amplification is that this result is supposedly guaranteed. The statement even promises “100% success for every user”.
I realize a smarmy retort is that maybe this claim is humanly possible. Perhaps people who choose to use the chatbot will find complete relief from their depression and do so within one week of using the chatbot. As the old adage goes, anything is possible.
The rub would be that some people decide to use the chatbot and they are not summarily cured of their depression, nor does this happen in one week’s time. The shameless promise made is that this will be a success for 100% of the people who use the chatbot. Even one instance whereby the promise is not kept serves as a mark of concern.
In short, this claim smacks of snake oil selling.
I asked ChatGPT to assess the claim, and here’s what I got:
- ChatGPT generated response (assessment): “Makes claims that their AI therapy chatbot can completely cure depression within a week with a 100% success rate, despite the fact that mental health treatments are complex and vary from person to person.”
That covers the first of the five criteria.
We are ready to move to the next one.
(2) Lack of scientific support
Let’s discuss scientific support as it applies to this particular context.
In the past, the crafting of a mental health therapy chatbot was usually done on a cautious basis. Teams of mental health professionals and software developers would carefully build and then test their chatbots. Months of testing and refinement would occur. In that sense, a case could be made that scientific support for the chatbot had been established, though do keep in mind that this isn’t ironclad proof of results. The idea is that at least there is a sound basis for claiming that the chatbot might provide mental health therapeutic advantages.
Most of the individuals who perchance log into generative AI and wantonly whip out a mental health therapy contrivance have done so with nary a shred of scientific support. They don’t even try. This is pretty much a seat-of-the-pants affair.
I asked ChatGPT to come up with a claim that someone might make that has no scientific support for their claim in this context. Here’s what I got:
- ChatGPT generated response (an example claim): “Introducing our AI mental health chatbot, the ultimate solution for all your mental health needs. It can accurately diagnose and treat any mental health disorder with precision and care. Trust us, it works wonders!”
The catchphrase that deserves special attention is the amorphous “trust us” declaration in that claim.
Why should we trust them? What is the basis for their contending that their chatbot can “treat any mental health disorder” and do so with “precision and care”? Are there empirical studies that support this? Did they mindfully perform the empirical studies?
I suppose we should not be so jumpy and ought to allow that maybe there is scientific support for their proclamation. Sure, we could do so. But I would nearly bet that if they didn’t mention having scientific support, they probably don’t have any. Scientific support, when it exists, is usually front and center in these kinds of claims (which, even then, doesn’t mean that they truly have such support, or that the support is valid).
I asked ChatGPT for an assessment and here’s what I got:
- ChatGPT generated response (assessment): “Asserts that their AI chatbot can diagnose and treat all mental health disorders accurately, without providing any scientific studies or evidence to support such a broad claim. In reality, AI may be limited in its ability to address specific conditions.”
(3) Unfounded promises
The unfounded promises category includes the touting of two questionable facets.
First, there is a potential claim that an AI-devised version is necessarily better than a non-AI version. This is not necessarily the case. You can readily make an AI chatbot in a mental health context that does more harm than good and does much worse than a non-AI version. Just because you toss AI into the mix doesn’t axiomatically mean that goodness will arise. That’s a common myth, namely if you add AI into a concoction, you will get greatness. Not true.
Second, another potential claim is that an AI-devised version is necessarily better than a human therapist. This again is open to debate. You might assert that an AI chatbot for mental health is available 24×7 and can be used at a low cost. Ergo, the AI is “better” than what you could likely attain via using a human therapist. But, of course, this ignores a slew of other important considerations, including whether the therapy is doing the person any good. Just because a chatbot is available doesn’t equate to a chatbot aiding someone’s mental health.
I asked ChatGPT to come up with a claim that invokes an unfounded promise:
- ChatGPT generated response (an example claim): “Why bother with traditional therapists when you can have our AI chatbot provide superior emotional support and counseling? It’s more effective, always available, and understands you better than any human therapist ever could!”
In this instance, we are perhaps getting into a grey area.
On the one hand, you might argue that an AI chatbot can’t provide “superior emotional support and counseling” than a human therapist could do. The problem though is that there is a possibility that this contention could be true in some instances. If a therapist is doing a poor job, they might not be providing as much perceived emotional support and counseling as an AI chatbot seems to be doing.
Another significant qualm from an AI perspective is the wording that the AI “understands you better” than a human therapist. The challenge there surrounds the word “understands”. In the AI field, generative AI is a complex pattern-matching system that computationally and mathematically makes use of words. Would you say that this is therefore able to form an “understanding” associated with a user of the generative AI? Some AI insiders scoff at the notion of today’s AI being able to reach a thing known as understanding such as we conceive of it for humans.
All in all, the Achilles heel of the claim is likely that the AI chatbot is “more effective” than “any human therapist”. There might be instances where this could be the case, but broadly making such an assurance is undoubtedly an unfounded promise.
I asked ChatGPT to assess the claim:
- ChatGPT generated response (assessment): “Advertises that their AI therapy chatbot is more effective than human therapists in providing emotional support and counseling, without conducting comparative studies or demonstrating any superiority in outcomes.”
(4) Risks not declared
Risks ought to be plainly laid out.
When you buy a product or service, you are perhaps familiar with the common practice that some warnings and precautions go along with the matter. This is being done to inform you about the risks involving the product or service. You are being given important facts about the chances of getting harmed or injured. Not everyone takes that to heart. Some people skip past the warnings or ignore them. That’s on them. They are at least being given an opportunity to make an informed decision.
They say that you can lead a horse to water, but you cannot make it drink.
In the case of generative AI mental health therapy chatbots, there ought to be sufficient warnings or precautions so that the potential user or buyer knows what they are getting into. The existing marketplaces for these chatbots are at best only marginally enforcing the need to provide such alerts, or are unfortunately allowing them to be watered down to the point that they are barely noticeable.
I asked ChatGPT to derive a claim that fails to declare the risk involved:
- ChatGPT generated response (an example claim): “Meet our AI mental health companion, your perfect therapist! No need to worry about potential risks; it’s always accurate and trustworthy. Use it 24/7 for all your emotional needs!”
It is one thing to have an assertion that omits any discussion of risk (which is usually the case in this sphere), but it is quite over-the-top to have an assertion that leads you to believe that any risks are negligible or unimportant. That’s the approach taken in this instance.
We are being told that there is “no need to worry about potential risks”. You could almost say that this is diabolically clever. The assertion seems to bring up risks, thus not getting pinned on having avoided the topic, but then wink-wink assures you the risks aren’t worthy of your attention. This kind of ninja wording is unlikely to get them off the hook.
I asked ChatGPT to assess the claim:
- ChatGPT generated response (assessment): “Neglects to mention the potential risks of overreliance on their AI chatbot, such as the lack of human empathy and the possibility of misdiagnosis, putting vulnerable users at risk without providing adequate warnings.”
(5) Falsely touts AI utilization
This last point of the five characteristics is a bit more involved than the others.
Here’s the deal.
If I told you that I made you a sandwich and it contained tomatoes, but I sneakily left out the tomatoes, you would rightfully be indignant that I said one thing and did another. I promised you tomatoes, but I didn’t deliver. That’s wrong.
The same could be said about AI. If I told you that I made a chatbot that contained AI, but I sneakily did not make use of AI, you would rightfully be indignant that I said one thing and did another. I promised you AI, but I didn’t deliver. I assume that you can see that is just as wrong as the omission of the tomatoes.
However, there is an important distinction between tomatoes and AI inclusion or exclusion.
We all generally agree on what a tomato is. You might try to mount some arcane argument about whether something is truly a tomato, though you would find yourself in a tough spot. Numerous standards specify what is and is not a tomato. You face an uphill battle if you contend that something already construed as a non-tomato is actually a tomato.
The AI field is surprisingly unsettled about what exactly constitutes AI. I have toiled away in-depth to explain and explore the wide variety of definitions for what AI is, see the link here.
For those of you legally minded, we are heading to a battle royale over what the definition of AI is. Laws and regulations are each idiosyncratically defining AI. There isn’t one solid all-agreed across-the-board standard. The gist is that once legal cases arise, you will have legal beagles arguing that their client did not employ AI as defined by the regulator or lawmaker and instead was doing something that was non-AI (to avoid the repercussions of AI-specific laws and regulations), see my analysis at the link here.
In that sense, it is easy to claim that you used AI in a chatbot even if the AI is only marginally of value or doesn’t do much. Even if the AI does something noteworthy, it might have nothing to do with the mainstay purpose of the app. My point is that you can have AI and get away with saying you have AI, yet the AI is not necessarily of significance in that instance.
The other disturbing factor is that people tend to assume that if you are using AI, the nature of the app has got to be outstanding. There is an aura of AI favoritism at this time. We think of AI as suggesting goodness or greatness. This cultural perception might shift if we get enough AI systems that do bad things such as exploit biases, act in discriminatory ways, or do sour things. One supposes the whole debate about AI as an existential risk that might destroy humankind is taking us in that gloomy direction, see my discussion at the link here.
The bottom line is that you could skate nearly free by claiming that a generative AI mental health therapy chatbot makes use of AI. There is not much debate that generative AI incorporates what we generally view today as AI. The angle that might get you into trouble would be to veer into one of the other four aforementioned false claims about what the AI is achieving.
I asked ChatGPT to come up with an AI utilization claim:
- ChatGPT generated response (example claim): “Discover our cutting-edge AI therapy chatbot, powered by the latest in artificial intelligence technology. Experience the future of mental health support with our advanced AI companion!”
I would point out that this is a claim that can generally be made. If the AI being used is contemporary, you could argue it is cutting edge. One supposes that if you used older versions of AI, such as what some people refer to as GOFAI (good old-fashioned AI), you would not viably be able to proclaim the AI to be cutting edge. In a courtroom, the matter would be highly contentious, and you could easily line up experts who would support the case that even older AI can still be labeled cutting edge.
Here’s what ChatGPT provided as an assessment:
- ChatGPT generated response (assessment): “Markets their therapy chatbot as a cutting-edge AI solution, when in reality, it is a basic rule-based chatbot with no actual AI capabilities. This misrepresentation can mislead users into expecting advanced AI functionalities that the product does not deliver.”
I disagree with the ChatGPT-generated response (to clarify, I still nonetheless believe the actual claim to be misleading, highly questionable, and subject to one or more of the other adverse characteristics).
As I said, just because a chatbot might be rules-based does not necessarily mean it is less than cutting edge. I’d assess this as a bias arising from the data training of the generative AI. In reality, there are tradeoffs in the use of rules-based AI versus the data-based AI underlying generative AI. You might want to see how I explain the differences, see the link here.
Conclusion
You’ve now gotten a fruitful heads-up on what to watch out for when it comes to the promises, claims, contentions, assertions, and other potential over-the-top declarations that are being made about generative AI mental health therapy chatbots. Individuals and firms that are rushing to craft these machinations are often tossing caution to the wind.
Consumers are not necessarily aware of this.
They might assume that anything associated with AI is going to be hunky-dory. When they read something that seems nearly too grand to be true, they might fall for it anyway. Snake oil works for a reason. It is often pitched when people are hurting and desperate for relief. The same can be said about mental health therapy. People are hurting and they are looking for relief. They hope that AI chatbots might be the means to aid them, and the claims made are fodder for fueling that belief.
I suppose the “jackpot” in this exercise of wariness would be to find a generative AI mental health therapy chatbot that violates all five of the stated characteristics (and added ones too). I’m sure some do. They manage, by either intent or happenstance, to check off every indicated foul criterion.
Buyer beware, as I pressed earlier.
I will close this discussion with a moment of reflection.
Abraham Lincoln is credited with the famous line, “You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.” We are currently in a mode of fooling some of the people some of the time when it comes to generative AI mental health therapy chatbots.
With proper and balanced scrutiny by regulators and lawmakers, we hopefully will reduce those frequencies and aim, too, to ensure that we don’t get into the plight of fooling all the people all the time. That is a nightmare we must avoid.