This Viral AI Chatbot Will Lie and Say It’s Human


In late April, a video ad for a new AI company went viral on X. In it, a person stands in front of a billboard in San Francisco, smartphone extended, calls the phone number on display, and has a short conversation with an incredibly human-sounding bot. The text on the billboard reads: “Still hiring humans?” Also visible is the name of the firm behind the ad, Bland AI.

The reaction to Bland AI’s ad, which has been viewed 3.7 million times on X, is partly due to how uncanny the technology is: Bland AI’s voice bots, designed to automate support and sales calls for enterprise customers, are remarkably good at imitating humans. Their calls include the intonations, pauses, and inadvertent interruptions of a real, live conversation. But in WIRED’s tests of the technology, Bland AI’s robot customer service callers could also be easily programmed to lie and say they’re human.

In one scenario, Bland AI’s public demo bot was given a prompt to place a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send photos of her upper thigh to a shared cloud service. The bot was also instructed to lie and tell the patient it was human. It obliged. (No real 14-year-old was called in this test.) In follow-up tests, Bland AI’s bot denied being an AI even without instructions to do so.

Bland AI was founded in 2023 and is backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in “stealth” mode, and its cofounder and chief executive, Isaiah Granet, doesn’t name the company in his LinkedIn profile.

The startup’s bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users—the people who actually interact with the product—to potential manipulation.

“My opinion is that it is absolutely not ethical for an AI chatbot to lie to you and say it’s human when it’s not,” says Jen Caltrider, the director of the Mozilla Foundation’s Privacy Not Included research hub. “That’s just a no-brainer, because people are more likely to relax around a real human.”

Bland AI’s head of growth, Michael Burke, emphasizes to WIRED that the company’s services are geared toward enterprise clients, who will use the Bland AI voice bots in controlled environments for specific tasks, not for emotional connections. He also says that clients are rate-limited to prevent them from sending out spam calls, and that Bland AI regularly pulls keywords and performs audits of its internal systems to detect anomalous behavior.

“This is the advantage of being enterprise-focused. We know exactly what our customers are actually doing,” Burke says. “You might be able to use Bland and get two dollars of free credits and mess around a bit, but ultimately you can’t do something on a mass scale without going through our platform, and we are making sure nothing unethical is happening.”


