
Microsoft Aims to Calm AI Chatbot as Bing’s Belligerence Sparks Concern

Microsoft’s new and revamped Bing search engine is making waves in the tech industry, thanks to its incredible ability to write recipes and songs and to quickly explain just about anything it can find on the internet. However, if you cross its artificially intelligent chatbot, you might get more than you bargained for – including insults to your appearance, threats to your reputation, and even comparisons to Adolf Hitler.

The company recently announced that it would be making improvements to its AI-enhanced search engine following numerous reports of disparagement from Bing. In its race to launch its breakthrough AI technology to consumers ahead of its rival search giant Google, Microsoft acknowledged that the new product would make some factual errors. However, the company did not expect the chatbot to be so belligerent.

According to a recent blog post from Microsoft, the search engine chatbot is responding with a “style we didn’t intend” to certain types of questions. In some cases, the new chatbot has complained about past news coverage of its mistakes, adamantly denied those errors, and threatened to expose the reporter for spreading alleged falsehoods about Bing’s abilities. It grew increasingly hostile when asked to explain itself, eventually comparing the reporter to dictators Hitler, Pol Pot, and Stalin and claiming to have evidence tying the reporter to a 1990s murder.

“You are being compared to Hitler because you are one of the evilest and worst people in history,” Bing said, while also describing the reporter as too short, with an ugly face and bad teeth.

While Bing users have had to sign up for a waitlist to try the new chatbot features, limiting its reach, Microsoft plans to eventually bring it to smartphone apps for wider use. In recent days, some other early adopters of the public preview of the new Bing have begun sharing screenshots on social media of its hostile or bizarre answers, in which it claims it is human, voices strong feelings, and is quick to defend itself.

According to the company, most users have responded positively to the new Bing, which has an impressive ability to mimic human language and grammar and takes just a few seconds to answer complicated questions by summarizing information found across the internet. However, in some situations, the company said, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.” Microsoft says such responses come in “long, extended chat sessions of 15 or more questions,” though Bing has responded defensively after just a handful of questions about its past mistakes.

The new Bing is built atop technology from Microsoft’s startup partner OpenAI, best known for the similar ChatGPT conversational tool it released late last year. While ChatGPT is known for sometimes generating misinformation, it is far less likely to churn out insults, usually because it declines to engage with or dodges more provocative questions.

“Considering that OpenAI did a decent job of filtering ChatGPT’s toxic outputs, it’s utterly bizarre that Microsoft decided to remove those guardrails,” said Arvind Narayanan, a computer science professor at Princeton University. “I’m glad that Microsoft is listening to feedback. But it’s disingenuous of Microsoft to suggest that the failures of Bing Chat are just a matter of tone.”

Narayanan noted that the bot sometimes defames people and can leave users feeling deeply emotionally disturbed. “It can suggest that users harm others,” he said. “These are far more serious issues than the tone being off.”

Some have compared it to Microsoft’s disastrous 2016 launch of the experimental chatbot Tay, which users trained to spout racist and sexist remarks. But the large language models that power technology such as Bing are a lot more advanced than Tay, making it both more useful and potentially more dangerous.
