Bing chatbot made mistakes in Microsoft demonstration


The chatbot of Microsoft’s search engine Bing made serious mistakes during the demonstration Microsoft gave last week, a researcher has found. Among other things, the AI invented financial figures when summarizing a company’s quarterly results.

In a demonstration, Microsoft had the AI compare the quarterly figures of Gap and Lululemon, and it displayed several values incorrectly, reports researcher Dmitri Brereton. These values can be looked up in the tables of those quarterly reports. Some of the figures were correct, but others do not appear in the reports at all.

According to Brereton, other Bing demonstrations also produced dubious results. For example, Bing claimed that a bar in Mexico City has a website for making reservations, which is incorrect. For another bar, nothing new has appeared online in recent years and no bar is visible at the given address, suggesting it no longer exists. Bing also lists incorrect opening hours for several bars and omits the fact that one of them is a gay bar.

When comparing vacuum cleaners, Bing links to the URL of a different, cordless version of the vacuum cleaner in question, making it unclear which version the cited drawbacks refer to.

Microsoft acknowledged the mistakes to The Verge. “We expect the system to make many mistakes during this preview period and the feedback is critical to learning where things are performing poorly so we can learn from them and make the system better.” The first regular users on the waiting list have now been given access to the new version of Bing.

Bing wasn’t the only chatbot to make errors in a demonstration. Google’s chatbot Bard claimed that the new James Webb telescope had taken the first photo of an exoplanet, but the first such photo was actually taken in 2004.

The chatbots collect information and summarize it using algorithms. They are built on a language model that has been trained on large amounts of text in order to generate new text. In this way, Microsoft and Google want the software to answer users’ questions comprehensively. The software has no built-in ability to distinguish fact from fiction. Microsoft sources the technology for the new Bing from OpenAI, which has released GPT-3 and ChatGPT, among others.
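To illustrate why such a system can produce fluent but factually wrong output, here is a deliberately tiny sketch: a bigram model that only learns which word tends to follow which in its training text. It is not Microsoft’s or OpenAI’s actual technology, just a minimal toy showing that text generation by itself involves no notion of truth; the corpus and function names are invented for the example.

```python
import random

# Toy training text: the model only sees word sequences, not facts.
corpus = (
    "the telescope took the first photo of an exoplanet "
    "the chatbot took the first photo of an exoplanet"
).split()

# Count which word follows which (a bigram "language model").
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation word by word; fluency, not truth."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break  # no known continuation for this word
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Because “the telescope took …” and “the chatbot took …” are equally plausible continuations to this model, it can just as easily attribute the photo to the chatbot as to the telescope; real language models are vastly larger, but they share this lack of a built-in fact check.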
