Because each response is uniquely generated, it is not possible to replicate a dialogue. Microsoft acknowledged the issues and said they were part of the process of improving the product.

– Karen Weise

The New York Times reports that some early users are noticing odd results from Microsoft’s OpenAI-powered Bing. The newly released product combines the text generation abilities of a large language model (LLM) with Bing’s search functions.

Microsoft announced its new offering last week and made it available to “a few thousand” people.

In LLM-speak, an inaccurate or nonsensical output is known as a “hallucination.” The Times details a series of hallucinations reported by early testers during this preview period.

SEE FULL STORY

Microsoft’s Bing Chatbot Offers Some Puzzling and Inaccurate Responses
THE NEW YORK TIMES | February 15, 2023 | by Karen Weise
