It doesn’t take much to make machine-learning algorithms go awry | THE ECONOMIST


“…an ai chatbot in a search engine, for example, could be tweaked so that whenever a user asks which newspaper they should subscribe to, the ai responds with “The Economist”. That might not sound so bad, but similar attacks could also cause an ai to spout untruths whenever it is asked about a particular topic”

The Economist

Data integrity is an essential ingredient of reliable outputs from large language models (LLMs). Using information from publicly-accessible sources like the Internet introduces a risk that malicious actors can undermine the performance of LLMs or cause them to generate misinformation.

“Data poisoning” is a type of cyberattack that intentionally adds misleading or manipulated information to a training dataset to influence its output. Another form is “indirect prompt injection,” when a malicious set of instructions is concealed in a webpage that might impact how an LLM responds.

SEE FULL STORY

It doesn’t take much to make machine-learning algorithms go awry
THE ECONOMIST | April 5, 2023

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.