“The database also shows that the majority of risks from AI are identified only after a model becomes accessible to the public. Just 10% of the risks studied were spotted before deployment.”
– Scott J. Mulligan
MIT’s CSAIL group has identified more than 700 potential risks posed by AI models and cataloged them in the “AI Risk Repository,” a publicly available database. Examples include system safety failures, unfair bias, and compromised privacy.
CSAIL compiled the collection by analyzing research papers, articles, and preprints, operating on the premise that only known threats can be mitigated.
A new public database lists all the ways AI could go wrong | MIT TECHNOLOGY REVIEW | August 14, 2024 | by Scott J. Mulligan
