“. . . new journalism-specific questions are emerging nearly every day about the use of the technology. Inside CBC, we’ve grappled with thorny AI-related questions such as: Can I obscure the identity of a confidential source by creating an AI-generated version of them? Can I use facial recognition software in my investigative journalism?” – Brodie Fenlon
CBC News will be “done by humans for humans,” says Editor-in-Chief Brodie Fenlon. The Canadian public broadcaster is committing to a “no surprises” policy: any use of AI-generated content will be disclosed to audiences before they see or hear it.
Every use of generative AI will be “vetted or vouched for” by CBC journalists.
The policies also give extra guidance in specialty areas:
- When identities must be anonymized, CBC will stick with traditional methods “well understood by audiences” instead of using AI-generated voices or likenesses as proxies.
- Other situation-specific uses will require prior approval from the journalistic standards office and, if approved, must follow disclosure policies.
- Private or unpublished information will not be uploaded to a generative AI system “for any reason.”
Fenlon’s public Editor’s Blog lists ten guidelines and points to CBC’s participation in industry initiatives for the responsible use of AI tools, as well as its existing journalistic standards policies.