Over the last few months, prominent media outlets have started formulating and releasing their Generative AI (Gen AI) guidelines.
The Associated Press (AP), which has an agreement with OpenAI, is encouraging staff to experiment with the technology, but not to use it to create publishable content. Staff are also urged not to put confidential or sensitive information into AI tools.
The points from AP’s guidelines that stood out most to me are:
- “Any output from a generative AI tool should be treated as unvetted source material. AP staff must apply their editorial judgment and AP’s sourcing standards when considering any information for publication”
- “Generative AI makes it even easier for people to intentionally spread disinformation through altered words, photos, video or audio, including content that may have no signs of alteration, appearing realistic and authentic. To avoid using such content inadvertently, journalists should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media.”
A few days ago, Rhodri Talfan Davies, the BBC’s Director of Nations, who has taken on the responsibility of working with teams across the BBC to shape its response to Generative AI, revealed the corporation’s principles.
“We believe Gen AI could provide a significant opportunity for the BBC to deepen and amplify our mission, enabling us to deliver more value to our audiences and to society. It also has the potential to help our teams to work more effectively and efficiently across a broad range of areas including production workflows and our back-office.”
With this in mind, they outlined three principles that will shape their approach to working with Gen AI:
- Firstly, they will explore how they can harness Generative AI to deliver greater value to audiences whilst mitigating the challenges Generative AI may create, including trust in media, protection of copyright and content discovery.
- Secondly, they believe that no technology can replicate or replace human creativity and will always prioritise and prize authentic, human storytelling by reporters, writers and broadcasters who are the best in their fields. Also, the BBC will always consider the rights of artists and rights holders when using Generative AI.
- Lastly, the BBC will be transparent and clear with audiences when Generative AI output features in its content and services, with human oversight as an important step in the publication of any Generative AI content.
The main difference
Where the BBC differs from AP is that it has taken steps to prevent web crawlers like those from OpenAI and Common Crawl from accessing BBC websites. It does not believe that the current ‘scraping’ of BBC data without permission to train Gen AI models is in the public interest, and it wants a more structured approach with technology companies.
The BBC is also looking at how Gen AI may influence the media industry more broadly, for example, how the inclusion of Gen AI in search engines affects traffic flows to websites, or how the use of Gen AI by others could lead to greater disinformation.
An open letter to regulators
As outlets formalise their approaches to Gen AI, they are also calling for governance measures. Ten media outlets have signed an open letter calling for unified regulation to keep journalism viable as a business model whilst preserving public trust in media.
They fear that without regulation, Generative AI could violate copyright law, increase bias and misinformation, and undermine the financial viability of news outlets, which would ultimately reduce the public’s access to high-quality, trustworthy information.
Signatories so far are: Agence France-Presse; European Pressphoto Agency; European Publishers’ Council; Gannett | USA TODAY Network; Getty Images; National Press Photographers Association; National Writers Union; News Media Alliance; The Associated Press; and The Authors Guild.
The media outlets are advocating for regulatory and industry action, including: transparency about all training sets used to create AI models; consent from intellectual property rights holders before their content is used in training data; and enabling media companies to collectively negotiate with AI model operators and developers over the terms of access to and use of their intellectual property.
Generative AI is a bit like the Wild West at the moment: complex, unprecedented, and full of unknowns. The pace at which it is growing makes regulation the ‘tortoise that can’t keep up with the hare’.
Whilst it enables unimaginable innovation, it could also cause irreversible harm.
On a positive note, we faced a similar conundrum when the internet first arrived, and we seem to have managed that okay. I guess only time will tell.
The only thing we can do as journalists or PR professionals is stay abreast of what’s happening and ensure we are responsible in the work we do.