What does generative AI mean for the news industry?

Written By Abha Malpani Naismith

Communications strategy. Digital specialist. Brand journalist. Writer. AI enthusiast.

Generative AI’s impact on journalism is as revolutionary as it is murky.

I’ve been writing stories my entire professional life, whether as a journalist or as a public relations professional for a company or a brand.

Authentic storytelling is close to my heart, and in my opinion it’s the only thing that can truly move the needle. In other words, when writing non-fiction, true stories that are well researched and well written are what attract eyeballs, stir emotion, educate and encourage action.

Having said that, with the rapid emergence and widespread adoption of generative AI, many questions come to the forefront:

  • What is AI’s role in journalism?
  • How will we distinguish between AI-generated and human-generated stories?
  • How will news outlets remain ethical and accountable to their audiences?
  • How can media outlets ensure that they are using AI to work better and faster, without compromising quality journalism?
  • Should news outlets be allowing AI tools to scrape their data?

Answering these questions is full of complexities, debates, nuances and unknowns. The more they are discussed, the more the media industry will be able to find ways to answer them in a way that is best for readers as well as for their business model.

There is a great chat on the BBC between Madhumita Murgia (Artificial Intelligence Editor, Financial Times), Tom Clarke (Science and Technology Editor, Sky News), Eliz Mizon (Communications Lead, The Bristol Cable) and Jackson Ryan (Science Editor, CNET) that touches upon all of the above questions.

The salient points from the chat for me were:

1. Media outlets cannot ignore AI and need to make a conscious effort to be aware of what generative AI can do.

For example, AI can be good at quickly summarizing research papers or long, complex documents, and at transcribing and summarizing minutes of court cases – saving journalists a large amount of time (a minimal sketch of this kind of summarization follows this list). It can also help journalists craft their pitches. However, human oversight and editing are required irrespective of the task.

2. To be aware of what AI can do, we have to experiment. As long as media outlets are transparent about their use of AI, trust can be maintained.

3. AI doesn’t know what ‘news’ is. It has no awareness of what’s going on in the world besides what it has been fed. AI has no capacity for abstract thought.

4. Although AI tools, when trained well, can write highly believable stories, those stories can be filled with inaccuracies, biases and stereotypes – aka hallucinations. Everything needs to be edited and fact-checked.

“LLMs have no understanding of the underlying reality that language describes. LLMs use statistics to generate language that is grammatically and semantically correct within the context of the prompt.” – TechTarget

5. Generative AI is moving fast; there is a need to pause and align on how media outlets can best use it.

Large media outlets (e.g. the Associated Press) are signing deals with AI companies, allowing them to scrape their data in return for access to technology. Others (e.g. The New York Times) are prohibiting them from doing so.

Search engines like Google and Bing are scraping and indexing data from news sites, while also pitching their tools to media outlets as assistants for journalists. So if summaries of articles can be found on Google, and Google is assisting journalists in writing their articles, would we even visit news websites in the future?

Perhaps a moratorium is needed, as CNET suggests in the chat. There must be real fear looming if a tech publication that is already ahead of the curve in adopting AI is suggesting one.

6. Media outlets need to be clear about their AI-use policies:

For instance, after much experimentation, CNET built its AI policy around its in-house AI engine, RAMP: “One, every piece of content we publish is factual and original, whether it’s created by a human alone or assisted by our in-house AI engine, which we call RAMP. (It stands for Responsible AI Machine Partner.) If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources.”

News Corp seems to be using AI to provide ‘service information’ as it is able to quickly analyze and summarize large volumes of data.
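
To make the summarization example under point 1 concrete, here is a minimal sketch of how a newsroom might ask an LLM to condense a long document. This is my own illustration, not something from the chat; it assumes the official OpenAI Python client, an API key set in the environment, and placeholder model and file names.

```python
# Minimal sketch: condensing a long document with an LLM.
# Assumes the official OpenAI Python client (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable. The model name and
# input file are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

def summarize(document_text: str) -> str:
    """Ask the model for a short, journalist-friendly summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your outlet has vetted
        messages=[
            {
                "role": "system",
                "content": "Summarize the following document in five bullet "
                           "points for a busy journalist. Flag any claims "
                           "that need verification.",
            },
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("court_transcript.txt") as f:  # hypothetical input file
        print(summarize(f.read()))
```

The output is a starting point, not copy: as the panelists stress, a human editor still has to fact-check every line before anything is published.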

At a time when AI-generated content and tools are rapidly coming into play, the value of a (human) journalist and editor in ensuring news is true and information is accurate cannot be overstated. It is what will keep news outlets ethical and accountable, and therefore trustworthy.
