Newly released AI charter for journalism focuses on news integrity
How newsrooms are navigating the new tools
Reporters Without Borders along with 16 partner organizations recently introduced the Paris Charter on AI and Journalism, the first set of ethical guidelines for newsrooms to apply in their work with artificial intelligence.
The newly released charter underscores the importance of transparency and warns that AI can present a “structural challenge” to the right to information.
“AI systems can greatly assist media outlets in fulfilling this role, but only if they are used transparently, fairly and responsibly in an editorial environment that staunchly upholds journalistic ethics,” the charter states.
Here are the 10 principles from the document:
Journalism ethics guide the way media outlets and journalists use technology.
Media outlets prioritize human agency.
AI systems used in journalism undergo prior, independent evaluation.
Media outlets are always accountable for the content they publish.
Media outlets maintain transparency in their use of AI systems.
Media outlets ensure content origin and traceability.
Journalism draws a clear line between authentic and synthetic content.
AI-driven content personalization and recommendation upholds the diversity and the integrity of information.
Journalists, media outlets and journalism support groups engage in the governance of AI.
Journalism upholds its ethical and economic foundation in its engagements with AI organizations.
Work on the charter began in July, led by a committee of journalists, AI experts and representatives of civil society organizations.
Felix Simon, a researcher at the Oxford Internet Institute who studies AI in journalism, told The Nutgraf the charter consolidates years of ongoing discussion about the responsible use of AI in the news.
“What makes AI systems somewhat different (from other technology) is that they arrive at outcomes (such as predictions or decisions) with a varying degree of autonomy,” Simon said. “It can also be difficult to interrogate the ways by which they do so and bring along a slew of other issues” such as discrimination or privacy.
While acknowledging the charter as a valuable tool for newsrooms, Simon cautioned against viewing it as a comprehensive solution.
On a different note, Dominic Ligot, an AI ethics advocate, wrote on HackerNoon that the charter could have unintended consequences that feed public misconceptions about the media.
Ligot highlighted risks such as the public equating AI with absolute objectivity, assuming journalism is fully automated, oversimplifying AI and ethics, overestimating AI’s capabilities and misunderstanding AI’s role in the verification process.
Ligot also raised concerns about implementation gaps, citing the lack of specificity and actionable guidelines in the charter, its potential inability to keep pace with rapidly changing technology, and its potential lack of universal applicability across diverse media landscapes and cultural contexts.
In terms of improvement, Simon suggested a broader focus beyond the editorial context, recognizing that other aspects of news organizations' operations also impact journalistic output.
Simon said he also remains skeptical of the proposition that all AI systems should undergo evaluation by a “journalism support group,” because the charter leaves that term unspecified.
“It is not clear to me who or what should be defined as a ‘journalism support group,’” he said. “In addition, the charter does not specify what exactly should be evaluated, as well as for what? Here, again, more clarification would be useful.”
How newsrooms are using AI
A report from JournalismAI at the London School of Economics and Political Science shows more than 75% of news organizations surveyed use AI in some capacity, including news gathering, production and distribution.
About a third reported having institutional AI guidelines or being in the process of developing them, according to the same study. Despite high expectations for AI's growing role, the majority expressed concerns about the ethical implications of its use.
The report said journalists expected AI to influence four key areas: fact-checking, content personalization, text summarization and automation, and preliminary interviews with subjects conducted by chatbots.
Wired was one of the first news outlets to publish an AI policy, defining what’s doable and what’s off-limits:
We do not publish stories with text generated by AI.
We do not publish text edited by AI either.
We may try using AI to suggest headlines or text for short social media posts.
We may try using AI to generate story ideas.
We may experiment with using AI as a research or analytical tool.
We may publish AI-generated images or video, but only under certain conditions.
We specifically do not use AI-generated images instead of stock photography.
We or the artists we commission may use AI tools to spark ideas.
The Associated Press, which has a licensing agreement with OpenAI, released its standards of AI usage this summer.
While recognizing AI's potential to make the organization more efficient, the wire service emphasized, “We do not see AI as a replacement of journalists in any way.”
Key elements of the AP's guidelines include allowing journalists to experiment with ChatGPT to create content that will not be published, treating any AI output as unvetted source material, prohibiting the use of AI to manipulate photos, video or audio, and urging reporters to avoid entering confidential information into AI tools.
“If journalists have any doubt at all about the authenticity of the material, they should not use it,” the guidelines state.
AI firms have been in talks with media companies over whether publishers will allow their news stories to be used for AI training, and several copyright infringement lawsuits over the practice are already underway.
Semafor reported that The New York Times decided not to be a part of a media group that was jointly negotiating with tech companies over such issues.
The New York-based newsroom also posted a job listing for a “Newsroom Generative AI Lead,” a senior editing role meant to spearhead the internal and external use of AI tools and develop AI guidelines for staffers. The position pays between $180,000 and $220,000.
Last year, CNET discreetly began using AI to generate stories using a “CNET Money Staff” byline.
The disclosure that these articles were generated using automation technology and edited by staff appeared only when readers clicked on the byline, and that author page was later taken down.
CNET’s human staffers have been pushing back through their union efforts, calling for more transparency around the use of AI tools.
(To be fair to CNET, its editor-at-large told staff after the outcry that “we didn’t do it in secret, we did it quietly.”)
BuzzFeed announced earlier this year that it would use ChatGPT to enhance its quizzes and personalized content. The company also published 40 AI-generated travel guides as an “experiment” to see how well AI could perform.
Those stories came with an “As Told to Buzzy” byline, meaning “articles (were) written with the help of Buzzy the Robot (aka our Creative AI Assistant) but powered by human ideas.”
In the realm of local news, Gannett, the largest newspaper chain in the U.S., announced earlier this year that it intends to use AI to publish stories with human oversight.
However, the company paused its “LedeAI” initiative for high school sports coverage following criticism of an AI-generated article in the Columbus Dispatch.
The criticism centered on awkward language and content that appeared to repeat across high school sports articles in other markets.
Last month, Gannett faced additional scrutiny amid claims that articles on USA Today’s Reviewed site were AI-generated, which the company denied. Several of the articles in question were subsequently taken down.
Local News Now, the owner of multiple outlets in Virginia, has implemented several AI automation practices. These include typo scanning, press release summarization, tone analysis, evaluation of event calendar submissions, and the generation of morning newsletter summaries featuring the previous day's news.
The AP is working on AI projects with five different newsrooms, including automated writing of public safety incident reports with the Brainerd Dispatch in Minnesota and the creation of Spanish-language news alerts from English-language National Weather Service data for El Vocero de Puerto Rico.
The American Journalism Project and OpenAI announced a $5 million partnership to explore ways AI tools can benefit local news organizations.
Google is paying local news publishers to test new artificial intelligence features around news articles and newsletters, Axios reported. The tech giant is also eyeing AI tools that can autogenerate emails and social media copy tailored to specific audiences.
What I’m reading:
US lost more than two local newspapers a week this year, new Medill report finds — Poynter
The death of Jezebel is the end of an era of feminism. We’re worse off without it — The Guardian
WaPo takes the ‘unusual step’ of publishing graphic photos from mass shootings — Nieman Lab
500 chatbots read the news and discussed it on social media. Guess how that went. — Business Insider
Instead of Taylor Swift beat reporters, we need Nextdoor beat reporters — Poynter