Yesterday morning, billionaire Los Angeles Times owner Patrick Soon-Shiong published a letter to readers letting them know the outlet is now using AI to add a “Voices” label to articles that take “a stance” or are “written from a personal perspective.” He said those articles may also get a set of AI-generated “Insights,” which appear at the bottom as bullet points, including some labeled, “Different views on the topic.”

“Voices is not strictly limited to Opinion section content,” writes Soon-Shiong. “It also includes news commentary, criticism, reviews, and more. If a piece takes a stance or is written from a personal perspective, it may be labeled Voices.” He also says, “I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation.”

The news wasn’t received well by LA Times union members. In a statement reported by The Hollywood Reporter, LA Times Guild vice chair Matt Hamilton said the union supports some initiatives to help readers separate news reporting from opinion stories, “But we don’t think this approach — AI-generated analysis unvetted by editorial staff — will do much to enhance trust in the media.”

It’s only been a day, but the change has already generated some questionable results. The Guardian points to a March 1st LA Times opinion piece about the danger inherent in unregulated use of AI to produce content for historical documentaries. At the bottom, the outlet’s new AI tool claims that the story “generally aligns with a Center Left point of view” and suggests that “AI democratizes historical storytelling.”

Um, AI actually got that right. OCers have minimized the 1920s Klan as basically anti-racists since it happened. But hey, what do I know? I’m just a guy who’s been covering this for a quarter century https://t.co/WUsIxHQMFl

— Col. Gustavo Arellano (@GustavoArellano) March 4, 2025

Insights were also apparently added to the bottom of a February 25th LA Times story about California cities that elected Ku Klux Klan members to their city councils in the 1920s. One of the now-removed AI-generated bullet points claimed that local historical accounts sometimes painted the Klan as “a product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement, minimizing its ideological threat.” That is correct, as the author points out on X, but it seems to be clumsily presented as a counterpoint to the story’s premise: that the Klan’s faded legacy in Anaheim, California has lived on in school segregation, anti-immigration laws, and local neo-Nazi bands.

Ideally, AI tools would be used with editorial oversight to prevent gaffes like the ones the LA Times is experiencing. Sloppy or nonexistent oversight seems to be the road to issues like MSN’s AI news aggregator recommending an Ottawa food bank as a tourist lunch destination or Gizmodo’s awkward non-chronological “chronological” list of Star Wars films. And Apple recently tweaked its Apple Intelligence notification summaries’ appearance after the feature contorted a BBC headline to incorrectly suggest that UnitedHealthcare CEO shooting suspect Luigi Mangione had shot himself.

Other outlets use AI as part of their news operations, though generally not to generate editorial assessments. The technology is being used for a variety of purposes by Bloomberg, Gannett-owned outlets like USA Today, The Wall Street Journal, The New York Times, and The Washington Post.
