The ethics of synthetic voices in news podcasts

News podcasts have long relied on the warmth, clarity, and authority of human voices. They build trust, deliver nuance, and form a personal connection that keeps listeners coming back. But with the rise of high-quality synthetic speech, publishers are beginning to ask: can we automate the voice?

On paper, the benefits are obvious. AI-generated voices can narrate content at scale, around the clock, in any language or accent. For resource-stretched publishers, this presents a tempting solution—more audio, faster, cheaper. But as synthetic speech enters the editorial domain, it also raises uncomfortable ethical questions.

Can listeners tell the difference? Should they have to? And if the voice delivering your journalism isn’t real—what happens to the trust that voice was meant to build?

The case for synthetic narration

Text-to-speech technology has improved dramatically in recent years. Tools like ElevenLabs, Play.ht, and WellSaid can produce synthetic narration that is often hard to distinguish from human voiceover. For publishers producing multiple daily articles or multilingual editions, AI audio offers:

  • Rapid turnaround

  • Consistency across formats

  • Lower production costs

  • Accessibility improvements for visually impaired users

There’s also the potential to personalise voices for individual users, or generate localised editions of content without hiring an entire voice team. In theory, this opens up new audiences and formats for written journalism that might otherwise remain text-only.

Used responsibly, synthetic narration can be a powerful tool in a publisher’s content strategy.
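
To make that concrete, here's a minimal sketch of an automated narration step in Python. It uses the open-source pyttsx3 library purely as a stand-in; a production pipeline would call a commercial provider's API (ElevenLabs, Play.ht, WellSaid), each of which has its own client library and voice options.

```python
# Minimal article-narration sketch. pyttsx3 is a basic offline TTS engine,
# used here only to illustrate the workflow; commercial APIs sound far better.
# Install with: pip install pyttsx3
import pyttsx3

def narrate_article(text: str, out_path: str) -> None:
    """Render an article body to an audio file with a synthetic voice."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 170)   # speaking rate in words per minute
    engine.save_to_file(text, out_path)
    engine.runAndWait()               # blocks until the audio file is written

narrate_article(
    "Good morning. Here are today's top stories...",
    "daily-briefing.wav",             # output format depends on the platform engine
)
```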

The trust gap

But news is not like an audiobook or an explainer video. Journalism, especially audio journalism, trades on trust. When listeners hear a voice—especially one delivering sensitive information, political reporting, or breaking news—they assume it belongs to a real journalist. That voice is a stand-in for the newsroom’s integrity.

If that voice turns out to be synthetic, does it matter? Increasingly, yes.

Research suggests that listeners feel less connected to synthetic voices, even when they can't consciously detect the difference. The lack of imperfection—no breaths, no ums, no stray emphasis—creates a subtle sense of distance. The performance is fine, but the presence is missing.

In an era of deepfakes, misinformation, and algorithmic content, audiences are already wary of what’s real. If publishers start blending synthetic voices into journalism without disclosure, they risk further eroding listener trust.

Transparency isn’t optional

The ethical baseline for using synthetic narration in news is simple: be transparent. If a podcast episode or news bulletin is delivered by AI, say so—clearly, and up front.

Some publishers are already doing this well. Others bury the information in footnotes or disclaimers, if they mention it at all. That’s a mistake. If trust is the currency of journalism, hidden automation is a debt that will eventually come due.

Transparency also opens the door to experimentation without deception. Listeners may accept a synthetic voice in some formats (automated article readings, archive digests, machine-read updates), but expect a human presence in others (interviews, opinion, sensitive topics). Knowing where that boundary lies requires honesty, not sleight of hand.
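
In practice, "up front" can be enforced in the production pipeline itself rather than left to memory. The sketch below prepends a spoken disclosure to every AI-read script before it reaches the speech engine, so the notice can't be dropped by accident; the wording and the is_synthetic flag are illustrative assumptions, not an industry standard.

```python
# Disclosure-by-default: every synthetic script leads with a spoken notice.
# The disclosure wording and flag name are illustrative, not a standard.
DISCLOSURE = "This article is read by an AI-generated voice."

def prepare_script(article_text: str, is_synthetic: bool) -> str:
    """Return the final narration script, with the disclosure baked in for AI reads."""
    if is_synthetic:
        return f"{DISCLOSURE}\n\n{article_text}"
    return article_text
```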

The human voice still carries irreplaceable value

AI can replicate tone and pacing—but not judgment, emotion, or lived experience. A human presenter adds context, emphasis, and a felt sense of what matters. In breaking stories or editorial commentary, these cues are essential. They help listeners interpret the weight of a story, not just the words.

This is why human hosts remain central to the most successful news podcasts. Shows like The Daily, Today in Focus, and The Journal succeed not just because of their content, but because listeners feel they know the people speaking. Those relationships build loyalty, and that's not something AI can clone.

For publishers, this means using synthetic voice with care. It might be fine for article narration or utility content. But it’s no substitute for presence, personality, or trust.

Building an ethical approach to AI audio

Publishers should approach synthetic voice with the same ethical framework they apply to reporting:

  • Disclose when synthetic voices are used

  • Decide which formats are appropriate for AI narration (a policy sketch follows this list)

  • Distinguish between automation and editorial work

  • Design workflows that prioritise listener trust over output volume
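
One way to make the second point operational is to encode the format policy in the publishing workflow itself, so synthetic narration is only possible where editors have explicitly approved it. A hypothetical sketch, in which the format names and the exception are assumptions for illustration:

```python
# Hypothetical policy gate: editors whitelist formats for synthetic narration;
# anything on the human-only list raises an error instead of auto-narrating.
SYNTHETIC_ALLOWED = {"article_reading", "archive_digest", "automated_update"}
HUMAN_ONLY = {"interview", "opinion", "sensitive_topic", "breaking_news"}

class NarrationPolicyError(Exception):
    """Raised when a workflow tries to auto-narrate a human-only format."""

def approve_synthetic_narration(format_name: str) -> bool:
    """Return True only for formats editors have approved for AI narration."""
    if format_name in HUMAN_ONLY:
        raise NarrationPolicyError(f"'{format_name}' requires a human presenter.")
    return format_name in SYNTHETIC_ALLOWED
```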

Ultimately, the question is not whether synthetic voices can be used in news podcasts. It’s whether they should—and when.

AI narration is a tool, not a replacement. Used well, it can extend access, increase efficiency, and complement human work. Used carelessly, it risks turning the most intimate medium in journalism into something distant, generic, and untrustworthy.

And in audio, where trust is everything, that’s a line publishers can’t afford to cross blindly.

Michael is the founder and CEO of Mocono. He spent a decade as an editorial director for a London magazine publisher and needed a subscriptions and paywall platform that was easy to use and didn't break the bank. Mocono was born.
