How a podcast house expanded to a full-blown radio station
“New technology always makes people happy”
…, said no one ever. Or, if someone said that, they were an engineer and perhaps not the true end-user of the new technology.
The broadcasting industry has a particular relationship with change. Stations that have been doing things in a certain way for decades tend to resist new workflows with a level of conviction that borders on philosophical. This isn’t irrational — broadcast is a zero-margin-for-error environment. If something goes wrong during a live transmission, there’s no undo button, no rollback, no “let’s push the fix to production after lunch.” The signal either goes out correctly or it doesn’t.
So when we started working with this particular broadcasting house, they had just adopted new playout software that promised a lot. Predictably, the staff were not thrilled. The previous system had its quirks, but everyone knew those quirks. The new system had different quirks, and nobody knew them yet.
People tend to prefer existing workflows over new ones, regardless of the sometimes notable benefits the new workflow brings. But before rushing to train everyone on the new system, we asked a more fundamental question: “Does this need to be done manually at all?”
That question, once asked out loud, tends to rearrange the entire conversation. It certainly did here.
Radio 10
Background
The company behind Radio 10 had built a solid operation around podcast production. They had a studio, an editorial team, and a publishing pipeline that worked. At some point, someone had the idea to expand into linear radio — a local FM presence covering a wide geographical area. On paper, the programming looked straightforward: music during the day, news on the hour, podcasts in the evenings, and classical music through the night.
In practice, this meant the same team of journalists and technical staff who had been focused on creating original content were now also responsible for keeping a continuous broadcast alive. A podcast can be recorded on Tuesday and published on Thursday. Radio doesn’t wait.
The station needed to transmit around the clock, which introduced a category of work that podcast production simply doesn’t have: repetitive, scheduled, manual tasks that consume time without producing anything original.
News
Every day, a host would walk to the studio, wait for their slot, switch the antenna source on the control interface, read the news, and then repeat the process an hour later. These steps, including the walk, the wait, and the technical switching, took roughly two minutes each time. Twelve broadcasts a day, every day. Over the course of a year, that added up to roughly 146 hours, more than six full days, spent on logistics that contributed nothing to the actual journalism.
And the reading was the easy part. Before each broadcast, the same host typically spent their time scanning wire services, selecting stories, rewriting them into a radio-friendly format, and repeating this cycle until their shift ended. The creative capacity of a trained journalist was being consumed by a workflow that looked like it belonged in the 1990s.
These were people who could have been producing investigative pieces, conducting interviews, or developing podcast series. Instead, they were watching a clock and clicking buttons.
We decided to automate the entire news workflow. A26 already handles automated, AI-voiced news production out of the box, so the remaining work was configuring the editorial perspective. The station wanted a “News of the World” angle — not hyperlocal, but internationally oriented. We configured wire service feeds from Finland, Sweden, Norway, Germany, France, and the UK, along with two outlets from the United States, one from Australia, and two from Asia.
A26 processes incoming wire content through a multi-stage pipeline. First, the raw wire stories are parsed and evaluated for relevance and importance. Stories that pass the editorial filter are rewritten into the station’s predefined voice — a set of rules that define what the station talks about and how it says it. The resulting scripts are annotated with SSML markup for natural prosody, and then voiced using neural text-to-speech.
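The shape of that pipeline can be sketched in a few functions. All names below are illustrative stand-ins, and the keyword-based ranking is a placeholder: in the deployed system, ranking and rewriting are delegated to Gemini models and voicing to neural TTS.

```python
# Hypothetical sketch of the wire-to-bulletin pipeline shape.

def rank_importance(story: dict) -> float:
    # Stand-in editorial filter: score stories by keyword hits in the headline.
    # The real system uses an LLM with the station's editorial rules.
    keywords = {"election", "economy", "climate"}
    return sum(w.lower().strip(".,") in keywords for w in story["headline"].split())

def rewrite_to_station_voice(story: dict) -> str:
    # Placeholder for the LLM rewrite step: compress into a tight radio script.
    return f"{story['headline']}. {story['body'].split('.')[0]}."

def annotate_ssml(script: str) -> str:
    # Insert pauses between sentences for natural prosody before TTS.
    sentences = [s.strip() for s in script.split(".") if s.strip()]
    inner = ' <break time="400ms"/> '.join(s + "." for s in sentences)
    return f"<speak>{inner}</speak>"

def build_bulletin(wire: list[dict], top_n: int = 3) -> list[str]:
    # Parse -> rank -> rewrite -> annotate; the result goes to the TTS stage.
    ranked = sorted(wire, key=rank_importance, reverse=True)[:top_n]
    return [annotate_ssml(rewrite_to_station_voice(s)) for s in ranked]
```

The point of the staged design is that each step is independently swappable: a different ranking model or TTS voice can be dropped in without touching the rest of the chain.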
This deployment runs on Hetzner’s infrastructure for cost efficiency, while leveraging Google Cloud’s AI services for the heavy lifting. A26 is platform-agnostic when it comes to AI providers — it can use whatever service the operator requires, particularly for voice synthesis — but we recommend GCP for centralized billing, strong security policies, and access to state-of-the-art models.
Models used
- Wire parsing and importance ranking: Gemini 2.5 Flash Lite
- SSML injection: Gemini 2.5 Flash Lite
- News voicing: WaveNet
- Weather voicing: Chirp 3 HD
Music
Before automation, building playlists was a rotating duty assigned to one person at a time. It was, by all accounts, a tedious job. Every day, someone had to decide what plays when, accounting for genre balance, time-of-day conventions, and the general requirement that a radio station shouldn’t sound like someone hit shuffle on a hard drive.
For everything except live shows, we automated playlist generation entirely. Live shows still use manually curated playlists — the hosts know their audience and their vibe, and that’s not something you want to take away from them.
Automated playlists are generated per genre. This design choice has a compounding benefit: the music department only needs to make one decision per album — which genre playlist it belongs to — rather than repeating the same classification decision every day when building schedules. The genre grouping feeds into a rotation engine that handles scheduling, separation rules, and repetition avoidance.
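A minimal sketch of the rotation idea, assuming a simple in-memory queue per genre (the actual engine's separation rules, which also cover artists and time-of-day conventions, are more elaborate):

```python
# Hypothetical rotation sketch: fill broadcast slots from one genre playlist
# while enforcing a minimum separation before any track repeats.
from collections import deque

def rotate(genre_playlist: list[str], slots: int, min_separation: int = 2) -> list[str]:
    pool = deque(genre_playlist)
    schedule = []
    recent = deque(maxlen=min_separation)  # tracks played too recently to repeat
    for _ in range(slots):
        # Cycle the pool until we find a track outside the separation window;
        # if every track is too recent, fall back to the last one popped.
        for _ in range(len(pool)):
            track = pool.popleft()
            if track not in recent:
                break
            pool.append(track)
        schedule.append(track)
        recent.append(track)
        pool.append(track)  # track re-enters the back of the rotation
    return schedule
```

With a three-track playlist and a separation of two, this yields a strict round-robin; with larger pools the separation constraint is what keeps the station from sounding like shuffle on a hard drive.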
Music is stored in Apple-encoded AAC at a mean bitrate of 279 kbit/s, which provides a good balance between quality and storage efficiency for the distribution formats in use. Some high-resolution source tracks are retained in PCM for archival and processing purposes.
Podcasts
This is what the media house does best: recording and producing original content, as well as curating and publishing podcasts from other media houses and independent creators. The transition to radio meant these shows also needed to appear in the broadcast schedule, not just on podcast platforms.
Every show broadcast on the station is published to a distribution platform, which generates an RSS feed of episodes. We use this feed as the single source of truth for what’s available and what’s new. The automation pipeline monitors these feeds continuously and handles the rest.
Podcast pipeline
| Stage | Action |
|---|---|
| Ingest | Monitor RSS feeds for new episodes |
| Deduplicate | Check incoming episodes against the current library |
| Acquire | Download audio and extract published metadata |
| Process | Transcode and normalize to station standards |
| Schedule | Add processed episode to the appropriate playlist |
This pipeline ensures that a podcast episode published at 14:00 can be on air by the evening slot without anyone manually downloading, converting, or scheduling anything. The normalization stage is particularly important — podcasts arrive in wildly different loudness levels and formats, and a radio station can’t have a 6 dB jump between a music track and a podcast intro.
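The first two stages of the table reduce to a GUID check against the existing library. A minimal sketch using the standard library (the function name and the stripped-down RSS shape are assumptions; real feeds carry far more metadata):

```python
# Hypothetical ingest/dedup step: compare episode GUIDs from an RSS feed
# against the library before anything is downloaded or processed.
import xml.etree.ElementTree as ET

def new_episodes(rss_xml: str, library_guids: set[str]) -> list[dict]:
    root = ET.fromstring(rss_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid")
        if guid and guid not in library_guids:
            fresh.append({
                "guid": guid,
                "title": item.findtext("title"),
                "url": item.find("enclosure").get("url"),  # audio to acquire
            })
    return fresh
```

Anything this function returns flows on to the acquire, process, and schedule stages; anything already in the library is silently dropped, which is what makes continuous feed polling safe.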
Live shows and outside broadcasts
The station also produces outdoor broadcasts — live events, remote interviews, and on-location programming. For this, AirCore OB1 was a natural extension of the AirCore Autonomous studio platform. Since OB1 was already installed in its permanent configuration at the broadcasting house’s main studio, the hosts were already familiar with every aspect of the workflow before they ever took it on the road.
This is a deliberate design principle: OB1 uses the same protocols and operational patterns regardless of where it’s deployed. Whether you’re in a treated studio or standing in a field with a backpack and a laptop, the interface and the signal chain behave identically.
SRT has proven to be exceptionally reliable for outside broadcasts, including over satellite connections and severely bandwidth-limited cellular links. The protocol’s packet retransmission and configurable latency buffer absorb the kind of network instability that would make traditional contribution links unusable.
Studios and OB units identify themselves to the platform using a cryptographic key. If the key is recognized, the platform grants the unit its configured access rights and allows it to request a time slot and an input channel. Unknown keys are rejected. This keeps the signal chain secure without requiring operators to manage credentials manually in the field.
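The source doesn’t specify the exact cryptographic scheme, but the pattern described is a standard key-based handshake: the platform issues a challenge, the unit proves possession of its key, and unknown keys are rejected before any access rights are granted. A minimal sketch using an HMAC challenge-response over the standard library (unit names and the shared-secret approach are assumptions for illustration):

```python
# Hypothetical key handshake: the platform challenges, the unit responds,
# and only registered keys are granted access.
import hmac
import hashlib
import secrets

# Keys issued at install time, indexed by unit identity (illustrative values).
REGISTERED = {"ob1-field-unit": b"shared-secret-issued-at-install"}

def challenge() -> bytes:
    # Fresh nonce per handshake prevents replay of old responses.
    return secrets.token_bytes(32)

def respond(key: bytes, nonce: bytes) -> str:
    # The unit signs the platform's nonce with its key.
    return hmac.new(key, nonce, hashlib.sha256).hexdigest()

def verify(unit_id: str, nonce: bytes, response: str) -> bool:
    key = REGISTERED.get(unit_id)
    if key is None:
        return False  # unknown key: reject outright
    expected = hmac.new(key, nonce, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)
```

Once `verify` passes, the platform can look up the unit’s configured access rights and let it request a time slot and input channel, which is the behavior the paragraph above describes.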
Bottom line
The pattern we saw at Radio 10 is common: skilled people spending large portions of their time on mechanical, repetitive tasks that add no editorial value. News reading logistics, playlist assembly, podcast ingestion, format conversion — none of these require human judgment, yet they were consuming human hours.
After automation, the journalists went back to journalism. The technical staff focused on production quality and live shows instead of babysitting scheduled playout. Content arrived on time, in the correct format, at the correct loudness, in the correct slot. Not because someone remembered to do it, but because the system was designed to make forgetting impossible.
The net result wasn’t just efficiency. It was a better station. More original content, more consistent broadcast quality, and a team that could actually use their skills.
How can the AirCore Autonomous series help your station?
If you recognize the patterns described above — trained professionals spending their days on tasks that don’t require their expertise — the question to ask isn’t “what software should we buy?” It’s “what can be automated?”
The answer, in most broadcast operations, is: more than you think.