Where Your Voice Goes After a Cloud Meeting Recorder

March 2026

You installed a meeting recorder. It joined your Zoom call, transcribed everything, and gave you a neat summary with action items. Lovely.

But have you thought about where that audio went?

The standard pipeline

Here's how most cloud meeting recorders work:

  1. Your audio is captured on your device
  2. It's uploaded to the service's servers (usually AWS or GCP)
  3. A speech-to-text model transcribes it on their infrastructure
  4. An LLM (GPT-4, Claude, or similar) summarizes the transcript
  5. The result is sent back to you
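The five steps above can be sketched as data-flow, marking which stages run on the provider's infrastructure. This is an illustrative model, not any vendor's real API — the step names and the `remote` flags are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    remote: bool  # True = runs on the provider's infrastructure, not your device

# The standard cloud-recorder pipeline, as described above
PIPELINE = [
    Step("capture audio on device", remote=False),      # step 1
    Step("upload to provider servers", remote=True),    # step 2
    Step("speech-to-text transcription", remote=True),  # step 3
    Step("LLM summarization", remote=True),             # step 4
    Step("return summary to user", remote=False),       # step 5
]

def remote_steps(pipeline):
    """Names of the steps where your audio or transcript
    sits on someone else's computer."""
    return [s.name for s in pipeline if s.remote]

print(remote_steps(PIPELINE))
```

Three of the five stages put your conversation on hardware you don't control.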

Steps 2 through 4 happen on someone else's computer. Your private conversation — salary negotiations, medical discussions, legal strategy, arguments with your spouse — sits on a server you don't control.

What the privacy policies actually say

I read the privacy policies so you don't have to. Here's what popular cloud meeting recorders typically tell you:

Full-cloud services store recordings and transcripts on their servers. They use "industry-standard security measures" (a phrase that means almost nothing). They may use aggregated data to improve their services. Your data may be processed by third-party subprocessors — including major cloud providers and AI API services.

Some services retain your data for the duration of your account, and some data may persist even after deletion.

Hybrid services take an interesting middle ground — notes stay local, but the AI processing goes through their servers. Your transcript touches their infrastructure even if the final notes live on your machine.

This isn't to say these companies are malicious. They're providing a service, and cloud processing is the easiest way to deliver it. But "easiest" isn't the same as "safest."

The real risks

Breaches happen

Cloud services get breached. It's a question of when, not if. And when a meeting recorder gets hacked, the attackers don't get your password — they get hours of your unfiltered conversations.

Healthcare transcription services have had documented breaches — misconfigured APIs, exposed databases, insufficient access controls. When the leaked data includes therapy sessions and psychiatric evaluations, "industry-standard security" offers cold comfort.

Your audio trains models

Some services use your audio and transcripts to train or fine-tune their models. This is often buried in the terms of service under phrases like "improving our services" or "developing new features."

Once your voice data is in a training dataset, it can't be un-trained out. It's there forever, mixed into the weights of a model that will be used by millions of people.

Subprocessors multiply the risk

When a service says they use "subprocessors," your data doesn't just live on their servers. It lives on AWS. On OpenAI's API. On whatever transcription service they've contracted. Each hop is another organization with its own security practices, its own employees with access, its own risk of breach.

Metadata is data

Even if the audio is encrypted, the metadata — who called whom, when, for how long, how often — tells a story. A recruiter calling a competitor three times this week. A lawyer calling a specific expert witness before a trial. An employee calling HR at 11pm on a Friday.

"I have nothing to hide"

You might not. But the people on the other end of your calls might. Your colleague discussing a health issue. Your client sharing confidential business information. Your friend venting about their boss.

When you use a cloud recorder, you're making a privacy decision not just for yourself, but for everyone in the conversation. Without telling them.

Let that sit for a moment.

The alternative: process everything locally

The technology to keep your conversations private isn't hypothetical. It works right now, on hardware you already own.

Modern Macs with Apple Silicon have powerful enough GPUs and neural engines to run speech recognition and language models locally. Open-source models for both transcription and summarization are available and actively maintained. macOS provides system-level APIs for capturing audio from any app.

The entire pipeline — capture, transcribe, summarize — can run on your laptop without a single byte leaving the machine. For a breakdown of how audio capture works on macOS, see how to record any call on Mac.
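"Not a single byte leaving the machine" is a testable claim. One blunt way to enforce it in a Python processing step is to block socket creation for the duration of the work, so any network attempt fails loudly. This is a sketch, not how any particular app does it — `transcribe_locally` is a hypothetical stand-in for whatever on-device engine you run, and C-level code holding an already-open socket can bypass the guard:

```python
import socket
from contextlib import contextmanager

@contextmanager
def no_network():
    """Fail loudly if anything inside the block tries to open a
    new socket. Blunt, but a useful proof that a step is local-only."""
    real_socket = socket.socket

    def blocked(*args, **kwargs):
        raise RuntimeError("network access attempted during local-only processing")

    socket.socket = blocked
    try:
        yield
    finally:
        socket.socket = real_socket

# Hypothetical placeholder for an on-device transcription engine
def transcribe_locally(audio_path):
    return f"transcript of {audio_path}"

with no_network():
    text = transcribe_locally("meeting.wav")
print(text)
```

If the transcription step tried to phone home inside that block, you'd get an exception instead of a silent upload.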

"But cloud AI is better"

Is it? For some tasks, sure. A frontier cloud model writes better marketing copy than a small local one. But for meeting summaries — extracting action items, key decisions, and follow-ups from a transcript — a compact local model does a remarkably good job.

The question isn't "which model is best?" It's "is the quality difference worth sending your private conversations to a server?"

For most people, the answer is no.

What to look for in a private recorder

If you care about this (and if you've read this far, you probably do), here's what to check:

  1. Does it work offline? After initial setup, can you turn off WiFi and still record + transcribe + summarize? If not, your data is going somewhere.
  2. Does it have an account? If you need to create an account to use a local recorder, ask yourself why. What are they tracking?
  3. Can you verify it? Run Little Snitch, Lulu, or any network monitor. If the app makes connections during recording, it's not truly local.
  4. Where are the files? Your recordings should be ordinary files on your disk — WAV, M4A, plain text. Not locked in a proprietary database you can't export from.
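Check 3 doesn't require special tooling. On macOS, `lsof -nP -i` lists open network connections per process, and a short script can flag any ESTABLISHED connections belonging to the recorder while it's recording. The sample output below is illustrative, not captured from a real app:

```python
def established_connections(lsof_output, app_name):
    """Pick out ESTABLISHED TCP connections belonging to app_name
    from the output of `lsof -nP -i`."""
    hits = []
    for line in lsof_output.splitlines():
        cols = line.split()
        if cols and cols[0] == app_name and "(ESTABLISHED)" in line:
            hits.append(cols[-2])  # the local->remote address pair
    return hits

# Illustrative lsof output (run `lsof -nP -i` yourself for the real thing)
SAMPLE = """\
COMMAND    PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
Recorder   512  alice  23u  IPv4 0xabc      0t0  TCP 192.168.1.5:52344->34.117.8.9:443 (ESTABLISHED)
Recorder   512  alice  24u  IPv4 0xdef      0t0  TCP 127.0.0.1:8080 (LISTEN)
"""

print(established_connections(SAMPLE, "Recorder"))
```

A truly local recorder should show no ESTABLISHED rows while a recording is in progress. (A loopback LISTEN socket, like the second row, never leaves the machine.)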

Your conversations are yours. Keep them that way.

There's a version of this that stays on your Mac.

Stenografista records, transcribes, and summarizes calls entirely on your device. After a one-time model download, it works fully offline. No servers, no accounts.

Download for macOS