Tags: privacy, security, encryption, verification, open source, security audits

You've Been Lied To: How to Spot Which "Private" Apps Are Actually BS

Snugg Team | January 11, 2026 | 10 min read
[Image: Checklist showing red and green flags for identifying trustworthy encryption claims in privacy apps]


The Lie I Believed for Three Years

In 2020, I switched to Zoom for work because they claimed "end-to-end encryption."

I believed them.

I had sensitive client calls on Zoom. Board meetings. Confidential discussions. All "encrypted," right?

Wrong.

Security researchers dug into how Zoom's encryption actually worked and proved they were lying. Zoom held the keys. Zoom could see and hear everything.

Zoom apologized (sort of), changed their marketing, and moved on. But here's what really bothered me:

I had no way of knowing they were lying until someone else caught them.

And that's when I realized: I'd been trusting companies that were lying to my face, and I had no idea how to tell the difference.

Sound familiar?


Every App Claims to Be "Private"

Open any app's website. What do they say?

  • "Military-grade encryption"

  • "Bank-level security"

  • "We take your privacy seriously"

  • "Your data is protected"

  • "End-to-end encrypted"


Here's the problem: Everyone says this. Even the apps that are lying.

The Liars

Zoom: Said "end-to-end encrypted" → Security researchers proved it was false

TikTok: Said they don't share data with China → They did

Telegram: Says "military-grade security" → Security experts disagree

LinkedIn: Said they don't train AI on your data → They do

Facebook: Says they care about privacy → LOL

The pattern? Marketing claims are meaningless.

Words are cheap. Evidence is everything.


You Don't Need to Be a Tech Expert

Here's what I learned after the Zoom incident:

You don't need to understand encryption to spot liars.

You just need to know what questions to ask.

Think of it like buying a used car:

  • You don't need to be a mechanic

  • But you should know to check if the odometer has been rolled back

  • And whether the title is clean

  • And if there's a service history


Same thing with privacy claims. You don't need to be a cryptographer—you just need to know the red flags.

Let me show you.


Red Flag #1: Vague Marketing Terms

"Military-Grade Encryption"

What they want you to think: "Wow, the military uses this. Must be super secure!"

What it actually means: Nothing specific.

The military uses various encryption standards. Some excellent, some outdated, some classified. "Military-grade" doesn't tell you which one.

Real talk: When someone says "military-grade" but won't tell you the actual algorithm (like "AES-256" or "ChaCha20"), they're probably full of shit.


"Bank-Level Security"

What they want you to think: "Banks are secure, so this must be secure!"

What it actually means: Also meaningless.

Some banks still use SMS codes (super insecure). Some use hardware tokens (secure). "Bank-level" could mean either.

Real talk: Banks get hacked all the time. This isn't the flex they think it is.


"We Take Privacy Seriously"

What they want you to think: "They care about my privacy!"

What it actually means: Literally nothing.

Every platform says this. Even Facebook says this.

Meta makes well over $100 billion a year from advertising. That money comes from violating your privacy. But they still say they "take it seriously."

Real talk: If they don't explain HOW they protect privacy with specific technical details, they're just saying words.


Red Flag #2: "Encrypted" Without Saying "End-to-End"

This is a huge one.

There are three types of encryption:

1. Transport Encryption (HTTPS)

  • Protects data while it travels to their servers
  • The company can still read everything
  • Every website has this (even scam sites)

2. At-Rest Encryption

  • Data is encrypted when stored on their servers
  • The company can still decrypt and read it
  • Just protects against hackers

3. End-to-End Encryption (E2E)

  • Only you and the recipient can read it
  • The company CANNOT read it (they never have the keys)
  • This is the only one that actually protects privacy


Here's the trick platforms use:

They say "We encrypt your data" (technically true—HTTPS)
But they don't say "We CAN'T read your data" (because they can)

Real examples:

  • Discord: Says "encrypted" → They can read everything

  • Telegram: Says "encrypted" → Regular chats aren't E2E

  • Facebook Messenger: Said "encrypted" → For years, default chats weren't E2E (default end-to-end encryption only arrived recently)

  • Signal: Says "end-to-end encrypted" → They actually can't read your messages

  • WhatsApp: Says "end-to-end encrypted" → Message content is protected (but Meta collects all metadata)


The question to ask: "Is it end-to-end encrypted, or can the company read my data?"
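
Want to see what "the company cannot read it" actually means? Here's a minimal sketch using TweetNaCl.js, an open-source library built on the standard Curve25519/XSalsa20-Poly1305 primitives. The names and message are made up, and this isn't any particular app's implementation; the point is that the server in the middle never holds a secret key, so it simply can't open the message.

```typescript
// Minimal end-to-end sketch with tweetnacl (npm install tweetnacl).
// Illustration only, not any real app's code.
import nacl from "tweetnacl";

// Each user generates a keypair on their own device.
const alice = nacl.box.keyPair();
const bob = nacl.box.keyPair();

// Alice encrypts for Bob using Bob's PUBLIC key and her own
// SECRET key (which never leaves her device).
const nonce = nacl.randomBytes(nacl.box.nonceLength);
const message = new TextEncoder().encode("meet at 7?");
const ciphertext = nacl.box(message, nonce, bob.publicKey, alice.secretKey);

// The server only ever sees ciphertext, nonce, and public keys.
// Its own keys don't help; decryption fails:
const server = nacl.box.keyPair();
console.log(nacl.box.open(ciphertext, nonce, alice.publicKey, server.secretKey)); // null

// Bob, holding his secret key, can open it:
const plaintext = nacl.box.open(ciphertext, nonce, alice.publicKey, bob.secretKey);
console.log(new TextDecoder().decode(plaintext!)); // "meet at 7?"
```

That's the whole point of "end-to-end": the math protects you whether or not you trust the company in the middle.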


Red Flag #3: Custom Encryption

Professional security experts have a saying:

"Anyone can create encryption so strong that THEY can't break it. The trick is creating encryption so strong that NOBODY ELSE can break it either."

Translation: Don't invent your own encryption. Use what experts have tested.

Standard encryption algorithms (used by actual secure apps):

  • AES-256

  • ChaCha20

  • Curve25519

  • RSA-4096


These have been tested by thousands of experts for years, in some cases decades. We know they work.
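
And to show how little excuse there is for rolling your own, here's a rough sketch of AES-256-GCM using nothing but Node.js's built-in crypto module. It's illustrative only (a real app also needs key exchange, key storage, and careful nonce handling), but it makes the point: vetted algorithms are a few lines away.

```typescript
// AES-256-GCM with Node's built-in crypto module. Sketch only.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // 256-bit key
const iv = randomBytes(12);  // 96-bit nonce, never reused with the same key

// Encrypt
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("hello", "utf8"), cipher.final()]);
const authTag = cipher.getAuthTag(); // tamper-detection tag

// Decrypt (throws if the ciphertext or tag was modified)
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(authTag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(plaintext.toString("utf8")); // "hello"
```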

Custom encryption = Red flag

Why? Because the company is either:
1. Arrogant (thinks they're smarter than every cryptographer on earth)
2. Incompetent (doesn't understand why this is dangerous)
3. Lying (custom encryption is easier to backdoor)

Real example: Telegram

Uses custom encryption called "MTProto." Security experts have found multiple problems with it.

Meanwhile, Signal uses standard algorithms that every expert trusts.

Which would you trust?


Green Flag #1: Open Source Code

This is the biggest difference between trustworthy and sketchy apps.

Closed source: "Trust us, it's secure"
Open source: "Here's our code. Check it yourself."

Why This Matters

Remember Zoom's lie about encryption? Security researchers only discovered it because they reverse-engineered Zoom's code.

If Zoom had been open source, the lie would have been caught almost immediately.

Real examples:

  • Signal: Fully open source on GitHub → Anyone can verify their claims

  • WhatsApp: Closed source → You must trust Meta (lol)

  • Snugg: Will be fully open source at launch → Verify everything yourself


The test: Go to their website. Look for a "GitHub" or "Open Source" link.

If they claim to be private but won't show you the code, that's suspicious as hell.


Green Flag #2: Independent Security Audits

Trustworthy platforms pay external security companies to audit their code and publish the results.

What to look for:

  • Audit from a real firm (Trail of Bits, Cure53, NCC Group)

  • Audit is recent (within 2 years)

  • Audit is public (you can read it)

  • They fixed the issues that were found


Red flags:
  • "We've been audited" but won't publish the report

  • Audit is 5+ years old

  • Can't name the audit firm

  • "No issues found" (every real audit finds something)


Real examples:
  • Signal: Multiple public audits, all published, issues fixed

  • Telegram: Offers bounties but no comprehensive public audits

  • Snugg: Audit scheduled with Trail of Bits before launch


The test: Search "[App name] security audit" and see if you can actually read it.


Green Flag #3: They're Honest About Limitations

Red flag: "We're 100% secure against everything!"
Green flag: "Here's what we protect AND what we don't."

No security is perfect. Honest companies admit this.

Example: Signal's honesty

What Signal protects:

  • Message content (encrypted)

  • Calls (encrypted)

  • Some metadata (sealed sender)


What Signal DOESN'T protect:
  • Your phone if someone steals it

  • Screenshots

  • If you forward messages to someone else


They're upfront about this. That's trustworthy.

Compare to sketchy apps:

"Military-grade security!" (What does that even mean?)
"100% secure!" (Impossible)
"Complete privacy!" (Not how this works)

The test: Do they explain limitations, or do they promise perfection?


The Simple Verification Checklist

You don't need to understand code. Just ask these questions:

1. Is the code public?

  • Go to their website
  • Look for "GitHub" or "Open Source" link
  • If they claim privacy but won't show code → suspicious

2. Has it been audited?

  • Google "[App name] security audit"
  • Can you actually read the audit report?
  • If audit is secret or doesn't exist → suspicious

3. Is it end-to-end encrypted?

  • Does it specifically say "end-to-end"?
  • Or just "encrypted" (which means nothing)?
  • If vague about E2E → suspicious

4. What encryption does it use?

  • Do they name specific algorithms?
  • Or just say "military-grade"?
  • If they won't name the algorithm → suspicious

5. Do security experts trust it?

  • Search Reddit r/privacy for opinions
  • Check HackerNews discussions
  • If experts are skeptical → suspicious

6. How do they make money?

  • Subscription? (Good—aligned incentives)
  • Advertising? (Bad—needs your data)
  • "Free" with no explanation? (Very bad—selling your data)
  • If business model depends on data → untrustworthy


Scoring:

  • 5-6 pass: Probably trustworthy (still verify yourself)
  • 3-4 pass: Proceed with extreme caution
  • 0-2 pass: Don't trust this app

Real Examples: Let's Check Some Apps

Signal — TRUSTWORTHY

1. Open source? Yes (github.com/signalapp)
2. Audited? Multiple public audits
3. E2E encrypted? Yes, everything
4. Specific encryption? Signal Protocol, fully documented
5. Experts trust it? Yes, universally
6. Business model? Non-profit donations

Verdict: Claims verified. Signal is legit.


WhatsApp — MIXED

1. Open source? No (closed source)
2. Audited? Uses Signal Protocol (audited) but app is closed
3. E2E encrypted? Yes, but extensive metadata collection
4. Specific encryption? Uses Signal Protocol
5. Experts trust it? Encryption yes, Meta ownership no
6. Business model? Meta advertising

Verdict: Messages are encrypted, but Meta collects everything else about you. Half-trustworthy at best.


Telegram — SKETCHY

1. Open source? Clients yes, server no
2. Audited? Bounty program but concerns remain
3. E2E encrypted? Only "Secret Chats" (not default)
4. Specific encryption? Custom MTProto (experts skeptical)
5. Experts trust it? Mixed to negative
6. Business model? Unclear (how do they make money?)

Verdict: Most chats aren't E2E encrypted. Custom encryption is questionable. Business model unclear. Not trustworthy.


Zoom — LIAR (Historical)

1. Open source? No
2. Audited? Not when they made the claim
3. E2E encrypted? Claimed yes, researchers proved no
4. Specific encryption? Vague claims
5. Experts trust it? Caught them lying
6. Business model? Freemium/Business

Verdict: Literally lied about E2E encryption. Perfect example of why verification matters.

Current status: Now offers E2E for some features, but trust is gone.


"But I'm Not Technical..."

Good news: You don't have to be.

Here's the three-question version:

Question 1: Is it open source?

  • Look for GitHub link on their website

  • Yes or No


Question 2: Has it been audited?
  • Google "[App name] security audit"

  • Can you find and read it?

  • Yes or No


Question 3: Do security experts on Reddit/HackerNews trust it?
  • Search r/privacy for discussions

  • Read what experts say

  • Trusted or Skeptical


If all three are YES → Probably okay
If any are NO → Be skeptical

That's it. You just learned to spot most privacy lies.


What About "New" Apps That Haven't Been Audited Yet?

Fair question. Not every app is old enough to have public audits.

Green flags for new apps:

  • Open source (so experts CAN audit it)

  • Use standard encryption (not custom)

  • Have audit scheduled/in progress

  • Transparent about being unaudited

  • Team includes security experts


Red flags for new apps:
  • Closed source AND unaudited

  • Custom encryption

  • Claims perfect security

  • Won't commit to future audits

  • Anonymous team


Example: Snugg (That's Us)

We're new. So how should you evaluate us?

  • Will be open source at launch (verify the code)

  • Uses standard encryption (TweetNaCl.js: XSalsa20-Poly1305)

  • Security audit scheduled with Trail of Bits before launch

  • Transparent threat model published

  • Team includes security advisors


Our take: Don't just trust us. When we launch, verify everything yourself. That's literally the point of this article.
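
If you're curious what that TweetNaCl.js bullet means in practice, here's a minimal sketch of XSalsa20-Poly1305 (the library's "secretbox" construction). It illustrates the primitive we named, not our actual production code:

```typescript
// XSalsa20-Poly1305 via TweetNaCl.js (nacl.secretbox). Illustration only.
import nacl from "tweetnacl";

const key = nacl.randomBytes(nacl.secretbox.keyLength);     // 32-byte key
const nonce = nacl.randomBytes(nacl.secretbox.nonceLength); // 24-byte nonce

const message = new TextEncoder().encode("draft post");
const sealed = nacl.secretbox(message, nonce, key);

// Opening verifies the Poly1305 tag; any tampering returns null.
const opened = nacl.secretbox.open(sealed, nonce, key);
console.log(opened && new TextDecoder().decode(opened)); // "draft post"
```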


The Bottom Line: You've Been Trusting Liars

Here's what I learned after the Zoom incident:

Companies lie about privacy all the time. Most users never find out.

But you don't have to be one of those users anymore.

The simple version:

1. Look for open source → Can experts verify it?
2. Look for public audits → Has it been verified?
3. Look for specific details → Or just marketing BS?
4. Ask what experts think → Check Reddit r/privacy
5. Check the business model → Do they need your data to make money?

If a platform scores well on these, they're probably trustworthy.

If they don't, assume they're lying until proven otherwise.


Why This Matters

You've probably been using apps for years that you thought were private.

Maybe they are. Maybe they aren't.

The question is: How would you know?

Until now, you wouldn't. You'd just have to trust them.

But trust without verification is how we got Zoom's lies, TikTok's data sharing, and Facebook's entire business model.

You deserve better than "trust us."

You deserve apps you can verify.


What Snugg Does Differently

We built Snugg to be verifiable from day one.

What we're committing to:

  • Open source → Check our code on GitHub

  • Public audits → Read the audit reports

  • Standard encryption → No custom crypto BS

  • Honest limitations → We'll tell you what we don't protect

  • Transparent business model → Subscriptions, not surveillance


What you should do:

When we launch, don't just trust us. Verify us.

Use this article. Check our code. Read our audit. Ask security experts what they think.

If our claims don't match reality, call us out.

That's how it should work.


Resources to Learn More

Want to verify apps yourself?

Communities where security experts discuss apps: r/privacy on Reddit and the Hacker News forums.



Audit firms to look for: Trail of Bits, Cure53, NCC Group.

If an app says they've been audited by one of these firms, you can usually trust that (as long as the report is public).


Quick Reference Card

Print this out or screenshot it:

Red Flags (Don't Trust)

  • "Military-grade" without specifics
  • "Bank-level security" without details
  • "Encrypted" but not "end-to-end"
  • Custom encryption
  • Closed source
  • No public audits
  • Business model requires your data
  • Vague or no technical details

Green Flags (Might Trust)

  • Open source code
  • Public security audits
  • Specifically says "end-to-end encrypted"
  • Names specific algorithms
  • Honest about limitations
  • Security experts trust it
  • Business model doesn't need your data
  • Transparent team


Rule of thumb: If they can't (or won't) prove it, don't believe it.

One Last Thing

After Zoom, I changed how I evaluate apps.

Old me: "They said it's encrypted, so it must be secure."
New me: "Show me the code, show me the audit, or I don't believe you."

It's not cynicism. It's realism.

Companies have proven they'll lie about privacy to win users.

The only defense is verification.

And now you know how.


Try Snugg

We're building a platform you can actually verify.

What makes us different:

  • Open source (check the code yourself)

  • Independently audited (read the reports)

  • End-to-end encrypted (everything, not just messages)

  • No ads (subscription model)

  • Minimal metadata (we can't profile you)


But don't trust us. Verify us when we launch.

Join the waitlist →


Questions?

"Isn't this paranoid?"

Is it paranoid to not trust companies that have proven they lie? Zoom lied. TikTok lied. Facebook lies constantly. This is realism, not paranoia.

"What if I just want to use what everyone else uses?"

That's fine. But at least you'll know what you're giving up. Informed consent matters.

"Can audits be faked?"

Reputable firms (Trail of Bits, NCC Group, Cure53) stake their entire business on honest audits. Lying would destroy them. So while technically possible, it's extremely unlikely.

"What about apps that can't be open source?"

Banking apps, for example, often can't open source their code. That's fine—but then they better have multiple audits, regulatory compliance, and transparency reports. Higher scrutiny for closed source.


Share this if you know someone who still thinks "encrypted" automatically means "private."


About Snugg: We're building a social platform you can actually verify. Open source, audited, end-to-end encrypted. No surveillance, no lies.

Learn more: snugg.social
Questions: hello@snugg.social


About the Author - Sam Bartlett

I'm a yacht surveyor based in the Caribbean and the founder of Snugg. After 15 years watching social media platforms prioritize ads over genuine connection, I decided to build the alternative. I previously built and ran a successful sailing holiday business, topping Google search results for years before algorithm changes destroyed organic reach. I'm not a developer or privacy activist—just someone who got tired of platforms that forgot their purpose. When I'm not building Snugg or surveying yachts, I wish everyone had more time for sailing in beautiful places (or whatever brings you joy).

