Pitch Club
The Privacy Issue: Privacy Issues
The Tea data leak, age verification, vibe insecurity, and more
This is a longer letter; a few events in quick succession are worth covering. Privacy has been under the gun the past few weeks, and the general trend is clear: personal privacy is degrading and the risks are increasing.
Age Verification Wars
The UK law requiring age verification went into effect this week. Websites with adult content (pornography, but also sites with NSFW content like Reddit) must now verify that their users are not minors. Immediately, VPN signups soared as people looked to mask their locations and avoid verification.
There are arguments that this is necessary for shielding an extremely online youth population from age-inappropriate content, but in practice, verification means uploading photos, IDs, or other personal information. Every user must ask themselves, "Do I trust this company to be responsible with my sensitive data?"
That trust may be easier to extend to established tech giants like YouTube (which is working on its own age verification system).

But last week, a concerning privacy incident came out of Tea, an app advertised as "dating safety tools that protect women," where women could share dating experiences and advice, including posting about men to avoid and conducting background checks. It was reported that a "hack" had exposed data on 70,000+ users from the Tea database. I say "hack" because the database was apparently public-facing, so really it was just found. Worse, the leak included thousands of photos of women's faces and driver's licenses, and the information was shared extensively on 4chan, where a map circulated with the addresses of those in the leak.
Tea says it will offer identity-protection help to victims. But it is hard to imagine how emotionally damaging this leak could be for those in it, to say nothing of the anxiety it introduces about identity theft, stalking, or other real-world harms.
Vibe Coding Security Crisis
This leak also exposed a larger worrying trend for startups. Vibe coding is going to lead to what some are calling vibe insecurity. As people create usable products with text prompts, AI is more likely to introduce security lapses, whether that means leaving databases publicly accessible or exposing API keys. It also creates a cohort of people shipping software who aren't familiar with their own apps' privacy and security protocols.
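To make the API-key lapse concrete, here is a minimal sketch (the key value, service name, and environment variable below are made-up placeholders, not anything from the Tea incident): the hardcoded-secret pattern that generated code often produces, next to the boring fix of reading the secret from the environment.

```python
import os

# Anti-pattern commonly seen in generated code: the secret lives in source
# control, so anyone who can read the repo (or a leaked bundle) has the key.
API_KEY = "sk-live-EXAMPLE-PLACEHOLDER"  # made-up value for illustration

# The fix: keep the secret out of the codebase entirely and fail loudly
# when it is missing, instead of shipping with a default baked in.
def get_api_key() -> str:
    key = os.environ.get("MY_SERVICE_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key
```

The difference is one line of effort, which is exactly why a prompt-driven workflow that never surfaces the question tends to skip it.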
AI Chatbots and Legal Exposure
In a different privacy concern, people are realizing that conversations with AI chatbots carry no legal privilege. In the US, your electronic records are generally governed by the ECPA (Electronic Communications Privacy Act), passed back in 1986. People don't like to think about this, but in almost all cases, if a judge signs a search warrant for information held by a tech company during a legal investigation, the company will disclose your information. The rationale is that your electronic records are somewhat akin to notes you'd keep in your home, which would also be reachable with a search warrant. But most people don't think of their chat logs that way. Sam Altman made this point on a recent podcast, and it is especially concerning as people use AI chatbots for sensitive matters like legal, therapy, and medical questions, without the privilege we have afforded those professions.
People might argue this is a case of "if I'm not doing anything wrong, my information is not at risk," but another huge risk right now is the limited technological literacy of law enforcement and judges in the United States. While working in tech legal compliance, I once saw a search warrant for location data for the entire city of Virginia Beach, and a judge had approved it. That was a legally binding warrant compelling the data of millions of people. In that case, counsel got involved and pushed back on the warrant, but it shows that your data could be exposed by mere proximity to an investigation, or by a misunderstanding from law enforcement or a judge about what information is actually needed.
The Astronomer Moment
Finally, it would be tough not to mention the most viral privacy incident of late: the Astronomer scandal. The internet exploded when the CEO and head of HR were caught embracing at a Coldplay concert. The consequences were swift as the internet, late-night hosts, sports stadiums, and newsletter writers piled on with memes and condemnation. The CEO resigned, the head of HR was put on leave, and Astronomer hired Gwyneth Paltrow (Chris Martin's ex-wife) for a PR PSA to capitalize on the attention. It is a reminder that cameras really are everywhere and the viral machine moves quickly. With many startups hoping to put always-on AI recorders on people, and camera-equipped smart glasses making a strong comeback, we are moving toward an ever more observed world.
Startups
How will this fold into the startup zeitgeist? On one hand, privacy concerns might open an opportunity. Who is building the Plaid of personal verification, or the security review layer for vibe-coded apps? There's a real need for infrastructure that can quickly audit AI-generated code for privacy vulnerabilities before deployment.
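A first pass at that kind of review can be remarkably simple. As a sketch only (the regexes below follow generic secret-scanning practice, not any particular product; the AWS `AKIA` key-ID prefix is real, the `sk-` style is a common convention), a pre-deploy check might grep generated code for secret-shaped strings:

```python
import re

# Hypothetical minimal pre-deploy check: flag strings that look like secrets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key ID format
    re.compile(r"sk-[A-Za-z0-9]{20,}"),     # generic "sk-" style API key
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # inline password
]

def scan_source(text: str) -> list[str]:
    """Return source lines that match any secret-looking pattern."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

Real scanners add entropy checks and allowlists to cut false positives, but even this level of tooling would have been a speed bump in front of some recent lapses.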
On the other, startups are certainly exacerbating the problem with vibe coding, AI assistants like Friend or Limitless, and always-recording smart glasses. There hasn't been much of a privacy backlash; people seem happy to accept the tradeoff for the advances in technology. Which might mean startups can push the envelope even further.
Our Sponsor
Founders, get ahead of tax season: Gelt is your year-round tax partner built for startups, investors & entrepreneurs. Reduce your burden, boost returns, and make smarter tax moves—fast.
Stop overpaying or scrambling come tax season. Gelt delivers proactive strategy and full-service filing, powered by elite CPAs and a powerful platform. Trusted by founders and loved by investors, we've helped 200+ businesses save over $5M (and counting).
Check Eligibility
