On April 7, 2026, Anthropic did something no frontier AI company has ever done before. They finished training their most capable model — and announced they would not be releasing…
Most of us assume that when our health info moves between a doctor’s office, a hospital, a pharmacy, or an app, there are basic safety checks in place—like “you can’t log in without strong protection,” “your data is encrypted,” and “there’s a record of who accessed it.” Those basics matter because when health data leaks or gets misused, the harm isn’t abstract. It can mean an abuser finds you, an employer learns something they shouldn’t, your insurance situation gets complicated, or you lose trust in care and stop seeking it.
Protecting your privacy online can feel overwhelming. But we can help. As you browse online for health conditions or advice—looking up symptoms, medications, or clinics—ad trackers can quietly broadcast clues about what you’re reading and where you are, billions of times a day, across the web. Data brokers turn those clues into “audience lists” of people likely dealing with conditions like asthma, depression, or diabetes (sometimes even tagging caregivers or people in government or medical roles), even when platforms say they ban this. And once your data is pushed into that system, there’s no practical way to control who sees it, how it’s combined with other data, or who it’s sold to next.
When health systems and policymakers think about designing AI for the clinic, they imagine something slow, controlled, and locked inside the compliance rules and limited scope of a clinical encounter. Technology companies like Meta, Google, and Microsoft, on the other hand, think about “users”: scaling fast, collecting data first, and never asking forgiveness. And right in that no-man’s land sit patients, who are using AI every day without rights, safety nets, or protections. They’re treated as neither full citizens of the clinic nor valued customers of tech, just data streams to be mined. That gap is where harm festers, where safety issues linger, where trust collapses, and where the most vulnerable are left to carry the risk alone.
Patients have been innovating for decades. HIV activists forced FDA action. The cystic fibrosis community moved Kalydeco from bench to bedside. People with breast cancer organized for access to Herceptin. Type 1 diabetes advocates normalized continuous glucose monitoring. Long COVID groups mapped symptoms and pushed for repurposed therapies. Different diseases, same playbook: build community‑run networks, get smart on the science, rewire trials and policy, and then use targeted leverage to change the rules — fast.
A growing number of experts and lawmakers are sounding the alarm on how AI and your personal data are being used by the federal government—with little oversight and massive potential consequences. This post outlines what you can do.
FOR IMMEDIATE RELEASE

The Light Collective, a patient advocacy group committed to protecting privacy and patient rights, today announced it has submitted a formal complaint to the Federal Trade Commission…
Are you at a hospital that can’t help patients? Here is the short-term fix.
In a nutshell, the ruling means hospitals can share patient browsing data with Meta, TikTok, and other third parties via adtech when patients view health-related content, voiding part of OCR’s ban on tracking technologies.