[Illustration: a paper-cut scene of the divide between tech companies and health systems — stick figures cross a precarious bridge of papers, with robots labeled “Tech Companies” on one side, an ambulance on the “Health Systems” side, and a girl hanging from the bridge, symbolizing the risks patients face.]

The Field of Patient Advocacy Is Caught Between Tech Companies and Unmet Needs in Healthcare

Patients are using AI to solve their own health problems, but they are doing so without any rights, safety, or privacy protections. In the last year, the New England Journal of Medicine acknowledged this trend: large numbers of patients are already turning to AI for medical advice because they compare it to the care they can actually get. And sometimes using ChatGPT has amazing results. After three years and 17 doctors, for example, a mom used ChatGPT to surface tethered cord syndrome for her son and then found a neurosurgeon who confirmed it. When faced with the trauma of a new health condition, our first instinct is to seek answers wherever we can. In emerging AI spaces, the risk shifts onto us as users.

Sam Altman recently pointed to “day-to-day care advice” and “lifesaving” diagnoses during the release of GPT-5—while the fine print says ChatGPT isn’t meant to diagnose or treat anything. And there’s the big catch: a tech company can’t have it both ways. That mixed message encourages risky use and leaves patients holding the bag when advice goes wrong.

We’ve been down this road before. After Cambridge Analytica, patient groups learned what happens when tech companies write the rules for how to use our health information, or how to give advice. Facebook called disease communities “private,” but brokers and bad actors scraped rosters and mined comments; people were doxxed, outed, and targeted with predatory health ads. We know from past experience with social media that companies can start on a virtuous path to build trust, but can just as easily alter the contract, the business model, and the features.

We’re not waiting for healthcare to modernize—as patients, we’re using AI to make sense of symptoms when we lack access to care and to fill the gaps ourselves. It’s a powerful act of self-determination in a system that too often leaves us behind.

But here’s the paradox: the more we free ourselves from healthcare’s gatekeepers, the more we risk getting trapped by tech companies that have no duty to care for us. Somewhere in the deep void between cautious health systems and the speed of tech companies lies the real opportunity: to build tools that are both empowering and accountable, with patients leading the way. We need approaches that respect patient autonomy and protect those who are most vulnerable. The goal isn’t to shut down innovation or surrender to it. How might we co-design something better in the space between the very disparate cultures of healthcare and tech companies?

The critical gap: Fiduciary duty of care.

[Graphic: bold text contrasting “BIG TECH” and “HEALTH SYSTEMS,” set against the words “RIGHTS” and “LEGAL DUTY OF CARE.”]

When health systems and policymakers think about AI, they design for the clinic: slow, controlled, locked inside compliance and the limited scope of a clinical encounter. Technology companies like Meta, Google, and Microsoft, on the other hand, think about “users”: scaling fast, collecting data first, and asking forgiveness never. And right in that no-man’s land sit patients, who are using AI every day without rights, safety nets, or protections. They’re treated as neither full citizens of the clinic nor valued customers of tech—just data streams to be mined. That gap is where harm festers, where safety issues linger, where trust collapses, and where the most vulnerable are left to carry the risk alone.

So the big audacious idea is this: what if we didn’t ignore this massive gap? Instead, we could empower patient communities to establish their own infrastructure, standards, and design for how patients use AI. What if health systems and tech companies were bold enough to be accountable? How might we implement missing rights and patient-led standards in real contracts? Could we incorporate these rights into design specs and sandboxes for implementation?

We can’t simply replace real-world accountability with features.

There is no question that ChatGPT can help us shortcut a deeply broken health system, cut down on paperwork, or find faster cures. But band-aids won’t fix bullet holes: new buttons or features don’t create accountability when something goes wrong. Using ChatGPT can lead to real-world harm and patient safety issues that require real-world liability.

A few recent and tragic examples illustrate this point: a man was hospitalized after ChatGPT told him to replace table salt with sodium bromide. In a recent murder-suicide case, chatting with AI reportedly made delusions worse for someone with mental illness. And in the latest story, the parents of Adam Raine are suing OpenAI, alleging that ChatGPT assisted in their son’s suicide. These won’t be isolated events if we don’t take action. In response to the latest lawsuit, OpenAI announced new features.

While this is an important step forward, more must be done to fill the void. We can’t hold a machine accountable for malpractice. A machine doesn’t care. Doctors, by contrast, have both a legal duty of care and a fiduciary duty to patients—meaning they must act competently, avoid conflicts of interest, and prioritize the patient’s well-being. They can be held accountable through medical boards, licensing, ethics rules, and malpractice law. Corporate officers and directors, meanwhile, have a fiduciary duty to shareholders.

From the patient’s perspective, we are missing real-world fiduciary representation that serves patient interests. Representation means many things, but it is mainly about whose primary legal interests are being served. Even Sam Altman recognizes that we don’t have any legal protections, or any say, when something goes wrong.

OpenAI isn’t the only emerging actor here. What’s unfolding is a generational shift in how patients interact with medical information online—and in who profits from that interaction. As large language models (LLMs) like GPT-5 move into clinical and consumer health spaces, companies are setting themselves up as partners in patient decision-making. But tech companies owe you no legal duty of care of the kind a therapist, oncologist, or genetic counselor must uphold to protect your safety and rights. So while publicly committing to make ChatGPT safer, privately the company calculates whether it’s cheaper to add safeguards or add lawyers.

Close the digital divide between health systems and tech companies.

If GPT-5 is already giving health advice, then we’re already in a contract—just one that’s wildly unbalanced. Two things are still missing to make that contract fair: a legal duty to protect users, and a business model that doesn’t profit from exploiting them.

OpenAI’s Aug. 26, 2025 post shows technical progress and a growing willingness to address safety problems with GPT-5. Great. But features are not a substitute for rights. These tools already operate in clinical gray zones—without any binding obligations, licensing, or legal accountability.

First, we need an enforceable legal contract for patients getting health advice. If AI tools claim to care, they must carry care-grade duties: real legal accountability, enforceable limits, independent oversight, public incident reporting, licensed handoffs, privacy by default, and liability that sticks. That means safe testing environments where patients know they’re part of a trial—and can say no. People seeking health help online deserve clear rights, not hidden disclaimers.

Second, we need to change the incentives for companies selling AI. If companies like Meta, Google, and OpenAI don’t want liability, then the infrastructure needs to be owned or governed by those who will take responsibility. We need to establish new roles of accountability tied to existing legal duties, like the duty of care. That means ending business models in health tech built on surveillance, ad targeting, data brokerage, addictive design, and monetizing desperation. Right now, the system rewards manipulation and calls it innovation.

We need to rewrite the next chapter of health innovation with patient communities leading.

The public doesn’t owe tech companies blind trust—tech companies owe us safe tools and real accountability. No more marketing the upside while offloading the risk. That’s how we end up with GPT-branded advice everywhere and responsibility nowhere.

Patients aren’t waiting for permission to use AI. They’re already building their own AI workflows—cobbling together chatbots, spreadsheets, and platforms to make sense of a healthcare system that’s been fragmented for decades. Patients are doing this out of necessity, not novelty, because the system isn’t meeting their needs. And what are health systems doing in response? Doubling down on the same brittle legacy structures—pouring money into bureaucracy, reinforcing outdated workflows, and acting as if this patient-led shift doesn’t exist. It’s the classic move: protect the institution first, the patient maybe later. But ignoring this groundswell doesn’t make it go away; it makes it dangerous. When patients are running ahead and institutions are dragging their feet, safety risks grow, inequities widen, and trust takes the biggest hit. This isn’t a side story—it’s the frontline of how healthcare will succeed or fail in the age of AI.

