In January of this year, Facebook announced it would remove detailed ad targeting options referencing sensitive health topics in order to protect vulnerable populations. In its announcement, Facebook said it was taking these steps to “address feedback from civil rights experts, policymakers and other stakeholders on the importance of preventing advertisers from abusing the targeting options we make available.” While these changes were widely praised and covered in the media as a step toward making the platform safer, this new analysis examines the current state of ad targeting tools that infer an individual’s “interests” and use them as a proxy for medical history.
To understand how ad targeting works, it is important to understand how Facebook recommends content to users. Facebook’s ad algorithms use AI to infer the “interests” of users based on the content they interact with on the platform, combined with the information they voluntarily share about their health in posts, likes, or engagement with certain content. Ad targeting interests can also be inferred from users’ browsing behavior as they navigate health-related websites or purchase health-related products.
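To make the mechanism concrete, here is a purely illustrative toy sketch, not Facebook’s actual algorithm: engagement events are mapped to topics, and topics that cross a frequency threshold become inferred “interests.” Every name and threshold below is hypothetical.

```python
from collections import Counter

# Purely illustrative toy model -- NOT Facebook's actual algorithm.
# Engagement signals are mapped to coarse topics; topics whose counts
# cross a threshold become inferred "interests" usable for ad targeting.

# Hypothetical engagement log: (action, target) pairs a platform might
# derive from likes, group joins, and tracked browsing.
engagement_log = [
    ("liked_page", "ADHD Awareness"),
    ("joined_group", "ADHD Parenting Support"),
    ("visited_site", "adhd-medication-reviews.example"),
    ("liked_page", "Cooking"),
]

def to_topic(target: str) -> str | None:
    # Trivial keyword match standing in for far more sophisticated inference.
    return "ADHD" if "adhd" in target.lower() else None

counts = Counter(t for _, target in engagement_log if (t := to_topic(target)))

THRESHOLD = 2  # arbitrary cutoff for this sketch
inferred_interests = [topic for topic, n in counts.items() if n >= THRESHOLD]
print(inferred_interests)  # ['ADHD'] -- a medical condition, stored as an "interest"
```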
While Facebook’s ad targeting tools change often, a common set of features remains unchanged. Specifically, the targeting feature in Facebook’s ad manager offers three main categories: Demographics, Interests, and Behaviors. Demographics includes education, financial, life events, parents, relationship, and work. Using these tools, any advertiser can target a college-educated, high-income single parent who recently moved and works in software development for Google.
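For a sense of how granular this gets, below is a hedged sketch of the kind of targeting spec an advertiser could submit through Facebook’s Marketing API to reach exactly that person. The field names approximate the API’s documented targeting spec; every ID and segment name is a placeholder, not a real value, and each field maps onto one of the Demographics filters described above.

```python
# Hedged sketch of a Marketing API targeting spec for the example above.
# Field names approximate the documented targeting spec; every <ID> and
# segment name below is a placeholder, not a real value.
targeting_spec = {
    "geo_locations": {"countries": ["US"]},
    "education_statuses": [3],  # hypothetical code for "college graduate"
    "income": [{"id": "<ID>", "name": "Higher-income households"}],
    "family_statuses": [{"id": "<ID>", "name": "Single parents"}],
    "life_events": [{"id": "<ID>", "name": "Recently moved"}],
    "work_employers": [{"id": "<ID>", "name": "Google"}],
    "interests": [{"id": "<ID>", "name": "Software development"}],
}
```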
Facebook’s advertising AI uses the data it collects about individuals to infer their personality, interests, and lifestyle, among many other things, and then sells access to that “personality profile” to advertisers. One way it is able to infer this information is through the “private” support groups an individual joins.
Targeting ADHD Ads
Recently, the Department of Justice launched an investigation into a startup called Cerebral over the sale of controlled substances for ADHD through its platform. We conducted our own investigation of the ad targeting options that are still available after Facebook’s January announcement.
Facebook labels medical conditions as interests because listing them as demographics would amount to admitting that Facebook is leaking medical data. If you search for ADHD in the “Detailed Targeting” section of Facebook’s ad manager, you will get results such as ADHD Awareness, ADHD Hub, a mom’s view of ADHD, ADHD Hope, and Driven to Distraction (ADHD).
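These interest lists are not only visible in the ad manager UI; they are also exposed programmatically. The sketch below queries the Marketing API’s targeting search endpoint, which returns the same Detailed Targeting results the UI shows; the access token is a placeholder and the pinned API version may differ.

```python
import requests

# Hedged sketch: query the Marketing API's targeting search for "ADHD".
# ACCESS_TOKEN is a placeholder, and the pinned API version may differ.
ACCESS_TOKEN = "<marketing-api-access-token>"
resp = requests.get(
    "https://graph.facebook.com/v14.0/search",
    params={"type": "adinterest", "q": "ADHD", "access_token": ACCESS_TOKEN},
)
for interest in resp.json().get("data", []):
    # Each result carries an interest id, a display name, and audience-size info.
    print(interest.get("id"), interest.get("name"))
```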
These interests can be used by advertisers to target people with ADHD or to target parents of someone with ADHD. Jack Bamford of CareSet Systems recently conducted an analysis which reveals that Facebook has made minimal changes to these results since this ad targeting analysis was first conducted by Fred Trotter in 2018 and documented in a formal complaint to the FTC. The same filters that Fred Trotter was able to use originally are still available, but Facebook has removed the “Suggested” feature from the search bar and now has a section below the search bar to browse popular categories.
Below is a screenshot of Facebook’s current ad targeting “Interests” for ADHD.
What Do Patients on Facebook See?
What patients see as a result of prior health ad targeting remains opaque. While Facebook’s Ad Library does show prior ads, it appears that specific health ads run in the past have been deleted from the library. Upon review, we did find examples of health ads that remained in the Ad Library as late as May 2022.

Several patients reached out to The Light Collective regarding aggressive ADHD ads being targeted to them on both Facebook and TikTok. Here is one example of an ad received by a patient with ADHD.

Beware of Predatory Health Ads
The takeaway here is the same for any social media platform: don’t trust the health ads you see. Don’t click on these ads, because a single click may enable a company to track you across multiple platforms and websites. Data brokers can sell your browsing behavior and ‘clicks’ in the form of leads. And even if the data is “anonymized,” your engagement with these ads may change what you see or how you are treated when health services are offered to you.
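To illustrate why a single click matters: when you click a Facebook ad, the destination URL typically carries a click identifier (“fbclid”) that the advertiser’s site can log and later tie back to you. In the sketch below, the landing URL and visitor record are hypothetical; only the fbclid mechanism itself is real.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical landing URL after clicking a health ad; Facebook appends
# the "fbclid" click identifier to outbound ad clicks.
landing_url = (
    "https://telehealth.example/adhd-signup"
    "?utm_source=facebook&fbclid=IwAR2abc123"
)

fbclid = parse_qs(urlparse(landing_url).query).get("fbclid", [None])[0]

# The destination site (or the data brokers it works with) can store this
# click ID alongside anything else it learns about the visitor and use it
# to link that activity back to the original ad and profile.
visitor_record = {"fbclid": fbclid, "page": "adhd-signup"}
print(visitor_record)
```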
We recommend reading about why Metadata Matters from the Electronic Frontier Foundation. Whether the ads come from pharma companies or startups, it is difficult for patients in support groups to discern which ads and health claims are real and which are harmful. If you are a health company or healthcare organization, take steps to stop using social media ad targeting as a marketing tactic in your practice. For any “good” ad from a reputable company, consider that the same targeting of vulnerable populations also enables snake oil, scams, and medical misinformation to reach those same patients.
We will follow up in other posts to talk more about how linking your real name to your health information can be used against you, and about ways to advocate for your digital rights. Learn more about The Light Collective here.