Punchline: Don’t just make privacy claims. Back your claims with evidence. There are two ways to do this.
Patient forum software needs to earn trust rather than demand it. There are two pathways to trust for a software maker:
- Write Open Source software and ensure that it receives attention and review from the Open Source community.
- Write proprietary software and ensure that it is regularly audited by neutral and trusted third parties.
Note that Facebook currently does neither of these things, and neither do other PHR systems that are popular with the patient community, including PatientsLikeMe and Smart Patients.
In some ways, it is easier to describe what is OK by first saying what is not OK. It is not OK to just say “this software works the way we say it does, just trust us.” Facebook, for instance, would routinely change how its software worked for only a select group of its users. It would “test” a change on a few hundred thousand real users and see how they reacted. This meant that Facebook’s basic assertion that the system worked in any one particular way was almost always false. There are dozens of cases where Facebook has been caught advertising that its products work one way when in fact they work another. Facebook has been especially egregious about this, but many proprietary software vendors have the same problem, which extends to illegal back-doors and illegal “phone home” mechanisms in software. It is simply not possible to trust an assertion from a company that says “hey, we assure you that this is how the software works” when the purpose of that software is to facilitate the communication of the most private and sensitive personal details within a peer support group.
We see two viable alternatives to this kind of “trust us, no really, trust us” position that Big Tech has taken with patients.
One is to release all patient forum software systems as Free and Open Source Software, with publicly available and inspectable source code, while ensuring that there is a community performing privacy and security reviews on the software.
It might be necessary to separate out some specific functions and not publish them, the way that Automattic does with its Akismet service. In this case, though, it is critical that there be a specific, limited, cyber-security-based exemption with a public justification, like the one described in the Akismet Wikipedia article.
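Here is a minimal sketch of what that kind of split can look like in practice. Everything below is hypothetical: the service name, endpoint, and payload are ours, not Akismet’s. The point is that the forum code stays open and reviewable, and the single closed function is isolated behind one documented network call.

```python
# Open-source forum code: everything here is public and reviewable.
# The one closed component (a spam/abuse scoring model, in this
# hypothetical) sits behind a single documented HTTPS call, so
# reviewers can see exactly what data crosses the boundary.

import json
import urllib.request

# Hypothetical endpoint; Akismet's real API differs.
SCORING_SERVICE_URL = "https://scoring.example.org/v1/check"

def check_post_for_abuse(post_text: str) -> bool:
    """Send ONLY the post text to the closed scoring service.

    The open-source side controls (and documents) the payload, so a
    reviewer can verify that no user identity, diagnosis, or metadata
    leaves the forum. The proprietary part sees the minimum it needs.
    """
    payload = json.dumps({"text": post_text}).encode("utf-8")
    request = urllib.request.Request(
        SCORING_SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        result = json.load(response)
    return bool(result.get("is_abusive", False))
```

The benefit of this pattern is that the cyber-security exemption stays narrow and auditable: a reviewer reading the public code can verify exactly which data crosses into the closed component.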
Another caveat to the Open Source path is that it is not enough to rely on the “many eyes” phenomenon for security. If the community does not step up and perform audits on its own, then audits need to be sponsored. It’s not just Open Source, it’s “Actually Reviewed Open Source” that provides the privacy benefit.
We get that it can be hard to make money using an Open Source model. We know that some major Internet forums tried to be Open Source and then decided that it was too difficult to make money that way.
As a secondary, not-as-OK solution, we can tolerate proprietary software when the following conditions are met:
- Regular independent audits are conducted by trusted third-party auditors.
- Auditors are rotated, and each must attest that they had the access they needed to confirm that privacy commitments were being met (a sketch of what such an attestation might contain follows this list).
- All data-sharing deals are public and transparent; the answer to “did you share the data from the patient forum?” cannot be “well, we did not sell access to THAT version of the data.”
- There is a publicly identified privacy officer who reports to the CTO and who has appropriate training in patient privacy principles and in the ethics associated with the IRB process (not that the IRB process has to be followed, but it should be understood, given the history of illegal and unethical experimentation on patients).
- The privacy officer is responsible for personally attesting that the auditors had all of the information that they needed to affirm that the privacy standards and policies were being honored within the company.
- The privacy officer would also be responsible for ensuring that the privacy standards were not designed in a deceptive manner, and that critical privacy decisions for patients were not catch-22 decisions.
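To make the rotation and attestation bullets concrete, here is a purely illustrative sketch of what a published, machine-readable attestation record might contain. No such standard exists today; every field name here is an assumption.

```python
# Hypothetical sketch of a published audit attestation record.
# No standard like this currently exists; field names are illustrative.

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class AuditAttestation:
    """One audit cycle's public attestation, signed by the auditor and the privacy officer."""

    auditor_firm: str            # rotated: must differ from the prior cycle's firm
    previous_auditor_firm: str
    audit_period: str            # e.g. "2024-01 through 2024-12"
    full_access_granted: bool    # auditor attests they saw everything they asked for
    privacy_commitments_met: bool
    data_sharing_deals: list[str] = field(default_factory=list)  # ALL deals, every version of the data
    privacy_officer: str = ""    # publicly identified by name
    officer_attests_full_disclosure: bool = False

    def is_acceptable(self) -> bool:
        """The minimum bar: rotation happened, access was full, both parties signed off."""
        return (
            self.auditor_firm != self.previous_auditor_firm
            and self.full_access_granted
            and self.privacy_commitments_met
            and self.officer_attests_full_disclosure
        )
```

Publishing a record like this after every audit cycle would let the patient community verify rotation and full-access claims directly, instead of taking the company’s word for it.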
It may be necessary to radically rethink these standards if, after having applied them, some company still managed to conceal bad-faith actions and betray the trust of the patient community. But by the same token, we cannot insist on rules that no one can follow in order to prevent every action a truly evil company might take. At a certain point, we have to trust that there are people who will step forward when rules are not being followed.
For this reason, companies that choose either path must have published policies that protect cyber-security researchers and whistle-blowers, both inside and outside the company, from any form of retaliation.
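One concrete, established mechanism for publishing part of such a policy is a security.txt file (RFC 9116) served at /.well-known/security.txt, which tells researchers how to report safely. The addresses and URLs below are placeholders.

```
Contact: mailto:security@patient-forum.example
Expires: 2026-12-31T23:59:59Z
Policy: https://patient-forum.example/security-policy
Preferred-Languages: en
```

A security.txt file covers the reporting channel; the non-retaliation commitment itself still needs to appear in the linked policy, in plain language.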