Is this a shakedown?

It's not an unreasonable question, and if you are wondering this, then perhaps this article can help.

You should be reading this if you have recently been approached by someone who claims to have found a cybersecurity or digital privacy problem in a software system that you manage. This is written with clinical organizations in mind, especially organizations that currently have little or no IT infrastructure. First, we will give a brief overview of what a legitimate cybersecurity interaction looks like and help you understand whether you should contact law enforcement (sometimes people are, in fact, trying to shake you down). But if you want to understand why these rules exist, I have written a much longer parable that illustrates why the cybersecurity industry works the way it does.

Perhaps you want something more formal and substantial than this guide (which targets the layman and clinical reader)? If so, consider reading the CERT Guide to Coordinated Vulnerability Disclosure. If you find significant differences between what this document seems to be suggesting and the CERT document, you should probably assume CERT is right. But it is 90 pages of really thick text and provides little context for those who are trying to protect clinical information or other sensitive data. Hence, this guide.

Help with the Jargon

A “whitehat” hacker is another name for a cyber-security researcher. This is a person who has learned to understand how and why digital systems break, and they can find problems in digital systems that could be exploited to do harm.

A “blackhat” hacker is the term for someone with the same skill as a “whitehat” hacker, but who chooses to actually attack systems. These are malicious users who frequently demonstrate a nearly psychopathic disregard for human life, as they take actions that hurt other people through digital systems.

Some people refer to “grey hats”; here, that means someone who is hired to act just like a black hat would, so that organizations can protect themselves. But this term is not used consistently in this way, so I will avoid using it here.

“Hacking” also has two meanings. One is the positive act of creating clever solutions to hard problems. The other is “breaking into digital systems”. To be clear, in this document I will use the term “Cracking”, which only means “breaking into digital systems”. This is to clarify matters, because sometimes “Hacking Healthcare” has a positive meaning, and sometimes it has a negative one. This is confusing. Sorry about this. But it's a little like the difference between “being shit” (which is bad), “being the shit” (which is good), and “being shitty” (which is bad again). English is tough. Moving on.

Rules of Thumb

What you can expect from a legitimate cybersecurity researcher (cyber-researcher):

The big question that any legitimate cyber-security researcher is going to ask is: “Is it better for the people impacted by this problem to work with organization X (in this case, you, the reader) to get the problem fixed? Or is it better to simply release this information directly to the impacted parties, so that they can take steps to protect themselves?” This is especially true if there are patients or other vulnerable people whose data is involved.

They are constantly deciding which of two paths to take. Path A: “going public”, or Path B: “working with you to fix the problem before going public”.

  • Generally, a legitimate cyber-researcher's primary concern is not going to be you, or your reputation, but rather the safety of the individuals who are or would be impacted by the problem that they have found. When you see that they hold this as the top priority over time, and there are no other red flags, then you should begin to trust them.
  • They are likely to inform you that there will be a time frame within which you will need to have either fixed the problem or substantially demonstrated progress towards fixing it, or they will go public. In some cases, the cyber-researcher will insist on eventually going public with the problem they found. This is likely to feel like a threat to you, but by itself, this is typical cyber-researcher behavior, and there are good reasons why they will insist on this “eventual transparency” around cybersecurity and/or privacy problems. This is especially true for digital systems that patients or clinicians interact with (clinical systems).
  • Legitimate cyber-researchers will not withhold information required to understand what and where the problem is. This does not extend to giving you all of the information to fix your problem for free, but they will give you all of the information to find the problem and to understand why it is a problem. You may need to hire third-party help in order to interpret this information correctly.
  • Legitimate researchers will not attempt to extract a fee in order to provide the details of the problem that they have found. They might, however, balk at the notion that you are asking for free labor by having them fix the problem for you, or guide you through the process of fixing it.
  • Most legitimate cyber-researchers will agree to a request to work through a neutral third-party organization like CERT in order to provide structure for discussing the problem that they have found with your system. However, they might not be willing to work with your local or state police, who are not at all regarded as neutral by many cyber-researchers.
  • A cyber-researcher may demand that you give them updates on what is being done to address the underlying problem. For normal problems, weekly updates are reasonable. If the problem is life or injury threatening, daily updates are reasonable.
  • A legit cyber-researcher will not hold your data hostage in a way that would hurt neutral third parties (i.e. your users, patients, etc.). However, they might take data that they have acquired because of the problem with your system, remove identifiers from it, and publish it as proof when they “go public”. Usually this would only happen after someone from your organization has publicly denied that there was a problem.
  • Legitimate cyber-researchers may demand public credit for discovering the problem, as part of an eventual step where the issue is made known to the public after it is fixed.
  • Demands for payment are especially concerning if they are made through a crypto-currency or other anonymous payment method (i.e. cash in an envelope, left under a rock, etc.).
  • The only thing that a legitimate researcher is within their rights to “demand” is that you fix the problem, if the problem is real (if you have evidence that the problem is not real, that can get complicated). Once the problem is demonstrably fixed, no legitimate cyber-researcher would demand that you shut down a digital resource or take some other action to change how you operate. However, it is reasonable for a cyber-security researcher to say, “if you cannot quickly fix this problem, you need to temporarily take down this resource until you can protect it”.
  • A legitimate cyber-researcher is going to have patience for your constraints and your resources. They will be willing to give you more time if you are able to demonstrate progress. They might also be willing to refer you to places where you can learn more about the type of problem they have discovered. But also recognize that the cyber-researcher might feel a profound sense of urgency when a problem they have discovered is likely to lead to real-world harms, or is already being exploited to hurt people. If they are putting you on a tight schedule, they will be willing to communicate clearly with you why that schedule is necessary.
  • You can count on a legitimate researcher not to go public with a given problem until the problem is no longer a risk to patients/users/etc., or while you are clearly making progress on fixing the problem. Many cyber-security researchers may not have patience for the complexity of working with clinical systems, and they may be inclined to “go public” if they do not see things fixed.
  • A person who is reporting a cyber-security problem to you will never need to have “additional access” to any technical resource that you might have. If, using this guide, it is not possible to tell whether a person is a legit cyber-researcher or a person trying to crack into your systems, it might be appropriate to reduce the amount of access that they have to your systems. But there is almost never going to be a legitimate reason to give a person whom you do not know well greater access to your systems.

To put these together: if someone is saying “Hey, you have this problem, but I am not going to show you where it is or demonstrate clearly how it can be exploited, and I am going to need you to send me 10 bitcoin or I am going to release a bunch of patient data that I have downloaded from your site to the dark web”, well, that is extortion and you should call the police immediately.

On the other hand, legitimate cyber-researchers sound like: “Hey, I found a very serious bug in the login system for your patient appointment scheduler. Here is the link, and the instructions to replicate the problem are attached… how soon can you fix this?”

What a cyber-researcher should be able to expect from you

  • You need to make it clear that your top priority is not your reputation, but the end users, patients, and other neutral third parties who might be harmed if the problem is made public too soon. If a cyber-security researcher sees that you are spending energy protecting your reputation at the expense of making progress on the underlying problem, it is a reasonable decision, and in some cases the only ethical decision they have, to go public with the problem.
  • This does not mean that the cyber-security researcher does not care about your organization, it just means that she cares about vulnerable users more. For the most part, it is your responsibility to put those same users first.
  • People who are capable of finding cyber-security or privacy problems have a very technical skill set. There are lots of unethical ways to make money with this skill set and very few ways to make money with it ethically. When a cyber-researcher contacts you, presents you with a legitimate problem, and does not ask for money, please recognize that the cyber-researcher is doing you an uncompensated favor by letting you know about this problem. She is taking a risk by communicating with you at all, because many organizations attempt to hurt cyber-security researchers, and frequently they are treated like criminals rather than professionals.
  • Verify early in the process whether the cyber-researcher wants credit for discovering the problem and how that credit should be given. Many cyber-researchers choose not to go by their real names to protect their privacy and to shield themselves from overzealous governments and criminals. If you choose to release information about the problem, you should give credit to the cyber-researcher if they have requested it.
  • Many times, a cyber-researcher will not understand the impact that a particular clinical system can have for clinicians, patients, or other vulnerable populations. They may understand the technical details of the problem without understanding the clinical implications or the damage that might be done to people. If you think that the problem a cyber-researcher has brought to your attention could be used to really hurt people if it is released before it is fixed, you need to clearly articulate this to the cyber-researcher. A legitimate cyber-researcher will act very differently if they understand that real-life harms could happen because of a given problem.
  • Even if they have not asked for it explicitly, you should give a cyber-researcher weekly updates of progress as you fix the problem that they have brought to your attention. Be objective and specific in these communications.
  • It is not reasonable for you to require a Non-Disclosure Agreement before you will tell the cyber-researcher about progress while a problem is being worked on.
  • While you have the right to enough information from the researcher that a competent cyber-professional could verify that the problem is real, the cyber-researcher does not owe you a layman's-terms explanation of the problem. If they are willing to give you such an explanation, it is reasonable for them to negotiate a reasonable hourly rate. Many cyber-researchers might be willing to help you despite not being paid; this is a favor that they are doing for you and is not something you should take for granted.
  • As long as a cyber-security researcher is acting in good faith (i.e. doing all of the things outlined in the list above), it is entirely inappropriate for you to threaten the cyber-researcher with criminal, civil, or other retribution of any kind.
  • It is possible that the cyber-security researcher has found something that legitimately appears to be a problem but really is not. As long as they are acting in good faith, these mistakes happen, and you should ensure that even in these circumstances you take no action against them.
  • Recognize that the cyber-researcher's primary obligation is NOT to you, but to those impacted by the underlying problem that they have discovered. Eventually, and despite your best efforts, they may feel that going public is still the best option for giving those neutral third parties the opportunity to protect themselves.
  • If you verify that a cyber-security or privacy problem that you have is legitimate, you have to inform any user who MIGHT have been impacted by it. If you can either demonstrate that it has had an impact on patients/users, or if you cannot demonstrate that it has NOT had an impact on patients/users, then you have an obligation to be transparent. Specifically, you have an obligation to tell those patients/users what happened and help them understand the implications of this problem. There are rules under HIPAA and the FTC PHR breach notification rules in the US, and the GDPR in Europe, that might create specific obligations under these circumstances. But you have an ethical obligation to inform your users in any case, even if these rules do not apply to you.
  • If a cyber-researcher does decide to go public with a privacy or cyber-security problem in a system that you own, do not deny that there was a problem unless you are 100% certain that you are correct. Frequently cyber-researchers will maintain some proof that the problem was legitimate, in case you attack their reputation by claiming the problem was non-existent.
  • Do not attempt to force a cyber-researcher to delete patient data until you are able to completely demonstrate that a problem has been fixed. A cybersecurity researcher may choose to remove patient identities from the data that they hold as a result of the problem, in order to ensure that they have proof that the problem did in fact exist. Attempting to insist that this kind of “de-fanged” proof be erased is pointless.
  • Do not threaten someone who reports a cyber-security problem to you with calling the police. Using this guide, you should be able to tell whether a person is a legitimate cyber-security researcher or a crook. If they are a legitimate cyber-security researcher, then they are trying to help you, and calling the police will do nothing to hurt them; they will simply refuse to work with you further. If they are a crook, you should not threaten to call the police; you should just call the police.

Frequently Asked Questions

Why is this cyber-security researcher insisting that we go public about this problem?

In the bad old days, cyber-researchers would approach companies with significant problems, and the companies would request that the cyber-researcher stay silent… sometimes even paying them to sign NDAs. Then, the company would choose not to fix the problem at all. Eventually, sometimes years later, a blackhat hacker would discover the problem, and a huge number of people would be impacted because the company had not fixed it.

Eventually, the whitehat community realized that they had an obligation to let the public know about cybersecurity problems in order to ensure that the public could be protected. This also resulted in a general unwillingness among white hat hackers to sign Non-Disclosure Agreements (NDAs).

Why won’t the cyber-security researcher just trust me that the problem is fixed? Why is she insisting that I answer all of these questions about my environment?

The white hat hacker has to make a determination about when and whether to go public about a particular problem that they know you have. They need to know when it is safe to do that. They may not trust your ability to assess whether a problem is fixed. This problem can be compounded if you have taken the step of revoking their access to a system, so that they can no longer test it themselves. Generally, you need to give the cyber-researcher all of the information they need to verify that the problem is in fact fixed.

The best way to get around this is to fix the problem, put out a press release, and give the white hat hacker credit (if they wanted that). This negates the problem from the perspective of the cyber-researcher. It is also the right thing to do for your end users.

I am being asked to pay a bitcoin (or other crypto-currency) ransom to unlock my data, should I do that?

That is a hard question. Here are some good rules of thumb. First, you should involve law enforcement. Second, you should be able to recover all data on any critical system that you have. If you cannot recover the data, then you may have to pay the ransom in order to protect patient safety.

Having said this, this is a kind of “digital never event”. If you are in this situation twice, you need to take a careful look at your practices.

I do not have resources for replying to this; we are a small company with limited resources and outsourced IT. What should we do?

There are two answers to this question, and neither of them are great.

First, in the short term, consider reaching out to one of the several organizations that provide help on these topics on a volunteer or non-profit basis.

If you are a clinical organization like a non-profit, hospital, or small outpatient facility, I recommend you explore the resources available from the H-ISAC. If you think you might need to involve law enforcement in cybersecurity issues regularly, try attending your local InfraGard meetings.

If your problem is a privacy-only issue, you might find help with Epic, and if you are a clinical organization with privacy concerns you might reach out to Patient Privacy Rights.

If all else fails, you might even try to get hold of me (fredtrotter.com), but realistically, all I am going to be able to do is try to refer you to someone more specific. Treat this like a last resort.

Second, in the long term, it could be time for some hard decisions.  

Generally, an organization needs to assume that it will be spending about 10% of its budget on IT (whether or not it is in-house).

If we break apart that IT spend, then somewhere between 10-25% should be spent on cybersecurity resources of one kind or another: 10% if you are a typical small organization, and 25% if you choose to host digital resources for patients or other at-risk populations. If you host such systems, it is imperative that you also invest in cybersecurity resources. If you cannot afford to invest in cybersecurity resources, then you cannot afford to host digital resources for clinicians, patients, or other at-risk populations.
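The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is only an illustration of the rule-of-thumb percentages from this guide (10% of budget to IT, and 10% or 25% of IT to cybersecurity), not a formula from any standard:

```python
# Back-of-the-envelope budget check using the rule-of-thumb
# percentages from this guide (illustrative only, not a standard).

def cybersecurity_budget(annual_budget, hosts_patient_systems=False):
    """Estimate IT and cybersecurity spend from a total annual budget."""
    it_spend = annual_budget * 0.10  # ~10% of total budget goes to IT
    # 10% of IT spend for a typical small organization; 25% if you host
    # digital resources for patients or other at-risk populations.
    security_share = 0.25 if hosts_patient_systems else 0.10
    security_spend = it_spend * security_share
    return it_spend, security_spend

it, sec = cybersecurity_budget(1_000_000, hosts_patient_systems=True)
print(f"IT: ${it:,.0f}, cybersecurity: ${sec:,.0f}")
# A $1M organization hosting patient-facing systems should expect
# roughly $100,000 on IT and $25,000 of that on cybersecurity.
```

If the cybersecurity number this produces looks unaffordable, that is exactly the hard conversation discussed below about whether you should be hosting such systems at all.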

So the first thing to do is have the hard conversation: “Can we survive as an independent organization, given our need to purchase cybersecurity services?”

A hacker has demonstrated that they have access to our patients' data. Have I broken the law? What should I do?

Well, both HIPAA and the FTC have rules for this.

First, you should know for certain whether you are a HIPAA covered entity. If you do not already know, then you can find out here. If you are just discovering now that you are HIPAA covered, then gods and angels help you.

If you host patient data online, you may be covered by the FTC PHR breach notification rules.

In all cases, if you are covered by these regulations, and you have had a breach, and you do not notify those affected, then the fines can add up. So figure this out quickly.

It is also unclear how the GDPR (a relatively new European regulation) is going to fit into this. This can be a concern for you in the US if you have EU citizens in your database. Here is a place to start considering this problem.

If you have reached this point and are not covered by any regulation, then you are in an interesting situation. You might not have a legal obligation to notify your users of a breach. But you still have an ethical obligation. I do not want to pretend that this is always cut and dried, or that these decisions are easy to make. I have made several decisions in these grey areas that I continue to question, and I do not have any easy answers. Try to do what you think is right, even if it is hard. More transparency is generally better than less, but panic can be contagious. Weigh your options carefully.

Who are you? What qualifies you to be writing this guide?

I am convinced that no one really cares to read my resume here. But if I am wrong, you can find it at fredtrotter.com.