Helping Counsel Define Reasonable Security for Clients and Organizations
To the corporate CISO, cybersecurity breaches are a one-two punch. The first punch is, of course, the attack or mistake that caused the breach. Nobody wants to be caught unprepared, or to be the cause of harm to anyone exposed by a breach. The second punch, so say the statistics, is the more damaging one: the legal fees, litigation settlements, and regulatory fines that come with adjudication. According to NetDiligence Cyber Claims Studies over the past few years, two-thirds of the breach-related claims paid by insurance carriers cover liability-related costs for larger enterprises, and about half of the claims paid to smaller organizations are for liability-related costs. So take the near-term costs caused by hackers and double them. That's the cost that lawyers cause.
The power of that second punch depends largely on how well the breached organization has defined the word “reasonable.” Both the cybersecurity community and privacy regulations use the word as the standard for compliance, but neither provides a clear definition, calculus, or test for reasonability. As a result, organizations seldom use reasonability as an operating metric for their cybersecurity and privacy programs. When a breach occurs, both plaintiffs’ attorneys and regulators gladly opine on what reasonable security isn’t. That punch lands hard. Why, the breached organizations ask, was no definition for reasonable security available to them prior to the breach?
In June of 2018, the vagueness of the FTC’s use of the word “reasonable” or “reasonably defined data-security program” caused the 11th Circuit Court of Appeals to find the FTC’s order against LabMD, a breached organization, unenforceable. The public was left to presume that if the FTC were to continue to use “reasonableness” as a standard, it would have to define it.
The Sedona Conference Working Group 11 (WG11) has provided that definition. In February 2021, The Sedona Conference released its Commentary on a Reasonable Security Test to help the regulatory and litigation communities “move the law forward in a reasoned and just way.”
The test is a novel advancement of the Learned Hand Rule. It states that a party acted reasonably if no alternative safeguard would have created an added benefit greater than the added burden of that safeguard. The commentary presents the test both descriptively and as an equation reminiscent of the Hand formula: B₂ − B₁ < (P × H)₁ − (P × H)₂, where ‘B’ denotes the burden of a safeguard, ‘P × H’ denotes the probability of harm multiplied by its magnitude (the risk), subscript ‘1’ denotes the control (or lack of control) in place at the time of a breach, and subscript ‘2’ denotes an alternative control that a plaintiff or regulator would suggest should have been used.
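To make the mechanics concrete, the inequality can be evaluated directly. The sketch below uses entirely hypothetical dollar figures and probabilities (none drawn from the commentary); it simply shows the arithmetic of comparing added burden against risk reduction.

```python
# Sketch of the commentary's test with hypothetical figures.
# Subscript 1: the control (or lack of one) in place at the time of the breach.
# Subscript 2: the alternative control a plaintiff or regulator proposes.

def is_reasonable(b1, b2, p1, h1, p2, h2):
    """Return True if the conduct was reasonable under the test:
    the added burden of the alternative safeguard (B2 - B1) meets or
    exceeds the risk it would have removed ((P x H)1 - (P x H)2)."""
    added_burden = b2 - b1
    risk_reduction = (p1 * h1) - (p2 * h2)
    return not (added_burden < risk_reduction)

# Hypothetical example: the proposed safeguard costs $400k more per year,
# but would only reduce expected annual harm by $200k.
print(is_reasonable(b1=100_000, b2=500_000,
                    p1=0.05, h1=10_000_000,
                    p2=0.03, h2=10_000_000))  # prints True
```

If the alternative safeguard were instead cheap relative to the risk it removes, the inequality would hold and the conduct at the time of the breach would fail the test.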
But the test also includes “utility” as a factor in the equation. The utility of risky conduct may be so great that an added control would reduce that utility unacceptably. For example, when gymnasiums or other open spaces are used to provide healthcare to a community in an emergency, clinicians are providing a great benefit in that public environment. But they also increase the risk of people overhearing each other’s diagnoses. If patient privacy were ensured by bringing patients to private rooms, surely fewer people would receive care in an urgent situation. That reduced utility would contribute to the harm factor in (P × H)₂.
Because reasonableness is foundational to security and privacy regulations as well as litigation, the Sedona Conference’s commentary has tremendous implications.
The commentary establishes a strong basis for its test by demonstrating its universality in the regulatory, litigation, and information security domains. Because risk analysis (evaluating the likelihood and magnitude of harm and the burden of safeguards to reduce risk) is used in all three domains, the commentary establishes that all parties interested in the test should already be familiar with its principles. This argument also provides a useful analytical tool for managers who plan security priorities, budgets, tactics, and resources.
Organizations that must protect sensitive information and systems already are obligated to conduct risk assessments by some standard or other. Regulations and information security standards almost without exception require risk analysis or a demonstration of reasonableness which, in our parlance, requires risk analysis. Risk analysis is not simply a required exercise. It’s an intentional estimation of the likelihood and magnitude of bad events. It’s also the rational basis for prioritizing controls and determining their effectiveness given the foreseeable breaches.
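Prioritizing controls on the basis of foreseeable risk, as described above, amounts to estimating likelihood times magnitude for each threat and comparing the risk each candidate safeguard removes against its burden. The sketch below illustrates the idea; all control names, probabilities, and dollar figures are hypothetical.

```python
# Rank candidate safeguards by the risk they reduce relative to their burden.
# Risk is estimated as probability of a bad event times magnitude of harm.
# All names and figures below are hypothetical illustrations.
safeguards = [
    {"name": "MFA rollout", "burden": 50_000,
     "risk_before": 0.10 * 2_000_000, "risk_after": 0.02 * 2_000_000},
    {"name": "Full-disk encryption", "burden": 120_000,
     "risk_before": 0.05 * 4_000_000, "risk_after": 0.04 * 4_000_000},
    {"name": "24x7 SOC", "burden": 900_000,
     "risk_before": 0.08 * 3_000_000, "risk_after": 0.05 * 3_000_000},
]

for s in safeguards:
    s["risk_reduced"] = s["risk_before"] - s["risk_after"]
    s["ratio"] = s["risk_reduced"] / s["burden"]  # risk removed per dollar of burden

for s in sorted(safeguards, key=lambda s: s["ratio"], reverse=True):
    print(f'{s["name"]}: reduces ${s["risk_reduced"]:,.0f} of risk '
          f'for ${s["burden"]:,} of burden')
```

A ranking like this gives management a defensible, documented rationale for budgets and priorities, which is exactly the posture the commentary's test rewards after a breach.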
Organizations that use risk assessments as a basis for their budgets, priorities, and tactics, and who include the risk of harm to themselves and others, are situating themselves well, both in terms of good practice and in their ability to defend their choices when things go wrong.
This is not just a result of the commentary, of course. This is because we would be acting as litigators, regulators, and the cybersecurity community say we should act. This is the basis of due care. The commentary’s novelty is that it presents a common language for lawyers and the cybersecurity community that is based on each of their “native” due care practices.
Consider the entangled negotiations that occur after a security incident. Plaintiffs and defense, regulators and counsel, arbitrators and interested parties … all struggle to lay out what would constitute a reasonable security control. But in an ex post situation, these negotiations often aim to identify controls that should be used in the future, whether in the form of a settlement or an injunction.
Defense attorneys and counsel often find themselves in the ironic position of negotiating down the investments their clients should make to secure environments that were recently breached. Such negotiation is, of course, short-sighted and may increase the risk of recurring breaches at a demonstrably insecure client.
Similarly, plaintiffs and regulators insist that safeguards that they have no practical experience operating must be implemented effectively in the organization they are admonishing. Safeguards can create their own risks as well as benefits, and regulators know this. This is why regulatory rulemaking requires cost-benefit analysis and comments from the regulated public.
Those who have been involved in ex post negotiations like these have hopefully sought a more rational means for determining the reasonableness of security controls after a breach occurs. The commentary’s test provides that, and does so in a way that regulators, plaintiffs, and security practitioners can agree on.
How do we know this? For two reasons. First, a careful reader will note that the commentary’s co-authors are prominent figures in ex post cybersecurity matters, including regulators, defense counsel, members of the plaintiffs’ bar, and cybersecurity experts. The Sedona Conference ensures a non-partisan work product and has produced one here. Second, keeping in mind that the test is not sui generis, invented whole in the commentary, but based on a long legacy of practice, this author has used the principles of the test to guide counsel for breached organizations, plaintiffs, and regulators, and has found that injunctive terms and fines have been both reduced and negotiated quickly. And why wouldn’t they be? The test is derived from a common practice in determining due care that simply needed a common translation.
There is considerable work to be done now that the legal and cybersecurity communities have a common basis for discussing and operationalizing reasonableness.
Insurance carriers should evaluate their policy holders’ risk based on whether they plan and operate their cybersecurity programs using the test. If two-thirds of cybersecurity claims costs pay for liability-related issues, those claims will be smaller for policy holders who can demonstrate reasonableness to regulators and plaintiffs. As well, insurance carriers that offer risk engineering services should make sure that their policy holders have embedded the test in their risk analysis, budgeting, prioritization, and decision-making.
Information security standards communities such as the International Organization for Standardization (ISO), the PCI Security Standards Council, National Institute of Standards and Technology (NIST), Factor Analysis for Information Risk (FAIR), Applied Information Economics, and others should adjust their risk analysis methods to explicitly require inclusion of foreseeably harmed parties in impact evaluations, and to weigh the risks of safeguards against the risks they address to ensure reasonableness. The Center for Internet Security, Inc. (CIS®) did that in 2018 with their risk assessment method (CIS RAM).
Regulators who are (thankfully) resistant to telling organizations exactly how they should manage their risks can now point them to the principles they should address while analyzing and managing the risks they pose to themselves and others. After all, the commentary’s test conforms to Executive Orders 12866 and 13563, which both require a risk-based, cost-benefit analysis in regulatory rulemaking and enforcement. Injunctive orders and settlements are excellent opportunities for regulators to reference the test, because doing so will likely speed up the settlement process and will not require passage through the rulemaking process before it is shared with the public.
From the Gramm-Leach-Bliley Safeguards Rule through to the most recent state-based privacy regulations, liability for information security breaches has teetered on a poorly defined word that is now clearly defined. In the decades of its absence, the marketplace has not waited for the word “reasonable” to define itself concretely, but has instead filled the void with misleading, if well-intended, risk analysis methods that have left organizations empty-handed when asked by lawyers whether their controls were reasonable.
We now have a test for reasonable security practices that brings together the traditions of regulators, litigators, and information security communities to balance burdens of safeguards against the risk of harm to ourselves and others.
We should now expect that those who use the test well will prevail when asked whether their controls were reasonable.
Founded in 1996, HALOCK Security Labs is a thought-leading information security firm that combines strengths in strategic management consulting with deep technical expertise. HALOCK’s Purpose Driven Security® service philosophy is to apply just the right amount of security to protect critical assets, satisfy compliance requirements, and achieve corporate goals.
As principal authors of CIS Risk Assessment Method (RAM) and board members of The Duty of Care Risk Analysis (DoCRA) Council, HALOCK offers the unique insight to help organizations define their acceptable level of risk and establish “duty of care” for cybersecurity. Through this risk assessment method, businesses can evaluate cyber risk that is clear to legal authorities, regulators, executives, lay people, and security practitioners.
HALOCK’s cyber security programs include: Penetration Testing, Risk Management, Third-Party Risk Management, Incident Response Readiness, Sensitive Data Management & Scanning, Threat Hunting or Managed Detection and Response (MDR). HALOCK also provides Risk Assessments, Compliance (HIPAA, PCI DSS, CCPA, CMMC-readiness), Security Architecture Reviews, Security Awareness and Incident Response Training, Forensics and Compromise Assessments, Workforce and CISO Advisory/vCISO services.