LabMD – FTC Face-Off Continues Over FTC’s Data Privacy Authority

Jul 11, 2017

By Linda Henry


The U.S. Court of Appeals for the Eleventh Circuit recently heard oral arguments in LabMD, Inc. v. Federal Trade Commission, the long-running dispute over the FTC’s authority to impose liability for data security breaches even in the absence of actual consumer injury. The Court’s decision, which is expected in the coming months, will have widespread implications for companies’ potential liability for lax security practices.

The LabMD dispute dates back to 2013 when the FTC filed an administrative complaint against LabMD, alleging that it failed to reasonably protect the security of consumers’ personal data, including protected health information. The FTC maintained that LabMD’s data security practices caused or were likely to cause substantial consumer injury, and thus constituted an unfair business practice under Section 5 of the FTC Act (the “Act”). Rather than settling the complaint with the FTC, LabMD became the second company to challenge the FTC’s authority over companies’ data security practices.

In 2015, an Administrative Law Judge (“ALJ”) dismissed the case after finding that the FTC had not met its burden of proof for demonstrating that LabMD had engaged in unfair practices in violation of the Act. Section 5 of the Act provides that a business practice is unfair if it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” The ALJ found that the FTC failed to prove that LabMD’s data security practices were “likely to cause substantial consumer injury” as required by the Act, and cited the lack of evidence that anyone actually misused consumers’ data. The ALJ stated: “[t]o impose liability for unfair conduct under Section 5(a) of the FTC Act, where there is no proof of actual injury to any consumer, based only on an unspecified and theoretical ‘risk’ of a future data breach and identity theft, would require unacceptable speculation and would vitiate the statutory requirements of ‘likely’ substantial consumer injury.”

The FTC later reversed the ALJ’s decision, maintaining that the ALJ had not applied the correct legal standard in making the determination. The FTC stated: “contrary to the ALJ’s holding that ‘likely to cause’ necessarily means that the injury was ‘probable,’ a practice may be unfair if the magnitude of the potential injury is large, even if the likelihood of the injury occurring is low.” According to the FTC, it need not wait for consumers to suffer actual harm before exercising its enforcement authority under Section 5 of the Act.

In November 2016, the Eleventh Circuit Court of Appeals granted a stay of enforcement of the FTC’s Final Order until the pending appeal is resolved, stating that “there are compelling reasons why the FTC’s interpretation may not be reasonable.” The Court questioned whether the Act covers intangible harms such as those at issue in the LabMD case, and also whether the FTC was correct that the phrase “likely to cause” substantial injury to consumers should be interpreted to mean “significant risk” rather than “probable” risk. The Court noted that it did not interpret “the word ‘likely’ to include something that has a low likelihood,” thus finding that the FTC’s interpretation was not reasonable.

In the oral arguments before the Eleventh Circuit on June 21, 2017, LabMD argued that the Court should reject the FTC’s position that the “purely conceptual privacy harm that the FTC found to exist, whenever there is any unauthorized access to any personal medical information, constitutes substantial injury within the meaning of Section 5 under the FTC Act.” In addition, LabMD urged the Court to consider the legislative history of the Act, pointing to a policy statement on which Congress relied when enacting the Act. According to LabMD, Congressional intent was to expressly exclude subjective injuries, and as a result, the Court should not accept the FTC’s position that “likely injury” under Section 5 of the Act includes low-likelihood harm.

In the oral arguments, the FTC maintained that nothing in the Act or its legislative history limits substantial injury to tangible injury, and that companies have an obligation to act reasonably under the circumstances. The Court questioned whether there is an outer limit to the FTC’s enforcement approach, or whether anything would be beyond the Commission’s power to reach; the FTC did not provide a direct answer to this question. When asked by the Court why the FTC did not use rulemaking to enact regulations addressing data privacy and security issues, the FTC replied that rulemaking is not an effective way to proceed in the cybersecurity context due to the ever-evolving nature of technology and cybersecurity threats. The FTC went on to argue that it is much more sensible to require that a company act reasonably than to rely on rulemaking. The Court pressed for an explanation of how a company could ever know with certainty what it means to act reasonably; the FTC maintained that failure to act reasonably under the circumstances is not a nebulous standard, and stressed that it does not act by using hindsight but rather considers what is reasonable at the time the security breach occurs.

As the oral arguments made clear, the Court’s decision is likely to significantly impact the FTC’s data security enforcement authority. If the Eleventh Circuit agrees with LabMD’s position that the FTC must demonstrate concrete consumer harm or injury in order to bring an enforcement action under Section 5 of the Act, speculative injury may no longer be a sufficient basis for liability. If, however, the Court finds in favor of the FTC, companies may face liability for data security breaches if the FTC is able to show a “significant” risk of consumer injury, even if such injury is not probable and has not actually occurred.
