LabMD – FTC Face-Off Continues Over FTC’s Data Privacy Authority

Jul 11, 2017

By Linda Henry



The U.S. Court of Appeals for the Eleventh Circuit recently heard oral arguments in LabMD, Inc. v. Federal Trade Commission, the long-running dispute over the FTC’s authority to impose liability for data security breaches even in the absence of actual consumer injury. The Court’s decision, expected in the coming months, will have widespread implications for companies’ potential liability for lax security practices.

The LabMD dispute dates back to 2013, when the FTC filed an administrative complaint against LabMD alleging that the company failed to reasonably protect the security of consumers’ personal data, including protected health information. The FTC maintained that LabMD’s data security practices caused or were likely to cause substantial consumer injury, and thus constituted an unfair business practice under Section 5 of the FTC Act (the “Act”). Rather than settling with the FTC, LabMD became the second company to challenge the FTC’s authority over companies’ data security practices.

In 2015, an Administrative Law Judge (“ALJ”) dismissed the case after finding that the FTC had not met its burden of proving that LabMD had engaged in unfair practices in violation of the Act. Section 5 of the Act provides that a business practice is unfair if it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” The ALJ found that the FTC failed to prove that LabMD’s data security practices were “likely to cause substantial consumer injury” as required by the Act, citing the lack of evidence that anyone had actually misused consumers’ data. The ALJ stated that “[t]o impose liability for unfair conduct under Section 5(a) of the FTC Act, where there is no proof of actual injury to any consumer, based only on an unspecified and theoretical ‘risk’ of a future data breach and identity theft, would require unacceptable speculation and would vitiate the statutory requirements of ‘likely’ substantial consumer injury.”

The full Commission later reversed the ALJ’s decision, maintaining that the ALJ had not applied the correct legal standard. The Commission stated that, “contrary to the ALJ’s holding that ‘likely to cause’ necessarily means that the injury was ‘probable,’ a practice may be unfair if the magnitude of the potential injury is large, even if the likelihood of the injury occurring is low.” According to the FTC, it need not wait for consumers to suffer actual harm before exercising its enforcement authority under Section 5 of the Act.

In November 2016, the Eleventh Circuit granted a stay of enforcement of the FTC’s Final Order pending resolution of the appeal, stating that “there are compelling reasons why the FTC’s interpretation may not be reasonable.” The Court questioned whether the Act covers intangible harms such as those at issue in the LabMD case, and whether the FTC was correct that the phrase “likely to cause” substantial injury to consumers should be interpreted to mean “significant risk” rather than “probable” risk. The Court noted that it did not interpret “the word ‘likely’ to include something that has a low likelihood,” suggesting that the FTC’s interpretation may not be reasonable.

At oral argument before the Eleventh Circuit on June 21, 2017, LabMD argued that the Court should reject the FTC’s position that “purely conceptual privacy harm that the FTC found to exist, whenever there is any unauthorized access to any personal medical information, constitutes substantial injury within the meaning of Section 5 under the FTC Act.” LabMD also urged the Court to consider the Act’s legislative history, pointing to a policy statement on which Congress relied when enacting the statute. According to LabMD, Congress intended to expressly exclude subjective injuries, and the Court therefore should not accept the FTC’s position that “likely injury” under Section 5 of the Act includes low-likelihood harm.

For its part, the FTC maintained that nothing in the Act or its legislative history limits substantial injury to tangible injury, and that companies have an obligation to act reasonably under the circumstances. The Court questioned whether there is any outer limit to the FTC’s enforcement approach, or anything beyond the Commission’s power to reach; the FTC did not provide a direct answer. When asked why the FTC did not use rulemaking to enact regulations addressing data privacy and security issues, the FTC replied that rulemaking is not an effective way to proceed in the cybersecurity context, given the ever-evolving nature of technology and cybersecurity threats. The FTC went on to argue that it is far more sensible to require that a company act reasonably than to rely on rulemaking. The Court pressed for an explanation of how a company could ever know with certainty what it means to act reasonably; the FTC maintained that failure to act reasonably under the circumstances is not a nebulous standard, and stressed that it does not act with hindsight but instead considers what was reasonable at the time the security breach occurred.

As the oral arguments made clear, the Court’s decision is likely to significantly impact the FTC’s data security enforcement authority. If the Eleventh Circuit agrees with LabMD’s position that the FTC must demonstrate concrete consumer harm or injury in order to bring an enforcement action under Section 5 of the Act, speculative injury may no longer be a sufficient basis for liability. If, however, the Court finds in favor of the FTC, companies may face liability for data security breaches if the FTC is able to show a “significant” risk of consumer injury, even if such injury is not probable and has not actually occurred.

 
