D-Link Continues Challenges to FTC’s Data Security Authority

Sep 26, 2018

By Linda Henry



On September 21, 2018, the FTC and D-Link Systems Inc. each filed a motion for summary judgment in one of the most closely watched recent enforcement actions in privacy and data security law (FTC v. D-Link Systems Inc., No. 3:17-cv-00039).  The dispute, which dates back to early 2017, may have widespread implications for companies’ potential liability for lax security practices, even in the absence of actual consumer harm.

In January 2017, the FTC sued D-Link for engaging in unfair or deceptive acts in violation of Section 5 of the FTC Act in connection with D-Link’s failure to take reasonable steps to secure its routers and Internet-protocol cameras from widely known and reasonably foreseeable security risks.  The FTC’s complaint focused on D-Link’s marketing practices, noting that D-Link’s marketing materials and user manuals included statements in bold, italicized, all-capitalized text that D-Link’s routers were “easy to secure” with “advanced network security.”  D-Link also promoted the security of its IP cameras in its marketing materials, specifically referencing the devices’ security in large capital letters.  The IP camera packaging likewise listed security claims, such as “secure-connection” next to a lock icon, among the product features.

Although a U.S. district court judge dismissed three of the FTC’s six claims in September 2017, the judge also rejected D-Link’s argument that the FTC lacked statutory authority to regulate data security for IoT companies as an unfair practice under Section 5 of the FTC Act.  In its Order Regarding Motion to Dismiss, the court stated that “the fact that data security is not expressly enumerated as within the FTC’s enforcement powers is of no moment to the exercise of its statutory authority.”  In dismissing the FTC’s unfairness claim, however, the court agreed with D-Link that the FTC had failed to provide any concrete facts demonstrating actual harm to consumers.  The court reasoned that, absent such facts, it was just as possible that D-Link’s devices would not cause substantial harm to consumers, and that “the FTC cannot rely on wholly conclusory allegations about potential injury to tilt the balance in its favor.”

Despite the court’s dismissal of the FTC’s unfairness claim, the court indicated that the claim might have survived a motion to dismiss if the FTC had tied the unfairness claim to the representations underlying the deception claims.  The court stated that “a consumer’s purchase of a device that fails to be reasonably secure — let alone as secure as advertised — would likely be in the ballpark of a ‘substantial injury,’ particularly when aggregated across a large group of consumers.”  Although the court’s reasoning indicates that there are limits to the FTC’s data security enforcement capabilities, it did not completely foreclose the possibility that lax security practices might be deemed to violate the unfairness prong of the FTC Act even in the absence of evidence of actual harm to consumers.

The FTC argued in its September 2018 motion for summary judgment that summary judgment is appropriate because there is no dispute that D-Link made representations regarding the security of its devices from unauthorized access, that the devices contained numerous vulnerabilities making them susceptible to unauthorized access, and that D-Link’s security statements were material to consumers.  The FTC noted that “there is no genuine dispute that D-Link routers and IP cameras have contained serious, foreseeable, and easily preventable vulnerabilities permitting unauthorized access; that D-Link knew of these vulnerabilities; and that D-Link sold and marketed these devices as secure anyway.”

In its own motion for summary judgment, D-Link argued that the FTC’s remaining deception claims were based on “expert conjecture” with no evidentiary support.  D-Link stressed that the FTC had failed to present evidence that any identifiable consumer was deceived by D-Link’s marketing statements, or that any of the routers or cameras were actually compromised, and that there was therefore no harm for the court to remedy.

D-Link is significant because its outcome may have a substantial impact on the FTC’s ability to successfully pursue a claim under Section 5 of the FTC Act in the absence of evidence of actual harm or injury to consumers.  The outcome may also shape the FTC’s approach to classifying the informational harm that consumers suffer following a data breach.

Even if the D-Link decision offers more clarity around the scope of the FTC’s regulatory authority on data security, the FTC’s past guidance regarding data security and privacy remains useful when evaluating a company’s data security practices.  Over the past few years, the FTC has repeatedly stressed that a company’s failure to implement reasonable security measures may be considered deceptive or unfair, and has stated that “the touchstone of the FTC’s approach to data security is reasonableness: a company’s data security measures must be reasonable in light of the sensitivity and volume of consumer information it holds, the size and complexity of its data operations, and the cost of available tools to improve security and reduce vulnerabilities.” In addition, the FTC’s motions in D-Link confirm that a company should ensure that it actually follows all security practices it claims to follow.
