FTC Settlement Reminds IoT Companies to Employ Prudent Software Development Practices

Jul 31, 2019

By Linda Henry



Smart home products manufacturer D-Link Systems Inc. (D-Link) has reached a proposed settlement with the Federal Trade Commission after several years of litigation over D-Link’s security practices. The dispute, which dates back to early 2017, has been a closely watched enforcement action in privacy and data security law because of its potential implications for companies’ liability for lax security practices. Although the settlement order does not resolve whether inadequate security practices violate the unfairness prong of the FTC Act even in the absence of evidence of actual harm to consumers, it does provide insight into software development practices that prudent IoT companies routinely implement.

The dispute began in January 2017, when the FTC sued D-Link for engaging in unfair or deceptive acts in violation of Section 5 of the FTC Act in connection with D-Link’s failure to take reasonable steps to secure its routers and Internet Protocol (IP) cameras from widely known and reasonably foreseeable security risks. The FTC’s complaint focused on D-Link’s marketing practices, noting that D-Link’s marketing materials and user manuals stated in bold, italicized, all-capitalized text that D-Link’s routers were “easy to secure” with “advanced network security.” D-Link also promoted the security of its IP cameras in its marketing materials, specifically referencing the devices’ security in large capital letters. In addition, the IP camera packaging listed security claims, such as “secure-connection” next to a lock icon, among the product features.

Although a U.S. district court judge dismissed three of the FTC’s six claims in September 2017, the judge also rejected D-Link’s argument that the FTC lacked statutory authority to regulate data security for IoT companies as an unfair practice under Section 5 of the FTC Act. In its Order Regarding Motion to Dismiss, the court stated that “the fact that data security is not expressly enumerated as within the FTC’s enforcement powers is of no moment to the exercise of its statutory authority.” With respect to the dismissal of the FTC’s unfairness claim, the court agreed with D-Link that the FTC had failed to allege concrete facts demonstrating actual harm to consumers, reasoning that in the absence of such facts it was just as possible that D-Link’s devices would not cause substantial harm, and that “the FTC cannot rely on wholly conclusory allegations about potential injury to tilt the balance in its favor.”

Despite the court’s dismissal of the FTC’s unfairness claim, the court indicated that the claim might have survived a motion to dismiss if the FTC had tied the unfairness claim to the representations underlying the deception claims. The court stated that “a consumer’s purchase of a device that fails to be reasonably secure — let alone as secure as advertised — would likely be in the ballpark of a ‘substantial injury,’ particularly when aggregated across a large group of consumers.” Although the court’s reasoning indicated that there are limits to the FTC’s data security enforcement capabilities, it did not completely foreclose the possibility that lax security practices might be deemed to violate the unfairness prong of the FTC Act even in the absence of evidence of actual harm to consumers.

In September 2018, the FTC and D-Link each filed a motion for summary judgment, and the bench trial scheduled for January 2019 was postponed due to the government shutdown.

Although the proposed settlement announced on July 2, 2019 does not include any financial penalties or a finding of liability against D-Link, the settlement order requires D-Link to implement, for twenty years, a comprehensive software security program designed to protect the security of its IP cameras and routers.

Pursuant to the settlement, D-Link will be required to document its security planning, perform threat modeling to identify internal and external risks to data security, conduct pre-release vulnerability testing of every software release, and perform ongoing code maintenance and annual testing of security safeguards. D-Link’s ongoing requirements once products have been released for consumer purchase also include ongoing monitoring for security vulnerabilities, performing automatic firmware updates and establishing a system to accept vulnerability reports from security researchers.

D-Link will also be required to obtain an independent third-party assessment of its security program every other year for the next decade from an FTC-approved assessor, and a senior manager of D-Link will be required to certify annually that D-Link is in compliance with the order. Of note, the order allows D-Link to satisfy the comprehensive software security program requirement by having the assessor certify D-Link’s compliance with the International Electrotechnical Commission’s (IEC) standard for a secure product development lifecycle. Although the FTC has incorporated industry standards into previous orders, the inclusion of the IEC standard is significant because the FTC has not previously cited an IEC standard as an approved benchmark for secure product development lifecycle requirements.

In a published FTC statement regarding the settlement order, the FTC reminded IoT companies to apply sound security practices when developing new products, including training engineers in secure coding, verifying that privacy and security features work, and testing for and remediating common vulnerabilities.

Although the settlement order leaves open the question as to whether a consumer’s purchase of a device that fails to be reasonably secure would constitute a “substantial injury,” D-Link’s counsel noted that: “The Court’s dismissal of the Complaint’s ‘unfairness’ claim for failure to plead actual consumer harm will hopefully refocus the FTC’s efforts on practices that actually injure identifiable consumers, providing technology companies with additional certainty necessary for permissionless and evolving innovation.”
