IoT Device Companies: The FTC is Monitoring Your COPPA Data Deletion Duties and More

Jun 11, 2018

By Jennifer Thompson



Recent Federal Trade Commission (FTC) activities with respect to the Children’s Online Privacy Protection Act (COPPA) demonstrate a continued interest in, and increased scrutiny of, companies subject to COPPA.  While the FTC has pursued companies for alleged violations of all facets of its COPPA Six Step Compliance Plan, most recently the FTC has focused on the obligation to promptly and securely delete all data collected if it is no longer needed.  Taken as a whole, recent FTC activity may indicate a desire on the part of the FTC to expand its regulatory reach.

First, as documented in “IoT Device Companies: Add COPPA to Your ‘To Do’ Lists,” the FTC issued guidance in June 2017 stating that “Internet of Things” (“IoT”) companies selling devices used by children are subject to COPPA and may face increased scrutiny from the FTC with respect to their data collection practices.  While COPPA was originally written to apply to online service providers and websites, this guidance made clear that COPPA’s reach extends to device companies.  In general, this action focused on step 1 of the Compliance Plan (general applicability of COPPA), while also providing some guidance on how companies could comply with step 4 (obtaining verifiable parental consent).

Then, in January 2018, the FTC entered into its first-ever settlement with an internet-connected device company arising from alleged violations of COPPA and the FTC Act.  As discussed in “IoT Device Companies: COPPA Lessons Learned from VTech’s FTC Settlement,” the FTC alleged that the device company violated almost all of the steps in the Compliance Plan, including failure to appropriately post privacy policies (step 2), failure to notify parents of intended data collection activities before collecting data (step 3), failure to verify parental consent (step 4) and failure to implement adequate security measures to protect the data collected (step 6).  The settlement was significant because it solidified the earlier guidance that COPPA governs device companies in addition to websites and online application providers.

In April 2018, the FTC further expanded its regulatory reach by sending warning letters alleging potential COPPA violations to two device/application companies located outside the United States.  Both companies collected precise geolocation data on children in connection with devices worn by the children.  The warning letters clarified that, although the companies were located outside the United States, the FTC deemed them subject to COPPA because: a) their services were directed at children in the United States; and b) the companies knowingly collected data from children in the United States.  Interestingly, one of the targeted companies, Tinitell, Inc., was not even selling its devices at the time the letter was issued.  Nonetheless, the FTC warned that because the Tinitell website indicated the devices would work through September 2018: a) COPPA would continue to apply beyond the sale of the devices; and b) the company remained obligated to take reasonable measures to secure the data it had collected and would continue to collect.

Most recently, the FTC again took to its blog to remind companies that COPPA obligations pursuant to step 6 (implement reasonable procedures to protect the security of kids’ personal information) may extend even beyond the termination of the company’s relationship with the child.  Although “reasonable security measures” is a broad concept, the FTC zeroed in on the duty to delete data that is no longer required.

Section 312.10 of the COPPA Rule states that companies may retain personal information collected from children under the age of 13 “for only as long as is reasonably necessary to fulfill the purpose for which the information was collected.”  Once that purpose has been fulfilled, the information must be deleted using reasonable measures to protect against unauthorized access to, or use of, the information in connection with its deletion.

On May 31, 2018, the FTC published a blog post entitled “Under COPPA, data deletion isn’t just a good idea.  It’s the law.” reminding website and online service providers subject to COPPA (and, by extension, any device companies that market internet-connected devices to children) that there are situations in which COPPA requires them to delete the personal information they have collected from children, even if the parent does not specifically request the deletion.  This guidance establishes an affirmative duty on the company collecting the information to self-police and to securely discard the information as soon as the information is no longer needed.

The post further suggests that all companies review their data retention policies to ensure that those policies adequately address the following questions:

  • What types of personal information are you collecting from children?
  • What is your stated purpose for collecting the information?
  • How long do you need to hold on to the information to fulfill the purpose for which it was initially collected? For example, do you still need information you collected a year ago?
  • Does the purpose for using the information end with an account deletion, subscription cancellation, or account inactivity?
  • When it’s time to delete information, are you doing it securely?
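
For device companies thinking through how to operationalize these questions, below is a minimal sketch of what an automated retention sweep might look like.  It is purely illustrative: the record fields, the one-year inactivity window and the deletion hook are hypothetical assumptions rather than anything prescribed by COPPA or the FTC, and a real implementation would need to mirror the company’s actual data model and stated privacy policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; COPPA sets no fixed number of days --
# it requires deletion once the purpose for collection has been fulfilled.
INACTIVITY_RETENTION = timedelta(days=365)

@dataclass
class ChildRecord:
    """Illustrative stand-in for stored personal information of a child."""
    record_id: str
    account_deleted: bool
    subscription_cancelled: bool
    last_active_at: datetime

def purpose_has_ended(record: ChildRecord, now: datetime) -> bool:
    """Mirrors the FTC's questions: an account deletion, a subscription
    cancellation, or prolonged inactivity ends the purpose for which
    the information was collected."""
    if record.account_deleted or record.subscription_cancelled:
        return True
    return now - record.last_active_at > INACTIVITY_RETENTION

def retention_sweep(records: list[ChildRecord]) -> list[str]:
    """Returns the IDs of records due for secure deletion.  A real system
    would hand each ID to a deletion routine that protects against
    unauthorized access to, or use of, the data during deletion."""
    now = datetime.now(timezone.utc)
    return [r.record_id for r in records if purpose_has_ended(r, now)]
```

A sweep along these lines would typically run on a schedule rather than wait for a parental request, since the duty is affirmative, and the deletion step itself would need to meet Section 312.10’s “reasonable measures” standard, reaching backups and logs as well as primary storage.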

It will be interesting to see whether the FTC continues to focus on COPPA in its enforcement actions.  All told, the FTC has brought around thirty actions pursuant to COPPA.  But recent activity, such as the warning letters to international companies and the new guidance on data deletion, indicates that the FTC may be expanding the reach of COPPA.
