PRESS RELEASE: Dawn Ingley, Senior Counsel, Patrick Law Group, LLC Has Earned CIPP/US Certification

FOR RELEASE: August 1, 2018

Dawn Ingley, Senior Counsel, Patrick Law Group, LLC Has Earned CIPP/US Certification

Atlanta, Georgia – August 31, 2018 – Dawn Ingley, Senior Counsel, Patrick Law Group, LLC, has earned the ANSI-accredited Certified Information Privacy Professional/United States (CIPP/US) credential through the International Association of Privacy Professionals (IAPP). Ms. Ingley has more than 14 years of experience representing mid-size and large corporations, primarily in the areas of technology, information security and data privacy, mergers and acquisitions, and general commercial contracting.

Privacy professionals are the arbiters of trust in today’s data-driven global economy.  They help organizations manage rapidly evolving privacy threats and mitigate the potential loss and misuse of information assets.  The IAPP is the first organization to publicly establish standards in professional education and testing for privacy and data protection. IAPP privacy certification is internationally recognized as a reputable, independent program that professionals seek and employers demand.

The CIPP is the global standard in privacy certification. Developed and launched by the IAPP with leading subject matter experts, the CIPP is the world’s first broad-based global privacy and data protection credentialing program. The CIPP/US demonstrates a strong foundation in U.S. private-sector privacy laws and regulations and an understanding of the legal requirements for the responsible transfer of sensitive personal data to and from the U.S., the EU and other jurisdictions. Ms. Ingley joins the ranks of more than 10,000 professionals worldwide who currently hold one or more IAPP certifications.

About the IAPP

The International Association of Privacy Professionals (IAPP) is the largest and most comprehensive global information privacy community and resource. Founded in 2000, the IAPP is a not-for-profit organization that helps define, support and improve the privacy profession globally. More information about the IAPP is available at www.iapp.org.

Good, Bad or Ugly? Implementation of Ethical Standards In the Age of AI

By Dawn Ingley



With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that their myriad uses of AI across platforms are reasonable, fair and non-discriminatory.  Yet, to date, very few details have emerged regarding those teams—who are the members, and what standards are applied to the creation and implementation of AI?  Axon, the manufacturer behind community policing products and services such as body cameras and related video analytics, has embarked upon the creation of an ethics board.  Google’s DeepMind Ethics and Society division (DeepMind) likewise seeks to balance the innovative potential of AI against the dangers of a technology that is not inherently “value-neutral” and that could lead to outcomes ranging from good to bad to downright ugly.  Indeed, a peek behind both ethics programs may offer some interesting insights into the direction of all corporate AI ethics programs.

Diversity of Backgrounds

Axon’s ethics board includes not only AI experts, but also a diverse sampling from the related fields of computer science and engineering, privacy and data protection, and civil liberties.  Its members include the following individuals:

  • Ali Farhadi, Professor of Computer Science and Engineering, University of Washington
  • Barry Friedman, Professor and Director of the Policing Project at New York University School of Law
  • Jeremy Gillula, Privacy and Civil Liberties Technologist
  • Jim Bueerman, President of the Police Foundation
  • Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute
  • Tracy Ann Kosa, Professor at Seattle University School of Law
  • Vera Bumpers, Chief of Houston Metro Police Department and President of the National Organization of Black Law Enforcement Executives
  • Walter McNeil, Sheriff of Leon County, Florida Sheriff’s Office and prior President of the International Association of Chiefs of Police

Obviously, Axon’s goal was to establish a team that could evaluate use and implementation from all angles, ranging from the law enforcement officials who employ such technologies to the experts who help create and shape the legislation governing their use.  Axon may be moving in the direction of facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged these types of technologies for years.  Thus far, one of the chief concerns surrounding facial recognition is its penchant for racial and gender bias—specifically, higher error rates for both women and African-Americans.  If Axon does, indeed, move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.

Core Values

In addition to a similar commitment to diversity, DeepMind has articulated key principles that reflect its owner’s more expansive footprint across technology platforms:

  • Social benefit: AI should “serve the global social and environmental good…to build fairer and more equal societies…”
  • Rigorous and evidence-based: Technical research must conform to the highest academic research standards, including peer review.
  • Transparent and open: DeepMind will be open as to “who we work with and what projects we fund.”
  • Collaboration and inclusion: Research must be “accountable to all of society.”

DeepMind’s focus on managing the risks of AI spans an even broader canvas than Axon’s.  In furtherance of its key principles, DeepMind seeks to answer several key questions:

  • What are the societal risks when AI fails?
  • How can humans remain in control of AI?
  • How can dangerous applications of AI in the contexts of terrorism and warfare be avoided?

Though much of the AI industry has yet to provide details as to its own ethics programs, some of the industry’s blue chips have acted in unison to establish a more formalized set of guidelines.  In 2016, the Partnership on AI to Benefit People and Society was founded collectively by Amazon, Apple, Google, Facebook, IBM and Microsoft.  Six pillars form the basis of this partnership:

  • Safety-critical AI: Tools used to perform human discretionary tasks must be “safe, trustworthy and aligned with ethics…”
  • Fair, transparent and accountable AI: Systems must be designed so as to be alert to possible biases resulting from the use of AI.
  • Collaborations between people and AI systems: AI is best harnessed in a close collaboration between humans and the systems themselves.
  • AI, labor and the economy: “Competition and innovation is encouraged and not stifled.”
  • Social and societal influences of AI: While AI has the potential to provide useful assistance and insights to humans, users must also be sensitive to its potential to subtly influence humans.
  • AI and social good: The sky’s the limit in terms of AI’s potential in addressing long-standing societal ills.

While these best practices are a promising start, the industry continues to lack particulars as to how the guidelines will be put into practice.  It is likely that consumers will maintain a healthy skepticism until more specific guardrails are provided that offer compelling evidence of the good, rather than the bad and the ugly.


Technology Alert – PLG Joins Blockchain Initiative

Patrick Law Group Becomes Member of Legal Center of Excellence, a New Global Platform for Studying Developments in Blockchain

Patrick Law Group is pleased to announce that the firm has joined a new global legal initiative, the Legal Center of Excellence (LCoE), which is devoted to advancing legal thought leadership and sharing best practices regarding blockchain technology. The LCoE was established by R3, a London, New York and Singapore-based enterprise software firm that is working with over 200 institutions to develop applications on its distributed ledger platform, Corda.

As a member of the LCoE, PLG will have access to R3’s research on blockchain and monthly demonstrations that will provide PLG attorneys insight into real world blockchain applications. Richard Gendal Brown, R3’s chief technology officer, said the LCoE “will allow R3 to directly engage with the lawyers that will be advising on and helping draft the smart contracts used by the network of Corda users across the globe.”

PLG’s participation in the LCoE demonstrates PLG’s commitment to staying at the forefront of legal and technological developments.  We look forward to working with our Clients on blockchain and other emerging technologies.

IoT Device Companies: The FTC is Monitoring Your COPPA Data Deletion Duties and More

By Jennifer Thompson



Recent Federal Trade Commission (FTC) activities with respect to the Children’s Online Privacy Protection Act (COPPA) demonstrate a continued interest in, and increased scrutiny of, companies subject to COPPA.  While the FTC has pursued companies for alleged violations of all facets of its COPPA Six Step Compliance Plan, most recently the FTC has focused on the obligation to promptly and securely delete all data collected if it is no longer needed.  Taken as a whole, recent FTC activity may indicate a desire on the part of the FTC to expand its regulatory reach.

First, as documented in “IoT Device Companies: Add COPPA to Your “To Do” Lists,” the FTC issued guidance in June 2017 that “Internet of Things” (IoT) companies selling devices used by children are subject to COPPA and may face increased scrutiny from the FTC with respect to their data collection practices.  While COPPA was originally written to apply to online service providers and websites, this guidance made it clear that COPPA’s reach extends to device companies.  In general, this action focused on step 1 of the Compliance Plan (general applicability of COPPA), while also providing some guidance on how companies can comply with step 4 of the Compliance Plan (obtaining verifiable parental consent).

Then, in January 2018, the FTC entered its first-ever settlement with an internet-connected device company over alleged violations of COPPA and the FTC Act.  As discussed in “IoT Device Companies: COPPA Lessons Learned from VTech’s FTC Settlement,” the FTC alleged violations by the device company of almost all the steps in the Compliance Plan, including failure to appropriately post privacy policies (step 2), failure to notify parents of the intended data collection activities prior to collection (step 3), failure to verify parental consent (step 4) and failure to implement adequate security measures to protect the data collected (step 6).  The significance of the settlement was that it solidified the earlier guidance that COPPA governs device companies in addition to websites and online application providers.

In April 2018, the FTC further expanded its regulatory reach by sending warning letters alleging potential COPPA violations to two device/application companies located outside the United States.  Both companies collected precise geolocation data on children in connection with devices worn by the children.  The warning letters clarified that, although located outside the United States, the companies were deemed subject to COPPA because: a) their services were directed at children in the United States; and b) the companies knowingly collected data from children in the United States.  Interestingly, one of the targeted companies, Tinitell, Inc., was not even selling its devices at the time of the letter’s issuance.  Nonetheless, the FTC warned that since the Tinitell website indicated that the devices would work through September 2018: a) COPPA would continue to apply beyond the sale of the devices; and b) the company remained obligated to take reasonable measures to secure the data it had collected and would continue to collect.

Most recently, the FTC again took to its blog to remind companies that COPPA obligations pursuant to step 6 (implement reasonable procedures to protect the security of kids’ personal information) may extend even beyond the termination of the company’s relationship with the child.  Although “reasonable security measures” is a broad concept, the FTC zeroed in on the duty to delete data that is no longer required.

Section 312.10 of COPPA states that companies may keep personal information obtained from children under the age of 13 “for only as long as is reasonably necessary to fulfill the purpose for which the information was collected.”  Once that purpose has been fulfilled, the information must be deleted using reasonable measures to protect against unauthorized access to, or use of, the information in connection with its deletion.

On May 31, 2018, the FTC posted a blog entitled “Under COPPA, data deletion isn’t just a good idea.  It’s the law.”, which reminds website and online service providers subject to COPPA (and, by extension, any device companies that market internet-connected devices to children) that there are situations in which COPPA requires them to delete the personal information they have collected from children, even if the parent does not specifically request deletion.  This guidance establishes an affirmative duty on the company collecting the information to self-police and to securely discard the information as soon as it is no longer needed, even absent a customer request.

The blog further suggests that all companies review their data retention policies to ensure that the stated policies adequately address the following questions (a sketch automating such checks appears after the list):

  • What types of personal information are you collecting from children?
  • What is your stated purpose for collecting the information?
  • How long do you need to hold on to the information to fulfill the purpose for which it was initially collected? For example, do you still need information you collected a year ago?
  • Does the purpose for using the information end with an account deletion, subscription cancellation, or account inactivity?
  • When it’s time to delete information, are you doing it securely?
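
To make these questions concrete, the following is a minimal sketch of how a device company might flag child records whose collection purpose has been fulfilled.  The record fields, the 30-day grace period and the deletion hook are all hypothetical assumptions for illustration; nothing here is prescribed by COPPA or the FTC.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical retention rule: a child's record is kept only while the
    # account is open; a 30-day post-closure grace period is assumed here,
    # not prescribed by COPPA.
    RETENTION_AFTER_CLOSURE = timedelta(days=30)

    @dataclass
    class ChildRecord:
        user_id: str
        purpose: str                         # stated purpose for collection
        collected_at: datetime
        account_closed_at: datetime | None   # None while the account is active

    def records_due_for_deletion(records, now=None):
        """Flag records whose collection purpose has been fulfilled."""
        now = now or datetime.now(timezone.utc)
        return [r for r in records
                if r.account_closed_at
                and now - r.account_closed_at > RETENTION_AFTER_CLOSURE]

    def securely_delete(record):
        # Placeholder: actual deletion must use reasonable measures to
        # prevent the data from being accessed or used in connection with
        # its deletion (e.g., purging replicas and backups as well).
        print(f"deleting record for {record.user_id}")

A production job would run on a schedule and would extend the “purpose fulfilled” test to cover subscription cancellations and account inactivity, consistent with the FTC’s questions above.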

It will be interesting to see whether the FTC continues to focus on COPPA in its enforcement actions.  All told, the FTC has brought around thirty actions pursuant to COPPA, but recent activity, like the warning letters to international companies and the recent guidance on data deletion, indicates that the FTC may be expanding the arena for COPPA applicability.


Predictive Algorithms in Sentencing: Are We Automating Bias?

By Linda Henry



Although algorithms are often presumed to be objective and unbiased, recent investigations into algorithms used in the criminal justice system to predict recidivism have produced compelling evidence that such algorithms may be racially biased.  As a result of one such investigation by ProPublica, the New York City Council recently passed the first bill in the country designed to address algorithmic discrimination in government agencies. The goal of New York City’s algorithmic accountability bill is to monitor algorithms used by municipal agencies and provide recommendations as to how to make the City’s algorithms fairer and more transparent.

The criminal justice system is one area in which governments are increasingly using algorithms, particularly in connection with creating risk assessment profiles of defendants.  For example, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a computer algorithm used to score a defendant’s risk of recidivism and is one of the risk assessment tools most widely used by courts to predict recidivism.  COMPAS creates a risk assessment by comparing information regarding a defendant to historical data from groups of similar individuals.
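
As a purely illustrative sketch of that “compare the defendant to similar historical cases” approach (COMPAS’s actual methodology is proprietary and unpublished), a risk score can be computed as the recidivism rate among the k most similar prior cases.  Every feature, case and number below is invented:

    import math

    # Purely illustrative: score a defendant by the recidivism rate among
    # the k most similar prior cases. Features and numbers are invented.
    HISTORY = [
        # (age, prior_offenses, reoffended)
        (19, 2, 1), (23, 0, 0), (31, 5, 1),
        (45, 1, 0), (27, 3, 1), (52, 0, 0),
    ]

    def risk_score(age, priors, k=3):
        dist = lambda case: math.hypot(case[0] - age, case[1] - priors)
        nearest = sorted(HISTORY, key=dist)[:k]
        return sum(case[2] for case in nearest) / k

    print(risk_score(age=22, priors=2))  # 0.67: two of the three most
                                         # similar past cases reoffended

Note that such a score is only as good as the historical data behind it, which is precisely the concern explored below.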

COMPAS is only one example of proprietary software being used by courts to make sentencing decisions, and states are increasingly using software risk assessment tools such as COMPAS as a formal part of the sentencing process.  Because many such algorithms are proprietary, the source code is not published and is not subject to state or federal open records laws.  As a result, the opacity inherent in proprietary programs such as COMPAS prevents third parties from seeing the data and calculations that impact sentencing decisions.

Challenges by defendants to the use of such algorithms in criminal sentencing have been unsuccessful.  In 2017, Eric Loomis, a Wisconsin defendant, unsuccessfully challenged the use of the COMPAS algorithm as a violation of his due process rights.  In 2013, Loomis was arrested and charged with five criminal counts related to a drive-by shooting.  Loomis maintained that he was not involved in the shooting but pled guilty to driving a motor vehicle without the owner’s permission and fleeing from police.  At sentencing, the trial court judge sentenced Loomis to six years in prison, noting that the court ruled out probation based in part on the COMPAS risk assessment, which suggested Loomis presented a high risk to re-offend.[1]  Loomis appealed his sentence, arguing that the use of the risk assessment violated his constitutional right to due process.  The Wisconsin Supreme Court ultimately affirmed the lower court’s decision that it could utilize the risk assessment tool in sentencing, and also found no violation of Loomis’ due process rights.  In 2017, the U.S. Supreme Court denied Loomis’ petition for writ of certiorari.

The use of computer algorithms in risk assessments has been touted by some as a way to eliminate human bias in sentencing.  Although COMPAS and other risk assessment software programs use algorithms that are race neutral on their face, the algorithms frequently use data points that can serve as proxies for race, such as ZIP codes, education history and family history of incarceration.[2]  In addition, critics of such algorithms question the methodologies used by programs such as COMPAS, since methodologies (which are necessarily created by individuals) may unintentionally reflect human bias.  If the data sets used to train the algorithms are not truly objective, human bias may be unintentionally baked into the algorithm, effectively automating human bias.
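
To see how a facially race-neutral feature can act as a proxy, consider the following small simulation, in which all groups, ZIP codes and rates are invented.  The two groups behave identically, but the training labels (arrests) reflect uneven policing by ZIP code, so a “race-blind” score keyed to ZIP code reproduces the disparity:

    import random

    random.seed(0)

    # Invented simulation: two groups with identical underlying behavior,
    # but ZIP code correlates with group, and arrests (the training label)
    # reflect heavier policing in one ZIP.
    def make_person():
        race = random.choice(["A", "B"])
        # Group A lives in ZIP 10001 90% of the time; group B, 10%.
        zip_code = "10001" if (race == "A") == (random.random() < 0.9) else "10002"
        offended = random.random() < 0.3           # identical across groups
        arrest_rate = 0.9 if zip_code == "10001" else 0.4
        arrested = offended and random.random() < arrest_rate
        return race, zip_code, arrested

    history = [make_person() for _ in range(10_000)]

    # A "race-blind" score built purely from ZIP-level arrest rates.
    def zip_rate(z):
        labels = [arrested for _, zz, arrested in history if zz == z]
        return sum(labels) / len(labels)

    rates = {z: zip_rate(z) for z in ("10001", "10002")}
    for race in ("A", "B"):
        scores = [rates[z] for r, z, _ in history if r == race]
        print(race, round(sum(scores) / len(scores), 3))
    # Group A receives systematically higher scores despite identical
    # behavior: ZIP code has acted as a proxy for group membership.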

The investigation by ProPublica that prompted New York City’s algorithmic accountability bill found that COMPAS risk assessments were almost twice as likely to erroneously identify black defendants as presenting a high risk for recidivism as white defendants (43 percent vs. 23 percent).  In addition, ProPublica’s research revealed that COMPAS risk assessments erroneously labeled white defendants as low-risk 48 percent of the time, compared to 28 percent for black defendants.  Black defendants were also 45 percent more likely to receive a higher risk score than white defendants, even after controlling for variables such as prior crimes, age and gender.[3]  ProPublica’s findings raise serious concerns regarding COMPAS; however, because the calculations used to assess risk are proprietary, neither defendants nor the court systems utilizing COMPAS have visibility into why the assessments mislabel black and white defendants at such different rates.
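
These disparities are ordinary confusion-matrix arithmetic.  The short sketch below shows how the two error rates are computed for each group; the counts are approximations of ProPublica’s published two-year recidivism table, included only to illustrate the computation, and the printed figures land near the rates cited above:

    # Per-group error rates from a confusion matrix. The counts below are
    # approximations of ProPublica's published data, used for illustration.
    def error_rates(tp, fp, tn, fn):
        fpr = fp / (fp + tn)  # did not reoffend, but labeled higher-risk
        fnr = fn / (fn + tp)  # reoffended, but labeled lower-risk
        return fpr, fnr

    groups = {
        "black defendants": dict(tp=1369, fp=805, tn=990, fn=532),
        "white defendants": dict(tp=505, fp=349, tn=1139, fn=461),
    }

    for name, counts in groups.items():
        fpr, fnr = error_rates(**counts)
        print(f"{name}: mislabeled high-risk {fpr:.0%}, mislabeled low-risk {fnr:.0%}")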

Although New York City’s algorithmic accountability bill aims to curb algorithmic bias and bring more transparency to algorithms used across all New York City agencies, including those used in criminal sentencing, the task force it creates faces significant hurdles.  It is unclear how the task force will make the threshold determination as to whether an algorithm disproportionately harms a particular group, or how the City will increase transparency and fairness without access to proprietary source code.  Despite the daunting challenge of balancing the need for more transparency against the right of companies to protect their intellectual property, critics of the use of algorithms in the criminal justice system are hopeful that New York City’s bill will encourage other cities and states to acknowledge the problem of algorithmic bias.

 


[1] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

[2] Laura Hudson, Technology Is Biased Too. How Do We Fix It?, FiveThirtyEight (Jul. 20, 2017), https://fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it/.

[3] Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
