Follow the Leader: Will Congressional and Corporate Push for Federal Privacy Regulations Leave Some Technology Giants in the Dust?

Nov 13, 2018

By Dawn Ingley


On October 24, 2018, Apple CEO Tim Cook, one of the keynote speakers at the International Conference of Data Protection and Privacy Commissioners, threw down the gauntlet when he assured an audience of data protection professionals that Apple fully supports a “GDPR-like” federal data privacy law in the United States.  Just one week later, Senator Ron Wyden of Oregon introduced a discussion draft for a privacy law, the Consumer Data Protection Act, that would result in steep fines and possible incarceration for top executives of companies that violate the law.  Cook’s position and Wyden’s proposed bill stand in stark contrast to the positions of several of Apple’s technology industry competitors.  Indeed, those competitors are reportedly already seeking to unravel California’s recent data protection legislation.  In the wake of the congressional and corporate push for more restrictive privacy regulations in an innovative industry, will other giants on the technology landscape be left in the dust?

Tim Cook’s proclamation came as no surprise to close followers of Apple’s culture and approach to data protection; yet, Cook’s message at this conference was more direct than ever, citing his competitors’ efforts to create “platforms and algorithms” to “weaponize personal data.”  Cook’s argument for increased data protection regulation then took an apocalyptic turn, when he warned, “Your profile is a bunch of algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm…we shouldn’t sugarcoat the consequences.  This is surveillance.”

Cook’s proposed regulation included several of the key hallmarks of GDPR and its progeny:

  1. Minimization of personal data collection from technology users;
  2. Communication to technology users as to the “what and why”—what data is being collected and why;
  3. Users’ rights to obtain their data, as well as to correct and delete their data; and
  4. Security of user data.

Cook argued that not only were such rights fundamental to the user experience and adoption of emerging technologies, but that such rights also engendered trust between technology organizations and the consumers of such technologies.
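The four hallmarks above map naturally onto concrete engineering practices.  As a purely hypothetical sketch (the class and field names below are invented for illustration and are not drawn from any actual Apple, GDPR, or proposed statutory API), a minimal user-data store honoring minimization, transparency, access, correction, and deletion might look like this:

```python
from dataclasses import dataclass, field

# Minimization: an explicit whitelist of fields with a stated collection purpose.
ALLOWED_FIELDS = {"email", "display_name"}

@dataclass
class UserDataStore:
    """Hypothetical store implementing data-subject rights."""
    records: dict = field(default_factory=dict)

    def collect(self, user_id, **data):
        # Minimization: silently drop any field we have no stated purpose for.
        self.records[user_id] = {k: v for k, v in data.items() if k in ALLOWED_FIELDS}

    def access(self, user_id):
        # Right of access: return a copy of everything held about the user.
        return dict(self.records.get(user_id, {}))

    def correct(self, user_id, field_name, value):
        # Right to rectification: let the user fix inaccurate data.
        if field_name in ALLOWED_FIELDS and user_id in self.records:
            self.records[user_id][field_name] = value

    def delete(self, user_id):
        # Right to erasure: remove the user's data entirely.
        self.records.pop(user_id, None)

store = UserDataStore()
store.collect("u1", email="a@example.com", display_name="Ann", ssn="123-45-6789")
print(store.access("u1"))  # the ssn was never stored
store.delete("u1")
print(store.access("u1"))  # {}
```

The point of the sketch is structural: each of Cook’s four hallmarks becomes a discrete, auditable code path rather than an after-the-fact policy promise.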

Cook’s (and Apple’s) approach to increased data protection has garnered praise from those members of Congress who routinely champion consumer rights in this space, including Senator Wyden.  Wyden’s discussion draft identified the federal government’s failures to protect data:

“(1)      Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities;

(2)        Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data;

(3)        Consumers have no effective way to control companies’ use and sharing of their data.”

As consumer protections of the nature contemplated in data protection laws and regulations would typically be the province of the Federal Trade Commission (FTC), Senator Wyden then went on to demonstrate how the FTC lacked the power and ammunition to effectively combat threats to consumer data privacy:

“(1)      The FTC cannot fine first-time corporate offenders.  Fines for subsequent violators of the laws are tiny and not a credible deterrent.

(2)        The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money.

(3)        The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator.

(4)        The FTC does not have enough staff, especially skilled technology experts…”

Senator Wyden posited that Congress could empower the FTC to:

“(1)      Establish minimum privacy and cybersecurity standards.

(2)        Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives.

(3)        Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information.  It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized.

(4)        Give consumers a way to review their personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it.

(5)        Hire 175 more staff to police the largely unregulated market for private data.

(6)        Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy and security.”
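A national Do Not Track system of the kind Wyden’s draft contemplates would presumably build on opt-out signals such as the existing `DNT` HTTP request header.  As a hedged illustration (the handler and tracker names below are invented, and nothing here reflects the bill’s actual mechanics), a server-side check honoring the signal might look like this:

```python
def should_track(headers):
    """Return False when the request carries a Do Not Track opt-out.

    The DNT HTTP header value "1" indicates the user has opted out of
    third-party tracking.  Header names are matched case-insensitively,
    per HTTP convention.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

def handle_request(headers, user_id, tracker):
    # Hypothetical handler: monetize data only absent an opt-out.
    if should_track(headers):
        tracker.record(user_id)
    # The page is served either way; under Wyden's draft, a company could
    # instead charge opted-out users rather than monetize their data.
```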

Any combination of Cook’s proposal and Senator Wyden’s bill would change the landscape of privacy in the United States, especially among the technology giants of Silicon Valley.  The push for increased data protection at the federal level from technology companies (other than Apple) has largely been an effort to avoid and supersede laws passed in states such as California and Massachusetts.  In the absence of a superseding federal law, those states’ laws become the de facto law of the land for companies doing business nationally.  Yet, technology companies must walk a fine line if they are to avoid a publicity and consumer nightmare—if they succeed in their efforts to have federal law implemented in place of state laws, consumer sentiment would dictate that the federal law be something akin to those proposed by Cook and Senator Wyden, and not merely lip service.  Otherwise, they could find themselves in the headlines for all the wrong reasons.

With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that their myriad uses of AI across platforms are reasonable, fair and non-discriminatory.  Yet, to date, very few details have emerged regarding those teams—Who are the members?  What standards are applied to the creation and implementation of AI?  Axon, the manufacturer behind community policing products and services such as body cameras and related video analytics, has embarked upon the creation of an ethics board.  Google’s DeepMind Ethics and Society division (DeepMind) also seeks to temper the innovative potential of AI with the dangers of a technology that is not inherently “value-neutral” and that could lead to outcomes ranging from good to bad to downright ugly.  Indeed, a peek behind both ethics programs may offer some interesting insights into the direction of all corporate AI ethics programs.

Diversity of Backgrounds

Axon’s ethics board includes not only AI experts, but also a diverse sampling from the related fields of computer science and engineering, privacy and data protection and civil liberties.  A sampling of members includes the following individuals:

  • Ali Farhadi, Professor of Computer Science and Engineering, University of Washington
  • Barry Friedman, Professor and Director of the Policing Project at New York University School of Law
  • Jeremy Gillula, Privacy and Civil Liberties Technologist
  • Jim Bueerman, President of the Police Foundation
  • Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute
  • Tracy Ann Kosa, Professor at Seattle University School of Law
  • Vera Bumpers, Chief of Houston Metro Police Department and President of the National Organization of Black Law Enforcement Executives
  • Walter McNeil, Sheriff of Leon County, Florida Sheriff’s Office and prior President of the International Association of Chiefs of Police

Obviously, Axon’s goal was to establish a team that could evaluate use and implementation from all angles, ranging from law enforcement officials who employ such technologies, to those experts who help to create and shape legislation governing use of the same.  Axon may be moving in the direction of facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged these types of technologies for years.  Thus far, one of the chief concerns surrounding facial recognition is its propensity for racial and gender bias—higher error rates for both women and African-Americans.  If Axon does, indeed, move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.
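The bias concern is measurable: a basic fairness audit simply compares error rates across demographic groups.  The sketch below illustrates the arithmetic; the group labels and numbers are invented for illustration and are not drawn from any Axon product or published study:

```python
def error_rate_by_group(results):
    """Compute per-group error rates from (group, was_correct) outcomes."""
    totals, errors = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical audit data: (demographic group, was the match correct?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(error_rate_by_group(outcomes))
# A 25% vs. 75% disparity of this kind is exactly what an ethics board
# with diverse membership would be positioned to catch and challenge.
```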

Core Values

In addition to its own commitment to diversity, DeepMind’s key principles are reflective of its owner’s more expansive footprint across technology platforms:

  • Social benefit: AI should “serve the global social and environmental good…to build fairer and more equal societies…”
  • Rigorous and evidence-based: Technical research must conform to the highest academic research standards, including peer review.
  • Transparent and open: DeepMind will be open as to “who we work with and what projects we fund.”
  • Collaboration and Inclusion: Research must be “accountable to all of society.”

DeepMind’s focus on managing the risk of AI is on an even broader canvas than that of Axon.  In furtherance of its key principles, DeepMind seeks to answer key questions:

  • What are the societal risks when AI fails?
  • How can humans remain in control of AI?
  • How can dangerous applications of AI in the contexts of terrorism and warfare be avoided?

Though much of the AI industry has yet to provide details as to its own ethics programs, some of its blue chips have acted in unison to establish a more formalized set of guidelines.  In 2016, the Partnership on AI to Benefit People and Society was founded collectively by Amazon, Apple, Google, Facebook, IBM and Microsoft.  Six pillars form the basis of this partnership:

  • Safety-critical AI: Tools used to perform human discretionary tasks must be “safe, trustworthy and aligned with ethics…”
  • Fair, transparent and accountable AI: Systems must be designed so as to be alert to possible biases resulting from the use of AI.
  • Collaborations between people and AI systems: AI is best harnessed in a close collaboration between humans and the systems, themselves.
  • AI, labor and the economy: “Competition and innovation is encouraged and not stifled.”
  • Social and societal influences of AI: While AI has the potential to provide useful assistance and insights to humans, users must also be sensitive to its potential to subtly influence humans.
  • AI and social good: The sky’s the limit in terms of AI’s potential in addressing long-standing societal ills.

While these best practices are a promising start, the industry has yet to offer particulars on how the guidelines will be put into practice.  Consumers will likely maintain a healthy skepticism until more concrete guardrails are provided that offer compelling evidence of the good, rather than the bad and the ugly.

