Good, Bad or Ugly? Implementation of Ethical Standards In the Age of AI

Jul 23, 2018

By Dawn Ingley



With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that AI's myriad uses across their platforms are reasonable, fair and non-discriminatory.  Yet, to date, very few details have emerged regarding those teams—Who are the members?  What standards are applied to the creation and implementation of AI?  Axon, the manufacturer behind community policing products and services such as body cameras and related video analytics, has embarked upon the creation of an ethics board.  Google’s DeepMind Ethics and Society division (DeepMind) also seeks to temper the innovative potential of AI with the dangers of a technology that is not inherently “value-neutral” and that could lead to outcomes ranging from good to bad to downright ugly.  Indeed, a peek behind both ethics programs may offer some interesting insights into the direction of all corporate AI ethics programs.

Diversity of Backgrounds

Axon’s ethics board includes not only AI experts, but also a diverse sampling from the related fields of computer science and engineering, privacy and data protection, and civil liberties.  Its members include the following individuals:

  • Ali Farhadi, Professor of Computer Science and Engineering, University of Washington
  • Barry Friedman, Professor and Director of the Policing Project at New York University School of Law
  • Jeremy Gillula, Privacy and Civil Liberties Technologist
  • Jim Bueerman, President of the Police Foundation
  • Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute
  • Tracy Ann Kosa, Professor at Seattle University School of Law
  • Vera Bumpers, Chief of Houston Metro Police Department and President of the National Organization of Black Law Enforcement Executives
  • Walter McNeil, Sheriff of Leon County, Florida Sheriff’s Office and prior President of the International Association of Chiefs of Police

Obviously, Axon’s goal was to establish a team that could evaluate use and implementation from all angles, ranging from the law enforcement officials who employ such technologies to the experts who help create and shape the legislation governing their use.  Axon may be moving in the direction of facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged these types of technologies for years.  Thus far, one of the chief concerns surrounding facial recognition is its propensity for racial and gender bias—higher error rates for both females and African-Americans.  If Axon does, indeed, move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.

Core Values

In addition to its own commitment to diversity, DeepMind’s key principles are reflective of its owner’s more expansive footprint across technology platforms:

  • Social benefit: AI should “serve the global social and environmental good…to build fairer and more equal societies…”
  • Rigorous and evidence-based: Technical research must conform to the highest academic research standards, including peer review.
  • Transparent and open: DeepMind will be open as to “who we work with and what projects we fund.”
  • Collaboration and Inclusion: Research must be “accountable to all of society.”

DeepMind’s focus on managing the risks of AI covers an even broader canvas than Axon’s.  In furtherance of its key principles, DeepMind seeks to answer key questions:

  • What are the societal risks when AI fails?
  • How can humans remain in control of AI?
  • How can dangerous applications of AI in the contexts of terrorism and warfare be avoided?

Though much of the AI industry has yet to provide details as to its own ethics programs, some of its blue chips have acted in unison to establish a more formalized set of guidelines.  In 2016, the Partnership on AI to Benefit People and Society was founded collectively by Amazon, Apple, Google, Facebook, IBM and Microsoft.  Six pillars form the basis of this partnership:

  • Safety-critical AI: Tools used to perform human discretionary tasks must be “safe, trustworthy and aligned with ethics…”
  • Fair, transparent and accountable AI: Systems must be designed so as to be alert to possible biases resulting from the use of AI.
  • Collaborations between people and AI systems: AI is best harnessed in close collaboration between humans and the systems themselves.
  • AI, labor and the economy: “Competition and innovation is encouraged and not stifled.”
  • Social and societal influences of AI: While AI has the potential to provide useful assistance and insights to humans, users must also be sensitive to its potential to subtly influence humans.
  • AI and social good: The sky’s the limit in terms of AI’s potential in addressing long-standing societal ills.

While these best practices are a promising start, the industry continues to lack particulars as to how the guidelines will be put into practice.  Consumers will likely maintain a healthy skepticism until more concrete guardrails are provided that offer compelling evidence of the good, rather than the bad and the ugly.
