PRESS RELEASE:  Dawn Ingley, Senior Counsel, Patrick Law Group, LLC Has Earned CIPP/US Certification

FOR RELEASE: August 1, 2018

Atlanta, Georgia – August 31, 2018 – Dawn Ingley, Senior Counsel, Patrick Law Group, LLC, has earned the ANSI-accredited Certified Information Privacy Professional/United States (CIPP/US) credential through the International Association of Privacy Professionals (IAPP). Ms. Ingley has more than 14 years of experience representing mid-size and large corporations, primarily in the areas of technology, information security and data privacy, mergers and acquisitions, and general commercial contracting.

Privacy professionals are the arbiters of trust in today’s data-driven global economy.  They help organizations manage rapidly evolving privacy threats and mitigate the potential loss and misuse of information assets.  The IAPP is the first organization to publicly establish standards in professional education and testing for privacy and data protection. IAPP privacy certification is internationally recognized as a reputable, independent program that professionals seek and employers demand.

The CIPP is the global standard in privacy certification. Developed and launched by the IAPP with leading subject matter experts, the CIPP is the world’s first broad-based global privacy and data protection credentialing program. The CIPP/US demonstrates a strong foundation in U.S. private-sector privacy laws and regulations and understanding of the legal requirements for the responsible transfer of sensitive personal data to/from the U.S., the EU and other jurisdictions. Ms. Ingley joins the ranks of more than 10,000 professionals worldwide who currently hold one or more IAPP certifications.

About the IAPP

The International Association of Privacy Professionals (IAPP) is the largest and most comprehensive global information privacy community and resource. Founded in 2000, the IAPP is a not-for-profit organization that helps define, support and improve the privacy profession globally. More information about the IAPP is available at www.iapp.org.

Good, Bad or Ugly? Implementation of Ethical Standards In the Age of AI

By Dawn Ingley



With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that their myriad uses of AI across platforms are reasonable, fair and non-discriminatory.  Yet, to date, very few details have emerged regarding those teams—who are the members, and what standards are applied to the creation and implementation of AI?  Axon, the manufacturer behind community policing products and services such as body cameras and related video analytics, has embarked upon the creation of an ethics board.  Google’s DeepMind Ethics and Society division (DeepMind) also seeks to temper the innovative potential of AI against the dangers of a technology that is not inherently “value-neutral” and that could lead to outcomes ranging from good to bad to downright ugly.  Indeed, a peek behind both ethics programs may offer some interesting insights into the direction of all corporate AI ethics programs.

Diversity of Backgrounds

Axon’s ethics board includes not only AI experts, but also a diverse sampling from the related fields of computer science and engineering, privacy and data protection and civil liberties.  A sampling of members includes the following individuals:

  • Ali Farhadi, Professor of Computer Science and Engineering, University of Washington
  • Barry Friedman, Professor and Director of the Policing Project at New York University School of Law
  • Jeremy Gillula, Privacy and Civil Liberties Technologist
  • Jim Bueerman, President of the Police Foundation
  • Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute
  • Tracy Ann Kosa, Professor at Seattle University School of Law
  • Vera Bumpers, Chief of Houston Metro Police Department and President of the National Organization of Black Law Enforcement Executives
  • Walter McNeil, Sheriff of Leon County, Florida Sheriff’s Office and prior President of the International Association of Chiefs of Police

Obviously, Axon’s goal was to establish a team that could evaluate use and implementation from all angles, ranging from law enforcement officials who employ such technologies, to those experts who help to create and shape legislation governing use of the same.  Axon may be moving in the direction of facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged these types of technologies for years.  Thus far, one of the chief concerns surrounding facial recognition is the penchant for racial and gender bias—higher error rates for both females and African-Americans.  If Axon does, indeed, move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.

Core Values

In addition to its own commitment to diversity, DeepMind’s key principles are reflective of its owner’s more expansive footprint across technology platforms:

  • Social benefit: AI should “serve the global social and environmental good…to build fairer and more equal societies…”
  • Rigorous and evidence-based: Technical research must conform to the highest academic research standards, including peer review.
  • Transparent and open: DeepMind will be open as to “who we work with and what projects we fund.”
  • Collaboration and Inclusion: Research must be “accountable to all of society.”

DeepMind’s focus on managing the risks of AI spans an even broader canvas than Axon’s.  In furtherance of its principles, DeepMind seeks to answer several key questions:

  • What are the societal risks when AI fails?
  • How can humans remain in control of AI?
  • How can dangerous applications of AI in the contexts of terrorism and warfare be avoided?

Though much of the AI industry has yet to provide details as to its own ethics programs, some of its blue chips have acted in unison to establish a more formalized set of guidelines.  In 2016, the Partnership on AI to Benefit People and Society was founded collectively by Amazon, Apple, Google, Facebook, IBM and Microsoft.  The pillars forming the basis of this partnership include:

  • Safety-critical AI: Tools used to perform human discretionary tasks must be “safe, trustworthy and aligned with ethics…”
  • Fair, transparent and accountable AI: Systems must be designed so that stakeholders remain alert to possible biases resulting from the use of AI.
  • Collaborations between people and AI systems: AI is best harnessed in a close collaboration between humans and the systems, themselves.
  • AI, labor and the economy: “Competition and innovation is encouraged and not stifled.”
  • Social and societal influences of AI: While AI has the potential to provide useful assistance and insights to humans, users must also be sensitive to its potential to subtly influence humans.
  • AI and social good: The sky’s the limit in terms of AI’s potential in addressing long-standing societal ills.

While these best practices are a promising start, the industry continues to lack more particulars in terms of how the guidelines will be put into practice.  It is likely that consumers will maintain a healthy skepticism until more particular guardrails are provided which offer compelling evidence of the good, rather than the bad and the ugly.


My Car Made Me Do It: Tales from a Telematics Trial

By Dawn Ingley



Recently, my automobile insurance company gauged my interest in saving up to 20% on insurance premiums.  The catch?  For three months, I would be required to install a plug-in monitor that collected extensive metadata—average speeds and distances, routes routinely traveled, seat belt usage and other types of data.  But to what end?  Was the purpose of the monitor to learn more about my driving practices and to encourage better driving habits?  To share my data with advertisers wishing to serve up a buy-one, get-one free coupon for paper towels from my favorite grocery store (just as I pass by it) on my touchscreen dashboard?  Or to build a “risk profile” that could be sold to parties (AirBnB, banks, other insurance companies) who may have a vested interest in learning more about my propensity for making good decisions?  The answer could be, “all of the above.”

Wireless technology integrated into vehicles is nothing new—indeed, diagnostic systems, cellular connections and in-dash navigation have been the norm for years.  However, the breadth of data collection and the manner in which data is monetized are evolving quickly.  Telematics platforms are akin to social media platforms in the sheer volume of data collected and the purposes for which that data can (and could) be used.  To be sure, many use cases stand to benefit drivers—predicting oil changes and locating the nearest gas station, for example.  Future functionality may even detect texting while driving or sleeping drivers.

Opportunities abound, and at least one company’s early success is proof of the sheer potential of telematics data mining.  In the past three years, Otonomo has carved out a niche for itself by brokering sales of mined telematic data to parties such as insurance companies and general retail businesses.  Otonomo’s technology efficiently packages telematics data into a user-friendly, anonymized platform that takes into account worldwide regulations governing telematics.

Not surprisingly, several automobile manufacturers predict that the sale of automobile analytics will be a key profit center in coming years.  What is surprising, however, is the lack of legal “rules of the road” in the United States today.  While laws do clarify that an automobile’s event data recorders are owned by the automobile owner (and provide that these “black boxes” may be obtained only by court order), other laws governing telematics are few and far between.  A driver’s consent often occurs upon registration of embedded GPS platforms or other navigation tools, but according to Government Accountability Office research, these types of notices often are lacking in terms of explaining how data is used and whether it is shared.  The Federal Trade Commission maintains jurisdiction over consumer data and related privacy issues, but there are not yet rules specific to telematics data collected by the automobile industry.

Much like the credit card industry’s promulgation of the Payment Card Industry Data Security Standard (PCI DSS), the automotive industry responded in 2014 with its own Privacy Principles for Vehicle Technologies and Services, which include the following:

  • Transparency: a commitment to provide both owners and registered users of vehicles with access to “clear, meaningful notices” as to what data is collected, used and shared.
  • Choice: a commitment to provide owners and registered users with certain choices “regarding the collection, use and sharing” of information.
  • Respect for Context: a commitment to use and share information in a manner consistent with the context in which information was collected.
  • Data Minimization, De-identification, and Retention: a commitment to collect information only as needed for legitimate business purposes, and to retain it no longer than needed for such legitimate business purposes.
  • Data Security: a commitment to implement reasonable measures to protect information against loss and unauthorized access or use.
  • Integrity and Access: a commitment to implement measures to maintain the accuracy of information, along with a means for owners and registered users to correct information.
  • Accountability: a commitment to take reasonable steps to ensure that any parties receiving the information adhere to the principles.

To date, twenty automakers have signed on to the principles, including Honda, Toyota, Nissan, Subaru and Hyundai.

Congress has also responded to concerns over privacy and security in automobiles.  In early 2017, Representatives Joe Wilson (R-SC, 2nd District) and Ted Lieu (D-CA, 33rd District) introduced the SPY Car Study Act.  The Act does not introduce any new laws or regulations, but does require the National Highway Traffic Safety Administration (NHTSA) to investigate technological threats to automobiles.  More specifically, Congress tasked the NHTSA with identifying:

  • Measures necessary to separate critical software systems that affect a driver’s control of a vehicle from other technology systems;
  • Measures necessary to detect and prevent codes associated with malicious behaviors;
  • Techniques necessary to detect and prevent, discourage or mitigate intrusions into vehicle software systems and other cybersecurity risks in automobiles;
  • Best practices to secure driver data collected by electronic systems; and
  • A timeline for implementation of technology to reflect such best practices.

Otonomo has indicated that the current market for automobile telematics data focuses on user experience and convenience, but, in reality, no future use case is off the table.  And as with many technologies and, in particular, IoT platforms, drivers must weigh the benefits and dangers of use.  The calculus would look something like this:

  • Benefits (current and future):
    • Traffic and navigation services save drivers time and reduce risk of further traffic accidents;
    • Automobile diagnostics can not only remind drivers of to-do’s such as oil changes, but also alert drivers to issues such as dangerous behaviors (texting or sleeping while driving, blood alcohol level); and
    • Automobile insurance discounts may be a “reward” for drivers supplying metadata.
  • Risks:
    • Customer “lock-in”—could data as to driving habits (miles driven, speeds, use of turn signals) keep a customer from changing insurance carriers, if prospective carriers refuse coverage based on a driver’s metrics?
    • Will lenders factor risky driving behaviors into decisions as to whether credit or loans are extended?
    • Will current insurers raise premiums based on activities tracked via collection of metadata?

Indeed, the answer to this calculus may vary across geographies and cultures.  In the United States, there is no across-the-board approach to privacy and data protection; rather, protections are extended to particular industries (e.g., HIPAA for healthcare data).  U.S. citizens have proven more likely to provide the types of information contemplated if they receive some benefit from such sharing.  The European Union, on the other hand, has adopted a stringent, uniform approach to data protection, which is wide-ranging and extends across all industries.  It follows that EU citizens may be more sensitive than consumers in other geographies to sharing information.  It is likely that automobile manufacturers will need to take such variations into account when implementing telematics systems.  Regardless of geography, drivers should not only look to the manner in which data is being used today, but also contemplate tomorrow, as the expansion of use cases is likely a “not if, but when” scenario.  For this reason, the answer to why a person drives more cautiously may be the same as to why his or her grocery bill mysteriously increased last month: “My car made me do it!”


Apple’s X-Cellent Response to Sen. Franken’s Queries Regarding Facial Recognition Technologies

By Dawn Ingley



Recently, I wrote an article outlining the growing body of state legislation designed to address and mitigate emerging privacy concerns over facial recognition technologies.  It now appears that the issue will be examined at the federal level.  In September, Senator Al Franken of Minnesota, concerned that certain Apple technologies would be used to benefit other sectors of its business, as a “big data” profit center or to satisfy law enforcement agency requests, issued a series of pointed questions to Apple regarding its iPhone X’s FaceID.  That letter included the following questions:

  • Is it possible for Apple or a third party to extract faceprint data from iPhone X?
  • How was the FaceID algorithm developed and how did Apple gather data for the algorithm?
  • How does Apple protect against racial, gender or age bias in FaceID?
  • How does FaceID distinguish between an actual face of a person, as opposed to the photograph of that face?
  • Can Apple assure users that it will never share faceprint data?
  • Does FaceID cause the device to continually “look” for a facial profile and in doing so, does it record other faces as well?

The response from Apple, made public on October 17th, was quite illuminating:

  • FaceID works by using iPhone X’s TrueDepth camera to scan and analyze a user’s face based on depth maps and two-dimensional images.  That scan is then authenticated against the facial data stored in iPhone X’s Secure Enclave.
  • Data from the Secure Enclave is never backed up to the cloud, does not leave the device and isn’t even saved in device backups.  Scanned faces are deleted after being used to unlock iPhone X.
  • The neural network that helps to form the algorithm was created from over a billion images from individuals who provided specific consent to Apple.  Further, a broad cross-section of individuals spanning gender, race, ethnicity, and age, was leveraged to create the algorithm.
  • Passcodes will still be available to unlock devices if users choose not to use FaceID.
  • Any third party applications that leverage FaceID for authentication don’t actually access FaceID; rather, those apps are notified only as to whether authentication was approved.

Senator Franken’s foray into technology and privacy matters is not new; as ranking member of the Judiciary Committee’s Subcommittee on Privacy, Technology and the Law, he presented a similar set of questions in 2013, when Apple introduced the iPhone 5S Touch ID fingerprint scanner.  Shortly after that inquiry, Apple published a white paper outlining the steps it had taken with Touch ID to assure Senator Franken that privacy concerns were of the highest priority to Apple.  The collaboration between Senator Franken and Apple is vital at a time when the body of privacy law addressing facial recognition technologies is still emerging and protections are lacking in most jurisdictions.  It will be interesting to see whether other technology providers embrace a similar level of transparency in their product rollouts.


When 2017 Becomes 1984: Facial Recognition Technologies Face a Growing Legal Landscape

By Dawn Ingley



Recently, Stanford University professor and researcher Michal Kosinski caused a stir of epic proportions and conjured up visions of George Orwell’s 1984 in the artificial intelligence (AI) community.  Kosinski posited that several AI tools are now able to determine the sexual orientation of a person, just based on a photograph, and has gone on to speculate that AI could also predict political ideology, IQ, and propensity for criminal behavior.  Indeed, using a complex algorithm, Kosinski accurately pinpointed a male’s sexual orientation over 90% of the time.  While technology advances frequently outpace corresponding changes in the law, the implications of this technology are alarming.  Could the LGBTQ community be targeted for violence or other discrimination based on this analysis?  Could “potential criminals” be turned away from gainful employment based on mere speculation about future behavior?  Would Facebook account photographs be an unintentional window into the most private facets of one’s life?  In a country already divided over sociopolitical issues, the answer to all of these questions unfortunately seems to be not if, but when.  The urgency for laws and regulations to police the exponential proliferation of AI’s potential intrusions cannot be overstated as the threat of a 1984 world becomes more of a reality.

Although Kosinski’s revelation is a recent one, concerns over facial recognition technologies and biometric data are hardly novel.  In 2012, Ireland forced Facebook to disable its facial recognition software throughout Europe—the EU data privacy directive (and the upcoming GDPR) would have required explicit consent from Facebook users, and such consent was never requested or received from Facebook account owners.  Facebook was also required to delete all facial profiles collected in Europe.

In the United States, Illinois appears to be ground zero for the battle over facial recognition technologies, predominantly because it is one of the few states with a specific law on the books.  The Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”), initially was a reaction to Pay by Touch, a technology available in the mid-2000s as a means of connecting biometric information (e.g., fingerprints) to credit card and other accounts.  A wave of privacy-geared litigation spelled doom for Pay by Touch and its handful of competitors.  With the increasing adoption of facial recognition software into popular technology platforms (such as the iPhone), BIPA is front and center once again.

The scope of BIPA includes “retina or iris scan, fingerprint, voiceprint or scan of hand or face geometry…”  Key provisions of BIPA are as follows:

  • Prohibits private entities from collecting, selling, leasing, trading or otherwise profiting from biometric data, without express written consent;
  • Requires private entities that collect biometric data to protect such data using a reasonable standard of care that is at least as protective as the methods used by the entity to protect other forms of confidential and sensitive information;
  • Requires private entities that collect such biometric data to develop a written policy, made available to the public, “establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first.”

One of the latest lawsuits filed pursuant to this law is against Shutterfly, which is accused of collecting facial, fingerprint and iris scans without express written consent from website visitors, as well as from people who may be “tagged” in photos (even though they may never have used Shutterfly’s services or held a Shutterfly account).  In a similar lawsuit filed against Facebook in 2015, users likewise alleged that Facebook was collecting and using biometric data without specific consent.

It is likely that Apple is monitoring these cases closely.  After all, the iPhone X, according to Apple’s website, uses facial recognition technology to grant access to the phone, in essence a “facial password.”  Apple is quick to point out that it encrypts the mapping of users’ faces and that such data exists only on the physical device and not elsewhere (i.e., not in the cloud).

It is also likely that lawmakers in the other two states with statutes similar to BIPA are keeping a watchful eye on the Illinois docket.  Texas and Washington both have biometric laws on the books along the lines of BIPA (though, unlike Illinois, neither Texas nor Washington provides a private cause of action).  While residents of these states can take comfort in the legislative remedies available there, where does that leave residents of other states?  Given that the federal approach to privacy in the United States generally tends to be sector-specific (e.g., HIPAA for medical data; Gramm-Leach-Bliley for financial institutions), it seems clear that change must surface at the state level.  Until then, state residents without legal protections are left with the following options:

  • ObscuraCam: an anti-facial recognition app that, as its name suggests, obscures the facial features of the individuals photographed.
  • Opt-out whenever possible: Facebook settings can be modified so as to allow an account holder to both opt out of facial recognition technologies, and to delete any data already collected.  Users of Google+ have to affirmatively opt in to facial recognition technologies.
  • Tangible solutions: 3-D printed glasses are sometimes effective in disrupting and/or scrambling features, thereby thwarting facial recognition technologies.

However, realistically, until the law catches up with technology, the Orwellian threat is real.  As the saying goes and as 1984 illustrates time and time again, “knowledge is power.”  And when knowledge gleaned from facial recognition technology falls into the wrong hands, “absolute power corrupts absolutely.”  For lawmakers, the time is yesterday (if not sooner) for laws to catch up with the break-neck pace of facial recognition technologies and the potential slippery slope of use cases.

Mark Madness: Avoiding Trademark Landmines in College Sports

By Dawn Ingley

Recently, the Washington Post reported on a Maryland high school’s thwarted attempt to expand its use of a green hornet mascot logo which resembles Georgia Tech’s famous “Buzz” mascot trademark.  The Damascus Swarmin’ Hornets had previously negotiated with Georgia Tech to carefully define its use of the “Green Hornet,” which, unlike “Buzz,” faced in an opposite direction, had a “D” on its chest and was yellow and green, instead of gold and black.  Damascus and Georgia Tech previously agreed that the Green Hornet could be used on helmets, hallway signs and in the school’s newspaper.  According to the Post, the city had proposed placement of the Green Hornet on a nearby water tower that was being repainted, as a means of congratulating the football team on its recent championships.  Upon learning that painting would commence very shortly on the Green Hornet mascot, the school’s general counsel approached Georgia Tech regarding this expanded use, but Georgia Tech stalled and referred the matter to its licensing and trademark committee.  This incident is but one example of the scores of challenges that have emerged in NCAA sports in the past decade, and is yet more evidence that such mascot trademarks are big business.  Indeed, traversing the gauntlet of collegiate trademark protection is daunting, but a review of the relevant case law and settlements made public reveals a few key strategies that could be the difference between winning and losing.

Avenues for Recovery

Under trademark law, it is theoretically possible for colleges to bring the following types of claims: 1) trademark dilution, which can occur when unauthorized uses of a famous mark blur its distinctiveness and erode its value; 2) likelihood of confusion, which may arise when similar marks cause consumers to confuse the source of goods or services; and 3) trademark tarnishment, which may occur when a mark similar to an existing trademark is used in a distasteful or inappropriate manner that, in effect, tarnishes the trademark holder’s mark.

Strategies

In pursuing one or more of these paths to recovery, schools (and the Collegiate Licensing Company [CLC], which licenses NCAA merchandise) lack neither creativity nor attention to even the smallest, seemingly insignificant uses of trademarks.  A prime example: in August 2012, CLC served a cease-and-desist letter upon Mary’s Cake & Pastries and inquired as to the total sales of pastries bearing an Alabama Crimson Tide “A.”  In most cases such as this one, the university and CLC simply outspend the opposing party into submission, but situations such as Mary’s and other outliers provide useful roadmaps to potential success.

Go Viral

In an interview with a local paper, the owner of Mary’s Cake & Pastries indicated that she would discontinue use of any Alabama marks.  Unexpectedly, news of the cease and desist went viral on social media, portraying the University of Alabama as a petty, money-hungry behemoth targeting the owner of a small business.  The University of Alabama apologized and pledged to work with the owner to resolve the issue amicably, rather than proceeding with a lawsuit as previously threatened.  The result: payment of a one-time, $10 licensing fee.

Use the Constitution

Daniel Moore, an artist, also tussled with the University of Alabama, in his case for the better part of a decade (University of Alabama Board of Trustees v. New Life Art, Inc. (11th Cir. 2012)), primarily over his rendering of the football team’s uniforms in his paintings.  Moore successfully argued that he had a First Amendment right to depict historical events (in his case, famous Alabama football plays) in his paintings, just as journalists depict those same events in newspapers.  The federal judge did, however, prevent Moore from reproducing the depictions in calendars and on t-shirts and coffee mugs.

Beat Them to the Punch

In 2013, Johnny Manziel, then the star quarterback of Texas A&M, prevailed in a suit against a t-shirt maker who sold shirts incorporating the moniker made famous by Manziel, “Johnny Football.”  The NCAA, however, prevents collegiate athletes from being compensated for their names and/or likenesses.  How did Manziel prevail in the lawsuit without running afoul of NCAA regulations?  He licensed the nickname to a third party for free, thereby allowing the moniker to be used in commerce (by the third party), as trademark law requires.  Manziel killed two birds with one stone: licensing to a third party satisfied the “use in commerce” requirement, and by making Texas A&M one of the licensees, he prevented the school from asserting ownership.  The NCAA was powerless to contest Manziel’s assertion of ownership.

If You Can’t Beat ‘em, Join ‘em

Most basketball fans associate the term “March Madness” with the annual NCAA basketball tournament.  However, a member of the Illinois High School Association (IHSA) coined the phrase in the 1930s to describe the IHSA’s own state basketball tournament.  Commentator Brent Musburger began using the phrase in the 1980s during the NCAA tournament, and it quickly caught on with players and fans alike.  Both the IHSA and the NCAA claimed ownership of the phrase, and this conflict formed the basis of Illinois High School Association v. GTE Vantage, 99 F.3d 244 (7th Cir. 1996).  The Seventh Circuit ruled that although the IHSA had obviously used the term first, to the general public the phrase had taken on an entirely different meaning associated with the NCAA college basketball tournament.  Left in limbo by this “dual-use” decision, the parties chose to create a holding company for the intellectual property, allowing them to jointly license the phrase to other parties.

Conclusion

Back in Damascus, the city and the Washington Suburban Sanitary Commission (owner of the water tower) scrapped plans to use the Green Hornet logo on the water tower, opting instead for a green and gold “D.”  What could Damascus have done differently?  Since this was another “David and Goliath” scenario, similar to the circumstances surrounding Mary’s Cake & Pastries, Damascus could have taken steps to create a viral groundswell, thereby gaining sympathy and exerting more pressure on Georgia Tech.  But viral groundswells take time, and Damascus did not reach out to Georgia Tech’s licensing and trademark committee until mid-May, just before painting of the tower was to commence.  Trademarks are too profitable for universities to make such decisions on a whim, and if push comes to shove, university counsel will err on the side of caution and reject the proposed use.  There is no guarantee that being proactive and harnessing the viral community would have allowed Damascus to prevail in this case, but those steps at least would have given the Swarmin’ Hornets a fighting chance.
