Government Efforts to Fight a Pandemic Challenge Data Privacy Concerns

Mar 24, 2020

By Meredith Sidewater



“In public health, we can’t do anything without surveillance. That’s where public health begins.”
-David Satcher, MD, PhD, U.S. Surgeon General, 1998-2002 

Media outlets reported this week that representatives from Facebook, Google, Amazon, and Apple are meeting with members of the White House to brainstorm about ways in which the “Big Four”[1] can leverage the consumer information they possess to help in the war against COVID-19.  Specifically, the government proposes to combine artificial intelligence with geolocation data collected from consumers’ smartphones and other devices to track the movement of individuals.  Knowledge of a population’s movement can help predict the future spread of the virus and pinpoint locations where an outbreak might be most severe.

The collaboration with these companies is noteworthy given the increasing and often hostile scrutiny from Congress and privacy advocates who demand greater oversight of the information giants.  However, this is an unprecedented time of (hopefully temporary) self-isolation, plummeting financial markets, decimated industries, shuttered small businesses, and exacerbated food insecurity for the poor, and such extraordinary circumstances may invite extraordinary cooperation.

The value of cell phone data in tracking, predicting, and understanding disease has been studied by various academics.  In 2015, researchers from Harvard University and Princeton University published a study (Princeton University, Woodrow Wilson School of Public and International Affairs, 2015) in which cell phone data for more than 15 million users in Kenya was evaluated to determine whether movement around the country could predict the seasonal spread of rubella.

Interestingly, in November of 2019, just before news of the COVID-19 contagion broke, researchers from the Massachusetts Institute of Technology and the Ecole Polytechnique Federale de Lausanne published a paper in Scientific Reports describing a study that demonstrated the relationship between human mobility and the 2013 and 2014 dengue outbreaks in Singapore (Ecole Polytechnique Federale de Lausanne, 2019).  Not surprisingly, the study revealed that human mobility is a significant variable in the spread of diseases such as malaria and dengue.  In conducting the study, researchers used datasets that included anonymized mobile phone location data.  The researchers ultimately called for legislation that would allow scientists, NGOs, and governments to access consumers’ phone location data for public health purposes.

What would be the harm in allowing the government to gain access to mobile phone usage data?  Will consumers receive notice if their cell phone usage data is disclosed to the government or to public health providers?  How will a definitive connection be made between the activity associated with a given mobile phone and a specific individual?  Do we feel comfortable with the disclosure and use of aggregated and anonymized geolocation data if it could provide valuable insight for disease mapping, resource allocation, and education purposes?  Or is the very real risk of reidentification too great?  Can we ensure that the information shared will be limited to geolocation data, or might the government gain access to browsing history, app usage, photographs, and more?

Disclosure of private personal information for use in the realm of public health and safety is not a novel concept.  In fact, laws exist today that attempt to codify a balance between consumer privacy and public health:

  • The HIPAA Privacy Rule grants covered entities the ability to disclose protected health information (PHI), without patient consent, to public health authorities who are legally authorized to receive the PHI in order to prevent or control disease, injury, or disability[2]. Covered entities may also, if so directed by a public health authority, disclose PHI to a foreign government agency that is collaborating with a public health authority[3]. With limited exceptions, covered entities are required to limit the PHI disclosed for public health purposes to the minimum amount necessary to accomplish the public health purpose[4].
  • FERPA allows educational agencies and institutions to disclose personally identifiable information from an education record to appropriate parties in connection with an emergency if the disclosure is necessary to protect the health or safety of the student or other individuals.[5]
  • All states have enacted laws to mandate reporting of specific diseases or clusters of diseases to state or local health departments. Certain states require reporting beyond specific diseases[6].  

The laws referenced above speak to information pertaining to an individual’s health, not to information that, on its face, seems wholly unrelated to the safety and welfare of individuals.  Is there a meaningful difference between the disclosure of such health-related information and the disclosure of mobile phone usage information that will be used in the context of a public health crisis?  Or perhaps that is a distinction without a difference, given the urgency of the current situation.

Protecting society from public health threats is paramount; however, the privacy issues raised by concerned individuals require thoughtful consideration.  How will the data that is collected be used once the crisis ends?  Will data collection cease at some point in time or continue indefinitely?  Can we trust that public officials will appropriately govern access to and use of mobile phone data in a way that effectively balances consumer privacy with public health?  Should consumer privacy carry equal weight in the face of a global quarantine and economic collapse?  It will be interesting to see both the immediate results of the conversations between the White House and the Big Four and the long-term impact on data privacy.

[1] So named by Scott Galloway.

[2] 45 CFR 164.512(b)(1)(i)

[3] 45 CFR 164.512(b)(1)(i)

[4] See 45 CFR 164.502(b). 

[5] 34 C.F.R. §§ 99.31 and 99.36

[6] N.J. Admin. Code § 8:57-1.1 et seq., 28 Pa. Code § 27.3, S.C. Code Ann. § 44-29-10 (2002)
