This Court Case You’ve Never Heard of Directly Impacts Your Sensitive Data
Brady Allen Kruse
October 27, 2022
Data breaches are so frequent today that we hardly bat an eye when one breaks the news. Credit cards, Social Security numbers, addresses, and other types of personally identifiable information are routinely stolen, or even leaked through poor security practices, oftentimes affecting thousands, if not millions, of people simultaneously. Class action litigation has been a common avenue of remedy for consumers affected by data breaches, with total restitution oftentimes reaching into the hundreds of millions of dollars. In last year's decision in TransUnion LLC v. Ramirez, the United States Supreme Court (SCOTUS) made it much harder for data breach victims to successfully sue the company that handled their data.
In order for a class action lawsuit to proceed, the plaintiffs must be granted Article III standing. By far the hardest requirement to satisfy in obtaining Article III standing in data breach cases is "injury in fact": essentially, proving that the plaintiffs have suffered an "actual" or "imminent" injury. The question for data breach victims: does leaked personal data meet this standard? Until recently, lower courts were split on the issue. Some lower courts have held, accurately, that the theft of sensitive data is by itself harmful enough to constitute "injury in fact." In 2021, SCOTUS disagreed.
With TransUnion, SCOTUS decided that the risk of future harm from the use of stolen data does not qualify as "injury in fact" and thus does not merit monetary compensation. Subsequent data breach lawsuits citing TransUnion have required consumers to prove that their data has actually been used nefariously, usually but not exclusively through attempted identity theft and fraud. Essentially, victims of data breaches are forced to sit around and wait for someone to use their data, so that the harm becomes materialized rather than hypothetical.
Some courts, such as the Second Circuit in McMorris v. Carlos Lopez & Associates, interpret TransUnion with a lower burden of proof, requiring an individual to show only that some subset of the leaked dataset was used, rather than that their specific data was used. The drawback is that McMorris treats attempted identity theft and fraud as the gold standards for harm. Identity theft and fraud are relatively easy to prove; unauthorized charges on someone's credit card, for example, leave a clear digital footprint thanks to online banking. However, data can also cause harm in a nearly unlimited number of deeply hidden ways. Data need not be as sensitive as Social Security or credit card numbers to be harmful, nor is harm concrete only if it can be undoubtedly shown.
The black market for advertising data operates illegally on the dark web, creating digital profiles of people and their preferences that are sold to the highest bidder for malicious purposes. These datasets do not depend on the information typically needed for identity theft, such as Social Security numbers, but are full of otherwise personal information about an individual's habits and beliefs that is now possessed by a bad actor. Or consider a hypothetical leak of location datasets collected from GPS devices: location data has previously been used to out a member of the LGBTQ+ community. It is unclear whether these examples of harm would meet the TransUnion standard, as it is very hard (at least much harder than with fraud and identity theft) to produce evidence of quantifiable harm from them, if the harm is noticed at all.
TransUnion also ignores the fact that it can be extremely difficult to pinpoint the source of leaked data. Article III standing requires that harm be "fairly traceable" to the offending party. In cases of identity theft, a fraudulent bank charge shortly after a data breach would typically satisfy this requirement. The reality of the illegal data marketplace, however, is that data changes hands frequently. Leaked datasets are constantly auctioned, aggregated, ripped apart, and combined with other datasets. It could be years, and a sloppily amalgamated dataset later, before a malicious actor actually uses a consumer's data, deeply obfuscating the original source and indirectly handing companies a liability shield for sloppy cybersecurity practices.
By requiring that consumers clearly demonstrate their stolen data has been directly used before they can receive Article III standing, the TransUnion standard utterly fails to grasp the nature of data breaches. The harsh truth is that leaked data is almost always used nefariously in some way, shape, or form. Personal data that falls outside the strict scope of identity theft and fraud, like political preferences or sexual orientation, can still cause serious distress to consumers. Data brokers, who often have lax cybersecurity practices, collect massive amounts of such data on American consumers, from location data to political preferences, that, though the brokers claim it is "anonymized," can still identify an individual when aggregated. Furthermore, the complexities of the illegal data marketplace render tracing harm to a specific breach practically unfeasible. A leak of private data, by virtue of its existence, is concretely harmful enough, and the definition of "injury in fact" should reflect this reality.
Brady Allen Kruse (@bradyallenkruse) is a graduate student and research assistant on the data brokerage project at Duke University’s Sanford School of Public Policy.