“CALLER ID POSES INVASION OF PRIVACY” blared the headline of a 1990 op-ed in the Chicago Tribune by law professor Jeffrey M. Shaman. In the piece, Professor Shaman argued that the newfangled technology for displaying phone numbers of incoming calls “may have disastrous consequences,” including deterring people “from calling crisis centers that deal with suicide, rape, child abuse or AIDS for fear of having their identities revealed.” He also warned that “psychiatrists, doctors, social workers and lawyers will not be able to return emergency calls from home… Victims of domestic violence will not be able to call spouses or their children…People will be discouraged from making anonymous calls to the police to report crime.”

Civil libertarians and privacy advocates also sounded the alarm about the dangers of knowing the phone numbers of incoming calls. Deborah Ellis, the legal director for the American Civil Liberties Union (ACLU) of New Jersey, said “banks and other lenders could use the service to discriminate against callers from poor areas.” Others questioned the motives of the companies deploying the new technology. Marc Rotenberg, then-director of the Computer Professionals for Social Responsibility, said, “This is a case of a company that has a great deal of personal information making money exploiting the sale of that information without the consent of the phone subscriber. Three years ago if a phone company employee had talked about selling someone’s phone number to a business, they would have been fired.”

The apocalypse was nigh, and the fifth Horseman was… Caller ID.

None of these worries came to pass, of course. Telephone systems added the ability to block Caller ID on a per-call basis by dialing *67 before the phone number, heading off most of the potential privacy harms. As for the greedy telephone companies, it is now common for phone contracts to bundle Caller ID into the standard service price at no additional cost.

Many new technologies go through this “privacy panic cycle” (e.g., RFID tags, cameras, loyalty cards). It often begins with advocacy groups — such as the Electronic Privacy Information Center (EPIC), the Center for Democracy & Technology (CDT), Access Now, and others — feeding the natural tendency of media outlets to exaggerate the risks associated with a new technology because audiences love negative news (“if it bleeds, it leads”). As the frenzy escalates, headlines start to declare that the sky is falling. Then, despite the Chicken Little omens, fears begin to diminish over time as reality sets in. The cycle ends — not with a bang, but a whimper — as consumer appreciation of the new technology or service proves the deciding factor in its ultimate widespread adoption.

Privacy Fundamentalism Is a Moral Panic

Sources: “Privacy Indexes: A Survey of Westin’s Studies”; “Choice Architecture and Smartphone Privacy: There’s a Price for That”; and “Would a Privacy Fundamentalist Sell Their DNA for $1000 … If Nothing Bad Happened as a Result? The Westin Categories, Behavioral Intentions, and Consequences.”

“Privacy” is not a term that anyone uses in a negative way. Like money or liberty, it is good by definition; more is better, less is worse. But humans have a multiplicity of values and differing hierarchies — more privacy is not always better when it comes at the expense of another good. That’s what makes privacy scholar Alan Westin’s privacy segmentation index (PSI), which categorizes people’s privacy attitudes as “pragmatist,” “fundamentalist,” or “unconcerned,” a useful tool for cutting through the often hysterical public debate surrounding privacy.

Contrary to the popular narrative, most Americans do not place a high value on their privacy for non-sensitive information, such as purchasing habits or online browsing histories. (However, people do value keeping sensitive information such as health or financial records private.) In surveys over the last few decades, about 50 to 65 percent of Americans are privacy pragmatists, meaning they continually evaluate the trade-offs associated with sharing private information, and make decisions on a case-by-case basis. To be clear, this does not mean they’re all reading the legalese in privacy policies — they simply use heuristics to make disclosure decisions.

Source: Pew Research Center

Roughly 10 percent of Americans are privacy unconcerned, a group Westin quipped would “give you any information you want about their family, their lifestyle, their travel plans, and so forth” for a 5-cent discount. In one experiment, the vast majority of participants were willing to reveal their monthly income to a video rental store in exchange for a one euro discount on a DVD (without the discount, about half still shared this private information in exchange for no benefit). A different study found that most subjects would happily sell their personal information for just 25 cents, and almost all of them waived their right to shield their information.

The remaining 25 to 35 percent of Americans are privacy fundamentalists — those with an ideological commitment to privacy, claiming they would never trade their privacy for economic benefits (even if they often do in practice). This group also wants stronger privacy laws to prevent companies from acquiring anyone’s personal information.

In 2001 testimony before the House Committee on Energy and Commerce, Westin summarized his catalog of research and direct experience with dozens of national privacy surveys going back to 1979 by saying:

American consumers, by large majorities, want all the benefits and opportunities of a consumer service society and of a market-driven social system… We know that a majority of the American public does not favor the European Union style of omnibus national privacy legislation and a national privacy regulatory agency, but when it comes to sensitive information such as financial information or health information, overwhelming majorities are looking to legislative protections to set the rules and the standards for that kind of activity.

According to more recent surveys, not much has changed in the intervening years. As the charts below show, privacy concerns in the United States have, if anything, decreased over time.

Source: National Telecommunications and Information Administration

Source: National Telecommunications and Information Administration

A recent paper in the Journal of Economic Literature summarized the theoretical and empirical research on the economics of privacy, concluding:

Extracting economic value from data and protecting privacy do not need to be antithetical goals. The economic literature we have examined clearly suggests that the extent to which personal information should be protected or shared to maximize individual or societal welfare is not a one-size-fits-all problem: the optimal balancing of privacy and disclosure is very much context-dependent, and it changes from scenario to scenario. [emphasis added]

In the past, privacy fundamentalists and advocacy organizations have relied on the media and our natural predilections to focus on the negative to push a misleading narrative that does a great disservice to evidence-based policy debates. Now, in the ongoing debate over passing prescriptive baseline privacy regulations, they have a powerful new ally: Big Tech.

Big Tech Bootleggers and Privacy Baptists

In 1983, Bruce Yandle, the executive director of the Federal Trade Commission (FTC), wrote an article in which he coined the term “bootleggers and Baptists” to describe regulations supported by a coalition with both virtuous and venal interests. In his canonical example, he observed that bootleggers supported laws prohibiting the sale of alcohol on Sundays because they were good for business; Baptists, on the other hand, favored the same laws for moral or religious reasons. This kind of diverse coalition can prove very effective in passing and maintaining welfare-reducing regulations.

When European Commissioner Věra Jourová traveled to Silicon Valley last year to meet with American tech firms, including Google and Facebook, she expected to hear grumbling about the General Data Protection Regulation (GDPR), the European Union’s new baseline privacy law. Instead, she said, “They were more relaxed, and I became more nervous. They have the money, an army of lawyers, an army of technicians and so on.” Compliance with GDPR would not be a problem for Big Tech.

During a recent Senate hearing, Keith Enright, Google’s chief privacy officer, gave a clue as to why that’s the case. He estimated that the company spent “hundreds of years of human time” complying with the new privacy rules. The Global Fortune 500 will spend an estimated $7.8 billion on GDPR compliance. While these are significant costs to Big Tech, they also represent regulatory barriers to entry for small- and medium-sized enterprises trying to become the next Facebook or Apple.

Big Tech has entered into a “bootleggers and Baptists” coalition with privacy fundamentalist groups to support new omnibus regulations in the United States. Last month, the Information Technology Industry Council — the lobbying group for, among others, Apple, Amazon, Google, Facebook, and Microsoft — released its policy framework, which was “inspired” by GDPR and sought to “create alignment with the privacy protections of other privacy regimes across the globe.” In a keynote speech two days later in Brussels, Apple CEO Tim Cook said, “We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR … It is time for the rest of the world, including my home country, to follow your lead.” At the same conference, Facebook’s chief privacy officer said the company would “unequivocally” support an American version of GDPR.

From the perspective of multinational incumbent technology firms, this is the rational position to take on privacy regulation. Since the European market is large and regulatory compliance costs are fixed, it is often more efficient for global corporations to comply with European regulations everywhere than to meet the local regulatory minimum in each market. This phenomenon, whereby Europe becomes the de facto global regulator via market mechanisms, is known as the “Brussels effect.” (When this occurs domestically, it’s known as the “California effect.”) Incumbents lobbying to spread de jure GDPR-style regulations to other countries is an obvious strategy to prevent would-be rivals from entering the market.

Regulation is among the most effective ways of raising rivals’ costs. Indeed, economists Avi Goldfarb and Catherine Tucker found that smaller and more general websites were hit the hardest in the wake of European privacy regulation in the late 2000s. Evidence from the first six months of GDPR shows that Google and Facebook have been winners relative to smaller competitors:

Source: Cliqz

The United States Is Not the Wild West of Privacy

Some have claimed that the United States is the “Wild West of privacy.” This is a curious position considering “there are literally hundreds of laws relating to privacy and data protection in the [United States] — including common law torts, criminal laws, evidentiary privileges, federal statutes, and state laws.” As the Association of National Advertisers detailed in recent comments to the FTC, the United States has sector- and age-based privacy regulation:

The Health Insurance Portability and Accountability Act (“HIPAA”) regulates certain health data; the Fair Credit Reporting Act (“FCRA”) regulates the use of consumer data for eligibility purposes; the Children’s Online Privacy Protection Act (“COPPA”) addresses personal information collected online from children; the Gramm–Leach–Bliley Act (“GLBA”) focuses on consumers’ financial privacy; the Equal Employment Opportunity Commission (“EEOC”) enforces a variety of anti-discrimination laws in the workplace, including the Pregnancy Discrimination Act (“PDA”) and the Americans with Disabilities Act (“ADA”); the Fair Housing Act (“FHA”) protects against discrimination in housing; and the Equal Credit Opportunity Act (“ECOA”) protects against discrimination in mortgage and other forms of lending.

Additionally, the FTC has broad discretionary authority to police “unfair or deceptive acts or practices” and to “seek monetary redress and other relief for conduct injurious to consumers.” The Commission, which is known as “America’s top cop on the privacy beat,” has brought more than 500 enforcement actions on privacy- and security-related issues since 1998.

If this is the Wild West, then the sheriff is doing a pretty good job keeping the outlaws at bay.

Privacy Rights: Some Tradeoffs and Unintended Consequences

While some have applauded specific rights in the GDPR, those rights carry real consequences. A right to be forgotten, by its nature, is in direct conflict with free speech. For example, an email is a piece of data shared between two parties. If one party wants it to be “forgotten” and the other party wants to keep it, whose right should prevail? Furthermore, for criminal records and other socially relevant information, the public’s right to know might trump an individual’s right to erase it from the Internet. In the EU, convicted scammers and sex offenders have already exploited the right to be forgotten to scrub records of their crimes from websites and search engines. Combined with the right to explanation, the right to be forgotten might also be a regulatory death blow for blockchain and artificial intelligence technologies in Europe.

Rules that place strict limits on how data is used are antithetical to innovation in the age of Big Data. Purpose specification and data minimization rules assume that businesses know all the valuable products and services that can be built using customer data prior to acquiring those customers. Machine learning algorithms, for example, might be able to use an old dataset for a new purpose that creates real value, as some examples in the medical field show. Biomedical researchers have voiced concerns that GDPR “will make it harder to share information across borders or outside their original research context.” The Danish Cancer Society study that found no link between mobile phone use and cancer rates used data that was initially collected for a different purpose; such a study would probably have been illegal to conduct in a post-GDPR world.

The right of data portability increases what cybersecurity researchers call the “attack surface” by creating additional entry points through which bad actors can reach a network. The Cambridge Analytica scandal was a classic example of data falling into the wrong hands because it was too portable. Data breach notification rules with arbitrarily short deadlines (e.g., within 72 hours) for public disclosure also create perverse incentives. As Alex Stamos, a former chief security officer of Facebook, noted, firms are incentivized to announce the maximum number of users potentially impacted, which spreads undue panic before an investigation can determine who — if anyone — was actually harmed. Additionally, notification rules can hinder coordination with law enforcement and foreclose opportunities to gather information material to identifying and punishing those responsible. In another unintended consequence, a hacker who gained access to someone’s Spotify account used the right of access to demand a file containing all of the victim’s account data.

The impact can be best seen in what isn’t there. New data from researchers at the Illinois Institute of Technology and the University of Maryland has found that post-GDPR, European startups have seen “a $3.38 million decrease in the aggregate dollars raised, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised” relative to their United States counterparts. A survey released last week by Merrill Corp. reported that “fifty-five percent of respondents said they had worked on deals that fell apart because of concerns about a target company’s data protection policies and compliance with GDPR.” A separate study by PricewaterhouseCoopers LLP last month found that “fewer than half of international companies worth $100 million or more said they are fully prepared to comply with GDPR.” These are costs preventing new activity or market entry, but what about the companies that operated in Europe before the law took effect?

Since GDPR went into effect in May, numerous companies have shut down or left the European market due to compliance costs and liability risks. Most of the affected companies offered free or low-cost services that were subsidized by advertisements. GDPR’s rules mandating “opt-in” consent from users made these ad-based business models unviable. Losses were concentrated in the news, marketing, and video game industries (as shown in the table below). As of November 9th, at least 1,133 news sites based in the United States are still unavailable in the EU due to GDPR. Prices on independent ad exchanges in Europe fell between 25 and 40 percent in the days after the law went into effect.

The statistics regarding the economic impact of GDPR are unambiguous: Compliance costs for firms in the United States with more than 500 employees are estimated to reach up to $150 billion. At least 75,000 new “Data Protection Officers” will need to be hired as a result of the law. The annual direct welfare loss per EU household due to increased prices from GDPR is estimated to be between $700 and $4,000. And to date, over 42,000 complaints have been filed with the European Data Protection Board.

In a recent Boston Globe article about California’s new GDPR-style law, technology columnist Hiawatha Bray said:

Perhaps strong privacy regulations will prevent the rise of “the next Facebook,” but is that a bad thing? Maybe new regulations will result in fewer companies hitting me up for sensitive data. But I’d rather have one or two huge companies tracking me than 20 or 30 smaller ones. There are fewer opportunities for data breaches that way, and it’s easier to sue when something goes wrong.

It’s refreshing to hear a privacy fundamentalist clarify what is really at stake with these types of broad-based privacy rules, even if we disagree on which outcome is preferable. Yes, preventing the rise of the next Facebook is a bad thing — especially if you dislike how the current Facebook is behaving.

People Care About Privacy. And Convenience, and Choice, and Affordability, and…

In the United States, the right to privacy is implicit in the First, Third, Fourth, Fifth, and (arguably) Ninth Amendments to the Constitution. However, even if we proclaim privacy to be a fundamental human right that transcends our founding documents, that endowment neither determines the wisdom of different policy proposals that would regulate the flow of information nor limits how individuals should be able to trade in those rights. As Eli Noam, professor of finance and economics at Columbia University, points out, “A right is merely an initial allocation. It may be acquired without a charge and be universally distributed regardless of wealth, but it is in the nature of humans … to exchange what they have for what they want. … Whether we like it or not, people continuously trade in rights.”

In practice, people often trade the right to privacy for more convenience, lower prices, or greater choice. Tiffany Morris, vice president of global privacy at Lotame, pointed out how California’s recent GDPR-style privacy law missed this important feature:

There seems to be an illogical presumption that providing data to companies for marketing purposes creates a special kind of jeopardy for privacy. Yet consumers give away personal data every day – to banks, credit card companies, gyms, community centers, grocery stores and even automated toll takers. There is less scrutiny for those categories, likely because the data is required for the related transactions to function properly. The use of data for marketing and media also serves a legitimate interest, but it seems to be viewed as inherently reckless by regulators, legislators, and privacy vigilantes.

In practice, the trade-offs and choices that users make every day demonstrate that they view privacy as tradeable and, indeed, valuable for marketing and media purposes. In a recent paper, economists from Stanford and MIT noted that:

Whenever privacy requires additional effort or comes at the cost of a less smooth user experience, participants are quick to abandon technology that would offer them greater protection. This suggests that privacy policy and regulation has to be careful about regulations that inadvertently lead consumers to be faced with additional effort or a less smooth experience in order to make a privacy-protective choice.

In other words, as much as some might pretend otherwise, there is no free lunch when it comes to prescriptive privacy regulations.

We Need a Pragmatic Approach to Privacy

Instead of a fanatical commitment to privacy, we need a moderate, balanced approach to policy that accounts for the entire constellation of American values, including free speech and economic prosperity. For each policy decision, which values dominate will vary depending on context. It will be a messy, complex process to figure this out (“the slow boring of hard boards”). But figure it out we must, because a pluralistic society demands pluralistic values.

Privacy fundamentalists live in a “clean and well-lit prison of one idea.” They believe we should always prioritize safeguarding privacy over accelerating innovation or increasing economic growth. That is not the policy strategy the United States has followed in the past to become the world leader in the digital economy, and it won’t be the one that leads to prosperity in the future. Opportunity is calling; let’s not panic.

This essay was adapted from a recent series of regulatory comments.

Read the full comments to the NTIA on consumer privacy here.

Read the full comments to the FTC on privacy, big data, and competition here.

Read the full comments to the FTC on AI and consumer welfare here.

Endnotes

Tsai, J., Cranor, L., Acquisti, A., & Fong, C. (2006). What’s it to you? A survey of online privacy concerns and risks.

The questions are as follows:

  1. Consumers have lost all control over how personal information is collected and used by companies. (Strongly disagree = 0 to Strongly Agree = 6)
  2. Most businesses handle the personal information they collect about consumers in a proper and confidential way. (Strongly disagree = 0 to Strongly Agree = 6)
  3. Existing laws and organizational practices provide a reasonable level of protection for consumer privacy today. (Strongly disagree = 0 to Strongly Agree = 6)

Participants are classified into three categories: privacy fundamentalists, privacy pragmatists, and the privacy unconcerned. The descriptions of these categories are as follows (Harris, 2001):

Privacy Fundamentalists: At the maximum extreme of privacy concern, Privacy Fundamentalists are the most protective of their privacy. These consumers feel companies should not be able to acquire personal information for their organizational needs and think that individuals should be proactive in refusing to provide information. Privacy Fundamentalists also support stronger laws to safeguard an individual’s privacy.

Privacy Pragmatists: Privacy Pragmatists weigh the potential pros and cons of sharing information, evaluating the protections that are in place and their trust in the company or organization. After this, they decide whether it makes sense for them to share their personal information.

Privacy Unconcerned: These consumers are the least protective of their privacy – they feel that the benefits they may receive from companies after providing information far outweigh the potential abuses of this information. Further, they do not favor expanded regulation to protect privacy.

According to Westin’s classification, privacy fundamentalists agree with statement 1 and disagree with statements 2 and 3. The privacy unconcerned disagree with statement 1 and agree with statements 2 and 3. The remaining respondents are privacy pragmatists.
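Westin’s classification rule is mechanical enough to express in a few lines of code. The Python sketch below is illustrative only: the function name is invented, and reading “agree” as a score above the midpoint of the 0–6 scale (and “disagree” as below it) is my assumption, not part of Westin’s instrument, which leaves borderline responses to the pragmatist bucket.

```python
def westin_category(q1: int, q2: int, q3: int, midpoint: int = 3) -> str:
    """Classify one respondent per Westin's PSI rules.

    q1..q3 are responses to statements 1-3 above on the 0-6 scale.
    "Agree" is treated as a score above the midpoint, "disagree" as
    below it (an interpretive assumption for this sketch).
    """
    def agree(score: int) -> bool:
        return score > midpoint

    def disagree(score: int) -> bool:
        return score < midpoint

    # Fundamentalists: agree with 1, disagree with 2 and 3.
    if agree(q1) and disagree(q2) and disagree(q3):
        return "fundamentalist"
    # Unconcerned: disagree with 1, agree with 2 and 3.
    if disagree(q1) and agree(q2) and agree(q3):
        return "unconcerned"
    # Everyone else is a pragmatist.
    return "pragmatist"
```

Note that the residual rule matches the survey findings above: because any mixed or lukewarm answer pattern falls through to "pragmatist", that category naturally absorbs the majority of respondents.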