A New Framework for Digital Privacy (Part 2 of 3)

Note: This post is part two of a three-part series on digital privacy, concealed influence, and cognitive consent. Click here for part one.


Last year, tech companies worked overtime to improve plummeting public perceptions of their privacy practices. Google began grouping users into anonymized “cohorts” based on shared interests and affinities. Apple now handles most of the ad targeting and computation directly on-device on behalf of advertisers, effectively shutting off the stream of your personal data flowing into a third-party ad-tech soup for analysis, retargeting, and mining.

Meh. Not nearly good enough.


Sidebar: Wanna see the soup? Here it is, enjoy! 😬

Source: chiefmartec.com

To help you understand why such efforts miss the mark on protecting your privacy, let me reframe your thinking.

I will introduce two concepts:

  1. The intrinsic vs. contextual value of privacy, and
  2. A new “Confidentiality, Interoperability, and Agency” triad, which I’m shamelessly borrowing from the information security discipline and adapting to digital privacy.

Stick with me.

A Better Lockbox?

Today’s most popular privacy-enhancing technologies (PETs for easy reference) focus on the wrong metric: how securely do they safeguard my private data under lock and key? This metric stems from a mistaken assumption that digital privacy is about concealing and protecting private information.

It is not. This notion stems from information security thinking, and applies very poorly to the domain of digital privacy. Digital privacy requires privacy security thinking.

The right metric for evaluating a PET is to ask: how well does it protect me against concealed influence and reassert my cognitive consent in digital spaces, regardless of how strong the lockbox is?

To unpack this a bit, let’s revisit what advertisers want, and why they might care about your information.

Information Is Predictive

Given perfect information, we would be able to make perfect predictions about the environment around us. Digital advertisers understand this: information capture empowers increasingly precise predictions about human behavior, which creates more effective influence operations.

In digital advertising, that influence is currency. Advertising profits are highly dependent on the accuracy of behavior prediction algorithms. The more historical data advertisers access, the better the predictive models they can build. Granular data leads to high model accuracy, driving up the value of the resulting ad placement. That is why advertisers are interested in gaining access to your private data, better known as personally identifiable information, or PII.
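To make the economics concrete, here is a minimal, hypothetical sketch in TypeScript (the data and names are invented, not any real ad-tech API): the more finely an advertiser can slice your past behavior, the more confidently a model can pick the ad you are most likely to act on, and the more that placement is worth.

```typescript
// Toy sketch only: invented names, not a real ad-tech API.
type BrowsingEvent = { category: string; weight: number };

// Sum weighted events per category to build an affinity profile.
function affinityScores(history: BrowsingEvent[]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const { category, weight } of history) {
    scores.set(category, (scores.get(category) ?? 0) + weight);
  }
  return scores;
}

// Pick the top category and report how confident the pick is
// (top score as a share of total). Sharper concentration means a
// more precisely targeted, more valuable ad placement.
function targetAd(history: BrowsingEvent[]): { category: string; confidence: number } {
  const scores = affinityScores(history);
  let best = { category: "none", score: 0 };
  let total = 0;
  for (const [category, score] of scores) {
    total += score;
    if (score > best.score) best = { category, score };
  }
  return { category: best.category, confidence: total > 0 ? best.score / total : 0 };
}

// Coarse history: four page views bucketed into broad categories.
console.log(targetAd([
  { category: "shopping", weight: 1 },
  { category: "news", weight: 1 },
  { category: "shopping", weight: 1 },
  { category: "sports", weight: 1 },
])); // { category: "shopping", confidence: 0.5 }

// Granular history: dwell time, repeat visits, and purchases per sub-category.
console.log(targetAd([
  { category: "running-shoes", weight: 5 },
  { category: "running-shoes", weight: 3 },
  { category: "marathon-training", weight: 2 },
  { category: "office-chairs", weight: 1 },
])); // { category: "running-shoes", confidence: ~0.73 }
```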

Why is this bad? Accurate prediction gives advertisers the power of behavior manipulation: to get you to do what they want without you realizing why you suddenly want it. This isn’t just about shielding you from the sudden onset of late-night food cravings and Instagram-approved fashion choices from a timely and well-placed ad. Concealed influence and predictive microtargeting are a direct cause of real-world harm, such as election interference, disinformation, and social unrest.

Advertisers are not seeking to learn more about you because they care about who you are as a person. It is important to stress and restate this point: advertisers are not interested in obtaining consumer PII for the value of the PII itself.

Advertisers care about effectively influencing your actions. In other words, they care about the second-order effects of knowing about you. If there were another, better way to accomplish the second-order effect of influence without accessing your private data, they’d be game for that, too. Private consumer data is simply the best way of achieving that — for now.

It may be counterintuitive to realize that PII does not hold intrinsic value, but that is because we are used to thinking in information security terms, where PII is used for a different purpose than digital advertising: criminals want your data to engage in fraud, theft, espionage, and trafficking. In that world, the data itself has some innate value, and a great lockbox keeps the criminals out.

Advertisers want your data for an entirely different reason: to make predictions with it in order to influence your actions — and to do so as covertly as possible. The more likely you are to attribute a new craving or idea to your own thinking, the more likely you are to act on it. In other words, it’s not the data itself that carries intrinsic value, but the applied second-order effects that data makes possible: accurate prediction and ad targeting to exert concealed influence.

Accordingly, effective digital privacy controls should target those second-order effects. They should target the end goal, or the *destination*, of advertising: to influence action.

Instead, PETs such as FLoC and on-device targeting naively focus on the means and methods, or the *journey*, of advertising: predictive targeting via access to private consumer data.

In digital privacy, the destination matters much more than the journey. Therefore, a meaningful conversation about privacy must address concealed influence (digital privacy thinking), and not how well a PET keeps your private data under lock and key (infosec thinking).

When Does Privacy Matter? When You Can Control How Much Privacy You Have

The value of privacy is highly context-dependent. That means it varies depending on who needs access to our information, for what purpose, and when.

Modern notice-and-consent frameworks (such as the widely used Generally Accepted Privacy Principles) fail to protect user privacy because they treat PII as if it carries equivalent, intrinsic value that holds constant across all contexts.

A robust privacy-preserving approach would ensure that information flows are appropriate to the norms of the surrounding context. The only way to do that globally, across all contexts, is to give ultimate autonomy, choice, and control to users to govern their own information and toggle how much signal they emit.

Let’s unpack how we might do that.

In “Privacy and the Limits of Law”, legal scholar Ruth Gavison defines privacy as the degree of control that we have over others’ access to us through information, attention, and physical proximity. She writes that privacy is not a normative concept: it doesn’t carry any intrinsic moral value and varies according to context.

In fact, there are many situations where less privacy is actually preferable. Take for example the labyrinthine process of collecting and transferring a comprehensive patient history between medical offices. A patient in this case would benefit from a lower privacy bar to share relevant medical records between trusted professionals with minimal friction. Contrast that with your Amazon purchasing history, which you probably want to keep to yourself.

Obviously, privacy does not hold universal intrinsic value in all situations. Rather, its value and importance depend on circumstance; in some circumstances we might want more privacy, and in others we may wish to enjoy less.

As Gavison continues in her article, “the requirement of respect for privacy is that others may not have access to us fully at their discretion”. That discretion, or the ability to choose how much to share and when, according to the context, should belong to internet users and not third parties. We, the consumers whose data is at stake, should be able to decide how, when, where, and with whom it is shared, at our discretion.

What we want, in other words, is agency.

Here we see the beginnings of a framework for evaluating PETs in context: the quality of a privacy-enhancing technology depends on the degree of choice it affords to users to exercise agency about the sharing of their personal data.

Its key feature would empower users to expand or contract their privacy settings with minimal friction and according to choices informed by surrounding context. Its key feature would not be how well the technology safeguards private information across all contexts.

So what good is a lockbox if it still enables targeting, concealed influence, and behavior manipulation, even if the PII itself is kept private? Google places you into an affinity-based cohort while hiding your unique identifiers. Your identifiable data is hidden, but the mechanism still enables targeted behavior manipulation. Here, again, we’re addressing only the means, not the ends.

Likewise, Apple handles most of the targeting logic on-device, so advertisers largely do not interact with your data. However, since targeted ads are still being served, you are still being influenced without fully understanding how or why. Like Google, Apple is only looking at the journey, and not the destination.

Takeaways:

  • The value of privacy is highly context-dependent.
  • PII does not carry equivalent and intrinsic value that holds constant across all contexts.
  • The real measure of a PET: how well does it protect consumers against concealed influence?
  • The useless measure we’ve focused on too long: how well is our data hidden from advertisers’ view?

The Advantages of Agency

Before we start imagining better PETs, it is helpful to stop here and ask why agency might be a good thing in the first place.

Moral Autonomy

Perhaps the greatest social good derived from exercising agency over which attributes of our PII we choose to expose in a given set of circumstances is moral autonomy.

I have written previously on why PETs should focus on protecting cognitive consent in digital spaces. To double down on this point, I want to draw on several passages from Cornell University professor Helen Nissenbaum’s excellent book, Privacy in Context: Technology, Policy, and the Integrity of Social Life, which compiles multiple scholarly perspectives on the varying contextual values of privacy.

Nissenbaum offers social philosopher Stanley Benn’s description of moral autonomy as the self-determination of acting according to principles that are one’s own. According to Benn, these principles should be subject to “critical review, rather than taking them over unexamined from his social environment”.

In the same book, Nissenbaum also offers Georgetown University law scholar Julie Cohen’s definition of moral autonomy as a condition of “independence of critical faculty and imperviousness to influence”. Both definitions describe a state of critical thought guiding independent action that is diametrically opposed to the covert manipulation digital advertising aims to achieve given ever more perfect information and prediction.

If the destination of a PET is ultimately to safeguard moral autonomy and cognitive consent in digital spaces, then a useful methodology for evaluating how well a PET does this must examine its agility in modulating privacy settings across contexts, and not how well private data is secured and obscured.

It is worth noting that even though the value of privacy is contextual and variable, autonomy and cognitive consent are not morally neutral concepts. They are, in fact, foundational and essential to achieving a just and sustainable digital life.

We live in an era dominated by technology business models that offer “free” services in exchange for consumer data. But it is naïve to accept that the absence of moral autonomy is merely the cost of doing business in a tech-enabled world.

Sidebar: Alternatives exist! See my post about the prospects of web3 to address some of the ills of the web’s data-for-service business models.

Mass technological surveillance has a disciplining effect on how humans behave within our self-selected social groups. By weaponizing our need for social approbation, surveillance reduces our reliance on self-determination, critical thinking, and independent decision-making. We become more susceptible to confirmation bias and more likely to believe in dis- and misinformation shared by those we emulate or hope to impress.


Sidebar: Want to gross yourself out by learning about how marketers intentionally short-circuit your faulty human heuristics? Type “cognitive bias in advertising” into your search engine and enjoy the results!


This state of affairs causes demonstrable harm that many studies have exhaustively documented. It is perhaps most visible in the increased ideological partisanship, extremism, hate speech, inauthentic account activity, and political instability that marked the last two general elections and the ongoing global response to the COVID-19 pandemic.

Informational Symmetry

The second benefit derived from increased agency in digital spaces is informational symmetry, which I define as a condition in which digital consumers know what information is collected, where it is sent, what is done with it, and how it is used to target them for other goods and services beyond the initial transaction.
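What might informational symmetry look like in practice? Here is a minimal sketch, with invented field names (no such standard exists today), of a machine-readable “disclosure receipt” that a consumer-side tool could check against the user’s stated consent:

```typescript
// Hypothetical disclosure receipt: every field name here is invented for illustration.
interface DisclosureReceipt {
  collectedAt: string;      // when the data was collected (ISO timestamp)
  dataCategories: string[]; // e.g. ["location", "purchase history"]
  sentTo: string[];         // every downstream recipient, by name
  purposes: string[];       // e.g. ["ad targeting", "analytics", "resale"]
  retainedUntil: string;    // the deletion date the collector commits to
}

// Consumer-side check: was this collection within the purposes I agreed to?
function withinConsent(receipt: DisclosureReceipt, allowedPurposes: Set<string>): boolean {
  return receipt.purposes.every((purpose) => allowedPurposes.has(purpose));
}
```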

Informational symmetry allows consumers to make informed decisions about how to modulate their privacy preferences in context. Absent such equity in the distribution of knowledge, Nissenbaum describes a digital marketplace that is skewed in favor of those who have more information, “leading to inequalities of starting position”. This results in “an insidious form of discrimination among individuals and groups, with some deemed more worthy” for certain treatments because of their relative position.

We readily see this type of discrimination today in the form of bias in advertising. Examples of this include discriminatory loan offers, recruiting practices, and medical treatments.

Although the Federal Trade Commission expressly prohibits discriminatory advertising, such regulations are another example of infosec thinking bleeding into digital privacy. They treat the data itself as sacrosanct and privacy as having intrinsic value, instead of focusing on privacy’s contextual value by examining the applied purposes made possible by access to private data. Such regulations, much like privacy laws themselves, are wholly insufficient to protect consumers against concealed influence, and focus only on alleviating symptoms instead of attacking root causes.

Down With Advertising?

If concealed influence negatively impacts society and privacy laws are insufficient to protect consumers, then you might be wondering whether the problem is advertising itself.

It is not. There is a discernible difference between open, obvious, and disclosed promotional activity, and the precise tracking, targeting, and covert manipulation of the sort that defines the modern ad tech industry, which is driven by prediction rather than curation.

Here, I turn to the work of philosopher Gerald Dworkin in “The Nature of Autonomy”, who describes self-imposed limitations that are not at all antithetical to self-determination:

“A person who wishes to be restricted in various ways, whether by the discipline of the monastery, regimentation of the army, or even by coercion, is not, on that account alone, less autonomous”.

Publicity that caters to the needs of its target audience — when done with that target audience’s express consent — yields a beneficial and voluntary restriction on what the target audience sees. By contrast, a process that conceals the mechanisms that determine why and when a target audience is selected for exposure to a certain product or service is a different form of publicity altogether, and has more in common with manipulation than advertising. As Dworkin continues:

“It is important to distinguish those ways of influencing people’s reflective and critical faculties which subvert them from those which promote and improve them. It involves distinguishing those influences such as hypnotic suggestion, manipulation, coercive persuasion, subliminal influence and so forth…Both coercion and deception infringe upon the voluntary character of the agent’s actions. In both cases, a person will feel used, will see himself or herself as an instrument of another’s will. His or her actions, although in one sense his or hers because he or she did them, are in another sense attributable to another. It is because of this that such infringements may excuse or (partially) relieve a person of responsibility for what he or she has done. The normal links between action and character are broken when action is involuntary.”

The Failure of Current Frameworks

Privacy frameworks seeking to protect consumers from concealed influence should focus on digital products, services, and systems that impinge on moral autonomy, cognitive consent, and informational symmetry, and not on whether someone has built a better lockbox for consumer PII.

To return to Gavison’s definition, privacy is the degree of control that we have over others’ access to us through information, attention, and physical proximity. Current privacy laws govern who has access to what PII and when, but they say very little about who has control over that degree of access, which is an essential dimension of privacy applied in context. Laws that only govern access, but that say nothing about who is in control, will continue to leave digital consumers vulnerable to concealed influence.

It follows that PETs should focus on letting internet users modulate the amount and type of signal they emit to advertisers, not on where that signal gets stored. A suitable rubric for evaluating the usefulness and quality of such technologies must likewise examine the conditions that lead to moral autonomy, cognitive consent, and informational symmetry. These should include consumer choice about what information is shared and with whom, whether that information can be ported between providers at the consumers’ discretion, and who ultimately owns the information.

A Better CIA Triad for Evaluating Digital Privacy

Cognitive consent really boils down to internet users having control over:

  1. who has access to our information, and
  2. choice in how our data is used.

Control, therefore, is a key lever for evaluating PETs.

Products and services designed to protect and secure information are often evaluated against the well-known CIA Triad: Confidentiality, Integrity, and Availability. As we have already seen, infosec concepts do not translate well to digital privacy.

I propose a new rubric for evaluating digital privacy designed to put internet consumers in control of their cognitive consent in digital spaces:

Confidentiality:

Governs the encryption and protection of data, objects, and resources from unauthorized viewing and access. This is basically the same as the familiar information security definition, because the ability to protect information from unauthorized access is a prerequisite foundational layer for the other two dimensions of the triad.

Interoperability:

The maintenance of identity data and private information in a fully portable format that is accepted and readable by a multitude of service providers, protocols, and standards. Interoperability asks whether I as an internet user can port my privacy preferences from one platform to another with minimal friction. This dimension evaluates how well a PET protects me across a wide digital surface area, or whether it applies to a narrow set of digital experiences and is therefore minimally useful.

Interoperability is key because it puts the onus on digital products and services to accommodate user privacy preferences and needs, and not the other way around. Users suffer from insurmountable time constraints and information overload that make it effectively impossible to make sound, educated decisions about their privacy for every single website they visit and every service they use. Consider the following numbers, from “The Myth of Individual Control: Mapping the Limitations of Privacy Self-Management”:

“There is a vast discrepancy between the time required for the meaningful exercise of privacy self-management and people’s time constraints (Obar 2015; Rothchild 2018). In order to make an informed choice, data subjects need to invest time in (i) gathering all relevant information, (ii) carefully examining the information, (iii) estimating costs and benefits of data disclosure based on the information and (iv) determining whether the expected consequences are compatible with their preferences (van Ooijen and Vrabec 2019)…Additionally, data subjects have to repeat the process if they want to compare the terms of competing service providers. As it is not unusual for companies to frequently modify their privacy policies, studying them all just once would still not suffice (Solove 2012)… Mcdonald and Cranor (2008) estimated that an average Internet user would need more than six full working weeks (244 hours) to read the privacy policies of every website visited in a one-year period, which would result in $781 billion of lost productivity in the US alone. Notably, the study was conducted in 2008, since when the amount of Internet traffic has increased more than tenfold (Cisco Systems 2009, 2020). The study also exclusively focuses on web browsing and does not consider time required for policy re-reading and policy comparisons between alternative service providers. Therefore, while the outcome is astonishing, it can be regarded as a highly conservative estimate and most certainly understates the effort that would be required today to read the privacy policies of all services used by the average consumer (utilities, insurances, financial services, mobile apps, etc.). Given the time constraints of everyday life, it is unrealistic to expect data subjects to read through thousands of pages of privacy policies…Making truly informed privacy choices in a modern, technology-based society would require a significant amount of economic, technical and legal background knowledge.”

Users cannot reasonably be expected to read, understand, and act upon the terms and conditions of every single online service they wish to use. An interoperable PET would provide a standard that is broadly accepted by a multitude of digital products and protocols. It would allow you to create an interest profile with the type of content you wish to see and what you’d prefer to avoid, and then apply that same profile across the full spectrum of digital platforms and experiences.
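No such standard exists yet, but a minimal sketch of what a portable interest profile might look like (all names below are invented) helps make the idea concrete: the user defines the profile once, and every participating platform reads it instead of asking the user to start over.

```typescript
// Hypothetical portable interest profile -- an invented shape, not an existing standard.
interface InterestProfile {
  show: string[];         // categories the user consents to see
  hide: string[];         // categories the user wants excluded everywhere
  shareIdentity: boolean; // whether identifying data may accompany the profile
}

// Any participating platform filters its own catalog through the same profile.
function allowedCategories(profile: InterestProfile, platformCatalog: string[]): string[] {
  return platformCatalog.filter(
    (category) => profile.show.includes(category) && !profile.hide.includes(category)
  );
}

const myProfile: InterestProfile = {
  show: ["sports", "fitness"],
  hide: ["automotive", "travel"],
  shareIdentity: false,
};

// The same profile, exported once, applied across platforms:
console.log(allowedCategories(myProfile, ["sports", "travel", "news"])); // ["sports"]
console.log(allowedCategories(myProfile, ["fitness", "automotive"]));    // ["fitness"]
```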

Privacy preferences should reflect conscious decisions made through the exercise of our moral autonomy, rather than hidden, unconscious behavior patterns mined across thousands of data points and analyzed through machine learning. By making these preferences not only conscious but portable, interoperability ensures users are not burdened with information overload and insurmountable time constraints every time they visit a new site or download a new app, and can enjoy a fluid consent-based user experience.

So how do we build interoperable tools that allow us to set our defaults across our entire digital experience, and modify as necessary for context? Agency is the key.

Agency:

The degree to which the consumer can exercise choice and consent over the information that belongs to them, when that information is transferred, viewed, or accessed by another party, and how it will be used. Perhaps the most important pillar of the digital privacy triad, Agency is concerned with the measure of choice that internet users have over the aspects of their identity or personal information that they wish to expose or conceal in a given context or transaction.

Agency is intended to empower users to modulate the amount and type of signal they emit to advertisers. In so doing, users exercise consent over their digital experience and are not exposed to unwanted, concealed influence that they have not expressly agreed to see. This dimension evaluates how well a PET takes a contextual approach to privacy by allowing users to toggle and manage their privacy preferences according to context instead of following the uniform, one-size-fits-all approach of information security. Finally, Agency seeks to return data ownership to internet users, who are empowered to determine when their information is transferred, viewed, or accessed by another party, and how it will ultimately be used.
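Here is one way to picture Agency in code: a hedged, hypothetical sketch (contexts and attribute names invented) in which the user’s own policy, not the platform’s, decides which attributes leave the device in a given context. It mirrors the earlier medical-records example, where a lower privacy bar is desirable in one context and a higher one in another.

```typescript
// Hypothetical sketch of context-dependent signal emission; all names are invented.
type Context = "medical" | "shopping" | "social";

interface SignalPolicy {
  emit: Record<Context, string[]>; // per context, the attributes the user consents to emit
}

// Strip every attribute the user has not approved for this context.
function emitSignal(
  policy: SignalPolicy,
  context: Context,
  attributes: Record<string, string>
): Record<string, string> {
  const allowed = new Set(policy.emit[context]);
  return Object.fromEntries(
    Object.entries(attributes).filter(([key]) => allowed.has(key))
  );
}

const policy: SignalPolicy = {
  emit: {
    medical: ["allergies", "medications"], // low privacy bar among trusted professionals
    shopping: ["interests"],               // share interests, nothing identifying
    social: [],                            // emit nothing at all
  },
};

// The same attributes produce different signals in different contexts.
const attributes = { allergies: "penicillin", interests: "fitness", email: "user@example.com" };
console.log(emitSignal(policy, "medical", attributes));  // { allergies: "penicillin" }
console.log(emitSignal(policy, "shopping", attributes)); // { interests: "fitness" }
console.log(emitSignal(policy, "social", attributes));   // {}
```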

Let’s see how PETs such as Google’s FLoC and Apple’s on-device processing measure up against the digital privacy triad.

Confidentiality

While FLoC is still very new and its impact is yet to be studied, FLoC does appear to satisfy the Confidentiality dimension at least partially.

Cohorts do not contain identifiable user data, so they are inherently confidential. However, one of the inherent and increasingly prevalent risks of internet use today is data re-identification or de-anonymization, which is the practice of cross-referencing publicly available information with anonymized data to match patterns to individual users. While addressing this risk is outside the scope of FLoC’s intended purpose, one of the unintended results of its design is, ironically, to increase the ease of re-identification. Moreover, privacy advocates have raised concerns that, as cohorts get smaller and more granular, it may be possible to identify individual users even without cross-referencing.
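Re-identification is easier to grasp with a toy example. The sketch below is not FLoC-specific and uses invented data; it simply shows how an “anonymous” record becomes identifying once its coarse attributes are cross-referenced against a public dataset and the match turns out to be unique.

```typescript
// Invented data for illustration; not FLoC's actual output format.
interface AnonRecord { cohortId: number; zip: string; birthYear: number }
interface PublicRecord { name: string; zip: string; birthYear: number }

// Cross-reference: if only one public record shares the quasi-identifiers,
// the "anonymous" user has effectively been identified.
function reidentify(anon: AnonRecord, publicRecords: PublicRecord[]): string | null {
  const matches = publicRecords.filter(
    (p) => p.zip === anon.zip && p.birthYear === anon.birthYear
  );
  return matches.length === 1 ? matches[0].name : null;
}

const anon: AnonRecord = { cohortId: 21354, zip: "94110", birthYear: 1987 };
const voterRoll: PublicRecord[] = [
  { name: "A. Rivera", zip: "94110", birthYear: 1987 },
  { name: "B. Chen", zip: "94110", birthYear: 1990 },
];
console.log(reidentify(anon, voterRoll)); // "A. Rivera" -- anonymity gone
```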

On-device processing, on the other hand, more fully satisfies Confidentiality, since the results of the on-device computation are not aggregated into interest-based groups. Everything, from computation to targeting, is accomplished on-device. Advertisers receive reports that an ad was clicked and that an outcome was achieved, but the underlying data remains hidden.
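Conceptually (and this is a simplified sketch, not Apple’s actual API or report format), on-device processing looks something like the following: ad selection happens locally against locally stored interests, and only a coarse outcome report ever leaves the device.

```typescript
// Conceptual sketch only; invented types, not Apple's actual on-device API.
interface LocalInterests { categories: string[] } // never leaves the device

interface CandidateAd { id: string; category: string }

// Selection happens on-device, against local data.
function pickAdOnDevice(interests: LocalInterests, candidates: CandidateAd[]): CandidateAd | null {
  return candidates.find((ad) => interests.categories.includes(ad.category)) ?? null;
}

// Only this leaves the device: the outcome, with no identity attached.
interface OutcomeReport {
  adId: string;
  clicked: boolean;
  converted: boolean;
  // Notably absent: who the user is, or which interests drove the selection.
}
```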

Interoperability

Neither FLoC nor on-device processing appears to offer any benefits in the way of interoperable privacy preferences. FLoC is a standard that applies only to Google’s own Chrome browser, and the only way internet users can express their preferences for FLoC is to opt out of using Chrome altogether, since tracking is automatic.

According to Apple’s documentation, expressing preferences for on-device processing is likewise limited to turning off Location Services and Personalized Ads, without further granularity. Neither technology provides users with any degree of interoperability or portability across the digital surface area.

Agency

Both FLoC and on-device processing fail spectacularly when it comes to empowering internet users with choice in their digital experience.

We already covered that the only way internet users can exercise choice with FLoC and on-device processing is to opt out entirely, leaving the entire spectrum of meaningful choice and control on the table.

We also discussed how limitations on autonomy can be desirable, so long as those limitations are consensual and self-imposed. When self-imposed, limitations produce curation instead of covert manipulation. For example, you may wish to see advertising related to sports and fitness while de-prioritizing automotive and travel categories. These self-selected choices impose a limitation on the types of ads you see, effectively curating your digital experience.

However, since this limitation is achieved through your own conscious choice, based on your active agency in the decision instead of behavior tracking and hidden prediction, cognitive consent remains intact. Without the ability to express such preferences, your agency is reduced to an on/off switch: you can either opt in or opt out. There is no middle ground and therefore, no meaningful choice.

It is obvious that neither FLoC nor on-device processing satisfies the digital privacy triad. Internet users are not empowered with control over their experience because the critical missing piece with both technologies is consent.

The stated purpose of both is to accomplish targeting while hiding PII; consent is nowhere in that equation. Since PII itself holds no intrinsic value in digital privacy, and since the value of PII to advertisers is the predictive targeting it makes possible, it follows that hiding PII is insufficient.

Behavioral targeting leads to concealed influence. So long as digital advertising is conducted surreptitiously, based on calculations that internet users neither understand nor control, and with the express purpose of predicting and influencing our behavior, our digital experience is fundamentally non-consensual.

To arrive at a place where users enjoy true cognitive consent in digital spaces, we need to look at technologies that put data ownership fully into the hands of internet users. My next post will explore whether web3 technologies such as decentralized self-sovereign identity provide a viable alternative to data-for-service business models and federated identity providers that track and siphon your data.

Does crypto hold up against the digital privacy triad? I’ll explore that in part three of this series.
