Averting Cambridge Analytica in the Metaverse: Identity, Privacy, Interoperability & Agency in Emerging Digital Worlds

By collecting this NFT, you’ll be helping fund Yeet, an invite-only discourse club reducing the coordination costs among crypto founders, funders, and operators committed to building agentic technologies. Interested in joining? Get in touch.

Preface

In summer 2020, I started my M.S. in information security and privacy because I wanted to solve surveillance capitalism.

I believe that surveillance capitalism poses an existential risk that, left to run rampant in ever more immersive digital spaces, will lead to the total dissolution of free and open societies -- and by extension, hasten the onset of worldwide authoritarianism and civilizational collapse -- in our lifetimes.

So I thought it a worthwhile endeavor. NBD. 🤷‍♀️

I graduated in May 2022, and all the efforts of my last semester revolved around a singular question:

“What would it take to build a sufficiently robust privacy moat around every digital citizen such that the costs of surveillance capitalism become too onerous for companies to pursue?”

Basically, how can I increase the costs enough to kill off this sociopathic business model…forever?

Some people think that the answer – the whole answer – is crypto. Sorry to disappoint, but the blockchain is not the droid you’re looking for. It’s one of the spare parts of the human dignity droid you seek. Alas, it is not, by itself, enough. Not nearly.

Here’s 17,500 words on why not, and what to do instead.

This essay is the long-delayed (I was busy in grad school, after all) conclusion to my three-part series on digital privacy, concealed influence, and cognitive consent. See previously published Part I and Part II.

I submitted portions of this essay as part of my graduate capstone requirements. While the original tone veered academic, I’ve adapted it here to be more internet/Mirror friendly and accessible. (If you want the vastly abridged version that I was required to submit for my final capstone, click here.)


On a related note, I originally wrote this essay in March 2022, and a lot of it focuses on decentralized identity. That was mere months before crypto suddenly fell in love with solving identity -- after neglecting the issue for entirely too long. It's been amusing to watch the brouhaha instigated by the publication of the now-infamous Soulbound tokens paper by Glen, Vitalik, and Puja (a prior version of which, called simply Soulbound, is even referenced in this essay).

The entire time I was in grad school, hardly anyone in crypto gave a shit about who owns our identities -- and therefore, our data -- which is partly what this essay is about. That was frustrating because to me, identity is like the master key for the whole surveillance capitalism puzzle I had gone to grad school to solve. So in a sense, I found this community’s newfound romance with identity edifying, even if much of it was ill-conceived. (For my thoughts on all that, see this Twitter thread.) But it also caused me to regret not having published this essay (or the abridged capstone) on my Mirror earlier, because having these artifacts published would have been helpful when speaking with people about DIDs, VCs, and SBTs.

Better late than never, though. I took the entire month of June off to chill, read, work on personal projects, and finally, finally publish this doorstop of an essay. I wrote it to burst our collective crypto hype bubble about “data sovereignty” (bro, I do not think that term means what you think it means). But I also wrote it to offer real, workable pathways to human agency in this new, reimagined internet we’re busy building. So long as we’re out here reinventing compute, we might as well do it right this time around, right?

There is hope. There is still time. And thank goodness for the bear market, for now there is even the mental space and focus needed to get it right.

Part I: Introduction

The point of privacy has been pretty hard to pin down.

Its shaky legal standing and philosophical underpinnings have mystified and eluded scholars, ethicists, jurists, legislators, and technologists for centuries. Though it has long been a hotly debated topic among intellectual elites, hoodie-cloaked DEFCON attendees, and cypherpunk libertarians, less rarified company has always squinted at it with suspicion:

“If I have nothing to hide, what do I have to fear?”

The Cambridge Analytica scandal finally thrust privacy -- and its attendant ambiguities -- into the sphere of popular inquiry and outcry.

Expensive and embarrassing data breaches such as those suffered by Yahoo and Equifax had previously drawn headlines about unwarranted disclosures of personally identifiable information (PII). But whereas those data breaches constituted failures in information security, Cambridge Analytica underscored the need for a different type of assurance: privacy protection.

People are often surprised to hear this, but there was no data breach of Facebook’s records. The data on 87 million Facebook users that Cambridge Analytica harvested to microtarget voters and sway elections was freely available because Facebook’s inadequate privacy protections allowed exactly this kind of collection and manipulation.

Advertising-supported technology platforms such as Meta, Google, and Twitter compete for attention and engagement to generate revenue, devising intentionally addictive user experiences and adversarial product interfaces designed to maximize user information capture. This data exhaust, or the trail of personal telemetry left behind by Internet users with little insight into or control over their digital footprints, is then sold to advertisers, malicious actors, and anyone willing to pay for the data, giving rise to the lucrative monetization of data for secondary purposes known as surveillance capitalism. Whether born of hype, naïveté, or willful ignorance of history and facts, crypto and web3 (I’ll call it web3 for easy reference from here on out) perpetuate a misconception that blockchains are a sort of panacea that protects users against the predations of the past 20 years of surveillance-funded Big Tech business models.

This is patently wrong. On most popular blockchains, sensitive data is public by default. Far from protecting user autonomy and data sovereignty, blockchains greatly increase the attack surface for digital privacy infringement. Meanwhile, mechanisms for protecting our digital privacy remain almost nonexistent and certainly inadequate. In some cases, society’s attempts to protect digital privacy have arguably made matters materially worse, as with the false sense of security provided by the General Data Protection Regulation (GDPR), for example (more on that later).

In short, we are not ready for the metaverse.

This essay shows that crypto alone will not defeat surveillance capitalism, even if the underlying business model shifts from centralized to decentralized, or from platform to protocol. It offers, instead, an outcomes-based framework for designing strong protections against surveillance capitalism by isolating the inputs (such as data exhaust) that give rise to it and replacing them with mechanisms that enhance user sovereignty and agency.

I examine the history of privacy jurisprudence, philosophy, and identity governance. I then review the inadequacies of existing protections and frameworks. I isolate the primitives that produce surveilled dystopias: neglect of user-centric identity, overexposure of user telemetry, and consent theatrics. I conclude by asking the technical, engineering, and standards community that’s building emerging technologies to coordinate, conspire, and root out these pathologies.

I recognize this is a massive coordination challenge. The purpose of the essay is to motivate the collective willpower of the web3 community to meet the challenge not with resignation and exasperation, but with resolve, curiosity, creativity, and craft.

After all, we are still so early, but we are already almost too late.

Part II: Isolating Root Causes

Data exhaust is valuable to marketers and information operatives because it powers behavior models used to predict and manipulate outcomes. These outcomes can be as innocuous as influencing a purchase, or something altogether more alarming, such as polarizing an electorate and swaying a vote. There is growing acknowledgement among lawmakers, policy experts, technologists, and consumers alike that platforms that feed on user telemetry are correlated with and often causally linked to political polarization, online toxicity, isolation and depression, and distrust in democratic institutions (Harris, 2019). The balkanization of public consensus resulting from behavior manipulation and targeted narrative warfare spills over into real world violence (Smith, 2021) and has, according to former National Security Advisor Lieutenant General H.R. McMaster, become a serious national security threat (Harris & Raskin, 2022).

Regulations have failed to stem the tide of data exhaust that powers surveillance capitalism. This is in part due to the pacing problem: innovation moves faster than its effects can be felt and described with sufficient precision to devise mitigation strategies – and by that time, the harm is already done (Thierer 2020). Even then, regulatory remedies only address symptoms that emerge downstream from a critical flaw in the original design. They do not, by definition, address underlying causes, so harms perpetuate in forms not captured in the new law. Ambiguity about what exactly privacy is and how much privacy citizens should enjoy further complicates the task of devising solutions for a problem that eludes definition (Cohen, 2012).


Food for thought: the existence of regulation is a very good sign that something has gone terribly wrong further upstream at the system architecture and incentive design level. The false sense of security provided by regulation lulls us into a dangerous complacency: instead of addressing root causes, we slap a law on it and call our work done. And so, negative externalities accumulate.


Although pacing problems and definitional ambiguities do not serve society well, they are not the crux of the issue. The first root cause of surveillance capitalism is a failure to build the Internet with a persistent, portable, and composable identity layer that allows users to self-custody their privacy decisions and self-govern how they connect, to what services, and under which conditions. The best available proxy for digital identity became the email address – and later, the social media account – which associated users with a data store of attributes and interactions owned and operated by technology companies – such as Google and Meta – issuing those accounts.

Since this model of identity management separated users from custody over their data, privacy governance has focused on how companies should handle their users’ data, and not what users should be able to do with their own data. But privacy is a continuum of people’s ever-shifting boundaries and preferences that ebb and flow according to situation, environment, time, and mood. These characteristics defy the prescriptive, uniform definitions promulgated by centrally-defined regulatory frameworks and legal remedies. Contexts shift while governance remains static and codified, leaving user privacy boundaries vulnerable to exploitation.

The Cambridge Analytica scandal laid bare the need for better protections against surveillance capitalism: protections not only for the information security of the sensitive PII that platforms manage, but for the metadata contained in our data exhaust that falls outside the scope of information security, and, crucially, for the autonomous agency and cognition of the person described by that metadata and PII. To reimagine privacy governance from first principles, we must first rearchitect how we issue, manage, and govern digital identity — and in so doing, place users at the center of their data sovereignty.

With the rise of public blockchains, this challenge is now even more pressing. Smart contracts enter sensitive transactions into the immutable public record for anyone, including advertisers, data brokers, and malicious actors, to track, analyze, and model, exacerbating the data exhaust problem that powers surveillance capitalism on platform businesses. Meanwhile, the persuasive, hypnotic immersions of the emerging metaverse, already valued as a $1 trillion opportunity in a recent J.P. Morgan report, further enrich the menu of data types, biometric markers, and interaction patterns subject to monetizable capture.

Internet users require robust protections across the full landscape of digital experiences, a daunting and infinitely complex undertaking that centrally managed governance has provably failed to provide. Unless we give users full custody of their identities – and therefore their data, privacy preferences, and access controls – blockchains will not only replicate but multiply the same problems that gave us Cambridge Analytica.

Part III: Privacy as a Public Good

“The data revolution will bring untold benefits to the citizens of the future. They will have unprecedented insight into how other people think, behave and adhere to norms…The newfound ability to obtain accurate and verified information online, easily, in native language and in endless quantity, will usher in an era of critical thinking in societies around the world…Anyone with a mobile handset and access to the Internet will be able to play a part in promoting accountability and transparency.” – Eric Schmidt, former Google CEO (Schmidt & Cohen, 2014)

Technology always outmaneuvers the legal, philosophical, and regulatory frameworks that humans construct to keep pace with the ingenuity of our inventions, leaving our cognitive defenses and behavioral norms forever struggling to adapt to the evolving pressures and affordances of exponential innovation.

So we read Eric Schmidt’s breathless enthusiasm with a sense of irony and foreboding: Just where are these new critical thinking skills and accountability benefits we were promised? How did we mess up? What would it take to stymie the accretive privacy losses to ubiquitous technology and thrive in partnership with our inventions? Can we still shape our proverbial tools, or is our sovereign agency consigned to executing to our tools’ design, without recourse for self-determination?

A bad workman always blames his tools, as they say. Might we become better workmen by shifting away from over-reliance on tools (such as cybersecurity technologies and regulatory remedies) and towards nimbler mental models, design constructs, and technosocial virtues capable of adapting alongside an industry that is by necessity in constant transition?

The Western mindset is deeply informed by the natural rights philosophy of John Locke and Thomas Hobbes: life in a state of nature is mostly nasty, brutish, and short, and rights are limited to what is observable about man in this austere environment. He has the right to his life, to his freedom, and to the property that he possesses. In return, he ought simply not interfere with these same empirically derived natural rights that all other men possess except through contractual agreement. Conceptions of what is right, good, and ethical in Western thought tend to be framed in terms of what produces social utility and favorable consequences without imposing on the aforementioned natural rights.

By contrast, claims to essentialism or innate morality are often rejected as irrational, mystical, and outdated. If the separation of church and state proscribes any divine right of kings, then there can be no inherent code of ethics that exists outside the sphere of empirical inquiry. To the extent that we have natural rights, these are rationally derived from observation. Even Immanuel Kant’s deontological ethics, classically understood as the opposite of utilitarian consequentialism, firmly rejects essentialism in favor of logic and reason. The categorical imperative judges actions by whether their maxims could hold as sound universal laws, and not by any innate, “God-given,” or mystical quality.

Atop these rationalist roots, the Western tradition layers on the various pleasures of the social contract and the conditions required to maintain it. Most ethical debates about what rational man ought and ought not do in a society likewise revolve around evidence-based, consequential reasoning about the qualities of a society people want to live in.

Until philosopher Helen Nissenbaum described privacy as a contextual spectrum determined by situation-specific informational norms (2010), most experts and lawmakers had sought to articulate a precise universal definition – and to define privacy’s value as a function of consequences: what we enjoy with privacy, and what we suffer without it. Prior to Nissenbaum’s work, a monolithic description of privacy, or at least a quantifiable evaluation of its societal worth, had been seen as a precondition for devising governance. But while laws require precise signifiers and irrefutable evidence, people’s privacy boundaries are variable and idiosyncratic, so any singular definition will necessarily feel reductive, vague, and incomplete.

Samuel Warren and Louis Brandeis’ essay “The Right to Privacy” in a 19th century volume of the Harvard Law Review offered perhaps the first contribution to legal scholarship on privacy. Their polemic against the intrusions of the “modern media”, which reported on a private wedding ceremony without permission (reporting made possible, notably, by the “disruptive” technology of the day: the invention of photo cameras), influenced over a century of privacy jurisprudence (1890). They argued that just as “a man’s house is his castle,” the private facts, thoughts, and emotions that constitute “inviolate personality” should likewise be shielded from public view.

Their impassioned plea foreshadowed modern-day concerns about the impact of technology on private life and laid the groundwork for the subsequent establishment of privacy torts that enumerate four specific harms: intrusion upon seclusion, public disclosure of private facts, false light, and appropriation of name or likeness (Prosser, 1960). Importantly, Warren and Brandeis’ essay continued the longstanding Western tradition of evidence-based reasoning by framing privacy in terms of harms done to the inviolate autonomy of the individual. In so doing, Warren and Brandeis carried Western consequentialism out of philosophy and over to jurisprudence.

Historical attempts to capture and define the legal boundaries of experiences as subjective and context-dependent as privacy expectations have largely proven too rigid and incomplete to be practicable. This is especially true of digital privacy, where in-the-moment harms seem too theoretical and diffuse to be real (until, of course, they accumulate; see: privacy’s “dead body” problem). To give digital privacy the weight of consequence, scholars have sought to broaden the scope of harms beyond the narrow definitions of intrusive injuries and torts by connecting privacy to ideals of self-expression and self-actualization.

Psychiatrist David Rosen and literary scholar Aaron Santesso, for example, write that privacy is:

“…a necessary condition for the formation of an autonomous person; integrity of soul, inversely, is the underlying justification for a new right. The various invasions of daily life are not mere annoyances but threats that might arrest the development of the self, a chrysalislike process that, to be successful, must remain ‘inviolate.’” (2011).

In legal expert Julie Cohen’s view, privacy preserves “spaces for free moral and cultural play” necessary for moral autonomy and independent critical thought:

“[Scholars should] examine the experiences of network users through the lens [of] ordinary routines and rhythms of everyday practice. In particular, scholars concerned with the domains of creativity and subject formation should pay careful attention to the connections between everyday practice and play…and the ways in which culture and subjectivity emerge.” (Cohen, ch. 5, pp. 1-2)

Law professor Herbert Burkert argues that privacy enables agency over how we see ourselves and in turn present ourselves to others:

“One of the more recent critiques of the notion of privacy argues that it misses the essence of identity. Identity is not a constant but a process. We do not develop our identity by keeping ourselves separate from others; our identity is what others know about us. In knowing about us, power is already exercised. The way we are seen by others greatly influences the way we see ourselves” (1998, p. 138).

Ethicist Shannon Vallor describes how affronts to human agency short-circuit the capacity to soundly evaluate options and make agentic choices:

“Our digital liberation from cultural hegemony is itself endangered by multinational media consolidation and the deliberate design of ‘sticky’ digital media delivery systems that exploit neurological and psychological mechanisms to undermine our self-control…[but] many new media pleasures are consciously designed to be delivered to us in ways that undermine our cognitive autonomy and moral agency. They make it harder, not easier, for us to choose well.” (2016, pp. 166-168)

Finally, political scientist Priscilla Regan believes that privacy provides positive social value:

“When we credit privacy with the role it plays in promoting values such as autonomy, human development, and freedom of thought and action, we stop at the good of the individuals. It is time…to emphasize how critical these values – values such as freedom of speech, association, and religion – are to the flourishing of liberal societies. As such, privacy is to be grasped as a common value.” (Nissenbaum 2010, p. 84)

Scholars have also argued the inverse: that the absence of privacy causes various harms, including self-censorship, preference falsification, chilling effects, interference with autonomy, and even reality distortions.

Privacy scholar Daniel Solove writes that people need privacy not to conceal illicit activity, but to avert social disciplining effects; without privacy, their choices fall subject to decisional interference (2007). People generally fear the disapproval of their peers, so individuals in surveilled spaces self-censor, falsify their preferences, and communicate ideas that differ from their true perspectives, generating a distorted view of reality which produces a chilling effect on society at large. The accumulated misrepresentations of people’s real thoughts and sentiments conceal true consensus about what people actually believe. These distorted signals achieve increasingly genuine social acceptance and normalization over time, leading to what philosopher Jeffrey Reiman calls psychopolitical metamorphosis:

“Extrinsic losses of freedom occur when people curtail outward behaviors that might be unpopular, unusual, or unconventional because they fear tangible and intangible reprisals, such as ridicule, loss of a job, or denial of benefits. Intrinsic losses of freedom are the result of internal censorship caused by awareness that one’s every action is being noted and recorded…They are thus deprived of spontaneity and full agency as they self-consciously formulate plans and actions from this third-party perspective.” (Nissenbaum 2010, p. 75).

Public opinion research supports the notion that privacy losses produce harmful inhibiting behaviors:

“The Pew Research Center…has released a worrisome study suggesting that social media platforms not only fail to reliably foster open civic discourse, they often inhibit it…With remarkable consistency across the study metrics, the researchers found that social media users seem to be disproportionately inclined in civic life to fall victim to the phenomenon known as the ‘spiral of silence.’ This is the tendency of holders of a minority viewpoint to become increasingly inclined to self-censor in order to escape the social penalties for disagreement, which in turn artificially boosts the perceived strength of the majority opinion across the community, further quieting dissent. The study authors hypothesize that in online contexts, ‘this heightened self-censorship might be tied to social media users’ greater awareness of the opinions of others in their network.’” (Vallor 2016, p. 183).

As a result, Reiman argues that constant panoptic surveillance not only changes the thoughts people express but also the behaviors they adopt. A virulent modern example of this phenomenon is apparent in how social media alters the flow of public discourse. People fearing “cancellation,” loss of job, and reputational harm refrain from participating in certain conversations or subjecting their ideas to scrutiny to avoid retribution. This chilling effect on society discourages or inhibits the legitimate exercise of people’s natural and legal rights simply because they fear there might be repercussions, even if those fears may be unfounded.


The term “panoptic” refers to the “panopticon,” a prison architecture devised by philosopher Jeremy Bentham in which guards observe prisoners at will without any of the prisoners knowing whether or when they are being watched. The panopticon is often cited as a metaphor for modern-day surveillance technologies, where watchers observe but the watched cannot reciprocate, widening the power asymmetry between institutions and the public – and between companies and customers. An important account of panoptic architectures as a tool of social discipline is included in philosopher Michel Foucault’s Discipline and Punish (Foucault & Sheridan 1995).


Finally, social psychologist Shoshana Zuboff offers a human rights-based defense against the disciplining effects, asymmetric power dynamics, and reality distortions of panoptic life:

“Self-determination and autonomous moral judgment, generally regarded as the bulwark of civilization, are recast [by surveillance capitalist business models] as a threat to collective well-being. Social pressure, well-known to psychologists for its dangerous production of obedience and conformity, is elevated to the highest good as the means to extinguish the unpredictable influences of autonomous thought and moral judgment. These new architectures feed on our fellow feeling to exploit and ultimately to suffocate the individually sensed inwardness that is the wellspring of personal autonomy and moral judgment, the first-person voice, the will to will, and the sense of an inalienable right to the future tense.” (2019, p. 444)

By placing our survival needs for social approbation above our developmental needs for individual agency, surveilled spaces invite confirmation bias and groupthink. Individuals become more susceptible to social pressure and propaganda, especially if shared by those they wish to emulate or impress. Thus, even if people have nothing whatsoever to hide, surveillance materially alters perceived social norms and expressed behaviors, and interferes with our ability to build trust, perceive consensus, coordinate across social groups, and arrive at shared truth.

But why, if privacy carries no innate moral worth, and if its value is merely a function of its consequences, has Western scholarship attempted to govern privacy as if it does have uniform significance? Universally defined, centrally managed privacy frameworks (such as the GDPR and others covered later in this essay) all stumble on contradictions and inadequacies because they seek to establish consensus about privacy’s essence that holds constant across all contexts, settings, and environments. In contrast, we have seen that the utility and valuation of privacy vary greatly depending on the harms and benefits derived. In other words, Western scholarship seeks essentialist governance for evidential experience.

This, of course, creates massive conceptual and practical difficulties. The paradox between rationalist ethics and essentialist governance contributes to the chaotic state of privacy governance today. Cohen offers a particularly vivid account:

People’s understandings and expectations of privacy [are] hard to understand. Surveys report that ordinary people experience a relatively high generalized concern about privacy but a relatively low level of concern about the data generated by specific transactions, movements, and communications. Some policy makers interpret the surveys as indicating either a low commitment to privacy or a general readiness to trade privacy for other goods. Others argue that the various ‘markets’ for privacy have informational and structural defects that prevent them from generating privacy-friendly choices. They argue, as well, that inconsistencies between reported preferences and revealed behavior reflect a combination of resignation and befuddlement; most Internet users do not understand how the technologies work, what privacy policies mean, or how the information generated about them will actually be used.

Confronted with these developments and struggling to make sense of them, courts increasingly throw up their hands, concluding that constitutional guarantees of privacy simply do not speak to many of the new technologies, business models, and behaviors, and that privacy policy is best left to legislators. Legislators are quick to hold hearings but increasingly slow to take action; in many cases, they prefer to delegate day-to-day authority to regulators. Regulators, for their part, rely heavily on principles of notice, consent, reasonable expectation, and implied waiver to define the scope of individual rights with respect to the practices that fall within their jurisdiction.

Legal scholars also have struggled to respond to these social, technological, and legal trends. There is widespread (though not unanimous) scholarly consensus on the continuing importance of privacy in the networked information economy, but little consensus about what privacy is or should be. Among other things, legal scholars differ on whether privacy is a fundamental human right, what circumstances would justify pervasive government monitoring of movements and communications, whether guarantees of notice and informed consent are good or even effective safeguards against private-sector practices that implicate privacy, and what to make of the inconsistency between expressed preferences for more privacy and revealed behavior that suggests a relatively low level of concern. (2012, ch.5, pp. 1-2)

To bring Western consequentialism into coherence with its inconveniently essentialist privacy governance, protections should focus on responsiveness to experience and agility in context. Instead, current privacy governance is determinate and universal: a fitting model for essentialist, unchanging ideas, but not ones that are driven by observable utility and subjective experience.

Perhaps a more useful framework for governing privacy might offer individuals greater latitude to modulate how much privacy they wish and when. More privacy is not always desirable, nor is it always possible. The key is to be able to choose, or at least to bring the absence of choice into conscious awareness.

Part IV: Distinguishing Between Concealed Influence and Legitimate Exchange

“There is nothing in the mind which was not first in some manner in the senses.” – Thomas Aquinas

While there is widespread agreement about the causal link between agency, autonomy, self-determination, and privacy, privacy’s subjective and context-dependent nature does not lend itself to quantification or uniform governance. Yet it is precisely this nebulous space of shifting norms and expectations that privacy management must somehow locate and defend without collapsing the context necessary for nuanced, context-responsive governance.

It is worth pausing here to inquire where the distinction between concealed influence and autonomous self-formation truly lies. As philosopher Gerald Dworkin writes in his essay “The Nature of Autonomy,” autonomy is “identified with qualities of self-assertion, critical reflection, freedom from obligation, absence of external causation and knowledge of one’s own interests” (2015). But where precisely do these interests lie, if they are in constant negotiation with the external pressures and considerations of socialized daily life? To paraphrase the great empiricist dictum, can there be anything in the mind that did not get there through the senses first? And if so, can a meaningful border be drawn between concealed influence of the sort that produces manipulation and legitimate interaction of the sort that creates consensual exchange? Nissenbaum notes the obvious ambiguity:

“We may all readily agree that no one (except possibly a hermit or someone living in total isolation) can have absolute control over all information about him- or herself. In this regard, we are all at the mercy of those simply passing by us on streets or engaging with us in mundane daily interactions, and more so with others who have virtually unbounded access to us. So, even if we agree that a claim to privacy is morally legitimate, and that privacy is about control or restricted access to information about us, it would be singularly unhelpful to circumscribe a right in such a way that it is violated every time a motorist peers at a pedestrian crossing the street” (2010, p.73).

To locate that defining line between manipulation and conversation, it’s helpful to interrogate the interplay between influence and autonomy. Like Nissenbaum, Dworkin grants that fully impervious “autonomy in the acquisition of principles and values is impossible,” but caveats that autonomy is preserved when influence is contextualized by clues as to the surrounding situation (1988, p. 18). Such contextualization offers the autonomous individual an opportunity to consent to or reject the influence:

“[There are] ways of influencing people’s reflective and critical faculties which subvert them from those which promote and improve them. It involves distinguishing those influences such as hypnotic suggestion, manipulation, coercive persuasion, subliminal influence, and so forth, and doing so in a non ad hoc fashion.” (ibid)

According to Nissenbaum, “there seems to be a line, however fuzzy, between entrepreneurial salesmanship and unethical manipulation” which is determined by the way that information is assembled. Information may be assembled to inform the subject and overtly seek conscious consent – or it may be assembled “for purposes that are manipulative and paternalistic and not transparently evident to the consumers who are the subjects of these personalized treatments” (2010, pp. 210-211).

There is a discernible difference in kind between information and manipulation, even if the distinction is somewhat blurry. The former constitutes open, contextually obvious, and disclosed promotional activity based on an individual’s declarative choices. The latter involves precise tracking, targeting, and covert manipulation constructed from inference instead of declarative preference, and driven by prediction rather than curation. Historian Yuval Noah Harari helps clarify this distinction:

“As biotechnology and machine learning improves, it will become easier to manipulate people’s deepest emotions and desires and it will become more dangerous than ever to just follow your heart. When Coca-Cola, Amazon, Baidu, or the government know how to pull the strings of your heart and press the buttons of your brain, will you be able to tell the difference between your self and their marketing experts?” (Ammerman 2019, p. 173)

The medium is the message that carries key contextual clues for obtaining conscious consent. For example, a sign outside a restaurant is definitionally visible to all people who pass by, invisible to people who do not, and is surrounded by contextual information about the neighborhood and other nearby establishments. Importantly, these clues are the same for everyone. Targeted treatments in digital environments, on the other hand, conceal the medium and obfuscate the context, making genuinely consensual decisions impossible.

When entrepreneurial salesmanship crosses that ill-defined line into unethical manipulation, interactions cease to be legitimate, consent obtained ceases to be conscious, and influence becomes intentionally concealed. So long as digital advertising is done in a surreptitious manner, based on calculations internet users do not understand nor have control over — and with the express purpose of predicting and manipulating their behavior — then digital experiences are fundamentally non-consensual.

It is this boundary between concealed influence and cognitive consent that privacy governance must somehow locate and negotiate. But it is difficult to see how any centrally managed framework that aspires to define privacy universally across all settings could accomplish this feat of nuance with due rigor.

Perhaps the locus of control to negotiate this porous boundary between influence and manipulation should belong to the individual whose consequential experience is most affected and who therefore has the most granular context upon which to formulate a defensive response?

In “Privacy and the Limits of Law”, law professor Ruth Gavison offers the first building block for a more coherent model of responsive privacy governance. Gavison reframes privacy as a degree of control that we exert over others’ access to our cognition through information exchanged, attention given, and physical presence shared (1980). In so doing, she lifts privacy governance out of failed normative attempts to universally protect something that has variable value. In short, if privacy does not carry any intrinsic, essential morality that holds constant across contexts and requires a persistent and unchanging control setting, then governance should be responsive and modular while controls should be based on experiential preference and needs.

A workable model for privacy management would therefore have to find a way to empower individuals to make context-dependent, consensual choices about external influences and to modulate the amount of access they allow others into their private, autonomous sphere of cognition.

Part V: Addressing Our Collective Neglect of Privacy

That this boundary between self-derived notions and exogenous manipulations is so very porous is precisely why digital privacy has proven difficult to legislate, regulate, or meaningfully protect in Western scholarship and technology education. Absent a strong tradition of innate morality in the deeply rational Western canon, one must either appeal to infringements on life, liberty, or property, or else produce quantifiable evidence of social value or harm. In other words, for consequentialist ethics, one must produce consequences.

But how should one produce evidence of harm with nothing more than a nebulous feeling of manipulation or perceived restriction? As law professor Ann Bartow laments, privacy does not have “enough dead bodies” to serve as proof that privacy encroachments produce real injuries that merit serious attention (2006).

But Bartow was writing in 2006, and today, the consequences of adversarial technology, including the proliferation of mis- and disinformation, online toxicity, and real-world violence, have been exhaustively documented. Especially vivid accounts of privacy’s dead bodies come courtesy of Cambridge Analytica whistleblower Chris Wylie in his tell-all book, Mindf*ck (2019), in the U.S. Senate Intelligence Committee report that followed Wylie’s revelations (2020), in the popular Netflix documentary The Social Dilemma that brought these revelations more sharply into the public’s view, and of course, via Zuboff’s own work.

The boogeyman all these accounts point to is the business model of engagement-driven economics, which relies on harvesting user attention for resale to advertisers. When revenues depend on the reliable delivery of views, clicks, and shares to advertisers, companies have a fiduciary incentive to uncover the emotional and psychological triggers that induce people to engage, and to build ingenious mechanisms that serve up those triggers with exquisite precision. These mechanisms, better known as algorithms, reward and amplify outrageous, inflammatory, polarizing, and very often inaccurate content because it exploits certain personality traits (profiled with models such as the Big Five and the Dark Triad), while downplaying more moderate, measured perspectives because they do not inspire as much reaction and engagement (Grassegger & Krogerus 2017, Sumner et al 2012, and Bhagat et al 2020). The resulting fracture of public trust in media and institutions creates echo chambers that manufacture consensus, pollute the information ecology, and erode the cognitive autonomy needed to make sovereign, consent-based choices, not only in our purchasing decisions but also in our sensemaking. These harms are not confined to digital spaces, or even to the emotional baggage we accumulate when we disconnect; they spill directly over into tragedies such as the 2021 U.S. Capitol riot (Smith, 2021).

If we do away with the business model that powers surveillance capitalism, do the underlying privacy encroachments go away with it? That is to say, if the Big Tech platforms that aggregate and monetize engagement cede ground to a decentralized model that is fundamentally different from the centralized businesses it replaces, will the main pathology that leads to surveillance capitalism – the neglect of privacy and user-owned identity that leaves behind so much rich data exhaust to mine for prediction and manipulation – be somehow “healed”? If users cease to be the product and become the customer, is their cognitive consent thereby secured?

Blockchain technologies are birthing a new era in how humans transact value, exchange ideas, and construct meaning. They promise to deliver consumers out of extractive platforms that monetize personal data and into a new economic paradigm in which users own their data. Will this effectively end surveillance capitalism, or merely shift its business model?

Emphatically, the answer (and this point cannot be overstated and therefore must be obnoxiously capitalized) is NO, IT DOES NOT defeat surveillance capitalism and YES, IT DOES only shift the operating business model.

Blockchains, by themselves, do not produce data sovereignty. The only thing that blockchains do is defeat platform data capture by opening up the social graph. Sure, Meta no longer owns my user history, but I’ll do you one better: now everyone on the internet has access to it and can do with it as they damn well please! And the last person who has any control over it is me, the original user and the ostensible owner of that data.

Data transparency is not the same thing as data sovereignty, which has become something of a meaningless marketing term misused by anyone building analytics and social graph products in web3. If we want to achieve an agentic, dignified technological future without surveillance capitalism, we cannot be satisfied with merely defeating platform lock-in by opening the social graph. We need to deal with the other primitives that give rise to surveillance capitalism: users separated from control over their identities, unchecked data exhaust, and consent theater.

There is a flaw in the hopeful logic that blockchain technologies will, on their own, address most of Big Tech’s harms: most of those harms are rooted in ineffective privacy protections, not economics. While it is self-evidently true that new incentives produce new behaviors, the prospect of a new economic model does not by itself address the underlying forces that resulted in the current era of surveillance capitalism: data exhaust, behavior tracking, aggregation, and users separated from ownership of their personal information. The same forces will produce surveillance capitalism in crypto because the business model is merely a symptom of the problem, not its source. So long as the behavioral surplus that emerges from ineffective and outdated privacy frameworks enables the monetization of personal data for commercial gain, surveillance capitalism will pervade digital environments irrespective of their business economics.

Ironically, privacy is made far worse, not better, on the blockchain, because even if users own their data, all their transactions become a matter of public record, exposing them to targeted monitoring and surveillance. Instead of entrusting personal data to a handful of dominant cloud technology companies, social media firms, and third-party data brokers, crypto users instead entrust their data to anyone who wishes to perform a rudimentary Etherscan search.

Painting a complete picture of all historical transactions tied to a public key is trivial for anyone who can use an API. Reidentification and pattern correlation of the kind that enable concealed influence and outcome manipulation take only a little more effort, and don’t even require sophisticated machine learning algorithms to produce. In fact, it has already been done: in 2020, a team of journalists at Decrypt “analyzed 133,000 Ethereum names and their respective balances” and “found it was possible to identify several high-profile people, even if they weren’t using their real names…see business deals, and watch people’s movements, just using the blockchain.” (Copeland, 2020) All that surveillance capability using a blockchain that has nothing to do with engagement farming or advertising-supported extractive platform economics!
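
To make the point concrete, here is a minimal sketch of how little effort such profiling takes. It assumes Etherscan’s free account/txlist endpoint and uses a placeholder address and API key (neither comes from the Decrypt investigation); any block explorer API or archive node would serve equally well.

```python
# Minimal sketch: pull the complete public transaction history of one address.
# Assumptions: Etherscan's free "account/txlist" endpoint, a placeholder API key,
# and a placeholder target address. An illustration only, not anyone's production tooling.
import requests

ADDRESS = "0x0000000000000000000000000000000000000000"  # hypothetical target
API_KEY = "YourApiKeyToken"                              # placeholder

resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "account",
        "action": "txlist",
        "address": ADDRESS,
        "startblock": 0,
        "endblock": 99999999,
        "sort": "asc",
        "apikey": API_KEY,
    },
    timeout=30,
)
txs = resp.json().get("result", [])

# Every counterparty, amount, and timestamp is public by default: enough raw
# material for a behavioral profile, no machine learning required.
for tx in txs:
    eth_value = int(tx["value"]) / 1e18  # on-chain value is denominated in wei
    print(tx["timeStamp"], tx["from"], "->", tx["to"], f"{eth_value:.4f} ETH")
```

Join that output against an ENS name or a single doxxed transaction, as the Decrypt team did, and the pseudonymous ledger becomes a named dossier.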

Blaming Big Tech has become a crutch for crypto’s unwillingness to admit its own culpability in adding to the surveillance problem. Instead of scapegoating platform economics, it is time for web3 to drop the facile narrative that engagement-driven business models are solely to blame for eroding consumer privacy. This narrative is reductive and lazy. The underlying issue is that we fail to protect privacy by not giving users meaningful control over how they are influenced and when. This holds true regardless of the business model.

If we want to solve for privacy, we have to deal with privacy, not business models.

“Law 31 - Control the Options: Get Others to Play with the Cards you Deal: The best deceptions are the ones that seem to give the other person a choice: Your victims feel they are in control, but are actually your puppets. Give people options that come out in your favor whichever one they choose. Force them to make choices between the lesser of two evils, both of which serve your purpose. Put them on the horns of a dilemma: They are gored wherever they turn.” - Robert Greene, The 48 Laws of Power

Without clear consensus about what privacy means or when it has value, it has been impossible to unify perspectives on how to adjudicate, defend, or design for it. Absent definitional clarity, some approaches took on the ambitious and ill-fated task of predicting what kinds of data environments a user might face, imagining all the data types that might be exchanged and for what purpose, and prescribing exactly how each of those transactions and interactions is to be governed. In a technology space experiencing exponential innovation, such deterministic proposals not only limit the possibility space for what can be invented, but rapidly fall out of relevance as system designers and engineers find creative workarounds that most often find their way to the consumer in the form of high-friction user experiences.

The National Institute of Standards and Technology (NIST) Privacy Framework, for example, set out to “build customer trust” by “future-proofing products and services” (2020). It proposed to do this essentially by predicting the future: defining information inventory strategies and data processing policies, subdividing data types and users into categories, and prescribing response and communication protocols across the entire surface area of consumer-facing technology. That this framework was built to help companies comply with requirements rather than to help users preserve their cognitive autonomy is the first clue to why its relevance and applicability are limited.

More importantly, it is not difficult to imagine how attempts to “future-proof” technology by setting out to enumerate all emergent data types and interaction flows a user in some hypothetical future product might face would lead to contradictory design choices for software architects and unintended complications for users.

The practice of writing long and inscrutable Terms of Service (TOS) and End-User License Agreements (EULA) is the direct result of the tension between compliance requirements and irreconcilable design contradictions. Since companies cannot anticipate every possible context that a user might face any more than a regulatory framework can, they settled on a workaround. By notifying users and obtaining their consent to relinquish data and decision rights in exchange for service, companies got off the hook for safeguarding contextual integrity while passing the burden of privacy management off to users without giving them the necessary tools to manage their boundaries.

Of course, companies have no expectation that users will actually read, understand, or make rational and agentic choices about EULAs or TOS due to [impracticable time and expertise barriers](https://ssrn.com/abstract=3881776):

“There is a vast discrepancy between the time required for the meaningful exercise of privacy self-management and people’s time constraints, including gathering information, estimating its costs and benefits, and determining whether the expected consequences are compatible with their preferences…Additionally, data subjects have to repeat the process if they want to compare the terms of competing service providers. As it is not unusual for companies to frequently modify their privacy policies, studying them all just once would still not suffice…An average Internet user would need more than six full working weeks (244 hours) to read the privacy policies of every website visited in a one-year period, which would result in $781 billion of lost productivity in the U.S. alone.” (Kröger et al., 2021)

Those productivity estimates are conservative, as they are based on 2008 figures, while the volume of combined worldwide digital activity has since increased by several step functions. The purely performative practice of informed consent has nonetheless become a widespread convenient fiction that neutralizes public concerns and allows companies to satisfy compliance without meaningfully protecting privacy. As a result, companies have been incentivized to treat privacy as a legal condition to be met while ignoring the intent of privacy management: to preserve the moral autonomy, cognitive consent, and contextual boundaries of individuals whose unprotected digital telemetry leaves them vulnerable to surveillance and concealed influence.

The GDPR is yet another labyrinthine attempt to protect all privacy for users, all the time, across all technology environments, despite the obvious impossibility of achieving centralized protection for contextual needs. It also has the dubious honor of consent-washing you into a false sense of security by placing infuriating cookie popups between you and your enjoyment of the internet. These popups of course do nothing whatsoever to give you real choice or protect your privacy. They do, however, incentivize product designers to devise creative workarounds (see: dark patterns) to obscure your optionality and get you to consent to tracking out of sheer exasperation.

In “What’s wrong with the GDPR?”, authors Martin Brinne and Daniel Westman explain how the GDPR complicates application design in ways that create unnecessary user experience friction rather than protecting users’ privacy:

“The challenges that businesses are facing is due, in large amount, to the GDPR’s often vague and difficult to interpret provisions…and a lack of guidance and uncertainty regarding international data flows. This leads to a level of uncertainty in many companies about what is applicable and how they should act. The broad scope and the desire to regulate all processing of personal data creates direct contradictions or at least tensions in relation to other regulations, that further complicates the application of the GDPR.” (2019)

A new framework built by the XR Safety Initiative in partnership with Georgia Institute of Technology extends this same prescriptive, centrally managed approach to emergent environments, layering further complexity on top of an already unworkable idea:

“The XRSI Privacy Framework was inspired by the NIST privacy framework’s approach and was strategically designed to be compatible with existing U.S.-based and international legal and regulatory regimes and usable by any type of organization to enable widespread adoption. It explicitly considers key regulations such as GDPR, CCPA, COPPA, FERPA, and a few others, as previously mentioned.” (2022)

Among its other prescriptions, XRSI recommends that emergent technologies find ways to evolve that “satisfy CCPA and GDPR requirements” and “provide just-in-time disclosures to individuals and obtain their affirmatively expressed consent” (ibid). These preemptive restrictions extend the awkward user experience of disruptive, chaotic consent notification requirements into immersive digital worlds whose precise properties and interaction patterns have not even been invented yet!

How did we arrive at this point where consumers suffer the consequences of their own need for privacy protection, without having any meaningful say in it? It stems in part from a conflation between data handling and privacy management driven by the idea that privacy is too ill-defined, fuzzy, and ineffable a concept to concretize in practice. To rein in the uncomfortable ambiguity, framework developers abstracted away the “person” behind the data and broke privacy into more digestible components: data objects and transmission principles to be protected through information security practices.

In his 1998 essay “Re-Engineering the Right to Privacy,” Simon Davies foretells with stunning accuracy the shift from privacy protection to data protection that would take place through the development of technical frameworks instead of contextual, situation-specific, and user-driven rules to govern the handling of data. This shift offered “the illusion of voluntariness” by putting the onus of protecting privacy entirely on users without giving users any true choice, while at the same time neutralizing public concerns about surveillance and loss of privacy in technology-mediated life (1998, p. 144). As a result, companies have been incentivized and habituated to treat privacy as a legal condition to be satisfied and the assurance of confidentiality as an end goal in itself – while ignoring the applied intent of privacy management: to preserve the moral autonomy, cognitive consent, and shifting privacy boundaries of individuals living increasingly trackable and intermediated lives.

Unfortunately, this trend of passing the buck on to consumers instead of dealing with privacy management from first principles is carrying on enthusiastically in crypto! The popular refrain echoed across crypto Twitter and Discord is “DYOR”, or “do your own research”. Little mind is paid to what is anecdotally obvious and empirically true: very few people possess the willpower, capacity, expertise, or spare time to pore over financial data or understand the code written in smart contracts to determine the privacy impact or the data exhaust left behind – and what story it might tell.

We continue to build with full knowledge of this glaringly obvious loophole: that requiring consent or awareness without actually expecting consent or awareness to exist (except as a checkmark provided under conditions of stress or exasperation) is merely a convenient fiction we have collectively constructed to sidestep the conceptual quagmire of meaningful privacy management. Because, technically, the terms are all there in the smart contract for anyone to read: all one has to do is do the research by examining the contract to fulfill consent.

Of course, the entire paradigm of notice-and-consent privacy management was never intended to ascertain users’ true preferences or protect their sphere of cognitive autonomy. It is a kind of consent theater that gets the company over a legal hurdle to provide a certain service – and the user over the friction hurdle to obtain it.

Consent theater is one of the key primitives that produces surveillance capitalism. It is intuitive to understand why it is predatory – and why centralized governance such as NIST, GAPP, and GDPR or irresponsible conceptual shortcuts such as “DYOR” or “read the contract” are woefully inadequate. This practice would never meet the criteria for legal protection in a different context requiring consent: voluntary sexual acts between adults. If a person is too cognitively compromised or too young to grant consent, a defense such as “s/he said yes” would not stand. And yet, clicking “Yes” or “Accept” on a TOS or EULA is all that our privacy frameworks require to meet the legal threshold for informing users about their data and privacy rights and obtaining agreement to company terms.

Finally, the idea that a binary choice between predetermined options constitutes an expression of agency and free will is absurd. This is especially so when opting out of a digital service means retreating from basic forms of interaction, learning, and socializing that constitute daily life in modern society. As Nissenbaum writes, “Some claim it is unfair to characterize a choice as deliberate when the alternative is not really viable; for instance, that life without a credit card, without a telephone, or without search engines requires an unreasonable sacrifice” (2010, p. 105). When opting out of a pervasive technology’s privacy scheme means opting out of the social contract and reverting to a pre-digital state of nature, then is this choice truly consensual -- or patently coerced?

Consent cannot be considered freely given when valid alternatives to opting out do not exist. What, then, would it take to move technology from the current paradigm of consent theatrics towards a new model rooted in cognitive consent obtained through freely made, sovereign choices? What qualities might this framework have?

Universal, centralized, exogenous technical frameworks separate users from agency over their digital identities and data. They promulgate a deterministic conception of privacy that holds constant across all contexts, settings, environments, and technological futures. Realistic privacy governance would offer individuals latitude to modulate how much privacy they desire and when, and to respond in context to their shifting privacy boundaries. More privacy is not always preferable, nor is it always possible. Users might, for example, wish to simplify the labyrinthine process of collecting and transferring patient histories by lowering their privacy settings to quickly share relevant records between medical offices. On the other hand, users would almost certainly want more privacy for their Amazon purchasing history or before placing a large bid on an NFT.

Since the only party with sufficiently rich information to make an agentic, context-informed choice is the individual in question, responsive privacy governance must center end users, not companies.

Part VII: The Urgency of Fixing Crypto Now

Transparent public accounting of all transactions on distributed ledgers, combined with expansive data collection on immersive inputs, compounds the scope of the surveillance problem in web3. Although the data itself is cryptographically secure and immutably available, blockchains’ privacy implications merit urgent attention. Infuriatingly little such attention is actually paid.

Indeed, web3 has already encountered familiar privacy intrusion problems. DAOs, for example, struggle with preference falsification in governance proposals. If DAO voting members can see how others vote, their own expressions of preferences may change to fit in with the group. Instead of registering their true wishes, DAO voting members self-discipline for fear of opposing prevailing norms and losing social approval.

In DeFi, transaction transparency invites the risk of sandwich attacks. Because pending transactions are publicly visible before they are confirmed, bad actors can scan the mempool for sizable trades. Once a target is found, the attacker issues two orders: one just before the victim’s transaction and one right after. The front-run pushes the price up, the victim’s trade executes at a worse price, and the attacker immediately sells for a profit.
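
To make the mechanics concrete, here is a minimal sketch of the arithmetic in TypeScript, assuming a toy constant-product AMM with no fees and hypothetical pool sizes. It never touches a mempool or a chain; it only shows why a publicly visible pending trade hands an observer a near-riskless arbitrage.

```typescript
// Toy constant-product AMM (x * y = k), no fees. All numbers are hypothetical;
// this only illustrates the arithmetic and does not interact with any chain.
function swap(reserveIn: number, reserveOut: number, amountIn: number) {
  const amountOut = (reserveOut * amountIn) / (reserveIn + amountIn);
  return { amountOut, newReserveIn: reserveIn + amountIn, newReserveOut: reserveOut - amountOut };
}

// Pool: 200,000 USDC / 100 ETH (spot price ~2,000 USDC per ETH).
// A victim's pending swap of 20,000 USDC for ETH is visible before confirmation.
const honest = swap(200_000, 100, 20_000);
console.log("ETH received, no interference:", honest.amountOut.toFixed(3)); // ~9.091

// 1) Attacker front-runs with their own 20,000 USDC, pushing the ETH price up.
const front = swap(200_000, 100, 20_000);
// 2) The victim's trade now executes against worse reserves.
const victim = swap(front.newReserveIn, front.newReserveOut, 20_000);
console.log("ETH received, sandwiched:      ", victim.amountOut.toFixed(3)); // ~7.576
// 3) Attacker immediately sells the ETH bought in step 1 at the inflated price.
const back = swap(victim.newReserveOut, victim.newReserveIn, front.amountOut);
console.log("Attacker profit in USDC:       ", (back.amountOut - 20_000).toFixed(0)); // ~3,607
```

With these toy numbers, the victim receives about 7.6 ETH instead of about 9.1, and the attacker clears roughly 3,600 USDC before gas and fees.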

Within the metaverse, users will eventually find themselves in an immersive environment where every behavior – from how long they stare at an object to which users they interact with – becomes a data point for predictive modeling. The metaverse surface area for concealed influence increases by many orders of magnitude from what users experience today on social networks because augmented and virtual reality interfaces produce richer, more descriptive telemetry, including biometric markers.

Gaming is a particularly challenging usecase for privacy. Game designers focus a great deal of research on the development of addictive triggers that induce continuous gameplay by doling out rewards at unpredictable intervals and offering ever more challenging quests that increase with the player’s skill level. Precise behavioral data on players’ emotional responsiveness to specific triggers enables game designers to deliver exquisitely addictive and potentially harmful experiences in which the player is ill-equipped to express full agency and disrupt gameplay, even when consent to keep playing is declaratively given.

Because web3 alters the fundamentals of how people transact value and construct meaning, many observers ascribe to it all manner of unrealistic hopes, including the erroneous notion that users will own their data simply by opening the social graph and exiting extractive platforms that monetize attention. In fact, web3’s open data paradigm does not by itself end surveillance capitalism because the primitives that drive surveillance capitalism – a missing user-centric identity layer, unchecked data exhaust, and consent theater – are all left unaddressed. Even if we do away with centralized platforms altogether and shift all business to decentralized protocols, the only thing that changes is where user telemetry gets stored: in the cloud or on the public ledger.

Hence the frustration: even when privacy harms are noted, the issue is glossed over with the imagined optimism that, since all the data is public and not owned by a corporation, everything evens out in the end and becomes safe merely by virtue of being decentralized.

This is of course naïve, and happens for a few reasons:

  1. The space is still nascent and the harms still too theoretical to seem an imminent concern;
  2. A great deal of hype and excitement surrounds the space, which distracts from uncomfortable realities, earning those who bring up worthwhile criticism the unwelcome reputation of being harbingers of “FUD”, a disingenuous acronym used to discredit even educated skeptics as unbelievers; and
  3. Many builders in the space simply do not fully understand how privacy works, why it matters, and how to build with privacy in mind. This last group is likely very open to education and dialogue on the topic of how to build agentic technologies, and simply needs access to useful resources and convincing arguments.

But crypto’s privacy-eroding harms are not lost on careful observers. Moxie Marlinspike, the founder of the end-to-end encrypted messaging app Signal, has written that “it seems worth thinking about how to avoid web3 being web2x2 (web2 but with even less privacy) with some urgency” (2022). Vitalik Buterin, one of the cofounders of the Ethereum blockchain, has voiced similar sentiments in the original Soulbound piece that predates the current hype about soulbound tokens:

“Privacy is an important part of making this kind of ecosystem work well. In some cases, the underlying thing…is already public, and so there is no point in trying to add privacy. But in many other cases, users would not want to reveal everything that they have. If, one day in the future, being vaccinated becomes [an NFT], one of the worst things we could do would be to create a system where the [NFT] is automatically advertised for everyone to see and everyone has no choice but to let their medical decision be influenced by what would look cool in their particular social circle. Privacy being a core part of the design can avoid these bad outcomes and increase the chance that we create something great.” (2022)

Similarly, a report by J.P. Morgan highlighted the criticality of developing privacy standards to fully capitalize on the metaverse’s $1 trillion business opportunity:

“Some key elements to support commerce and the meta-economy still need to be determined and scaled…to preserve privacy and enable digital freedom…User identification and privacy safeguards will be crucial for both interacting and transacting in the metaverse…Preservation of one’s ability to have multiple avatars/identities, with the addition of…private KYC/AML-compliant commerce and payments; verifiable credentials that can be easily structured to enable easier identification of fellow community/team members, or to enable configurable access to varying virtual world locations and experiences; [and] prevention against cyberbullying or online harassment/assault across virtual worlds. Expansion of NFT token-gated spaces [should] include the creation of private interactions, discussion and messaging.” (2022)

To fully appreciate the enormity of the privacy challenge in the metaverse alone, it is important to understand just how much data becomes available for capture. David Nuti, a Vice President at Nord Security, writes:

“In an augmented reality environment, a company may want to serve me an advertisement for a couch because they can see in my augmented environment that my couch is kind of ratty in the background. Through artificial intelligence, they’ll serve me up a color of a new couch that matches the paint on the wall of my house. If I serve up an advertisement, it’s no longer knowing that I’m serving up the advertiser to the person, but how long my eyeballs are focused on that content.” (Preimesberger 2022)

This richness of data exhaust available to mine for psychological insights and predictive behavioral modeling is made possible through what is called biometrically inferred data:

“‘This enormous eye-tracking, gait-tracking the way you move, the way you walk – all this analysis – can infer a lot of information about you. And then there are the intersections of these other technologies, which is just like a brain-computer interface that will provide the alpha, beta, gamma – and even your thoughts – at some point. What happens to privacy when our thoughts are not even protected? All this information – stacked in cloud storage and constantly being analyzed by multiple buyers – could give companies a greater ability to understand individual traits,’ Pearlman said. An insurance company, for example, might see a behavioral clue inferring a customer’s health problem before the person notices anything herself. ‘Now, the data is in inferences.’” (ibid)

Biometrically inferred data traces its roots to affective computing, a field that began in 1997 at the MIT Media Lab:

“[Affective computing aims] to combine facial expression with the computation of vocal intonation and other physiological signals of emotion that could be measured as behavior…Some emotions are available to the conscious mind and can be expressed ‘cognitively’ (‘I feel scared’), whereas others may elude consciousness but nevertheless be expressed physically in beads of sweat, a widening of the eyes, or a nearly imperceptible tightening of the jaw…The key to affective computing…was to render both conscious and unconscious emotion as observable behavior for coding and calculation. A computer…would be able to render your emotions as behavioral information. Affect recognition…is ‘a pattern recognition problem,’ and ‘affect expression’ is pattern synthesis.” (Zuboff 2020, pp. 284-285)

Biometrically inferred data and modern-day affective computing inherit and compound a privacy quagmire we have hardly begun to untangle, even when it is plain-old social media and cloud computing that is surveilling users and monetizing data exhaust for profit. This gets far more troubling when dealing with 3D immersive spaces.

But it is not too late. The urgency for better privacy governance in emerging technologies is never more pressing than exactly when the technology is being invented, if only because, as Langdon Winner famously wrote, “the greatest latitude of choice exists the very first time a particular instrument, system, or technique is introduced” (1980). Now would actually be the perfect time.

What would it take, then, to build a sufficiently robust privacy-protecting moat around every citizen on the blockchain, in the metaverse, and in any emergent digital environment such that the economic costs of surveillance capitalism become too onerous to remain commercially viable, regardless of the prevailing business model?

Decentralization will not address what our outdated privacy frameworks do not protect. Unless we grapple with how technologies safeguard cognitive autonomy in context, we will invariably create the conditions for another Cambridge Analytica – this time in much more immersive and data-rich environments that would make the original scandal appear tame by comparison.

Part VIII: Reimagining Privacy and Identity Management

Gavison defines privacy as the degree of control that we have over others’ access to us through information, attention, and physical proximity (1980). She writes: “The requirement of respect for privacy is that others may not have access to us fully at their discretion” (ibid). A workable privacy scheme would empower users with cognitive access controls to modulate at their discretion – and not at the discretion of the system, another user, or a third party – how much they wish to share, with whom, and when. These access controls must somehow be fully manageable by users without requiring technical expertise or onerous time constraints to understand and evaluate their options, as with current theatrical notice-and-consent frameworks.

Further, there would seem to be a need for a great deal of flexibility to modulate these controls in response to ever-shifting contexts. There are many situations where less privacy is actually preferable. Users might, for example, wish to simplify the labyrinthine process of collecting and transferring a comprehensive patient history between medical offices by lowering their privacy settings to share relevant records between trusted professionals with minimal friction. On the other hand, users would almost certainly want more privacy for their Amazon purchasing history, before placing a very large bid on an NFT project, or, as Buterin suggests, to conceal the on-chain details about their vaccination status.

Nissenbaum offers precisely this dynamic view of privacy by disposing of the requirement for an all-encompassing definition. She shifts attention away from capturing privacy’s value and towards identifying the conditions that keep privacy expectations intact versus those that rupture them. Her description of contextual integrity outlines “context-relative social norms…privacy is preserved when informational norms are respected and violated when informational norms are breached” (2010, p. 140).

Burkert argues that privacy enhancing technologies should offer “systems for managing our various displays of personality in different social settings”, which he differentiates from the dominant informed consent approach that afflicts users today (1998, p. 128). This approach is:

“…too often applied without sufficiently analyzing the range of ‘true’ choices available to the data subject…the degree of information provided, however, tends to vary widely, with the data subjects at a disadvantage because they cannot know how much they should know without fully understanding the system and its interconnection with other systems” (ibid).

As I have argued before, the quality of a privacy-protecting framework depends on “the degree of choice it affords to users to exercise agency about the sharing of their personal data. Its key feature would empower users to expand or contract their privacy settings with minimal friction and according to choices informed by surrounding context. Its key feature would not be how well the system safeguards personal information across all contexts” (Uglova 2022). In that essay, I distinguish between information security, which concerns itself with safeguarding PII wherever it transits or rests and regardless of context, and privacy protection, which concerns itself with maintaining coherence between consensual privacy boundaries and digital experience:

“It may be counterintuitive to realize that PII does not hold intrinsic value, but that is because we are used to thinking in information security terms, where PII is used for a different purpose than digital advertising: Criminals want your data to engage in fraud, theft, espionage, and trafficking. This means that the data itself has some innate value, and a great lockbox keeps the criminals out. Advertisers want your data for an entirely different reason: to make predictions with it in order to influence your actions — and to do so as covertly as possible… If there were another, better way to accomplish influence without accessing your private data, they’d be game for that, too.” (2022)

Predictably, approaches driven by information security thinking protect only the information, while leaving behind the person described by that information. As a result, users must accept privacy intrusions or opt out of modern digital life entirely.

Therefore, it is the second-order effects of PII possession that concern privacy management: the formulation of concealed influence operations. We need to look at how contexts affect choice-making, not how secure the PII is from prying eyes. We may have moved privacy out of innate moralism and into a rationalist utility based on values and harms experienced in context, but our privacy governance has yet to catch up because it continues to focus on prescriptive regulations for PII handling, security, and revocation across all contexts instead of variable access controls for in-the-moment choice-making. Taking a cue from information security theory, which necessarily holds the value of PII constant across all contexts to protect it from any and all unauthorized intrusion, we retrofitted the same rigid idea onto privacy protection. A dynamic and flexible approach would have been more appropriate.

Ultimately, if the purpose of privacy management is to safeguard moral autonomy and cognitive consent in digital spaces, then a useful methodology for developing such controls must include the ability to modulate myriad privacy experiences and preferences across contexts at the user’s discretion, no matter how much those experiences and preferences vary. This is an ambitious request! We are contemplating the full expression and management of a comprehensive range of available choices across all systems while providing users with a granular understanding of their options and how they interact with other systems, so that users can assert their cognitive consent through conscious awareness.

If the goal is to return agency, moral autonomy, and cognitive consent to digital users, then users need the tools to modulate their own access controls and preferences across infinite contexts. An exhaustively descriptive and infinitely flexible expression of all possible choices across all systems and futures is impossible only if one considers building it in a centralized way, where governance takes place exogenously, outside the locus of the user, through universal practices and compliance requirements imposed from outside.

But people want different things. Exogenous, centralized privacy management cannot possibly give people the precise level of privacy they want for every digital interaction because that is tantamount to predicting the future. No fully descriptive scheme for comprehensive privacy management across all imaginable contexts can possibly give people exactly what they want. This idea is even less workable when those contexts are always shifting and when users’ privacy boundaries adapt in response to emergent properties of new technologies that are in a constant state of innovation and reinvention.

The only parties capable of formulating just-in-time responses to infinite environments, attributes, transmission principles, and contexts – and modulating how much signal they emit in response – are end users themselves. So, what would it look like if these controls were instead pushed down to the lowest level where context is most immediate and apparent? In other words, what if the controls belonged to the end user? Can this kind of radical decentralization and localization of privacy management be achieved?

Part IX: Out of the Conceptual Quagmire Towards Workable Governance

A realistic model for managing privacy boundaries in context must center the user. It must furthermore be flexible, situationally adaptive, and capable of negotiating privacy boundary conditions and preserving context-relative social norms. These are the design requirements for the kind of robust privacy governance necessary to staunch the data exhaust that powers surveillance capitalism and threatens the hopes of the web3 community for a more symbiotic relationship between humans and their tools.


I use the word “symbiotic” intentionally to indicate that it is not only humans who progress and evolve through their tools, but also tools that progress and evolve as humans use them. A symbiotic relationship is characterized by mutuality, interdependence, and cooperation. As we inch closer to achieving artificial general intelligence, the quality of that relationship – and that it not devolve into predation in one direction or the other – becomes especially vital.


There is an obvious need, it seems, to shift away from centralized attempts to manage privacy by governing private data, behaviors, attributes, transmission principles, and experiences across all digital publics, and towards more flexible approaches that put users in control of their own choices. Instead of building better information security “lockboxes,” we need an approach to building consumer technologies that safeguards user cognition, decision-making, and autonomy against multiplicative forms of targeted limbic hijacking and manipulation. We need a methodology that can protect privacy boundaries across all contexts where they might encounter assaults to their integrity – that is to say, in constant daily encounters with digital life. It follows that this methodology should allow internet users to modulate the amount and type of digital exhaust they emit, and that they should be able to do this at their own discretion – as they see fit. Who better to show such discretion than the user herself?

Underlying the issue of privacy management is a deeper question about identity management. And nearly every issue in privacy governance stems from the same original sin: the absence of self-owned, user-centric identity and access controls. We evolved effective identifiers for websites and endpoints, but not for the people using them. If individuals must manage their own privacy experience, we must first determine how identity preferences, expressions, and credentials are issued, by whom, where they are stored, how they are presented to others, and under what conditions.

For users to interact with websites beyond the passive read-only era of the web that predated the age of social media and platform economics, companies began issuing local accounts with usernames and passwords. This siloed approach to digital identity meant that users had to create unique accounts for every site, leading to a poor user experience and creating massive breach liabilities for companies whose only interest was to grant users access, not to manage their accounts and PII. The prospect of getting out of the business of storing sensitive data and having to manage expensive cybersecurity schemes to fend off hackers became attractive to companies, who were happy to outsource the entire thing to bigger players with more ambitious plans for PII.

This gave rise to federated identity, an opportunity to both streamline fragmented user experiences across identity silos and monetize vast quantities of user telemetry for secondary use. Providers such as Facebook, Google, and Amazon entered the identity space to become the “trusted” middlemen of digital identity credentials, offering users a way to log in with their pre-existing accounts while shifting the burden for information security assurance from fragmented businesses to collective federations equipped with the vast resources of technology platforms.

Importantly, this centralization of identity into federations abstracted interaction and identity decisions away from users and their context. Users ended up with “tens or hundreds of fragments of themselves scattered across different organizations, with no ability to control, update or secure them effectively” (Tobin, 2016), perfecting another of the essential conditions for surveillance capitalism.

Only internet users have the granular just-in-time context necessary to formulate an appropriate response to a consent moment, so, logically, privacy governance should reside within the individual’s purview. But absent the requisite identity layer to make such self-custody possible, digital identity shifted to the next available proxy: email addresses and social logins. Management of those digital identities likewise shifted to the custodians of those proxies: the technology companies issuing email and social login credentials. This awkward workaround for user-centric identity necessitated prescriptive and deterministic regulatory governance that, predictably, has failed to maintain the contextual integrity of users’ privacy boundaries while leaving companies to amass vast stores of user telemetry for exploitation by hackers and third parties.

Decentralized or self-sovereign identity (SSI) is an approach to identity management that empowers users to self-custody their own identities, data, and privacy decisions. SSI can eliminate centralized middlemen and the overexposure of PII to federations by appending encrypted identity attributes to a user’s decentralized identifier, which only the user – or a third party the user designates and equips with the right keys – can access. The flow of information between parties happens only with the cryptographic consent of the identity owner whose credentials are requested. In its ideal state, SSI allows users to “log in” to any product, service, game, metaverse, or protocol – irrespective of the user’s chosen identity tool or wallet – and to transact while minimizing data exchange.
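
To ground the idea, here is a simplified sketch in TypeScript of how an attribute can be bound to a user-controlled identifier and disclosed selectively. The shape is loosely inspired by the W3C DID and Verifiable Credentials data models, but the field names, example DIDs, and the present() helper are illustrative assumptions rather than a spec-complete implementation; real selective disclosure also needs signature schemes such as BBS+ or a ZKP so that a partial disclosure remains verifiable against the issuer’s signature.

```typescript
// A simplified sketch, loosely inspired by the W3C DID / Verifiable Credentials
// data model. Field names, DIDs, and helpers here are illustrative only.
interface VerifiableCredential {
  issuer: string;                                // DID of the issuing authority
  subject: string;                               // DID controlled by the user
  claims: Record<string, unknown>;               // attested attributes
  proof: { type: string; signature: string };    // issuer's signature over the claims
}

interface Presentation {
  holder: string;                                // the user's DID
  disclosedClaims: Record<string, unknown>;      // only what this context requires
  credentialProof: VerifiableCredential["proof"];
}

// The user's wallet decides, per context, which claims to reveal. (In practice,
// keeping the issuer's signature verifiable over a partial disclosure requires
// schemes like BBS+ or a ZKP; this sketch only shows the data flow.)
function present(vc: VerifiableCredential, fields: string[]): Presentation {
  const disclosedClaims = Object.fromEntries(
    fields.filter((f) => f in vc.claims).map((f) => [f, vc.claims[f]] as [string, unknown])
  );
  return { holder: vc.subject, disclosedClaims, credentialProof: vc.proof };
}

// Example: a DAO attests membership; the user reveals only that attribute to a
// token-gated space, never the rest of the credential.
const membership: VerifiableCredential = {
  issuer: "did:example:some-dao",
  subject: "did:example:alice",
  claims: { memberOf: "ExampleDAO", role: "contributor", joined: "2021-03-01" },
  proof: { type: "Ed25519Signature2020", signature: "<issuer-signed-bytes>" },
};

console.log(present(membership, ["memberOf"]));
```

The point is the locus of control: the issuer attests once, but the wallet, and therefore the user, decides per context which claims ever leave the device.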

SSI provides an essential primitive for architecting a coherent privacy framework that centers the locus of control on the user. Although not yet in widespread use, a more stripped-down version of SSI has gained broad adoption: the crypto wallet. Providers such as MetaMask, Rainbow, Phantom, and Solflare offer wallets that interact with the Ethereum and Solana blockchains and allow users to make self-sovereign decisions. Because their usecases are limited to financial transactions and contract signatures – and because they are not interoperable across all blockchains – they are not fully-fledged SSI solutions for the management of a broad range of user attestations and preferences across all digital surface areas. They are, however, a first step.

Significant obstacles stand in the way of ecosystem-wide adoption of user-centric identity. In order for an identity to be useful, trusted identity providers must agree to issue their credentials to the user’s identity or namespace, and verifying parties must be satisfied that the levels of assurance followed by issuers satisfy security criteria. But for identity providers to undertake the effort to develop credentials, the credentials themselves must first be accepted by enough verifying parties, or their usecase becomes too narrow to pursue. In turn, for credentials to see wide adoption, enough issuers must first agree to develop and issue them. This cold-start problem in decentralized identity requires urgent resolution, because absent user-centric identity management, the problems of federated identity will compound in web3.

The most obvious issue with overconfidence in web3’s ability to deliver users out of surveillance capitalism and right the wrongs of the platform economy is the transparency of on-chain transactions. While user-centric identity is an important part of a privacy-first approach to technology, it is incomplete. SSI allows users to modulate and greatly reduce the aggregate signal that they emit every time they transact, but provides zero confidentiality for signals emitted once a smart contract is signed and entered into a public ledger. Those signals become public record and leave a traceable trail of remittances, holdings, purchases, and other transaction data for granular reconstruction of identity, modeling of behavior, and targeting of concealed influence to manipulate outcomes.

As such, solving for user-centric identity does not by itself provide a bulwark against surveillance capitalism. Neither does decentralization by sheer dint of being a new business model.

As previously established, surveillance capitalism is not business-model dependent. Platform economics certainly makes surveillance capitalism easier to implement because data is highly centralized and trivial to obtain. But the mere presence of a new model that supplants platform economics while leaving just as much data exhaust available for anyone to collect, study, and model simply recreates surveillance capitalism with new economic incentives. Because most blockchains leave a publicly viewable record of every user transaction, the most critical component of a privacy-preserving framework – the confidentiality of private data – remains wholly unresolved.

To defend against surveillance capitalism, technology must deal with each of the primitives that invite it, including unprotected, widely available data exhaust. A workable solution for privacy management would therefore need to provide default confidentiality by obscuring the details of smart contract transactions or entirely breaking the link between interconnected public keys.

A realistic framework must also solve for information overload and unrealistic time and expertise requirements that produce another key primitive for surveillance capitalism: consent theater. As shown, current notice-and-consent approaches that make users responsible for reading and comparing EULAs or “doing their own research” are purely performative. Accepting that reality is not the same as saying end users are lazy. It is merely a design requirement for realistic system architecture. Technologists should stop averting their eyes from this design constraint by perpetuating the convenient fiction that users can be experts. They can respond and declare preferences in context, but they will not compare technical terms or read contract code.

The time requirements to make informed choices exceed the time available to users – and would impose billions of dollars of productivity losses if users were to undertake this impossible task for every product and service that powers the global economy. No company is naïve enough to suggest that they even expect this of their users. Given the power asymmetry between the teams of legal and compliance experts who write EULAs and TOS – or, in the case of web3, the engineers who write smart contracts – and the users expected to agree to them, these practices are intentional compliance theater and willful negligence of consent. If users were expected to examine, verify, and approve every single smart contract transaction individually, we would just recreate this compliance theater in web3.

A workable framework would therefore need to provide some way to securely set and execute default preferences across categories of similar transactions or experiences without the user’s technical examination of every contract – while also protecting these groupings against exploitation by malicious code. Providers of digital products and services would, by extension, need to read and understand these default preferences and adjust the experience they provide accordingly instead of expecting users to be the ones to lower their privacy boundaries. Unless a user-centric identity scheme allows users to express default preference and consent groupings without getting bogged down in contract code – and with the assurance that companies will respect and comply with these user-defined preferences – a decentralized privacy framework is no less onerous than the centrally managed consent theater it replaces.

Finally, how can we be sure these preferences are interoperably accepted and implemented? Broad and interoperable acceptance is a precondition for functional utility. If a user must repeatedly switch between multiple identity providers to access different products and services, reset privacy and consent preferences in accordance with the varying and non-interoperable options those products and services offer, and compare smart contracts to understand privacy impact, we are right back at square one: siloed identity and high-friction user experiences. Not good enough.

Part X: A New Framework

This essay has shown how well-formulated information security principles permeated privacy thinking – a discipline distinct from information security, with very different governance needs, for which those principles are maladapted. In so doing, the architects of privacy governance left consumer privacy less protected, not better off.

But not all artifacts of information security theory should be discarded when developing a privacy-first framework. What if we were to take the same level of intellectual rigor that governs data security, and apply it to safeguard cognition and autonomy as well?

Current privacy management schemes such as NIST and GAPP, as well as legal frameworks such as GDPR and CCPA fail because, while seeking to be maximally protective, they become excessively prescriptive. By focusing on the how of privacy management, they prioritize process and rules over guidelines and outcomes. Since it is impossible to govern and foresee all emergent contexts in advance, these overly prescriptive privacy schemes impose unintended consequences on end-users in the form of high-friction experiences and consent theater.

Rather than layering on more compliance rules written by people who do not necessarily understand the technologies they regulate, could a workable privacy-protecting framework be outcomes-based instead? Might we develop useful and uncomplicated guidelines that define desirable outcomes but allow engineers to do what they do best: innovate critical paths to arrive at those outcomes? Could we approach the task of developing privacy standards as design constraints instead of punitive restrictions retrofitted from the outside?

Few technologists enjoy compliance – least of all the creatives and visionaries coding the cutting edge of innovation. The instinct is to push the boundaries of what is possible, not circumnavigate restrictive rules and contradictions. By repositioning privacy as an unexplored design space and communicating standards that function as design constraints instead of telling engineers how to do their jobs, we bring privacy management out of the world of legalese and into the native language of innovation.

This is precisely where privacy management can take a page from information security. Information security offers an outcome-based model designed to guide organizations and cybersecurity professionals to make sound data security decisions without telling them how to do their jobs. This model is called the CIA Triad, which stands for Confidentiality, Integrity, and Availability.

Confidentiality governs the encryption and protection of private data, objects, and resources from unauthorized viewing and access. Integrity is concerned with maintaining consistent, accurate, and trustworthy data across its lifecycle at rest and in transit. Availability requires that information remain accessible to authorized parties who can rely on access upon request.

The CIA Triad is a set of guiding principles for rigorous, outcomes-based judgement in rapidly-shifting cybersecurity contexts. Rather than promulgating a set of definitional rules, it invites practitioners to envision best- and worst-case scenarios, understand tradeoffs and interdependencies, and do their best instead of only doing exactly as they are told. What would the three dimensions of a privacy-focused CIA Triad look like?

For any kind of privacy management – whether siloed, federated, or decentralized – the ability to protect private information from unauthorized view or access is a prerequisite. Without confidentiality, the data exhaust that becomes public record moves behavior extraction out of centralized platforms and into distributed ledgers, reconstructing the primitives of surveillance capitalism. Confidentiality, therefore, applies in privacy management as well as in information security.

Successful decentralized privacy management also requires interoperable optionality and data ownership without platform lock-in. Internet users must be able to port their identity data, privacy preferences, and default settings between services, products, platforms, and protocols with minimal friction – and be likewise assured that those data, preferences, and settings will be read, accepted, and respected by any systems that interact with that user’s identity and data. Interoperability is not just a technical challenge; it is truly an essential data sovereignty and privacy rights issue. Without interoperability, crypto reverts to the dominant paradigm of unrealistic expectations, convenient fictions, and compliance theater that arose out of platform economics and that powers today’s surveillance capitalism.

Finally, decentralized privacy management must empower users to exercise meaningful, pluralistic choice and cognitive consent with respect to the governance of their data, including how it is transferred and used by other parties and for what purposes. Unless users can make granular, context-based decisions about their privacy, we reconstruct the coerced consent of agree-or-opt-out dichotomies, which subvert agency with predetermined options driven by companies, not users. If using digital technology to transact, communicate, and interact is an indispensable precondition for participation in a global economy, then opting out is not an expression of sovereign agency but rather coerced resignation to one of two insufferable options. If web3 is on track to supplant platform businesses as the prevailing digital means to coordinate, communicate, and transfer value, then pluralistic privacy options must replace the coerced consent that produces data exhaust and incentivizes surveillance capitalism. Agency, therefore, is the third pillar of the privacy CIA triad.

What, then, is the current state of Confidentiality, Interoperability, and Agency in crypto?

Confidentiality

To mitigate the accumulation of sensitive telemetry on public ledgers, a privacy-preserving decentralized identity ecosystem would need to provide confidentiality by obscuring the details of smart contract signatures and blockchain transactions, or by altogether breaking the link between interconnected public keys. By taking advantage of selective disclosure and least privilege access, decentralized identity can help users reduce their cumulative data exhaust and make surveillance more time-intensive and less lucrative.

Almost all smart contract transactions, including purchases and sales of assets such as tokens and NFTs, liquidity staking, yield farming, and certain aspects of on-chain gaming are, by default, public. Grading the crypto ecosystem on confidentiality would yield almost universal zeros – with some notable exceptions.

A number of players in crypto are innovating on zero-knowledge proof (ZKP) cryptography, a technique that allows one party to prove to another that a statement is true without revealing the underlying information that makes it true. For example, a requesting party such as a video streaming service might wish to ascertain that a user is over a certain age before providing access to content. The user may not wish to furnish sensitive identity documents (such as a passport or a driver’s license that also exposes the user’s exact date of birth, addresses, and other private details) to a streaming company the user does not trust with all that personal data. For the user, such data overexposure is an obvious privacy risk. But even for the company, there is little upside in taking on the breach liability to securely store private account data to provide continuous service. Using ZKPs, users can prove that they meet certain criteria required for the transaction (such as an age threshold) without providing any extraneous details – not even the exact date of birth. Meanwhile, companies sidestep much of the honeypot risk of storing vast databases of user PII that attract hackers.
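
For intuition, here is a toy zero-knowledge proof in TypeScript: a Schnorr proof that the prover knows a secret x satisfying y = g^x, without ever transmitting x. The group parameters are deliberately tiny and the nonce is fixed for determinism, so treat this as a classroom sketch rather than production cryptography; real systems use large elliptic-curve groups, random nonces, and richer proof systems (zk-SNARKs, range proofs) for statements like “this credential’s birth date is at least 18 years in the past.”

```typescript
// Toy Schnorr zero-knowledge proof: prove knowledge of x with y = g^x (mod p)
// without revealing x. Tiny parameters and a fixed nonce are used for clarity;
// production systems use large elliptic-curve groups and random nonces.
import { createHash } from "node:crypto";

const p = 23n; // safe prime: p = 2q + 1
const q = 11n; // prime order of the subgroup generated by g
const g = 4n;  // generator of that subgroup

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Fiat-Shamir: derive the challenge by hashing the public values.
function challenge(...values: bigint[]): bigint {
  const digest = createHash("sha256").update(values.join("|")).digest("hex");
  return BigInt("0x" + digest) % q;
}

const x = 7n;              // the prover's secret (never sent anywhere)
const y = modPow(g, x, p); // the public value the statement is about

function prove(secret: bigint) {
  const k = 5n;                    // nonce: fixed here, MUST be random in practice
  const t = modPow(g, k, p);       // commitment
  const c = challenge(g, y, t);    // non-interactive challenge
  const s = (k + c * secret) % q;  // response
  return { t, s };
}

// The verifier checks g^s == t * y^c (mod p) and never learns x.
function verify(proof: { t: bigint; s: bigint }): boolean {
  const c = challenge(g, y, proof.t);
  return modPow(g, proof.s, p) === (proof.t * modPow(y, c, p)) % p;
}

console.log(verify(prove(x))); // true
```

The pattern generalizes: the verifier learns that the statement is true and nothing else.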

For surveillance capitalism to become too costly to remain commercially viable, scaled data aggregation must be met with obstacles onerous enough to make such efforts unprofitable. For this reason, identity wallets must store verifiable credentials off-chain and, by default, use ZKPs (or other cryptographic techniques that mask the trail between addresses) to protect user confidentiality in every single interaction where a credential is furnished or user information is requested. The key advantage of combining decentralized identity with off-chain storage and ZKP is that aggregate data exhaust becomes too variable – and the resulting behavioral model too incomplete and unreliable – to make surveillance commercially appealing. It would cost too much to build up enough data and pattern-match enough transactions to reidentify users and create a behavioral model with sufficient fidelity for targeted influence.

In DeFi, ZKP has enabled “coin mixers” such as Tornado Cash, L2s such as Aztec Network, and private currencies such as Zcash to provide what is called transactional privacy. The network can verify that a transaction is valid – that the required criteria are met and the right addresses are involved – without those specifics ever being revealed or recorded on the ledger. All that is recorded is a cryptographic proof that the conditions hold.

ZKP can be used to break the link between sending and receiving public keys, or to obscure it so that the trail becomes meaningless and impossible to reconstruct. However, ZKP is not the only way to provide blockchain privacy. Umbra is a stealth-address protocol that uses ordinary elliptic curve cryptography to break the link between senders and receivers, while Monero relies on ring signature techniques to accomplish the same.
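
To illustrate what “breaking the link” means in practice, here is a toy, number-theoretic analogue of the stealth-address idea behind Umbra-style protocols, reusing the same tiny group as the sketch above purely for readability. Real implementations use elliptic curves and audited libraries; every parameter and helper below is an illustrative assumption.

```typescript
// Toy stealth-address analogue: the sender derives a fresh one-time key for
// each payment, so the recipient's long-lived public key never appears on-chain.
// Same toy group as above; real protocols use elliptic-curve points instead.
import { createHash } from "node:crypto";

const p = 23n;
const q = 11n;
const g = 4n;

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const hashToScalar = (v: bigint): bigint =>
  BigInt("0x" + createHash("sha256").update(v.toString()).digest("hex")) % q;

// Recipient publishes a long-lived public key A = g^a exactly once.
const a = 7n;
const A = modPow(g, a, p);

// Sender: ephemeral secret r, publishes R = g^r alongside the payment and
// sends funds to the one-time key P = A * g^H(A^r).
const r = 5n;
const R = modPow(g, r, p);
const oneTimeKey = (A * modPow(g, hashToScalar(modPow(A, r, p)), p)) % p;

// Recipient scans the chain: from R alone, they recover the same shared value
// (R^a = A^r) and therefore the spending key a + H(R^a), which nobody else can.
const spendKey = (a + hashToScalar(modPow(R, a, p))) % q;

console.log(oneTimeKey === modPow(g, spendKey, p)); // true: only the recipient can spend
```

An outside observer sees only the ephemeral value R and the one-time key; without the recipient’s secret, nothing on-chain ties the payment back to the recipient’s published address.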

Secret Network goes a step beyond transactional privacy to provide programmable privacy through the use of Trusted Execution Environments (TEEs), which their white paper describes as a “neutral party in the form of hardware for secure and private computations” (2022). Since facilitating transactions between senders and receivers is only one type of on-chain activity among many others, Secret offers a more “expansive” vision of blockchain privacy by using “encrypted inputs, encrypted outputs, and encrypted state, meaning we can enable groundbreaking new use cases for smart contracts and decentralized applications….allowing for a wide degree of flexibility with both design and implementation choices” (ibid).

Clearly, there is growing acknowledgement of the criticality of stemming the accumulation of data exhaust that powers concealed influence by providing default-confidential, privacy-preserving crypto primitives. Unfortunately, these solutions have not yet seen widespread adoption and present significant barriers to entry to onboard mainstream users who lack both the privacy awareness and technical expertise to make privacy-enhancing technologies a foundational part of their crypto stack. Given a choice between privacy and convenience, users will invariably choose convenience every time. For confidentiality to become the blockchain norm and not the exception, the responsibility for making privacy-preserving design and system architecture choices falls on crypto developers and founders, not users.

Interoperability

The key feature of platform data capture is platform lock-in that prevents users from taking their data and leaving. Platform businesses devote significant resources to building data moats by expanding their footprint while displacing or subsuming competitors, so that consumers are left without meaningful choice as to how or where to live their digital lives.

Interoperability is therefore a precondition for ecosystem-wide adoption of decentralized identity, which requires that all companies, applications, protocols, and platforms agree to use a common set of data rails and not lock users into their own proprietary ways of handling identity-related assets, consent preferences, and access controls.

A familiar example of interoperability is email: every provider uses the same Simple Mail Transfer Protocol (SMTP), without which servers could not communicate with one another. Such platform-agnostic compatibility is critical because it not only eliminates silos but alleviates the information overload and unrealistic time and expertise requirements that arise when users are asked to consent to TOS and EULAs (or, in the case of blockchains, to “do their own research” by reading smart contract code, a popular and cynical version of the notice-and-consent paradigm that is emerging in web3). Users cannot and will not read technical agreements; nor will they ever become privacy experts – not because they are lazy but because the idea itself is impractical. This is a design constraint that technologists must stop ignoring, minimizing, and sidestepping.

For decentralized, privacy-preserving identity to provide value, users would need ways to “set and forget” default preferences across categories of similar transactions and experiences without getting bogged down in code and contracts – while also protecting these groupings against exploitation by malicious code. Digital products and services would, by extension, be obligated to read and abide by these default preferences, adjusting the experience they provide accordingly instead of expecting users to lower their privacy boundaries. Unless the entire ecosystem operates on mutually-readable, standardized data rails, decentralized identity will not be an improvement over the consent theater of federated identity that it replaces.
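
What might such machine-readable defaults look like? Below is a purely hypothetical sketch in TypeScript: no such standard exists today, and every field, category name, and helper is invented for illustration. The point is only that preferences expressed once, in a schema any compliant service can parse, replace the expectation that users read each contract.

```typescript
// Hypothetical sketch of portable, machine-readable privacy defaults attached
// to a user's identity. No such standard exists; all names are invented here.
type DisclosureLevel = "none" | "proof-only" | "minimal" | "full";

interface CategoryRule {
  disclosure: DisclosureLevel;                  // how much signal to emit by default
  retention: "session" | "30d" | "indefinite";  // how long data may be kept
  secondaryUse: boolean;                        // may data be reused or shared onward?
  promptAbove?: number;                         // ask in context above this value (USD)
}

interface PrivacyDefaults {
  holder: string;                               // the user's decentralized identifier
  categories: Record<string, CategoryRule>;
}

// Set once by the user, read by every compliant service or protocol.
const defaults: PrivacyDefaults = {
  holder: "did:example:alice",
  categories: {
    "defi-trade":       { disclosure: "proof-only", retention: "session", secondaryUse: false, promptAbove: 10_000 },
    "health-records":   { disclosure: "minimal",    retention: "30d",     secondaryUse: false },
    "social-metaverse": { disclosure: "minimal",    retention: "session", secondaryUse: false },
  },
};

// A compliant verifier checks the relevant rule before retaining anything.
function allowedToRetain(policy: PrivacyDefaults, category: string): boolean {
  const rule = policy.categories[category];
  return rule !== undefined && rule.retention !== "session";
}

console.log(allowedToRetain(defaults, "defi-trade")); // false: session-only by default
```

A compliant service or protocol would consult the relevant category before requesting or retaining anything, and prompt the user in context only where the policy says to.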

If user preferences and default settings need to interoperate across all products, services, and protocols to minimize friction, productivity losses, and consent theater, then products, services, and protocols need to be able to read each other’s code and data rails. Ironically, blockchain protocols cannot easily talk to each other or to the outside world without introducing significant security vulnerabilities, posing a major barrier to wider adoption of crypto. As a result, value and information are fractured across multiple blockchains that do not interoperate with each other or communicate state with off-chain systems.

A number of crypto players are exploring the potential of oracles and bridges to address the awkward reality that composable code still yields applications that cannot actually interoperate. Chainlink builds oracle networks that enable smart contracts to communicate with off-chain resources, such as APIs. Meanwhile, Cosmos is focused on improving interoperability through the Inter-Blockchain Communication (IBC) protocol, similar in principle to TCP/IP. Unfortunately, solutions that attempt to connect blockchains present risks in asset transfer and contagion, among many others, as noted by Buterin:

“Imagine what happens if you move 100 ETH onto a bridge on Solana to get 100 Solana-WETH, and then Ethereum gets 51% attacked. The attacker deposited a bunch of their own ETH into Solana-WETH and then reverted that transaction on the Ethereum side as soon as the Solana side confirmed it. The Solana-WETH contract is now no longer fully backed, and perhaps your 100 Solana-WETH is now only worth 60 ETH. Even if there’s a perfect ZK-SNARK-based bridge that fully validates consensus, it’s still vulnerable to theft through 51% attacks like this… The problem gets worse when you go beyond two chains. If there are 100 chains, then there will end up being dapps with many interdependencies between those chains, and 51% attacking even one chain would create a systemic contagion that threatens the economy on that entire ecosystem.”

Ultimately, crypto interoperability depends as much on will as on technical feasibility and risk mitigation. Companies have to align on standards and people have to choose to build interoperable protocols and applications and solve for the attendant security risk.

Agency

The primacy of adaptable, context-dependent privacy governance is never more apparent than in the need for localized access controls to cognition. For users to become the ultimate arbiters of their online lives, they must be able to modulate their privacy boundaries entirely at their discretion, requiring a rich menu of controls to adapt their privacy preferences in response to shifting norms and surrounding context.

With interoperably accepted, default-confidential, user-centric identity, users control information flows in response to context. As such, they are fully empowered both informationally and technologically to react situationally and make context-informed decisions about their privacy boundaries. This calls for clear, easily navigable user interfaces that present transparent controls to expand or contract their privacy options. Since very little interaction design research has been conducted on self-sovereign avatar interfaces and identity wallets, this is an area of web3 that is ripe for innovation and invention.

Part XI: Caveat Emptor

This paper has focused on one crucial building block for a different future in which we might live well with the tools we build: consumer privacy. But privacy is not the only building block.

Two critical areas that were specifically outside the scope of this paper were law enforcement and consumer trust and safety.

Critics accuse crypto of many things: being a get-rich-quick Ponzi scheme, a passing fad, and a convenient method for laundering money and hiding from the law. The recommendations of this essay do not seek to create irrevocable privacy and offer no point of view on government surveillance, whether legitimate or illegitimate. The scope of the recommendations offered is limited strictly to making surveillance capitalism too costly for commercial entities. But failing to mention law enforcement interests seems irresponsible.

While crypto has been used to evade law enforcement, so, frankly, has cash. A wholesale rejection of a tool because one of its usecases is objectionable does not withstand logical scrutiny. However, that does not mean that the ecosystem should ignore its critics; crypto must deal seriously with its shadowy side if it hopes to onboard more users. Are there, for example, ways to report on or trace confidential transactions that may be connected to a crime without compromising privacy for the vast majority of crypto’s compliant users? Can KYC schemes – regulations intended to prevent money laundering, but which unintentionally deny crucially needed financial services to billions of unbanked customers lacking formal identity – be redesigned to be more inclusive while remaining compliant?

Panther Protocol, for example, is designed as an “end-to-end solution that restores privacy in [crypto] while providing financial institutions with a clear path to compliantly participate in decentralized finance” (n.d.). Their white paper offers ideas for further research:

“In the developed countries, the current compliance regime operates under something akin to a guilty unless proven innocent presumption. Segments of society are denied access to financial services. Customers’ Personally Identifiable Information (PII) and transaction data are by default collected, stored, data-mined for patterns and subject to sharing with third parties and authorities…The problem with traditional compliance stems from the assumption that in order to detect financial crime, it is necessary to gather and analyze large amounts of raw data. However, with the advancement of computer science and mathematics, this assumption no longer holds true.” (2021)

Panther is far from the only crypto player working to provide a path to compliance, despite the overall combative stance adopted by most lawmakers toward crypto. As I have written elsewhere:

“Findora, for example, ‘envisions a world where financial networks are compliant and publicly auditable at all times’ and aims to ‘prevent fraud and make compliance with regulations easier through the use of auditability tools, without sacrificing privacy’. Horizen is a protocol focused on ‘auditable transparency with privacy’. And Zcash, a popular privacy coin, is specifically ‘designed to protect consumers’ financial privacy while retaining compatibility with global AML/CFT standards’.” (Uglova, April 2022)

Pioneering research is also being done in zero-knowledge KYC that issues designated entities (such as government agencies) a token key to reveal identifying information encrypted in a ZK proof upon compelling evidence of malicious activity (Pauwels et al., 2022).

Another topic that merits urgent attention as we spend more time online in increasingly immersive environments is trust and safety. Already, we have seen cases of toxicity, bullying, and harassment in the metaverse (Frenkel & Browning, 2021). Important work in this field is being pioneered by companies such as Spectrum Labs, which provides AI-powered content moderation services to technology companies. Tiffany Xingyu Wang, an executive at Spectrum Labs, recently founded the OASIS Consortium, a think tank that provides thought leadership, education, advocacy, and ethics standards focused on building safe online communities, including the metaverse (Basu, 2022).

Part XII: Conclusion

“Privacy is necessary for free expression, for diversity of interactions and identities, and often for safety, and yet it has become a luxury good, with privacy-preserving technologies remaining the preserve of those with knowledge or capital, not the default.” – Pluriverse (Chang et al., 2022)

Perhaps one reason that engineers and technology companies are often negligent about protecting privacy – that is, until citizens and tech activists get up in arms and governments force them to act – is that it feels punitive to have to do it. What a drag. Nobody likes being told what to do – least of all a young engineer attracted to the space by the possibility of testing what can be built, rather than studying what can’t.

Nevertheless, any new iteration of the web must be built with privacy in mind – at least if our collective intent is to make digital technology better, less adversarial, and more humane. Or else, why are we even bothering with all the additional complexity and compute of decentralization, if we’re just going to rearchitect the same surveillance capitalism on new rails? What will have been the point?

If the incentive in technology culture is to push the envelope, then let’s change the envelope.

This essay aims to stimulate exactly this discourse about the edges of the agentic design space among the engineers, founders, and visionaries building web3. In so doing, it hopes to elevate privacy out of the realm of academia and the notoriety of ineffectual, retroactive, and counterproductive compliance, and reignite the creative potential for technological innovation and fresh thinking in this poorly understood, urgently needed, and long neglected space.

Deterministic, exogenous privacy frameworks not only limit the possibility space for invention, but quickly fall out of relevance as system designers devise creative workarounds that most often take the form of high-friction user experiences. This is especially true of web3, a space that is experiencing rapid innovation, consequences that fall outside the scope of existing governance, and impact that reaches beyond narrowly intended “ideal customers” to affect future generations and those “not in the room” where design decisions are made.

Blockchains do not by default address the harms that pervade our dominant platforms, because most of those harms are rooted in ineffective privacy protections and poorly designed identity management, not extractive business models. The promise of a new business model does not by itself address the underlying primitives that create surveillance capitalism: digital identities abstracted from their true owners, data exhaust that enables behavior tracking and aggregation, and consent theater that masquerades as choice. The attention-driven business model is merely a symptom of surveillance capitalism, not its source. So long as the surplus telemetry that emerges from ineffective, outdated privacy frameworks and identity governance can be monetized as data exhaust, surveillance capitalism will persist.

If we ignore these failure modes now and fail to architect web3 with users at the center, there will be no material difference between today’s extractive platform businesses and the decentralized versions that hope to supplant them. The systematic commercialization of attention will merely shift from platforms to protocols, yielding the same predations we have grown weary of, distinguishable only in degree: reidentification, targeting, and concealed influence in even more immersive, inescapable, pervasive, and immutable forms.

One of the many challenges in privacy management up to this point has been a reluctance in the engineering community to build with privacy in mind. That reluctance is rooted in a rationalist Western school of thought that values privacy consequentially – in terms of harms and benefits accrued – and manages it centrally and universally, through essentialist and impossibly prescriptive governance that imagines the proper handling of every kind of PII across all contexts and situations. As a result, privacy has been treated as a punitive compliance puzzle instead of a creative, inviting, and inspiring design challenge.

The first reason Zuboff gives, in her analysis of how technological innovation veered off course into surveillance capitalism, is that the change was unprecedented:

“Most of us did not resist the early incursions of Google, Facebook, and other surveillance capitalist operations because it was impossible to recognize the ways in which they differed from anything that had gone before. The basic operational mechanisms and business practices were so new and strange, so utterly sui generis, that all we could see was a gaggle of ‘innovative’ horseless carriages. Most significantly, anxiety and vigilance have been fixed on the known threats of surveillance and control associated with state power. Earlier incursions of behavior modification at scale were understood as an extension of the state, and [we] were not prepared for the onslaught from private firms.” (2020, p. 340)

But we are no longer swimming in uncharted waters, unsure of what the future of technology holds. Today we have the benefit of historical hindsight, and no defensible reason to remain willfully blind to the lessons of the past.

While web3 exposes users to more risk, it presents a unique opportunity to abandon outmoded frameworks in favor of identity and privacy schemes that center individual autonomy and agency. This is an opportunity that nobody, least of all those building web3, can afford to ignore.

We really have no more excuses.

References

AICPA. (2009). Privacy Management Framework. https://us.aicpa.org/interestareas/informationtechnology/privacy-management-framework

Ammerman, W. (2019). The Invisible Brand: Marketing in the Age of Automation, Big Data, and Machine Learning (1st ed.). McGraw Hill.

Bartow, A. (2006). A Feeling of Unease About Privacy Law. University of Pennsylvania Law Review, 154. Available at SSRN: https://ssrn.com/abstract=938482

Basu, T. (2022, January 20). This group of tech firms just signed up to a safer metaverse. MIT Technology Review. https://www.technologyreview.com/2022/01/20/1043843/safe-metaverse-oasis-consortium-roblox-meta/

Bhagat, S., Kim, D. J., & Parrish, J. (2020). Disinformation in social media: Role of dark triad personality traits and self-regulation. In 26th Americas Conference on Information Systems, AMCIS 2020

Brignull, H. (2013, August 29). Dark Patterns: inside the interfaces designed to trick you. The Verge. https://www.theverge.com/2013/8/29/4640308/dark-patterns-inside-the-interfaces-designed-to-trick-you

Brinnen, M. and Westman, D. (2019, December). “What’s wrong with the GDPR?” Swedish Enterprise. https://www.svensktnaringsliv.se/material/skrivelser/xf8sub_whats-wrong-with-the-gdpr-webbpdf_1005076.html/What's+wrong+with+the+GDPR+Webb.pdf

Burkert, H. (1998). “Privacy-Enhancing Technologies: Typology, Critique, Vision.” In Technology and Privacy: The New Landscape (ed. Philip E. Agre and Marc Rotenberg).

Chang, S., Salas, A. G., Siddarth, D., Wang, J., & Zhao, J. (2022, February). “Towards a Digital Pluriverse.” Pluriverse World. https://pluriverse.world/

Choy, D. (2022, January 14). Vitalik Buterin on why cross-chain bridges will not be a part of the multi-chain future. CryptoSlate. https://cryptoslate.com/vitalik-buterin-on-why-cross-chain-bridges-will-not-be-a-part-of-the-multi-chain-future/

Cohen, J. E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Illustrated ed.). Yale University Press.

Copeland, T. (2020, February 18). We tracked 133,000 Ethereum names and exposed their secrets. Decrypt. https://decrypt.co/19423/we-tracked-133000-ethereum-names-and-exposed-their-secrets

Courtland, C., SarahDW, & Moore, S. (2022, February 28). “Promoting the Pluriverse.” Crypto, Culture, & Society. https://society.mirror.xyz/2JSrCE929TDLmKGf4Td7bruzlxRUX9s-hIyeAgRLuvo

danah boyd. (2010). "Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications." In Networked Self: Identity, Community, and Culture on Social Network Sites (ed. Zizi Papacharissi), pp. 39-58.

Davies, S. (1998). “Re-Engineering the Right to Privacy.” In Technology and Privacy: The New Landscape (ed. Philip E. Agre and Marc Rotenberg).

Dworkin, G. (2015). The nature of autonomy. Nordic Journal of Studies in Educational Policy, 2015:2, 28479. DOI: 10.3402/nstep.v1.28479. https://www.tandfonline.com/doi/full/10.3402/nstep.v1.28479

Dworkin, G. (1988). The Theory and Practice of Autonomy. Cambridge, MA: Belknap Press of Harvard University Press.

Foucault, M., & Sheridan, A. (1995). Discipline & Punish: The Birth of the Prison. Vintage Books.

Frenkel, S., & Browning, K. (2021, December 30). The Metaverse’s Dark Side: Here Come Harassment and Assaults. The New York Times. https://www.nytimes.com/2021/12/30/technology/metaverse-harassment-assaults.html

Gavison, R. (1980). Privacy and the Limits of Law. The Yale Law Journal, 89(3), 421. https://doi.org/10.2307/795891

Grassegger, H., & Krogerus, M. (2017, January 28). “The Data That Turned the World Upside Down.” Vice. https://www.vice.com/en/article/mg9vvn/how-our-likes-helped-trump-win

Harris, T. (2019). “Tech is ‘Downgrading Humans.’ It’s Time to Fight Back.” Wired. https://www.wired.com/story/tristan-harris-tech-is-downgrading-humans-time-to-fight-back/

Harris, T. and Raskin, A. (Hosts). (2022, January 13). Is World War III Already Here? (No. 45) Guest: Lieutenant General H.R. McMaster [Audio podcast episode]. In Your Undivided Attention. TED. https://www.humanetech.com/podcast/45-is-world-war-iii-already-here

J.P. Morgan. (2022, February). Opportunities in the Metaverse: How businesses can explore the metaverse and navigate the hype vs. reality. https://www.jpmorgan.com/content/dam/jpm/treasury-services/documents/opportunities-in-the-metaverse.pdf

Katz v. United States, 389 U.S. 347 (1967). https://supreme.justia.com/cases/federal/us/389/347/

Kiss, J. (2014, March 12). An online Magna Carta: Berners-Lee calls for bill of rights for web. The Guardian. https://www.theguardian.com/technology/2014/mar/12/online-magna-carta-berners-lee-web

Kröger, J.L., Lutz, O.H.M., and Ullrich, S., (2021, July 7). “The Myth of Individual Control: Mapping the Limitations of Privacy Self-management.” https://ssrn.com/abstract=3881776 or http://dx.doi.org/10.2139/ssrn.3881776

Kuran, T. (1997). Private Truths, Public Lies: The Social Consequences of Preference Falsification (Reprint ed.). Harvard University Press.

Locke, J. (2022). The Second Treatise of Civil Government and a Letter Concerning Toleration (1st Edition). Basil Blackwell.

Mihajlović, M. (2022, February 11). “How To Do Your Own Research (DYOR).” Crypto Investing Guide: Fundamental Analysis. https://academy.shrimpy.io/lesson/how-to-do-your-own-research-dyor

Moxie Marlinspike. (2022, January 7). “My first impressions of web3.” https://moxie.org/2022/01/07/web3-first-impressions.html

National Institute of Standards and Technology. NIST Privacy Framework. (2020, January 16). https://doi.org/10.6028/NIST.CSWP.01162020

Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life (1st ed.). Stanford Law Books.

OASIS Consortium. Ethical Technology - Digital Sustainability - Oasis Consortium. (n.d.). https://www.oasisconsortium.com/

Orlowski, J. (Director). (2020). The Social Dilemma [Documentary]. Netflix.

Panther Protocol. (n.d.). Panther Protocol. https://www.pantherprotocol.io/

Panther Protocol. (2021, July). Privacy preserving protocol for digital assets. https://www.pantherprotocol.io/resources/panther-protocol-v-1-0-1.pdf

Pauwels, P., Pirovich, J., Braunz, P., & Deeb, J. (2022, March). zkKYC in DeFi: An approach for implementing the zkKYC solution concept in Decentralized Finance. https://eprint.iacr.org/2022/321.pdf

Preimesberger, C. J. (2022, January 31). Metaverse vs. data privacy: A clash of the titans? VentureBeat. https://venturebeat.com/2022/01/28/metaverse-vs-data-privacy-a-clash-of-the-titans

Prosser, W. L. (1960). Privacy. California Law Review, 48(3), 383. https://doi.org/10.2307/3478805

Rosen, D., & Santesso, A. (2011). Inviolate Personality and the Literary Roots of the Right to Privacy. Law and Literature, 23(1), 1–25. https://doi.org/10.1525/lal.2011.23.1.1

Schmidt, E., & Cohen, J. (2014). The New Digital Age: Transforming Nations, Businesses, and Our Lives (Reprint ed.). Vintage.

Secret Network. (2022). Secret Network: A Privacy-Preserving Secret Contract & Decentralized Application Platform. https://scrt.network/graypaper

Smith, A. (2021, October 25). Facebook whistleblower says riots and genocides are the ‘opening chapters’ if action isn’t taken. The Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/facebook-whistleblower-zuckerberg-frances-haugen-b1944865.html

Solove, D. J. (2007). “I’ve Got Nothing to Hide” and Other Misunderstandings of Privacy. San Diego Law Review, 44, 745. GWU Law School Public Law Research Paper No. 289. Available at SSRN: https://ssrn.com/abstract=998565

Sumner, C., Byers, A., Boochever, R. and Park, G. J. (2012) "Predicting Dark Triad Personality Traits from Twitter Usage and a Linguistic Analysis of Tweets," 2012 11th International Conference on Machine Learning and Applications, 2012, pp. 386-393, doi: 10.1109/ICMLA.2012.218.

Thierer, A. (2020, June 8). The Pacing Problem and the Future of Technology Regulation. Mercatus Center. https://www.mercatus.org/bridge/commentary/pacing-problem-and-future-technology-regulation

Tobin, A., & Reed, D. (2016, September). The Inevitable Rise of Self-Sovereign Identity. Sovrin Foundation. https://sovrin.org/wp-content/uploads/2017/06/The-Inevitable-Rise-of-Self-Sovereign-Identity.pdf

Uglova, A. (2020, May 9). A Letter to Americans Explaining the InfoSec & PsyOps Misinformation Problem on Social Media. Medium. https://medium.com/im-probably-wrong/towards-a-baseline-infosec-psyops-understanding-of-the-misinformation-problem-on-social-media-895784564918

Uglova, A. (2021, October 7). Cognitive Consent in Digital Privacy (Part 1 of 3) - I’m Probably Wrong. Medium. https://medium.com/im-probably-wrong/digital-privacy-as-an-expression-of-cognitive-consent-c30421bb21d9

Uglova, A. (2022, January 5). A New Framework for Digital Privacy (Part 2 of 3) - I’m Probably Wrong. Medium. https://medium.com/im-probably-wrong/a-new-framework-for-digital-privacy-part-2-of-3-d28965ae32ea

Uglova, A. (2022, April 7). What Biden’s Executive Order Means for Privacy. Mirror. https://ana.mirror.xyz/Is5u-zode1yltjL6QqgElcr4jgdiJpdOh_1bv2BRMSI

U.S. Senate. (2020, November). Report of the 116th Select Committee on Intelligence, United States Senate, on Russian Active Measures, Campaigns, and Interference in the 2016 U.S. Election. https://www.intelligence.senate.gov/publications/report-select-committee-intelligence-united-states-senate-russian-active-measures

Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.

Warren, S. D., & Brandeis, L. D. (1890). The Right to Privacy. Harvard Law Review, 4(5), 193. https://doi.org/10.2307/1321160

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://dx.doi.org/10.4324/9781315259697-21

Wu, T. (2017). The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Illustrated ed.). Vintage.

Wylie, C. (2019). Mindf*ck: Cambridge Analytica and the Plot to Break America. Random House.

XR Safety Initiative. (2022, February 16). The XRSI Privacy and Safety Framework. XRSI – XR Safety Initiative. https://xrsi.org/publication/the-xrsi-privacy-framework

Zuboff, S. (2020). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Reprint ed.). PublicAffairs.
