Cognitive Consent in Digital Privacy (Part 1 of 3)

Note: This post is part one of a three-part series on digital privacy, concealed influence, and cognitive consent.

Digital privacy is about having the ability to choose whom you allow access to your mind. Worry about the amount of signal you emit to advertisers, not about where that signal is stored.

For all its glamour and turmoil, the Cambridge Analytica (CA) scandal provided the inducement and inspiration I needed to finally go get my Master’s degree. I was drawn in by the concept of computational psychology, which I came across in Christopher Wylie’s tell-all book, Mindf*ck: given a sufficiently large ML training set of social media patterns, extremely precise behavior modification at scale becomes achievable. More than that, the scandal laid bare that this capability is not only available, but also for sale.

I am probably not wrong in crediting CA with putting digital privacy on the radar of ordinary citizens, where previously it had been the domain of policy wonks, tech bloggers, and libertarian absolutists. At the very least, CA is how I came to be concerned (fascinated?) about the link between information and manipulation.

The aim of my graduate inquiry is to examine and architect solutions — whether policy-based, technological, or contractual — that curtail the ability of private and public entities to use the information they gather to deploy behavior modification techniques without users’ express awareness and consent. In other words, my goal is to find ways for consumers to recognize and resist concealed influence by reasserting cognitive consent in digital spaces. Digital advertising, of course, is the widest and most lucrative attack surface for pay-to-play behavior modification, so that’s where I begin. In this post, I examine two ideas that promise greater privacy AND better targeting.

I came across Antonio García Martínez’s writing on the future of digital advertising and privacy after learning about his much-publicized hiring and firing by Apple for alleged misogyny, following an employee uprising in response to his notorious bestseller, Chaos Monkeys. In addition to lamenting the various shortcomings of Bay Area women, the memoir chronicles García Martínez’s time at the forefront of Facebook’s ad targeting business, which played a leading role in the CA drama that brought my ass back to class.

Despite the hubbub about him memorialized in so many colorful Twitter threads, García Martínez’s career gives him a unique, experiential, and usefully subjective vantage point from which to observe the evolution of privacy and digital advertising. So of course I was paying attention when, in a recent Substack post, he asserted that the future of the industry lies in abandoning cookies and moving targeting entirely off the cloud and onto devices. This reality-bending feat would provide advertisers with a high degree of granularity while simultaneously tourniquetting the heedless hemorrhage of our behavioral scent markers all over the web. As he writes: “We have broken all laws of advertising physics: we have a product whose privacy profile is better which also monetizes better than the non-private version.”

Is on-device targeting the end-run around the encroachments of the digital advertising industry that consumer privacy advocates have long hoped for? This blog post, adapted from a case study I prepared for one of my classes this semester, explores its privacy-preserving features, its technological feasibility, and whether this hypothetical feat of computational ingenuity holds any prospects for reasserting cognitive consent in digital spaces.

Digital advertising: c’est quoi?

Digital advertising is the business of micro-targeting audiences through a patchwork of tools (of which there are thousands) that capture user IDs, IP addresses, and other identifiers, and map them to web browsing patterns to create a monetizable model of digital behavior that can be marketed to advertisers.
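To make that mechanism concrete, here is a minimal sketch (a hypothetical schema, not any vendor’s actual one) of the kind of event record such tools capture and later stitch into a profile:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class BrowsingEvent:
    """One hypothetical observation emitted by a tracking tag."""
    user_id: str       # persistent identifier (cookie ID, login ID, device ID)
    ip_address: str    # network-level identifier
    url: str           # the page being viewed
    referrer: str      # where the visitor came from
    timestamp: datetime

# A single page view becomes one more data point in a monetizable model.
event = BrowsingEvent(
    user_id="abc-123",
    ip_address="203.0.113.7",
    url="https://shop.example/running-shoes",
    referrer="https://social.example/feed",
    timestamp=datetime.now(timezone.utc),
)
print(event)
```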

“You’re sure the homing beacon is secure aboard their ship?”

Facebook, for example, embeds code that collects data on the browsing, purchasing, and social media patterns of account holders who use Facebook to authenticate their identity across millions of applications. The code, which works kind of like a digital version of the Imperial homing beacon aboard the Millennium Falcon, creates impressively precise behavioral targeting models for Facebook’s advertising clients. The more granular the model, the more lucrative the ad placement.

The privacy conundrum at the heart of digital advertising — and of the sophisticated AdTech industry that makes it all possible — is that to make sense of this deluge of digital exhaust wafting among millions of websites, publishers, agencies, trading desks, and re-targeting apps, all of it must be tied together by something unique: a thread that weaves it into a coherent behavioral tapestry. That unique thread — a primary key in database-speak — identifies the individual described by the profile. Digital advertising is therefore, by its very nature, not private.
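As a toy illustration of why that primary key matters, here is how fragments reported by unrelated sites collapse into a single profile the moment they share an identifier (the data and site names below are invented):

```python
from collections import defaultdict

# Fragments reported by three unrelated sites. The only thing they have in
# common is the same persistent identifier: the "primary key".
events = [
    {"user_id": "abc-123", "site": "news.example",   "action": "read election coverage"},
    {"user_id": "abc-123", "site": "shop.example",   "action": "viewed running shoes"},
    {"user_id": "abc-123", "site": "social.example", "action": "joined marathon training group"},
    {"user_id": "xyz-789", "site": "news.example",   "action": "read gardening tips"},
]

# Group by the shared key: scattered fragments become one behavioral profile.
profiles = defaultdict(list)
for event in events:
    profiles[event["user_id"]].append((event["site"], event["action"]))

for user_id, history in profiles.items():
    print(user_id, history)
```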

FLoC and the outcry against tracking

Not everyone is pleased with the jigsaw puzzle that advertisers, data brokers, analytics platforms, and social media companies are assembling from the fragments of our online lives. According to a Wired article earlier this year:

“Cookie tracking has become more and more invasive. Embedded, far-reaching trackers known as third-party cookies keep tabs on users as they move across multiple websites, while advertisers also use an invasive technique called fingerprinting to know who you are even with anti-tracking measures turned on (through your use of fonts, or your computer’s ID, your connected Bluetooth devices or other means).”
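To see why fingerprinting works even with cookies blocked, consider this toy sketch (not any real tracker’s code): attributes that are individually mundane, hashed together, yield a surprisingly stable identifier.

```python
import hashlib

# Hypothetical attributes a page script can read without setting any cookie.
device_attributes = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_2)",
    "installed_fonts": "Arial,Helvetica,Menlo,Comic Sans MS",
    "screen_resolution": "2560x1600",
    "timezone": "America/Los_Angeles",
    "bluetooth_devices": "2 paired",
}

# No single attribute identifies you, but the combination rarely changes
# and rarely collides, so the hash works as a durable identifier.
raw = "|".join(f"{k}={v}" for k, v in sorted(device_attributes.items()))
fingerprint = hashlib.sha256(raw.encode()).hexdigest()[:16]
print("fingerprint:", fingerprint)
```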

With the increase in public scrutiny, browser privacy has become a differentiating product feature, so companies interested in retaining users have begun blocking third-party cookies. This market pressure creates a massive, existential problem for companies that depend on advertising revenue, including Facebook, Google, and the AdTech players themselves: without tracking, they cannot target, and without targeting, they cannot sell ads. Given the public outcry and an increasingly labyrinthine regulatory regime, it is clear that the entire digital advertising industry will have to get crafty about how to display the right creative to the right audience at the right time without running afoul of regulators.

One measure currently being piloted to reintroduce privacy into an ecosystem that depends on the inherently identifying features of behavioral patterns and primary keys is Google’s Federated Learning of Cohorts (FLoC). FLoC proposes anonymously clustering large groups of people with similar interests into a monetizable cohort for a timeboxed period, and then reshuffling everyone again, and again, and again. Instead of selling targeted advertising based on granular behavior profiles of discrete individuals, FLoC “hides” individuals in a crowd that shares many of the same affinities and attributes, and then keeps updating and reorganizing that crowd.

According to the Privacy Sandbox:

“FLoC keeps you among a large group of people with similar browsing history. This group isn’t based on who the individuals are, but rather on their collective interests so advertisers can still show relevant ads”.

FLoC is an interesting privacy-preserving proposal, in that it diagnoses your quirks and indiscretions, cross-references your shared predilections, lumps you in with other like-minded weirdos, and sends you all off to a sleepaway camp where nobody has a name and everyone shares the same number: “Here be a set of weirdos #24601. Please select suitable recreational activities accordingly.” Then you get shipped off to a different sleepaway camp the following week with a different set of weirdos that are just as eccentric, but in other ways.
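Chrome’s actual cohort assignment is more elaborate than this (the origin trial reportedly derived cohort IDs from a locality-sensitive hash of browsing history), but a toy clustering sketch conveys the gist: users with similar browsing vectors land in the same numbered cohort, and the assignment is periodically recomputed.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy browsing-history vectors: one row per user, one column per interest
# signal (e.g., visits to sports, cooking, finance, or news sites).
users = rng.random((1000, 8))

# Assign each user to one of 64 cohorts; advertisers see only the cohort ID.
weekly_cohorts = KMeans(n_clusters=64, n_init=10, random_state=0).fit_predict(users)
print("your cohort this week:", weekly_cohorts[0])

# A period later, browsing histories have drifted, so the cohorts are
# recomputed and everyone is reshuffled into a new anonymous crowd.
drifted = users + 0.2 * rng.random((1000, 8))
next_cohorts = KMeans(n_clusters=64, n_init=10, random_state=0).fit_predict(drifted)
print("your cohort next week:", next_cohorts[0])
```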

But how does that solve for our central problem: that of unwanted, undisclosed behavioral modification via just-in-time inducements served according to a sophisticated behavioral model that predicts exactly how to get unwitting humans to pay for a thing or vote a certain way? The point of digital privacy, I believe, is to make our influence markets consensual — to give users agency over their information environment, awareness about what they’re being asked to do, and control over how they respond.

FLoC treats digital privacy as if concealment is the primary goal, but is it? Do we want to hide who we are because secrecy itself is desirable and good — or because secrecy is a means to a much more important end: individual agency, awareness of the forces around us, and control over how we behave?

On to on-device targeting

What if, as García Martínez writes in his blog post, we were to abandon the cloud-based information hemorrhage altogether, and move the entire ad targeting game onto devices? What if personally identifiable information (PII), whether siloed or federated, never left our phones, wearables, or computers? Would we then achieve a world of consensual influence instead of the concealed manipulation of our daily experience?

García Martínez theorizes that the AdTech industry is about to revolutionize how we think about privacy, by working with consumer tech manufacturers such as Apple and Google to move the entire ad targeting model onto devices for storage and computation. Out with the logo cloud of endless platforms, interfaces, and marketplaces sourcing, chewing, analyzing, rearranging, and retargeting our behavioral breadcrumbs towards a more perfect avatar of our online selves. What if all that permutation happened with the ever-expanding processing power right inside our pockets?

If the computational power that builds our behavioral profiles lived on-device, advertisers would have access to a much finer level of detail about our activities, emotions, and environment than any cloud-based app publisher could ever hope to assemble. Just think what an advertiser might accomplish with up-to-the-minute knowledge of your network state, your geolocation, your connected IoT devices, and the weather in your area. The targeting logic would be correct, recent, and rich. No privacy laws would be broken. No third-party platforms would take a cut. It’s a goldmine for behavior modification that cuts out the middleman…and without a single byte of your PII ever sneaking off into the cloud.

While he admits such a move would require considerable engineering prowess, García Martínez brings up a sensible point:

“Just why the hell do we take all this data being generated by all these dozens of apps running on a device, and eat up huge network bandwidth getting it into the cloud, only to then pass back more state — decisions about products to show, images, ads, everything — back to the device to generate a user experience. Whose data (once again!) we send into the cloud for yet more computation…when you’ve got native code that could do math just bloody sitting right there? There are mobile-specific versions of SQLite, TensorFlow, and other data and ML tools now for Android; there’s absolutely no hard technical bound for any of this, just a fundamentally different way of thinking about the mobile world…Our 20-year-old data paradigm is simply dumb if you think about the problem from first principles, and the exact inverse of how you’d design a data ecosystem in our current mobile-first, in-app world.”
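Here is a minimal sketch of the inverted paradigm he describes, assuming a hypothetical local event store and a trivial scoring function standing in for the on-device model (in practice something like the TensorFlow Lite models he alludes to): the behavioral data and the math stay on the phone, and only the chosen impression is rendered.

```python
import sqlite3

# Hypothetical on-device event store: nothing here ever leaves the phone.
db = sqlite3.connect(":memory:")  # stand-in for a local SQLite file
db.execute("CREATE TABLE events (category TEXT, weight REAL)")
db.executemany("INSERT INTO events VALUES (?, ?)", [
    ("running", 0.9), ("weather:rain", 1.0), ("travel", 0.2),
])

# Candidate ads shipped to the device in bulk; targeting happens locally.
candidate_ads = {
    "waterproof trail shoes": ["running", "weather:rain"],
    "beach resort deal": ["travel", "weather:sun"],
}

# Trivial stand-in for the on-device model: sum the locally stored weights
# for each ad's tags. A real system might run a TensorFlow Lite model here.
def local_score(tags):
    placeholders = ",".join("?" * len(tags))
    row = db.execute(
        f"SELECT COALESCE(SUM(weight), 0) FROM events WHERE category IN ({placeholders})",
        tags,
    ).fetchone()
    return row[0]

# Only the winning impression is rendered; the underlying data stays local.
winner = max(candidate_ads, key=lambda ad: local_score(candidate_ads[ad]))
print("ad selected on-device:", winner)
```

The advertiser learns which impression won, not the data that decided it, which is exactly why the consent question below still stands.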

And how does one opt out of on-device targeting? As García Martínez suggests, simply take the phone and throw it into the Bay. 😃 D’oh! Bold, exceedingly final, and satisfyingly compliant.

Yeah, but still, what about consent? We should be thinking about the amount of signal we emit to the advertisers trying to infiltrate our brains, not about where that signal is stored. Even if all our data stays with us and the phone does all the calculations to determine which ad impression or social media post will get us to cooperate, we are still being influenced without our knowing consent. You can celebrate being a proper internet sleuth, using disposable emails, poring over Ts & Cs, and masking your IP all day long, but if you’re still subject to behavioral manipulation based on what your phone knows about you — even if that data never leaves the phone — you have not reasserted your consent, PII be damned.

Implications for concealed influence via computational psychology

Privacy is a means to an end, not an end itself.

The problem with on-device targeting and FLoC is that both approach privacy as the main objective. They treat it as a legal condition to be satisfied while ignoring the applied intent of privacy: to give us control over the degree of access others have to us through information, attention, and proximity. (I am indebted to Ruth Gavison for this elegant definition from her Yale Law Journal article, “Privacy and the Limits of Law.”) Think of it as the ability to choose whom you allow access to your mind. Advertising auctions run on machine learning models aimed at infiltrating the brain. That is a precise description, not an alarmist one.

This gets at the intrinsic value of privacy: why do we want it? Is it because hiding is somehow inherently useful, or does visibility expose us to a host of undesirable risks and influences? I would argue it is the latter, and that most people want to avoid being tracked in order to fend off unwanted, unseen, and unaccountable influence of the sort that the CA scandal exposed.

In other words, barring child predators and human traffickers (and that’s a huge bar, I admit), nobody actually cares who you are or what your PII is: that’s a red herring. What advertisers care about is building the tools to induce consumers en masse to buy, vote, act, or speak in whatever way is prescribed by whoever is paying for the ad impression — by whatever technological means are available. Where there’s a will and a paying customer, there’s a way. An industry that’s literally run by creatives can obviously get very creative!

If, instead of holding up new hoops for advertisers to jump through, we wanted to solve for concealed influence, our goal should be to make the concealment itself more difficult to achieve. Solving for PII by treating privacy as a goal is like blocking a road: motorists are going to establish a detour around your “legal condition” to get where they’re going. If you want to stop them from getting there in the first place, you have to focus on making the destination — concealed influence — unappealing.

Ultimately, FLoC and on-device targeting are sideshow distractions that focus us on where our data is stored rather than on who has access to it. Neither FLoC nor on-device targeting satisfies the applied purpose of privacy — defending against concealed influence — because both are content merely to glide past awkwardly written, reactive regulations while entirely circumventing the heart of the problem the CA scandal exposed: behavioral targeting is, by its very nature, invasive and manipulative. Privacy is not an end goal, but a means of wresting back control and reestablishing cognitive consent in digital spaces.

In subsequent blog posts that I’ll adapt from my assignments, I will explore the intrinsic vs. applied value of privacy, continue to demolish non-consensual behavior modification in the digital realm, and discuss the promise and limits of decentralized, self-sovereign identity.

Yes, all roads lead to identity. So long as users do not own their identity and credentials — and so long as we treat privacy as a compliance puzzle instead of a defensive weapon — we’ll forever be in a regulatory rat race instead of truly asserting our cognitive consent. Regulatory bodies and laws such as GDPR and CCPA create a false sense of security by making things more complicated, but they do not meaningfully advance the interests of consumers. They just make stuff harder: classic examples of the government “doing something” so that the unwashed masses might be temporarily appeased.

But just as obscurity is not security, obfuscation is not protection. That’s just a design problem for enterprising engineers to solve. We can always build a higher hurdle, and train a better athlete to jump over it, but the race is basically the same.

The very nature of the race must be made substantially different. Let’s see if decentralized identity is different enough.
