A Letter to Americans Explaining the InfoSec & PsyOps Misinformation Problem on Social Media

Note: This is an import of a Medium essay I originally wrote in May 2020.

Please sit down with a nice cup of coffee to digest this longform piece about the media consumption patterns that affect you daily. Misinformation is spreading among people like you, who may not have a baseline understanding of the very wonky technical and policy mechanics in place. This is my attempt to synthesize it, and to equip you to make your own decisions about how you process information and perceive truth. The links I provide at the bottom are the readings I would hand-pick if I were to teach, say, a graduate seminar on disinformation and tech ethics. So if you do actually read them all, you will be significantly better informed about what’s going on, and be able to draw your own conclusions.

Conspiracy theorists are truth seekers who get lost. The impulse is noble, the method is mistaken. I share the impulse, but I like my theories falsifiable and shave my legs with Occam’s Razor.

This is not about Trump or not Trump. It’s not about Hillary. I don’t care about your political affiliations. I will be open about mine: predominantly libertarian, tending to distrust centralized power (I was born in the USSR) and favor individual liberties, and I like shooting guns. I have probably read the Federalist Papers more often than you have. I was a Hamilton fangirl before the second Hamilton bio came out, which inspired the musical. I don’t care about your political affiliations. I just want a country.

People seek neat explanations for chaotic, senseless events because:

  1. We prefer order to chaos, because pattern recognition helps us survive, and survival is our most basic instinct.
  2. We are uncomfortable with a dark, unfeeling cosmos that doesn’t care about us, and powerfully prefer an omnipresent force guiding it all. All = everything we don’t understand.
  3. We are matter. Matter is the antithesis of entropy. In order to create a thing, you have to expend energy to organize entropic particles into an orderly structure. We are matter, and to exist, we must defy entropy, daily. Since we are matter, we do as matter does.
  4. We are genetically programmed through eons of sexual selection and evolution to procreate with religious mates due to societal and cultural pressures that favor religious adherents and cast heretics and nonbelievers out of the available gene pool (see Dean Hamer, The God Gene).

So we are, as you see, some very credulous and explanation-hungry creatures who lap up anything that promises to explain all the wild and disordered things and tie them up for us with a neat little bow.

There are 330 million Twitter accounts and an estimated 50 million bots. The numbers on Facebook are worse. (This is the reason for your perceived censorship of free speech, which I’ll get into later. Please forgive the tech companies; they are trying, and the discipline is very young and untested.) The bots are more prolific than humans, since they are programmed to be, and generate more noise per account than real users for two reasons:

  1. They can and do post more often, physically, being robots and all.
  2. The content they are programmed to post is intentionally surprising, inflammatory, outrageous, and fringe. These are the ingredients that generate engagement and get retweeted and shared more often (often by their own armies of bots to their respective unwitting followers), thereby amplifying the message, the visibility and credibility of the account, and its follower numbers. You don’t retweet humdrum, obvious things. You retweet scoops. (See the back-of-the-envelope sketch after this list.)
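To make that concrete, here’s a back-of-the-envelope sketch. Every number in it is an illustrative assumption (posting rates, reshare odds, amplification multipliers), not a measured platform statistic:

```python
# Toy model of why a small bot population can dominate a feed.
# All parameters are illustrative assumptions, not measured values.
HUMANS, BOTS = 280, 50           # accounts, in millions (~330M total, ~50M bots)
HUMAN_POSTS_PER_DAY = 1          # a typical human posts occasionally
BOT_POSTS_PER_DAY = 50           # a bot posts whenever its script fires
P_SHARE_MUNDANE = 0.01           # odds an ordinary post gets reshared
P_SHARE_OUTRAGE = 0.10           # inflammatory content spreads ~10x better

def daily_noise(accounts, posts_per_day, p_share, reshare_boost=20):
    """Expected posts plus downstream reshares produced in one day."""
    posts = accounts * posts_per_day
    return posts + posts * p_share * reshare_boost

human_noise = daily_noise(HUMANS, HUMAN_POSTS_PER_DAY, P_SHARE_MUNDANE)
bot_noise = daily_noise(BOTS, BOT_POSTS_PER_DAY, P_SHARE_OUTRAGE)
print(f"bot share of total noise: {bot_noise / (bot_noise + human_noise):.0%}")
```

Under these made-up numbers, roughly one in seven accounts generates over 95% of the noise. The exact figures don’t matter; the compounding of posting volume and engagement-optimized content does.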

A majority of bots are managed and controlled by the same agenda-driven groups. That is to say, the same group manages multiple, ideologically opposite accounts. They post to Twitter and create Facebook groups for completely opposing viewpoints in order to sow dissent among citizens. For example, FB groups created by Steve Bannon’s campaign operatives “organized” both #BlackLivesMatter protests AND #BlueLivesMatter marches. They posted content to their followers and FB groups, carefully curated with AI-enabled predictive data modeling precision, to be shared and retweeted by citizens on both sides. The intent was never to advocate for one or another of these divisive political issues; the intent was to sow discontent, distrust, and disengagement with mainstream media and politics, in order to discourage certain types of voters while enraging others into action. It’s a divide-and-conquer strategy, and it’s not even particularly sophisticated. It’s just standard pages from information warfare playbooks well developed by legitimate non-kinetic combat operatives (CIA, etc.) to fight insurgents, drug cartels, and sundry nefarious boogeymen with words instead of bombs. And it works really well.

FB groups created by Steve Bannon’s campaign operatives “organized” both #BlackLivesMatter protests AND #BlueLivesMatter marches.

Not all bots are managed by political campaign operatives. And not all disinformation accounts are bots. Many are run by agenda-driven factions that are likewise motivated to sow discontent for one reason or another. In one well-known campaign, for example, some racist 4chan thugs distributed fake Starbucks coupons that could supposedly be redeemed by black Americans for free drinks. No such promotion by Starbucks existed. The intent was to draw black Americans into Starbucks, where they would be denied free drinks, fly into a rage, be captured on video and shared on social media, and drive the narrative about angry black people. Stuff about Colin Kaepernick’s campaign with Nike? Likewise amplified by the selfsame groups to discredit Nike, because Nike is a commercial institution that unifies society. It’s awful. And of course it works well, because why would you not retweet or share a coupon for free Starbucks drinks? And why would you not post a video of a person having a bad-day meltdown on social media? Remember the formula for engagement: surprising, inflammatory, outrageous, fringe, scoop.

The intent was to draw black Americans into Starbucks, where they would be denied free drinks, fly into a rage, be captured on video and shared on social media, and drive the narrative about angry black people.

Why are social media platforms built to captivate and increase engagement? For monetization. Twitter and Facebook are “free” platforms because you pay for them with your data and your eyeballs, which are worth far more to the platform than a gated subscription would be. I’m not even being nihilistic here. It’s just how advertising works. The more engaging the content, the more you scroll and share, the more share of voice an advertiser captures, the more the advertiser pays the platform. (See why Sam Harris does not sell ads on his podcast and requires a subscription.)

There are some guys — titans of tech company product development, really — whom you should get to know, in order to better understand Silicon Valley business models: Tristan Harris, Aza Raskin, BJ Fogg.

Tristan Harris:

A former Google product manager for Gmail whose 2013 slide deck on addictive technology, “A Call to Minimize Distraction & Respect Users’ Attention,” went viral inside the company (and you should ABSOLUTELY read it; it’s in the reading list below). Google then made him its in-house design ethicist. He later founded the Center for Humane Technology with…

Aza Raskin:

Creator of the “infinite scroll,” the now-ubiquitous feature on social media platforms like Facebook and Twitter that keeps loading new content as you move down a page, removing any need to press “refresh” or hit a “next page” button. An ideal engagement delivery device, really.

Infinite scroll is a type of ludic loop, a psychological technique perfected by casinos. It’s a reward cycle: you do the same thing over and over because you get just enough reward (from the slot machine, from discovering something engaging in your newsfeed) to keep you trying to earn it again. It’s also why you keep checking your email. The rewards come at random, unpredictable intervals, which makes them even more delectable and addictive, because you’re unconsciously trying to discern a pattern of reliable reward attainment (see above, pattern recognition).
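If you want to see the mechanism in miniature, here is a toy simulation of a variable-ratio reward schedule, the reinforcement pattern behind slot machines and newsfeeds alike. The reward probability and the “patience” cutoff are invented for illustration:

```python
import random

# Toy model of a ludic loop: rewards arrive at unpredictable intervals,
# so the "check once more" urge never quite extinguishes.
def scroll_session(p_reward=0.15, patience=8):
    """Scroll until `patience` consecutive duds, then quit; count scrolls."""
    scrolls, dry_streak = 0, 0
    while dry_streak < patience:
        scrolls += 1
        if random.random() < p_reward:  # variable-ratio reinforcement:
            dry_streak = 0              # one hit resets your urge to quit
        else:
            dry_streak += 1
    return scrolls

sessions = [scroll_session() for _ in range(10_000)]
print(f"average scrolls before giving up: {sum(sessions) / len(sessions):.0f}")
```

Because a single hit resets the quit counter, intermittent rewards keep you scrolling well past the point where you would otherwise have quit. That’s the loop.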

BJ Fogg:

Founder and director of the Stanford Persuasive Technology Lab, through which a who’s-who of tech founders, engineers, and product managers (including Tristan and Aza) passed, studying behavior models used to create addictive user interfaces and user experiences. (He also created the Tiny Habits model for making meaningful behavior shifts in your life, which I highly recommend.)

Anyway, these guys soured on the whole addictive technology thing, because it got us this engagement problem, which empowers bots, trolls, operatives, and the rest of the unsavory lot to pretty much tell us what we want to hear and ensure we will tell our friends about it.

Ok so, heuristics and fear/anger. I have no idea how I’m going to explain all this because it’s dense, but here goes. Basically, there are personality models that have been exhaustively studied and empirically confirmed, showing that specific personality traits (“the Big Five,” as they’re called) make people more or less susceptible to manipulation, to action, to believing a certain thing. These models study the outcomes of manipulating the heuristics of individuals with certain personality traits to generate a cognitive bias. So let’s define our terms:

  • Heuristics: Mental shortcuts based on the individual’s learned experience used to quickly solve a problem, make a decision, or answer a question without doing a whole bunch of mental math when, for example, a large animal with sharp teeth is nearby.
  • Cognitive bias: Drawing a false conclusion or making the wrong decision based on incomplete data because you took a shortcut and killed the animal without seeing that it was being hunted by an even larger one, and now you’re really screwed.

You can read more in-depth about this fun stuff here. From that article:

However useful heuristics are, they can also fail at making correct assumptions about the world. When the heuristics ‘fail’, the result is a cognitive bias: drawing a false conclusion based on prior data. One of the most common heuristics is availability heuristic, where it is easier to recall events with greater consequence or impact. For example, after multiple news reports about shark sightings, you are more afraid of being bitten by a shark when you swim at the beach compared to something more mundane such as having a car crash on the way to the beach. In reality, you are much more likely to have a car crash than be bitten by a shark, but the focus is placed on the shark.
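To put rough numbers on that shark example (ballpark annual US figures quoted from memory, not taken from the article):

```python
# Base rates behind the availability heuristic example.
# Ballpark US annual figures, quoted from memory.
shark_attack_deaths = 1         # unprovoked shark fatalities, typical year
motor_vehicle_deaths = 37_000   # traffic deaths, typical year

ratio = motor_vehicle_deaths / shark_attack_deaths
print(f"the drive to the beach is ~{ratio:,.0f}x deadlier than the water")
```

The heuristic weighs vividness; the arithmetic weighs frequency. They disagree by about four orders of magnitude.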

Manipulating a useful mental heuristic to generate an incorrect cognitive bias is something we do all the time in information warfare and psychological operations, both forward-deployed and, increasingly, at home.

We do it all the time in information warfare and psychological operations, both forward-deployed and, increasingly, at home.

One such technique of weaponizing heuristics to generate a cognitive bias is called priming. You basically tell a person something before asking him about the thing you want him to believe, and the answer can be diametrically opposite to the one you’d get if you hadn’t primed him with the thing before asking the question. We do this all the time in survey methodology. And in taking down warlords.

For example, if you were to ask a person if he’s happy, he’d be like, “Meh, sure, I’m pretty happy.” But if you were to say, “Hey, Bob over there is really attractive. He gets all the ladies, with his fast cars and his big house. Anyway, would you say you’re happy?” you’d get a very different answer. That’s priming.

Another example. I’m operating undercover as a drug runner in a Mexican drug cartel. “Hey, do you like our cartel boss?” “Yes, he’s the man.” VERSUS: “Hey, it kinda feels like bossman isn’t always honest with us. Don’t you think he’s keeping more cash for himself than he should? And he was rough on you that time. I hate it when he does that to me, too. Anyway, what do you think of him?” “You know, I’ve begun to think he’s not such a wise guy.” Great! Now I’ve sown discontent and I’m inching closer to a non-kinetic victory. I can turn this guy against his organization. That’s priming.

Heuristics and fear/anger. Angry people, personality-wise and empirically, are less inclined toward rational, logical thought and more inclined to take uncalculated action against the source of their anger, a.k.a. “lashing out.” Fear inspires action (see Anakin Skywalker).

Ok, now computational propaganda. (I’m drowning here in how-am-I-going-to-explain-thisness.) Cambridge Analytica created a cutesy personality test distributed to 270,000 Facebook users at a cost of $1 per user. 270,000 personal profiles were scraped, and, unbeknownst to those 270,000, so were ALL THEIR FRIENDS’ PROFILES. Cambridge Analytica knew that the average Facebook user has 400 friends. The vulnerability in Facebook’s permissions allowed CA to harvest not only the data of the users who took its personality test, but also the data of all their friends. So that’s 87 million profiles (and their entire Like histories, group memberships, and personal messages) for a measly $270,000 investment. Now that’s good ROI when you’re trying to win a general election.
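The back-of-the-envelope math, for the skeptical (the 87 million figure is Facebook’s own later disclosure; the naive multiplication overshoots because friend lists overlap):

```python
# Cambridge Analytica harvest, back of the envelope.
test_takers = 270_000           # users who installed the personality quiz
avg_friends = 400               # assumed average friend count per user
cost_per_taker = 1              # dollars paid per quiz taker

naive_reach = test_takers * avg_friends     # 108,000,000 raw friend records
deduped_profiles = 87_000_000               # Facebook's disclosed figure
total_cost = test_takers * cost_per_taker   # $270,000

print(f"naive reach before overlap: {naive_reach:,} profiles")
print(f"cost per harvested profile: ${total_cost / deduped_profiles:.4f}")
```

About a third of a cent per voter dossier. That is the ROI in question.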

Why is this important? Microtargeted ads and predictive data modeling. I’m pulling this from another article because I don’t have my CA whistleblower’s (and chief data scientist’s) book handy, but 68 Facebook Likes by a user could predict the person’s skin color, sexual orientation, and political leanings (voting Republican or Democrat) with an accuracy averaging nearly 90%. Democrats, for instance, tended to like the Colbert Report, Nicki Minaj, and Hello Kitty, and dislike camping and Mitt Romney. Now you know whom to target and what narrative to feed them.

From the article:

In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.

The strength of their modeling was illustrated by how well it could predict a subject’s answers. Kosinski continued to work on the models incessantly: before long, he was able to evaluate a person better than the average work colleague, merely on the basis of ten Facebook “likes.” Seventy “likes” were enough to outdo what a person’s friends knew, 150 what their parents knew, and 300 “likes” what their partner knew. More “likes” could even surpass what a person thought they knew about themselves.

With 200 Likes, a machine learning data model could predict whether a person would cheat on or divorce his mate with higher accuracy than his closest and most intimate friends, because we filter how we present ourselves to friends, but we do not filter our “private” Facebook activity, so it’s a COMPLETE behavioral model for extrapolative precision. Now, add PsyOps and predictive, AI-enabled machine learning into the statistical mix, and you know exactly whom to microtarget, how to prime them with what information, and whether they’re gonna go your way and bring their friends along.
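For the curious, here’s a minimal sketch of the modeling idea on synthetic data. The data generator is entirely invented; the real pipeline (per Kosinski’s papers) ran dimensionality reduction over tens of thousands of actual Likes before the regression step, but the principle (binary Likes in, trait prediction out) is the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a user-by-Like matrix. A hidden binary trait
# (say, party affiliation) slightly skews which pages each user Likes.
rng = np.random.default_rng(0)
n_users, n_likes = 5_000, 300

trait = rng.integers(0, 2, n_users)          # the thing we want to predict
base = rng.random(n_likes)                   # baseline popularity of each page
slant = rng.normal(0, 0.15, n_likes)         # how much each page "leans"
p_like = np.clip(base + np.outer(trait * 2 - 1, slant), 0.01, 0.99)
X = (rng.random((n_users, n_likes)) < p_like).astype(float)

# Plain logistic regression on raw Likes; no demographics, no messages.
X_tr, X_te, y_tr, y_te = train_test_split(X, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy from Likes alone: {model.score(X_te, y_te):.0%}")
```

Each individual Like is a weak signal; a few hundred of them together are devastatingly predictive. That accumulation of weak signals is the whole trick.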

Now go back to the engagement stuff, bots, motivated reasoning, and political psychological operations. You overwhelm the information space around a person with an orchestrated abundance of evidence. And that is how you manufacture consensus.

You overwhelm the information space around a person with an orchestrated abundance of evidence. And that is how you manufacture consensus.

The takeaway: you have no idea whether or when you are being manipulated. And we have no idea what’s behind all the conspiracy theories we are reading, what their agenda is, or who is pulling the strings. It’s all very esoteric and sophisticated and quite literally beyond the grasp of your human understanding, because you are up against your species’ entire evolutionary predisposition, your mind’s heuristic vulnerabilities and programming errors, and your carbon-based life-form’s biological inability to compute large numbers the way an artificially intelligent, machine-learned statistical model can. Your fallibility is a feature, not a bug.

Your fallibility is a feature, not a bug.

What we know is that we have an election coming up. Just as we were divided in 2016, with African Americans duped by fake Starbucks coupons and manifold tactics, techniques, and procedures calibrated to sow mistrust and division, history repeats itself, and history indicates it is happening again. Very possibly, the same conspiracy theorist who is behind Plandemic is also behind the people debunking the half-truths in Plandemic. And it’s not a conspiracy theorist at all, but an operator operating his loyal conspiracy theorists. Exactly like a puppeteer. I have no idea whether what’s-her-face who stole the data from the lab and got arrested also had some of her legitimate research stolen by the allergy and infectious diseases institute guy. Very possibly, he might have. It’s not the first time someone’s ideas got grabbed. Welcome to being a woman in a mansplaining conference room, y’all. Plus now there’s an axe to grind.

But what I do know is that we do have a pandemic, and pandemics happen. We were warned about the certainty of a coronavirus going rogue because of research and predictive modeling, not because it was staged. We interact with coronavirus-ridden animals every day. Of course the thing is gonna mutate. That is literally what viruses do. They were here from the very advent of life on Earth, and they only made it from advent to present day by way of successful mutation. There are billions of viruses out there, with more to be discovered. You carry a bunch of them. (Again, due to the limitations of your carbon-based human brain, you have difficulty grasping the bigness of a billion. Here’s a fascinating visual essay to help you comprehend large numbers. Well worth it.) The arrival of a pandemic should not make you doubt it; it was always an eventuality. We do need to contain its spread, and there are serious and legitimate questions about the best way to do this, including reopening and herd immunity.

Viruses were here from the very advent of life on Earth, and they only made it from advent to present day by way of successful mutation.

There are also legitimate questions about civil rights here. Crisis and Leviathan theory (see Robert Higgs) shows that historically, new laws, regulation, legislation, and executive powers enacted in response to crisis are seldom if ever relinquished once the crisis abates, growing the leviathan (see FDA, Sarbanes-Oxley, enemy combatants). And as a fairly libertarian thinker, I do not want to see my freedoms curtailed and my individuality controlled. And how would we know whether this is not exactly what is happening now? How would we know THIS, and not some other unimaginable thing, is not the imminent takeover by a dictatorial government against which the Declaration of Independence enjoins us to take up arms and mobilize our small militias? How do you tell when it is time? (See Masha Gessen, The Unimaginable Reality of American Concentration Camps.)

New laws, regulation, legislation, and executive powers enacted in response to crisis are seldom if ever relinquished once the crisis abates, growing the leviathan.

We do not. I do not. But as a skeptic, I do not think that it is. Because historically, the usurpation of power by governments is a much more bumbling, violent, and incongruous affair, and not this coordinated, stage-managed, and creeping process of carefully-planted conspiracy theories and diligent spoon-feeding of talking points to a credulous media while simultaneously orchestrating a financial coup to develop a pricy vaccine for a population that didn’t need it. Government takeovers don’t look like this. I was born in the Soviet Union. Our government is trying, and failing. But it is not tyrannizing.

And so are, by the way, the much-maligned Facebooks and technology companies. Trying, that is. We face a moment of enormous possibility to write a new compact with consumers. Digital protections are still in their infancy, and companies have barely scratched the surface in standing up new engineering practices and safeguards. Ethics standards, corporate policies, and enforcement mechanisms are only now being devised, so please excuse the occasional taking down of a video or a thread because it trips some coordinated misinformation wire. It might have been a mistake, or it might have been a bot, and we’re trying to figure out how the hell to tell the two apart.

Ethics standards, corporate policies, and enforcement mechanisms are only now being devised.

Remember the thing about overwhelming the information space? That is what is happening, and companies that have hardly begun to understand the scope of the misinformation problem are doing their best to curtail it, so that YOU, citizen, can have your voice amid the din of bots and bad actors seeking to do real harm by short-circuiting your ability to think and speak by using your psychological vulnerabilities against you. Free speech is not as simple as it was when we had a town square and every person had a voice and a verifiable identity. You’re up against machines now. Does an AI-enabled data model have the right to free speech, too?

You’re up against machines now. Does an AI-enabled data model have the right to free speech, too?

In this information war, legislation is necessary but is too low a bar. Digital citizens deserve better, and technologists are hard at work figuring out how to unscramble the rancid egg. Some ideas: replacing engagement metrics with “time well spent” (sketched below), and engineering privacy by design, by preventing certain vulnerabilities up front, redoubling red-teaming and penetration-testing protocols, removing heuristic vampirism from user interface designs, and so forth.
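What would “replacing engagement with time well spent” even look like in code? Here is one hypothetical sketch; the fields and weights are mine, not any platform’s actual ranking formula:

```python
from dataclasses import dataclass

# Hypothetical contrast between two ranking objectives.
@dataclass
class Post:
    title: str
    clicks: int           # raw engagement
    dwell_seconds: int    # time actually spent reading
    regret_reports: int   # "I wish I hadn't seen this" feedback

def engagement_score(p: Post) -> float:
    return p.clicks                        # outrage wins this metric

def time_well_spent_score(p: Post) -> float:
    return p.dwell_seconds - 30 * p.regret_reports  # penalize regretted clicks

feed = [
    Post("OUTRAGEOUS scoop!!!", clicks=900, dwell_seconds=15, regret_reports=40),
    Post("Long explainer you finish", clicks=120, dwell_seconds=480, regret_reports=1),
]
print(max(feed, key=engagement_score).title)       # the scoop tops this feed
print(max(feed, key=time_well_spent_score).title)  # the explainer tops this one
```

Same feed, opposite winner, depending on the objective function. The metric a platform optimizes is the product it builds.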

One interesting idea is borrowed from construction engineering: in essence, creating software engineering codes that work like building codes. When you enter a building, you do not need to know about the math, safety, and structural integrity calculations that went into its construction. You know you are safe inside it, because the building passed rigorous code inspection. The same can presumably be done with social media platforms, whereby software designers follow certain safe/ethical engineering codes, so that when you agree to cookies and terms of use, you are not expected to be a legal expert who can parse the inscrutable terms written by teams of lawyers in order to know that your data and time spent are safe. Except that structural engineers had thousands of years of lead time to devise safe building codes, while social media software engineers have only had a couple of decades, and the problem only became apparent FOUR YEARS AGO. It’s a thought.

Structural engineers had thousands of years of lead time to devise safe building codes, while software engineers have only had a couple of decades, and the problem only became apparent four years ago.

Technology companies ARE emerging as opinion leaders and stewards in this space. Let us help them by being smart about what we consume and unrelenting about what we want, so they can serve us better.

And you know what? Government is, too. Trying, that is. Government does not want to default on its loans, and it’s worried about the S&P 500 just as much as you are. Let’s help? Try being more united. And try trusting your fellow Americans by listening more and talking less? I dunno. It’s a thought. Division was exactly the game plan in 2016. Let’s maybe learn from the enemy we already faced before?

So look, when you watch or read something, anything, ask yourself, who stands to benefit from this?

Also, I’m really sorry Russia did a lot of this. I was not involved.

Anyway, sorry for the opus. As the old line goes (usually credited to Mark Twain, though Pascal said it first), “I didn’t have time to write a short letter, so I wrote a long one instead.”

Update based on some comments I received

Re: Asking people to talk less and listen more is not intended to silence people I disagree with. It’s actually aimed at people who berate anti-vaxxers, QAnon believers, or anyone else they disagree with, without making an attempt to understand the personal life experiences that led those detractors to their conclusions. I have a strong distaste for insulting, ad hominem argumentation. It’s rude and it’s not even smart. It’s just lazy. We would generate more genuine (as in, not manufactured) consensus and change more minds if we listened with intent to understand rather than expressly to debunk. Many people have trouble trusting their fellow Americans’ best intentions, likely because they’ve been treated with condescension, derision, and suspicion. This is a very regrettable problem. Ridicule is not a solution.

*Re: *Why don’t I recommend a clear course of action based upon my synthesis? Humility stopped me short of claiming to know how to deal with this paradigmatic problem. I simply do not know. I just wanted to lay out some facts and foundational information so people can make their own informed decisions and not go into the misinformation pandemic blindly.

For further reading:

Yonder: Factions: Mapping The Internet’s New Dynamics (by yours truly!!!)

Tristan Harris Presentation: A Call to Minimize Distraction & Respect Users’ Attention

Stanford: The Data That Turned the World Upside Down

The Atlantic: The Billion Dollar Disinformation Campaign to Reelect the President

Yale: Computational Propaganda: If You Make It Trend, You Make It True

Book: The Invisible Brand: Marketing in the Age of Automation, Big Data, and Machine Learning

The Atlantic: The Dark Psychology of Social Networks

TED: How a Handful of Tech Companies Control Billions of Minds Every Day

Wired: Our Minds Have Been Hijacked by Our Phones. Tristan Harris Wants to Rescue Them

Rolling Stone: That Uplifting Tweet You Just Shared? A Russian Troll Sent It

Ribbon Farm: Mediating Consent

Ribbon Farm: The Digital Maginot Line

Medium: Propaganda-lytics & Weaponized Shadow Tracking

LinkedIn: Everything Is a Marketing Campaign Now, Even Policy Ideas

New Yorker: The Unimaginable Reality of American Concentration Camps

NY Times: What We Know About Russian Disinformation

Just Security: Information Operations are a Cybersecurity Problem: Toward a New Strategic Paradigm to Combat Disinformation

The Guardian: Cambridge Analytica a Year On: A Lesson in Institutional Failure

Vice: The Bots That Are Changing Politics

NY Times: He Combs the Web for Russian Bots. That Makes Him a Target

Wired: Facebook Exposed 87 Million Users to Cambridge Analytica

The Atlantic: The Grim Conclusions of the Largest-Ever Study of Fake News

arXiv: Debunking in a World of Tribes

CCW: User-Generated Content Gone Awry: Starbucks’ Best and Worst Fake Campaigns

AND OF COURSE!!!! Mindf*ck: Cambridge Analytica and the Plot to Break America (If you read nothing else in this list, read this.)
