Note: This essay was originally published on Medium in August 2020.
In late 2017, after six years in Rwanda building a cloud software company for the region’s volatile employment market and managing the communications, technology, and security posture of a university network, I decided to come home and focus on the most important issues of the decade: privacy and responsible data stewardship. I had been paying close attention to the Cambridge Analytica scandal that placed Facebook squarely in the crosshairs of public scrutiny. It felt especially personal to me; as a U.S. citizen born in Russia, I followed the tangle of revelations emerging from whistleblower testimonies, Senate hearings, and Justice Department investigations with embarrassment, and with determination to carve out my role in the future of digital rights.
I was shocked, and candidly intrigued, by the accuracy of Cambridge Analytica’s data model, built on information harvested from 87 million Facebook profiles through a seemingly innocuous personality quiz that had gone viral. As I dug deeper, I learned that with 200 Likes, a machine learning model could predict whether a person would cheat on or divorce his partner more accurately than his closest and most intimate friends could, because we filter how we present ourselves to friends but do not filter our “private” Facebook activity. Our Facebook profiles therefore provide an unusually complete behavioral record from which to extrapolate with precision. Add targeted psychological operations and predictive, AI-enabled analytics to the statistical mix, and the attacker knows exactly whom to microtarget and how to prime them. Finally, by overwhelming the information space around the target, the attacker, campaigner, or unscrupulous advertiser can orchestrate the abundance of motivated evidence needed to manufacture consensus and manipulate action.
Free speech is not as simple as it was when societies enjoyed a physical town square and every person had a voice and a verifiable identity. Now we are up against the machines we built. Does an AI-enabled data model have the right to free speech, too? This is the question that inspired my inquiry into digital privacy and prompted the shift in my professional focus.
The scope and capabilities of advertising technology, addictive user interface design, and communication tools that incentivize and reward outrage are quite literally beyond the grasp of human understanding. We are governed by our species’ evolutionary predispositions, our minds’ heuristic vulnerabilities and programming errors, and our biological inability to compute large numbers the way an artificially intelligent, machine-learned statistical model can. I realized that our fallibility is a feature, not a bug, and that we must engineer technology with that feature in mind.
But it is not just individual humans who are at a disadvantage in grasping the impact and potential consequences of technological advancement without a rights-based framework to steer its capabilities. Companies, too, have hardly begun to grapple with the misinformation and manipulation problem we face. The demand for privacy experts over the next decade will far exceed the number of job openings that exist in this space today.
I have curated my career around problems that matter in an urgent and tangible way for the public good. I began in policy analysis in the U.S. Senate and in Washington think tanks. I moved into the private sector because I believe that a more just world is better influenced through the intent and practices of the businesses that transact the vast majority of our globalized interactions. And today, I believe that ethical technology is best architected proactively, through responsible software by design, rather than reactively, through regulation.
After working at National Public Radio in Washington, DC, I accepted a position at Yonder, an artificial intelligence and machine learning company in Austin, Texas, that provides narrative analytics and threat alerting about dis- and misinformation to Fortune 500 and government customers. Under its previous name, New Knowledge, the company had authored a report for the Senate Intelligence Committee on Russian disinformation operations in the 2016 Presidential election. Like me, Yonder does not believe that the digital “town square” is dead, and instead advocates for optimism and a new understanding of how information travels through digital spaces.
We face a moment of enormous possibility to write a new compact with internet users. Social media and the concept of digital rights are still in their infancy, and companies have barely scratched the surface in standing up new engineering practices and safeguards. Software design principles, ethics standards, compliance policies, and enforcement mechanisms are only now being devised. Legislation is necessary but is too low a bar. Digital citizens deserve better. Technology companies can and will emerge as opinion leaders and stewards in this space, and I want to play a part in facilitating this transformation.
I would like to bring my career full circle, combining my policy analysis background with my digital operations and program management acumen to minimize abuse and improve the online experience of digital citizens. To accomplish this at meaningful scale, I hope to build a foundation in current digital rights, technology ethics, and trust frameworks; understand their impact on society and human behavior; and examine existing regulations and risk mitigation strategies through an interdisciplinary lens that covers the full spectrum of information security challenges and opportunities. The Master of Science in Information Security and Privacy at the University of Texas at Austin offers exactly the broad-based, policy-analytical, and business-focused curriculum that would equip me with the knowledge base and managerial effectiveness to guide private sector institutions and technology companies toward a more responsible, safe, and vibrant digital world.