Elizabeth Dwoskin On The Reckoning of Silicon Valley
No one is better equipped to help us understand the perils and promise of what is happening in Silicon Valley than Elizabeth Dwoskin. As The Washington Post’s Silicon Valley correspondent since 2016—before which she was The Wall Street Journal’s big data and artificial intelligence reporter—she has become the most penetrating observer and critic of the tech scene. She has broken crucial stories on data-collection abuses, online conspiracies, Russian operatives’ use of social media to influence the 2016 election, gender bias in the tech world, Instagram as a vehicle for drug dealing, and much more. Dwoskin may even be one of the most important investigative journalists of our era, because she is relentlessly and insightfully tracking the forces that have the potential to dramatically change the fate of our species.
In her keynote address at Bioneers 2018, Dwoskin discussed the Facebook monopoly, the harmful effects of social media on society and its users, and Silicon Valley’s shady and reckless business dealings and handling of private data.
ELIZABETH DWOSKIN:
On the evening of September 19, Bailey Richardson logged onto Instagram for the last time. “The time has come for me to delete my Instagram,” she wrote to 20,000 followers. “No reason in particular. My brain just needs more space. Thanks for all the kindnesses over the years.”
On that day, Richardson joined the 68% of Americans who, according to Pew, have either quit or taken a break from social media this year. They’ve done so to protect their privacy or to relieve themselves from the pressures of quasi-obligatory exposure and connection. As Richardson discovered, using a product simply because you feel you can’t leave isn’t exactly a healthy reason to stay.
Now, many Americans have made a similar decision, but Richardson’s choice is particularly poignant because she was one of the creators of Instagram. She was among a handful of early employees drawn to what was then an indie platform for artists, photographers, and hipsters who wanted to share the beautiful things they discovered about the world. She ran the startup’s official @Instagram account and organized the first in-person InstaMeets in places as far-flung as Moscow and North Korea.
In retrospect, Instagram and Snapchat were the end of an era. They were the last social media platforms that could explode on the scene in an age of Facebook’s dominance. Today, investors think it would be impossible to build something similar. Facebook would buy it, as it did Instagram, overpower it, as it is doing to Snapchat, or the mere presence of the social giant would dissuade people from building social products, as is happening across the Valley right now. This reality raises profound questions about monopolies, ones that are far from resolved by our legal system or by policymakers.
The Social Media Reckoning
Monopolies have historically been defined by the ability to undercut real or potential competition by charging lower prices than anyone else. But by that definition, Facebook will never be a monopoly. Why? Because its products are free. There are no lower prices to charge.
As the scholar Tim Wu argues, perhaps our laws need to evolve to encompass a broader definition of what constitutes a monopoly. Essentially: what does it mean to be too powerful?
The question of power is at the core of my mission as a journalist in an age of tech giants. I don’t need to tell you that it’s a year of reckoning for Silicon Valley.
It’s also a year of reckoning for all of us who have come to rely on its products for connection, expression, shopping, learning, entertainment, bill paying; the list goes on. We’re questioning the effects of technology on our health, on our democracy, on our community, on our attention, and on our time. Ultimately we’re asking whether these products are good for the world. I want to probe that question from different angles, and to ask how we got here. But first let’s go back to Bailey Richardson.
Bailey let me tell her story because she wanted to make the case for something better. She critiqued the celebrity-driven, influencer culture that transformed the platform from a place for niche connection into the digital QVC. Some might say this is the natural evolution of technology. It’s human nature to compare yourself to others and to ogle at beautiful things. But her critique also extended to the role that technologists play in engendering these problems, the minute but impactful engineering decisions and choices that arise from a culture that is hyper-focused on growth and on commanding attention, often at the cost of well-being.
Today, this growth ethos that was pioneered at Facebook is ubiquitous throughout the tech industry. It’s embodied by the fact that most companies actually have a growth team, often staffed by former Facebook people. The result is a system iterated over and over again, expertly designed to hook you in. You know how it is. You stay off for a little while, but then you get a notification that you were tagged in a photo. You’re telling me there’s a photo of me out there but I can’t see it unless I log in? As it turns out, photo tagging became one of the most effective psychological luring tactics in the history of Silicon Valley.
Another one is the so-called recommendation engine, orchestrated so that when you click on one image of a hedgehog you get 1,000 more: countless nudges and hooks, infinite micro-decisions that in my view comprise an untold history of Silicon Valley. Because you are not just the product, as has become popular to say; you’re also the guinea pig for one of the largest psychological experiments ever created. You might even say you liked it, at least at one point. We can all remember a time when social media felt more meaningful. And for many there are still pockets of meaning. As Bailey pointed out, there are cheap likes and there are deeper ones; there are quick hits of dopamine and there are deeper, serotonin-infused states of contentment.
The question is: When does one bleed into the other? Robert Lustig, one of the doctors who proved that sugar was addictive, is today seeking to demonstrate that excessive technology use lights up the same destructive pathways in the brain as sugar does. Though the science is still evolving on that, one of the more interesting avenues in my reporting is to understand how people conceived of all of this at the time.
Tech companies talk a lot about the term engagement — a click, a like, a share, that’s all engagement. And notions of engagement were viewed through very circular logic. If a person engages, well then they must like it, and therefore, if you do things to induce them to engage, if you tweak and test your way to hyper growth and hyper engagement, it’s all okay. Everyone’s making a free choice here. But at some point, finding out what people liked became secondary to just making the numbers go up. Facebook even turned that notion of growth into a mission statement to connect the world. But underneath this dogma is something more insidious, a business model that needs your attention, and I believe that has led us to where we are today.
Consequences of Hyper-Connectedness
Let’s look at some of the more recent consequences, starting with the 2016 election. Entrepreneurs looking for fast cash masqueraded as clickbaiting news outlets to draw Facebook’s users to their sites, where they could make money showing ads. Some of the content they pumped out had more viewers than The Washington Post. Russia was building up its own influence campaign at the time, but Facebook had not yet discovered the extent of it. Even before the election, false news was being flagged by employees, but it was dismissed as a problem by higher-ups, including Mark Zuckerberg. At least one powerful executive, the one who sat among the supporters behind Judge Kavanaugh in his Congressional hearings a few weeks ago, argued that they should take very limited action, because a lot of the sites that trafficked in sensationalism appeared to be more right-leaning, and they did not want to risk appearing biased against the right.
Another flashpoint was when Facebook launched live video. They were racing to catch up with trendy live-streaming apps like Meerkat and Periscope. According to reporting by my counterparts at The Wall Street Journal, executives knew that users might start committing acts of violence on camera, and that in all likelihood they would. But they considered that collateral damage in the mission to create a product that the world would use.
The deadly violence in Charlottesville was another flashpoint. One question is whether tech companies hold any responsibility for the tragic events. Far-right groups used tech platforms, from Facebook, to WhatsApp, to lesser-known apps like Discord, to organize the march. Tech companies are actually moving away from the scrolling news feeds that we have become used to, and are starting to emphasize private messaging in closed communities that are not visible to the broader public.
But let’s not forget how these recommendation engines work. You join one group, and Facebook’s algorithm shows you similar groups to join: groups joined by people who are similar to you, or by people in the group you already joined. So you join one extremist group, and now you’re in an ugly extremist echo chamber that software designers maybe didn’t create but have certainly amplified.
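To make that mechanism concrete, here is a minimal, purely illustrative sketch of a “people like you joined these groups” recommender in Python. The group names, the membership data, and the scoring are all invented for illustration; this is a general assumption about how co-membership recommendations tend to work, not Facebook’s actual system.

```python
# Toy co-membership recommender: suggest groups favored by people who already
# share a group with you. Purely illustrative; all names and data are invented.
from collections import Counter

# Hypothetical membership data: user -> set of groups they belong to.
memberships = {
    "alice": {"urban_gardening", "fringe_politics"},
    "bob":   {"fringe_politics", "militia_prep"},
    "carol": {"fringe_politics", "militia_prep", "conspiracy_talk"},
    "dave":  {"urban_gardening", "bird_watching"},
}

def recommend_groups(user, memberships, top_n=2):
    """Rank groups by how many of the user's co-members belong to them."""
    my_groups = memberships[user]
    votes = Counter()
    for other, groups in memberships.items():
        if other == user or not (groups & my_groups):
            continue  # only people who share at least one group with you count
        for g in groups - my_groups:
            votes[g] += 1  # each co-member pulls you toward their other groups
    return [g for g, _ in votes.most_common(top_n)]

# Joining a single fringe group is enough to get steered toward adjacent ones.
print(recommend_groups("alice", memberships))  # ['militia_prep', 'conspiracy_talk']
```

The point of the sketch is the echo-chamber dynamic described above: nothing in the scoring asks what the groups are about, only who else joined them.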
The uncanny ability of the Internet, and particularly the smartphone, to profile you and find you wherever you are means that once you’re in a certain bucket, you’re likely to be pummeled with similar messages. That is the perfect influence machine, according to psychologists. Unlike Facebook’s public feed, WhatsApp is end-to-end encrypted, digitally scrambled so that even the company that owns the app can’t read the content, and closed platforms like Discord are similarly hard to see into. Facebook’s former security chief told me recently that these services are effectively a lost cause when it comes to tracking bad stuff. And perhaps this is convenient for companies in a certain sense: you’re less at fault if it’s impossible to know what really happened.
Then there’s the big privacy law that just passed in Europe. You may have heard of it; it’s called GDPR. This law changes the whole way Europeans regulate privacy, and one little-known part of it requires companies to delete far more records than they have until now. Until now, the philosophy has been to save everything, data-mine it, and use it to predict your preferences. But now that GDPR is in place and companies are required to delete, it’s going to be even harder for them to stay on top of what really happens on their platforms. The records simply won’t exist.
So you see, tech companies are in a Catch-22. Society is urgently telling them to protect people’s privacy, but we are also telling them to have complete visibility, to police content ever more aggressively, and to make judgment calls about its nature. At some point, they will have to choose, and the question is: What choices, as a society, do we push them into?
More recently, the broadcaster Alex Jones was at the center of these free speech and censorship debates. Jones, who has claimed the Sandy Hook school massacre was a hoax, had one of the largest Facebook followings of any publisher. Facebook, Twitter, YouTube: for years they all debated what action to take, because, guess what, putting something false or psychologically harmful on a social network is not against the rules. Even when a Sandy Hook mother experienced trauma and death threats because of these accusations, the companies all said that Jones had done nothing wrong.
Interestingly enough, the exception to this painful decision-making may soon be Twitter, which recently introduced rules prohibiting content that results in real-world harm. Twitter has been around for 12 years and only just instituted this policy. It’s the only company to do so, and it’s going to be really, really hard to enforce.
We think of algorithms as enormously complex, but in many ways it’s their simplicity that has amplified social problems. After the Parkland school massacre, a conspiracy theory about one of the victims shot up to become one of the top recommended videos on YouTube. The recommendation algorithm is the culprit again. When one group seems to be unusually engaged in a certain topic, the algorithm looks at that and says, wow, that’s interesting, I’d better show it to more people. The next thing you know, YouTube is recommending brutal conspiracy theories to people who would never have searched for them.
There’s been almost no common-sense judgment applied to ask why people might be feverishly interested in a topic, or whether that subject is worth promoting. These are the questions journalists ask every single day.
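The Parkland example follows from the same kind of logic. A minimal sketch of a naive “promote whatever is spiking” ranker, again in Python with invented video names and view counts, shows how a sudden burst of engagement from a small, feverish audience can outrank everything else. This is an illustration of the general idea, not YouTube’s actual algorithm.

```python
# Toy trending ranker: reward relative growth in engagement, with no judgment
# about why the spike happened. Purely illustrative; all data is invented.

videos = {
    # hypothetical counts: (views yesterday, views today)
    "local_news_report": (9_000, 11_000),
    "cat_compilation":   (50_000, 55_000),
    "crisis_actor_hoax": (200, 40_000),   # a small group piles on overnight
}

def trending_score(yesterday, today):
    """Naive spike detector: relative growth, completely content-blind."""
    return (today - yesterday) / max(yesterday, 1)

ranked = sorted(videos, key=lambda v: trending_score(*videos[v]), reverse=True)
print(ranked)  # ['crisis_actor_hoax', 'local_news_report', 'cat_compilation']
```

The conspiracy video wins not because anyone decided to promote it, but because the score only measures how fast attention is growing.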
When I think of these incidents, I think of living through them as a journalist: banging my head against a wall, trying to push these powerful, opaque companies to say anything that has a ring of truth, to disclose more information. In these moments I think about how important it is to retain my own sense of shock, because it’s very easy in this line of work to get numbed, and most journalists develop a heard-it-all-before reflex. But for me, staying in touch with my feelings and with myself as a human, even before I’m a journalist, is the most important driver.
The Wake-Up Call
There are so many places to start the story of how things began to unravel; I haven’t even gotten to the Russian ads. For me, though, the Russian ads are the most poignant moment, because they unleashed a wake-up call for the companies, and for Congress as well. This wake-up call could be a game changer, because the lack of regulation that tech has benefitted from is now being called into question.
The Post broke the story of the Russian ads on Facebook, and on Google too. From there we moved to Twitter, where we showed how powerful, influential Americans were duped into retweeting content from Russian impersonators, often accounts they disagreed with and thought they were fighting with, except the fights were fake. Almost exactly a year ago, Facebook disclosed thousands of Russian ads seen by what the company originally said were 10 million people. But that 10 million just seemed off. How could it be so few?
Researcher Jonathan Albright felt that Facebook’s disclosure was only a partial truth. He knew that the goal of Facebook ads isn’t just to sell products; it’s to lure people into liking your page, in effect befriending your brand, so that you can send them more content, this time for free. Albright demonstrated that the Russians were reaching a far greater audience with free content that piggybacked off their ads, something Facebook had not discussed. After a lot of pressure, the company admitted that it wasn’t just the 10 million who saw the ads; it was nearly 90 million Americans. That means close to half of all Facebook users in the United States were exposed to Russian content. And the company only admitted this late the night before it was dragged into Congress to testify. If I’m a little cynical, you can understand why.
In this arc, we’ve gone from the highs of tech companies taking credit for the pro-democratic uprisings in the Middle East in 2011 to the lows of Russian meddling and Cambridge Analytica today. But even during the highs, the seeds of hyper-growth were being planted. They became part of the DNA. Can you change your DNA? It’s one of the biggest questions I’m asking this year.
With Congressional hearings and potential regulation on the horizon, Silicon Valley has been in react mode. Facebook has hired 20,000 new people in the last two years to review problematic content around the world. It has built AI to detect fake and spammy accounts. And it is making it a lot harder to game the ad system.
There are also, interestingly, a lot of under-the-radar internal projects taking on the problem of unhealthy tech use. You can think of this effort right now as falling into two camps. On the one hand, there are those who want to nudge you, telling you how much time you’ve spent and trusting you to make an informed decision about how you use technology. Tim Kendall, the former Facebook executive who’s become an anti-addiction crusader, says this approach is like telling an alcoholic to stop drinking because they drank too much last night. His anti-tech-addiction app Moment is worth checking out.
Silicon Valley’s attempts to rectify these problems are also creating a lot of downstream consequences in politics. A few weeks ago, Facebook purged about 800 political accounts and pages run by Americans. The company told us that these pages were being operated by profit-driven spammers, and that it was taking down their content not for what they said, not for their speech, but because they were spammers. But when I actually talked to some of the people whose pages got deleted, the story was much more complicated. They said, “Wait, wait, we’re real Americans; we’re not Russians; we’re not even following their playbook. We’re not spammers, we’re just doing what political activists are doing every day online.” And Facebook’s decision is driving a stake through the heart of what online organizing means today.
They make a strong case, one that affects the left and the right alike. And their case is strengthened by the fact that Facebook won’t tell you exactly what behavior crossed the line. That means everyone, including me, is left in a guessing game as to the nature of truth.
We’ve reached a tipping point, and I have little doubt that change will come. The defectors, as I call people like Bailey Richardson and Tim Kendall, are more important in shifting the thinking in Silicon Valley than you realize, because, as Facebook and the Russians both discovered, people are most influenced to change their ideas when the push comes from within their own social network. But it’s more likely, I think, that change will come from outside forces: lawsuits, state attorneys general, regulators in the US and abroad, and politicians from both sides of the aisle and from across the pond who are increasingly demonizing tech, sometimes in ways that go way, way too far. State-level lawsuits are particularly important because they sidestep the broken political process at the federal level, and discovery in a lawsuit matters because it may give clues to people’s mindsets and intent, and that’s why tech companies are fighting them hard right now.
For my part, I will go back to my desk tomorrow morning, I’ll get my coffee and prepare to spend the day confronting companies that are wealthier and more powerful than nation states. I will try to understand the consequences of their actions, intended and unintended, consequences society has not seen before and does not have rules or frameworks to understand. There’s a dizzying number of things I can pay attention to. I can search for more Russian activity ahead of the midterms next week, I can look for more Baileys, more political organizers, more evidence of tech addiction. It’s not hard to keep your compass pointing north. I know where to look, it’s just that there is so much to look at.