He Did PR for Zuckerberg, Musk, and Google. Now He Says He ‘Only Told Half the Story’
Thirty thousand feet in the air, Mark Zuckerberg turned to his speechwriter. The duo were flying in Zuckerberg’s jet to the United Nations General Assembly in New York, where the Facebook boss was scheduled to address world leaders. Zuckerberg had a question for his companion.
“Wait, what exactly is the U.N.?”
Dex Hunter-Torricke had to hide his surprise. Zuckerberg was, by this point in 2015, the head of a company that was reshaping politics and societies around the world, with 1.5 billion users and counting. Yet according to his speechwriter, he appeared to be fuzzy about key elements of the global order. “It was astonishing,” Hunter-Torricke recalls. (A spokesperson for Meta, Facebook’s parent company, declined to comment.)
In 15 years as a communications executive, Hunter-Torricke collaborated with Zuckerberg, managed crises for Elon Musk at SpaceX, and, most recently, helped Google’s AI division, DeepMind, craft its public messaging during the tumultuous years after ChatGPT’s release. He spun optimistic stories about how tech giants were leading the world toward a brighter future. At some point, though, Hunter-Torricke came to believe there was another side to the gauzy narratives.
In late 2025, Hunter-Torricke announced he had quit Google DeepMind to launch a new career dedicated to reining in the industry he had been paid handsomely to promote. Over the course of a series of interviews with TIME—his first since leaving Big Tech—he leveled a sharp critique of his former employers and the world they have built. “I only told half the story,” he says, sipping on peppermint tea in a London restaurant. “That was something which, personally, I consider a failing.”
The tech industry, Hunter-Torricke now says, is driving the world toward a precipice. Vastly more powerful AI systems will arrive within the next decade, he believes, and will likely lead to mass job losses, geopolitical upheaval, and damage to the environment. His former bosses have no strategies for managing any of this, he says. “There is no plan. I do not believe for a second that winging it through the biggest economic and technological transition in human history is a responsible way to do things.” Tech leaders have lobbied against major regulations, and argued publicly that AI will unlock medical discoveries, empower individuals, and boost economic growth. But Hunter-Torricke thinks they are downplaying the chances that it will instead erode democracy, increase inequality, and worsen climate change, all while handing an elite group of tech execs unprecedented wealth and power. “We are sleepwalking into disaster,” he says.
A Google DeepMind spokesperson declined to provide quotes on the record for this story, but contested Hunter-Torricke’s central argument. DeepMind, they said, has always told a balanced story about AI that takes both its risks and benefits into account. They added that the company’s mission is to build advanced AI safely and for the benefit of all humanity. Emails to Musk’s companies seeking comment went unanswered.
Hunter-Torricke’s next act, he says, will be an attempt to build a global movement to challenge Big Tech’s control over the future, and to come up with the missing plan for managing AI’s impact on jobs, geopolitics, and the climate. He will coordinate that work from the Center for Tomorrow, a nonprofit he launched on Feb. 17. Some of its seed funding comes from Tom Hunter, a wealthy businessman whose niece married Hunter-Torricke in 2019; Hunter-Torricke declines to disclose the amount. He is also thinking about running to become a member of parliament in the U.K., where he is based, on a platform that channels his ideas about the tech industry and the problems facing the world, which he elaborates on in a book he has been writing. The world’s AI companies and Davos-going elites, Hunter-Torricke says, have “had their time” to solve the problems he describes. “They failed comprehensively.”
It’s easy to be skeptical of a tech insider’s self-styled reinvention as a truthteller. Hunter-Torricke spent more than a decade helping Big Tech craft narratives that he now says were misleading, and is himself one of the wealthy elites he criticizes. Perhaps, as he argues, he is switching sides because the situation has become impossible to fix from the inside. But it’s also true that the transformation from mouthpiece to critic is a familiar turn for those wanting to jump-start a new phase in their careers. After remaining silent about his concerns for years, he is choosing to share them publicly at an opportune time: to coincide with the launch of his nonprofit, and possibly a career as a politician or media talking head. And while he’s willing to burn bridges with his former employers, Hunter-Torricke is not a whistleblower; he deftly skirts around questions that might elicit private information belonging to his former employers.
Hunter-Torricke rejects these criticisms. “Sure, people can choose to say, ‘Screw this guy, he worked for some terrible people,’” he says. “I agree, these people are terrible. I got out of working for them. Ultimately, wouldn’t you want somebody who understands how they operate?”
The rain was coming down at Bletchley Park. It was Nov. 2023, and world leaders and AI companies had traveled to this stately home in the countryside near London for the world’s first international AI summit. The location was intentional: here, eight decades ago, the mathematician Alan Turing helped design codebreaking machines that cracked Nazi ciphers, work that was pivotal in helping Allied forces win the Second World War.
Now, a year after the release of ChatGPT, anxious officials hoped another historic moment might occur: a landmark agreement that would curtail the worst competitive pressures of the AI race and mitigate the disruptions they saw coming in the near future.
Hunter-Torricke was one of those insiders with high hopes for the summit. He had spent months working on the event for Google DeepMind. But as the gathering drew to a close, he felt deep disappointment. Instead of seriously grappling with the mammoth implications of AI for societies, he felt, the tech companies had pushed to focus the summit on speculative and technical risks, narrowing the scope of the document they produced to a few nonbinding, technocratic agreements. “It felt deeply, deeply anti-climactic,” Hunter-Torricke says of the summit. “I felt it was such a missed opportunity.” He abandoned the warmth of the press tent and walked over to a hut on the grounds where Turing had worked during the war. This was the place where it all began, he thought, back before computing became an industry worth trillions, racing toward a destination that nobody fully understood.
Hunter-Torricke was frustrated by governments, which seemed to have failed to grasp the severity of the occasion. He was frustrated by tech companies, including Google, which had dangled the economic opportunity AI offered in front of state officials, while arguing that it was too early to implement onerous regulations. (A Google spokesperson said the summit resulted in important conversations about AI safety, and introduced frameworks that the company continues to build upon today.) And he was frustrated with himself, realizing he had been taken in by the idealistic missions of tech companies, which pledged to use their transformative power to make the world better for everyone. Now he had a gnawing sense that those statements were a veil disguising a pursuit of hard power.
Hunter-Torricke was born in London in 1985 to working-class parents. His mother was a nurse from Malaysia who worked in Britain’s National Health Service. His father was a refugee from Burma, a stateless child who fled the Japanese invasion of his home country during the Second World War, ending up in British India. He witnessed the horrors of partition before ultimately making it to the U.K., where he suffered racial abuse. In a eulogy at his father’s funeral in 2015, Hunter-Torricke spoke of how this shaped his own worldview. “I learned that to give everyone the chance to become the person they could be,” he said, “we need to build a different world.”
His first job was at the United Nations—an institution he saw as deeply flawed and in need of well-meaning people to reform it from within. In 2011, he moved to the tech industry, taking a job as a speechwriter for Google’s senior executives, attracted by the company’s lofty mission to synthesize all the world’s information. He bought into the great optimism swirling around technology at that time. The internet seemed to be bringing a wave of democratic revolutions to the Middle East. Google’s official motto was still: “Don’t be evil.”
It was around the time of his father’s funeral that Hunter-Torricke began growing more skeptical of his employers. Now at Facebook, the son of a refugee was riveted by the global refugee crisis. Two weeks before flying with Zuckerberg to the UN, Hunter-Torricke was sitting at his desk in Facebook’s California headquarters. On his computer monitor was a photograph of a 2-year-old boy, Alan Kurdi, lying dead in shallow seawater on a beach in Turkey—a heartbreaking photo that went viral in the summer of 2015. Hunter-Torricke stared at the dead child and thought of his father. He looked around Facebook’s bustling office and suspected most of his colleagues did not share his concerns about the state of the world. (A Meta spokesperson declined to comment.)
From this perch in Silicon Valley, it was easy to believe that, whatever the news might say, life was getting steadily better. This view had become something of a religion among tech elites, including Hunter-Torricke’s boss, who had recently recommended the canonical text on the matter, Steven Pinker’s The Better Angels of Our Nature, as part of his public book club. “I argued with Zuckerberg about that,” Hunter-Torricke says. “Because, for me, that entire view is flimsily reasoned.” He told Zuckerberg that it did not engage with the “deep frailness” of the various institutions and systems underpinning human prosperity. Around the world, states were disintegrating, climate systems were approaching tipping points, economies were stagnating, and inequality was rising.
Despite his misgivings, he continued collecting his paycheck. “I made the calculus that on balance, it was better to be within the system trying to improve it,” Hunter-Torricke says. On his office wall, he hung a page of a newspaper from 1849: a report on the Paris Peace Congress of that year, where some of the world’s leading bourgeois intellectuals promoted the view that war was now a thing of the past, and that humanity might finally be on a path to a peaceful future. Of course, they were wrong; the horrors of the twentieth century were yet to come. The print served as a reminder for him that, whatever his colleagues might say, historical progress was neither linear nor inevitable.
In 2023, Hunter-Torricke moved to Google DeepMind. Before long he faced similar frustrations. He was fond of his boss, DeepMind CEO Demis Hassabis, who had managed to get Google to devote significant resources to public goods, like AI that could solve the elusive protein-folding problem. But as artificial general intelligence (AGI) came into view—the idea of a transformatively powerful technology that could surpass human geniuses at most tasks—Hunter-Torricke grew convinced industry executives weren’t girding for the level of disruption the technology would bring.
“Demis and a whole bunch of the AI leaders from across the frontier labs have been using examples like: we invent [nuclear] fusion, we go to the stars, we discover new materials with AI, we cure all diseases,” Hunter-Torricke says. He thought this seemed to be the extent of their planning. The way tech companies were managing the early phase of the transition didn’t inspire confidence. “The war in Gaza was playing out every single day on screens, while I was also expected to sit in conference rooms with people who just said, ‘AI is going to make the world a better place,’” Hunter-Torricke says. Raising concerns internally about Google’s business relationship with Israel’s government, he says, was “essentially off the table.” (A Google spokesperson said that this is false, and that Google maintains an environment where employees are welcome to express their views.) But Hunter-Torricke was angry that the wider AI industry, in his view, had failed to connect its abstract talk of “AI risks” to what was playing out in the Middle East. “It is one of the most morally indefensible things I’ve seen in my lifetime,” Hunter-Torricke says. “At one of the early great killings of the 21st century, the industry has simply chosen to put its fingers in its ears and close its eyes.”
In Oct. 2024, Hunter-Torricke finally decided to leave DeepMind. He told colleagues he planned to write a book during the year-long noncompete period that kept him off the job market. There was no particular breaking point, he says—just the realization, a long time coming, that the ideals that had attracted him to the industry did not match the reality. Trump’s re-election only made him more confident in his decision. Companies are now “choosing to get in bed with the Trump Administration, and to silence themselves in order to not ruffle feathers with nationalists and with people who have brought some terrible, destructive things to the world.” That, he adds, “is a sign of how unmoored the industry has become from the values that we need to responsibly manage AI.”
Hunter-Torricke is only willing to go so far in his critique. As DeepMind’s former communications head, he had a rare view into the company’s inner workings. But he declines, when pressed, to discuss specific details; he signed a non-disclosure agreement that he has no intention of breaking. Beyond carefully selected anecdotes, his sharpest criticisms are reserved for systemic issues, not specific people or companies. Though he lambasts AI leaders as a group, he singles out Hassabis, the DeepMind CEO, for praise. “I think he personally has been very responsible as a leader,” he says. “Sure, he doesn’t have a blueprint for how to get to a post-scarcity economy. But that’s my argument: he probably shouldn’t. The choice shouldn’t belong to him, and it shouldn’t belong to the [AI] labs. It needs to be something that’s designed elsewhere.”
Here, Hunter-Torricke launches into the pitch for his new London-based nonprofit, the Center for Tomorrow. The organization will focus on research into “the big unanswered questions” that societies urgently need to address: how to adapt to job losses due to AI, finding paths for countries to work together in a climate of increasing geopolitical chaos, and safeguarding human rights and liberal democracy. It will aim to convene policymakers, business leaders, and civil society to come up with “practical policy solutions” that create a bridge between AI progress and “the human, political, and institutional innovations [that] societies need.” And it will devise ways of getting messages across to the public, too, in an effort to build political will for its favored policies.
It is an ambitious agenda. But it is still a fuzzy one. When pressed on what a new economic system for the age of AGI might look like, Hunter-Torricke hesitates. “I don’t know the answer to 99% of these things,” he says. “How could I possibly? This is the hardest transition in history.” He criticizes the idea of universal basic income—an oft-proposed solution to AI’s impact on jobs—as unworkable at scale, but offers no alternative. His new nonprofit, he says, will “mobilize experts” to come up with the answers. One lesson he took from the 19th-century newspaper that hung in his California office was that elites are bound to fail if they simply cook up solutions to the world’s problems in isolation. But that means the Center for Tomorrow’s grand plan, at least for now, is mostly a plan to make a plan.
That newspaper now hangs at Hunter-Torricke’s home in London. A close reading reveals that he may have more in common with the liberal intellectuals of 19th-century Europe than he would like to believe. They were not, in fact, naively optimistic that war would soon be a thing of the past. Instead they were keenly aware of, and in despair about, their nations continuing to arm themselves against one another. For all their efforts to convene well-meaning elites, the organizers of the Paris Peace Congress remained powerless to stop the relentless logic of an arms race playing out beyond their control.
“No men are so difficult to teach as professional statesmen,” one attendee lamented. “Do you suppose, then, that they will listen readily to the advice of this Congress? On the contrary, they are at this moment laughing at us as Utopists, theorists, and dreamers.”
