Project Mario: How DeepMind tried to secure independence from Google
The inside story of how DeepMind's experiments in AI safety governance transformed Demis Hassabis from an idealist into a realist

By Sebastian Mallaby · March 2026

[Illustration by Eleanor Shakespeare]

The following is an exclusive excerpt adapted from the author's new book, THE INFINITY MACHINE: Demis Hassabis, DeepMind, and the Quest for Superintelligence, out today.

In the autumn of 2015, Mustafa Suleyman embarked on a grand experiment in making AI good for society. Together with Demis Hassabis, the senior co-founder of the London-based artificial intelligence lab DeepMind, he began an extended negotiation with Google, to which they had sold their company the previous year. Suleyman was determined to ensure that powerful AI, when it emerged, would not fall under the sole sway of the parent company's shareholders.

For anyone concerned with AI safety, this saga remains relevant today. It shows what happens when, under unusually favorable conditions, a handful of leaders set out to create a control structure for a new technology.

The trigger for this experiment was the failure of DeepMind's first AGI safety board meeting. In August 2015, Hassabis and Suleyman had convened the board at SpaceX; Elon Musk hosted, with Google's leadership, Reid Hoffman, and other luminaries in attendance. The meeting produced no agreements or conclusions. Personal tensions, especially between Musk and Google CEO Larry Page, and clashing visions for AI governance overwhelmed the discussion. Worse, Musk proceeded to use what he'd learned about DeepMind's progress at the meeting to found OpenAI as a direct rival. Not only did that gathering achieve nothing; once Musk founded OpenAI as an explicitly anti-Google, anti-Hassabis venture, there was no way he could continue to watch over DeepMind's progress.

With that attempt at oversight stillborn, Suleyman in particular resolved to create an alternative arrangement. He imagined a novel, post-capitalist form of governance: one that might balance the drastic tensions of the era of AI, when the imperatives of profit, existential risk, and social justice demanded a new reconciling mechanism. As always with Suleyman, his passion was not in doubt. But the obstacles were formidable.

Suleyman was fortunate in who he had around him. A preoccupation with safety had been baked into DeepMind even before its founding: Hassabis had first bonded with Shane Legg, DeepMind's third co-founder, at a 2009 lecture in which Legg warned that superintelligent computers could develop agendas of their own and subjugate or annihilate humans. In the ensuing half dozen years, Hassabis had remained committed to the safety agenda, backing Suleyman's efforts and adding his own vivid talk about disappearing into a bunker to birth superintelligence.

Suleyman was fortunate in his parent company, too. By the standards of large enterprises, Google was remarkably open to governance experiments, having conducted several of its own.
For example, the founders had awarded themselves super-voting shares on the theory that this would allow them to stand up for the company motto, "Don't Be Evil." Moreover, at the time when Suleyman embarked on his safety mission, DeepMind was the world's top AI lab, and its strongest rival, Google Brain, which included the researchers who would invent the transformer, was part of the same company. Suleyman and his interlocutors were therefore in a privileged position. If they could solve AI governance internally, they would go much of the way to solving it, period.

The first potential replacement for the SpaceX oversight group landed in Suleyman's lap, without him having to do anything. In 2015, Google decided to restructure itself, spinning out specialist chunks of its operation as semi-independent "bets," and creating a holding company called Alphabet to preside over them. In a conversation shortly before the SpaceX gathering, Google's M&A chief, Don Harrison, had suggested to Hassabis and Suleyman that they could regain their independence via this route. The new, liberated DeepMind would have a so-called 3-3-3 board: three people from DeepMind; three people from Alphabet; and three independent members. DeepMind's leaders, fond of secretive code names, dubbed the ensuing governance talks "Project Mario."

Google's proposal had an operational and a financial logic. On the operational side, Larry Page worried that Google was growing unwieldy. It was hard to manage a money-gusher like the online ad business under the same roof as a pre-revenue moonshot such as DeepMind. On the financial side, Google reasoned that hiving off cash-burning ventures would boost the profits of the mothership, resulting in a much higher stock price. To Hassabis and Suleyman, the commercial logic of the Alphabet plan was all to the good. The 3-3-3 board structure would give them a strong say over the deployment of AGI and bring in credible independent directors. If the plan also served to boost Google's share price, that was a good reason to assume that it might actually be implemented.

The governance talks got underway in the first half of 2016. Hassabis met Page to go over the details on four occasions, and together with Suleyman he set about planning the revenue streams that would sustain DeepMind in its independence. Suleyman launched DeepMind Health, believing that, after a few years of pro bono work, DeepMind would earn a lucrative share of the savings that AI generated for hospitals. Hassabis, for his part, assembled a secretive hedge-fund operation within DeepMind. He recruited a team of some 20 researchers to train high-frequency trading algorithms, and explored a collaboration with the Wall Street behemoth BlackRock. It was not a project of which Google approved. But Hassabis, a five-time World Games Champion at the international Mind Sports Olympiad, hoped he'd found another game that he could win.

One day I asked about the story of this trading project. I was told that Hassabis wanted to beat Jim Simons, the mathematician who founded the wildly successful algorithmic hedge fund Renaissance Technologies. "Rentec operated in secret, which Demis loved," my acquaintance explained. I could see how the echoes of the Manhattan Project might appeal to Hassabis. Renaissance Technologies convened a band of scientific geniuses on a remote campus, even if its hideaway was in Long Island, not Los Alamos. Peter Brown, the longtime leader of Renaissance, was as driven as Hassabis, and slept even less.
He had a fold-down bed propped up against his office wall, and lived mainly at the office. Brown was a deep-learning pioneer who had studied under Geoffrey Hinton. Did the secret DeepMind trading team make money, I wondered? No, came the answer. Because of Google's wariness, it was quietly disbanded.

In the summer of 2016, Hassabis held his fifth round of talks with Larry Page, and the details of a DeepMind spin-out were laid down in a formal term sheet. A few months later, to ensure that everyone was on the same page, Hassabis met with the new CEO of Google, Sundar Pichai, who had assumed the top job when Page moved upstairs to head Alphabet. An engineer with an MBA from Wharton and a background as a management consultant, Pichai had a boyish grin, an affable manner, and a dislike of confrontation. His discussions with Hassabis and Suleyman were cordial but bland. Pichai was not going to rock the boat, the DeepMinders concluded.

The following week, on November 21, Hassabis and Suleyman experienced a rude awakening. Google's chief legal officer, David Drummond, showed up in London to meet them. Regarding DeepMind's AI safety and governance objectives, "everyone is in agreement," Drummond affirmed. But regarding the idea of a spin-out, there were "concerns," he added. Drummond then elaborated on a complex new formula that was not quite a spin-out but not quite the status quo, either. Hassabis and Suleyman were confused. The safety guarantees they had in mind depended on the spin-out, and on the 3-3-3 board that came with it.

Four days later, the DeepMind duo got on the phone with Pichai. This time the CEO revealed the steelier side of his personality. He said that turning DeepMind into a semi-independent Alphabet company might not be in Google's interests, after all. The "bet" option was for moonshots unrelated to Google's core business, he said—projects such as autonomous cars or the science of life extension. Artificial intelligence did not belong in that bucket. To the contrary, AI was destined to become strategically important to Google's flagship products, such as search and cloud computing. Hence the "concerns" that Drummond had mentioned.

Hassabis and Suleyman were still confused, however. It was not clear whether Pichai was slamming the door—or whether, given Page's apparently contrary position, Pichai had the authority to do so. Even David Drummond, the lawyer and bad cop, had assured them that Google favored AI safety. With a bit more pushing, Hassabis and Suleyman reckoned, they could get what they wanted.

Back in 2013, Hassabis and Suleyman had administered a particular kind of push. During the acquisition negotiations with Google, they had entertained a rival bid from Facebook. Now, at the end of 2016, they cooked up another version of Plan B. They would gather pledges of $5 billion from outside investors. If Google didn't give them the governance they wanted, they would walk out of the company.

Five billion dollars was an astronomical amount, enough to cover DeepMind's operations for more than five years. At its launch a year earlier, OpenAI had proudly claimed to have pledges of $1 billion, and even that was smoke and mirrors. But the DeepMind leaders figured that they could raise the money by appealing to safety-minded investors. The pitch would be that $5 billion could put AGI in a secure place, with credible governance.
To hammer out the details of the plan, Suleyman assembled a team of imaginative lawyers, a top-flight communications strategist, and a prominent investment banker. Together, they proposed a legal form that would underscore DeepMind's determination to do good, not just to pursue revenues. Rather than raising outside capital to launch a normal company, DeepMind would be a company "limited by guarantee"—the structure commonly used by nonprofits. The reconstituted DeepMind would issue no shares to its backers, nor would it pay dividends. Its obligation would be to the principles set out in its charter.

Hassabis and Suleyman spent hours huddled with the advisers. Not all were convinced by the walk-away option. "It was open to Alphabet to just say, well, back in your boxes, boys, we own you, you'll do what we say," one of them recalled later. DeepMind staff members were legally employed by Google; there were noncompete employment clauses, non-solicit rules about hiring people away, verbiage on who owned DeepMind's intellectual property, and so forth. Taking a hundred people out of Google on one day and starting a new company the next day would be legally messy.

The team was not put off, however. Bolstered by one of the lawyers, who was an authority on public-interest law, it was ready to assert that the British public had an interest in DeepMind breaking free from Google. The claim would be that a spin-out served the public interest by improving AI safety. Surely Google cared too much about its reputation to challenge this proposition in court? Besides, even if deserting Google involved a legal risk, the threat of desertion could be valuable.

The upshot was a two-pronged plan. If Hassabis and Suleyman could get meetings with billionaires who might invest in a Plan B, there was no harm in talking to them: Why not deepen your network with the world's top capitalists? But DeepMind would also be careful not to overplay its hand. "We never ever said to Google, unless you do this, we will leave," an adviser remembered. "The art here was to get Google to take this negotiation seriously," the adviser went on. "Google could have said, we know you aren't leaving, so why are you wasting our time? To their credit, they never did that. That was why this episode was so unusual."

Hassabis and Suleyman were in a strange place. They were attempting to conjure an unprecedented governance structure for an unprecedented technology. They were dancing with a parent company that wasn't saying yes and wasn't saying no. There was a glimmer of hope. They resolved to keep pushing.

[Photo: Hassabis photographed for Bloomberg Businessweek in London, July 2016. Immo Klink/Contour via Getty Images]

In the first days of January 2017, Hassabis and Suleyman showed up at the Asilomar Hotel, a serene seaside refuge on California's Monterey Peninsula. Almost half a century earlier, the hotel had played host to a famous conference on genetic research, which had laid the ground rules for experiments with the breakthrough technology of the 1970s. Now, following the shock of Lee Sedol's defeat by AlphaGo, Asilomar had been chosen as the venue for an analogous get-together, this time on the rules for artificial intelligence. Hassabis and Suleyman were at the conference to address safety; after all, it was their own company's feats that made the conference feel urgent.
But they also took the opportunity to discuss their walk-away idea with Reid Hoffman. Despite Hassabis's wariness of the LinkedIn founder for his role in launching OpenAI, the three remained on friendly terms. Hoffman was a good billionaire, Suleyman reckoned. Hassabis and Suleyman sat down with Hoffman and got to the point. If they broke free from Google, would Hoffman help to finance a new public-interest AI company?

Hoffman was not surprised to hear of tensions between DeepMind and its big-tech paymaster. He was seeing the same dynamic play out between OpenAI and Microsoft. Recently, Musk had flown into a fury when Microsoft had tried to turn its partnership with OpenAI into a public relations talking point. He would not let OpenAI "seem like Microsoft's marketing bitch," Musk protested.

Moreover, Hoffman applauded the idea of novel AI governance structures. He had backed OpenAI precisely because it had been founded as a nonprofit, with a charter requiring that its technology should serve society. The format had been inspired partly by DeepMind—the SpaceX gathering had been a first attempt to add a nonprofit board to a for-profit structure—but OpenAI harbored dizzying ambitions to push governance innovation further. "We're planning a way to allow wide swaths of the world to elect representatives to a new governance board," Sam Altman proclaimed, having read James Madison's notes on the Constitutional Convention for inspiration. "Because if I weren't in on this I'd be, like, Why do these fuckers get to decide what happens to me?"

However far-out Altman's ideas on global elections, Hoffman sympathized with the sentiment. AI ultimately needed some kind of nonprofit oversight with broad democratic buy-in, especially since politicians were notoriously slow to get their minds around cutting-edge technologies. Just a couple of months earlier, the United States had elected President Donald Trump, whose antiregulation instincts Hoffman regarded as anathema. Hoffman was open to backing a DeepMind walk-away, especially if it filled the governance vacuum created by do-nothing political leaders.

Hassabis and Suleyman assured Hoffman that filling the governance vacuum was exactly their plan. They elaborated on their idea for a company limited by guarantee, which they had taken to calling a global interest company. Nobody would profit from this enterprise. The global interest company would be managed with capitalist intensity, but its impact would be post-capitalist.

Hoffman had recently sold LinkedIn to Microsoft: His personal net worth stood at $3.8 billion. He was an unabashed idealist, proclaiming that he aimed to help humanity flourish—he was a grander, American version of what Suleyman aspired to be. Showing considerable courage in his convictions, Hoffman now agreed to commit more than a quarter of his wealth to DeepMind's vision of societal advance: an astonishing $1 billion. It was 100 times more than he had pledged to OpenAI, just over a year earlier.

"I said, look, this is the most impactful technology of my lifetime," Hoffman recalled. "I support the idea of an independent DeepMind with a public-interest mission. I support it for the same reasons I support OpenAI. This technology shouldn't be used to entrench a monopoly.

"Anyway, I thought that 90 percent of my wealth would flow to philanthropic causes. So I decided right then to commit $1 billion.

"I didn't tell Sam about this," Hoffman went on, referring to Altman.
"I didn't tell Greylock," he added, referring to the venture capital shop at which he was a partner. Billionaires answer to nobody.

Some of the DeepMind advisers favored seizing Hoffman's offer and proceeding with the spin-out. With a famous anchor investor in place, other capital would follow. Even if Google tried to challenge DeepMind in court, the fallout would be manageable. The prize of independence—the operational agility, the opportunity to incentivize employees with DeepMind stock—would justify the legal complications. Hassabis could see the argument. But he was leery of a drawn-out legal fight that would swallow all his energy. Spinning out as an Alphabet company would be by far the cleaner option.

To try to unstick the Alphabet process, Suleyman sought out Kent Walker at Asilomar. Walker was a top lawyer and policy strategist at Alphabet. He had attended the SpaceX safety meeting. Suleyman introduced Walker to Angela Kane, a senior United Nations official who worked on containing weapons of mass destruction. Suleyman regarded Kane as an excellent choice for the 3-3-3 oversight board—an example of the credibility that a spin-out could bring to DeepMind's mission. He also told Walker that he had sounded out Barack Obama, and he mentioned Al Gore. For good measure, Suleyman hinted that all kinds of people, some of them extremely rich, wanted DeepMind's technology to be developed under the protective gaze of a robust governance committee.

Meanwhile, Hassabis checked in with Larry Page, who was also at Asilomar. Page had always favored Alphabetization. What had changed, Hassabis wondered? Page declared that he still supported the old plan. As far as he was concerned, spinning out DeepMind remained a logical option. But the idea would require Sundar Pichai's buy-in. Seizing what seemed like an opening, Hassabis said he would visit Pichai at once. He packed his bag, left the conference early, and headed off to Mountain View. He was eager to wrestle the negotiations to a close. He was sick of back-and-forth and lawyers.

A couple of days later, on January 9, 2017, Hassabis sat down with Pichai at Google's headquarters. Suleyman dialed in from his hotel room in Asilomar. Hassabis began the conversation in a conciliatory fashion, telling Pichai that the spin-out should be designed so as to allay all Google's misgivings. He floated the idea that Pichai, Page, and Eric Schmidt could represent Alphabet on DeepMind's 3-3-3 board, with Angela Kane and other distinguished figures filling the independent seats. He added that the months of negotiation were distracting him from his responsibilities at DeepMind. He wanted to focus on science. Pichai responded in a friendly way. He sounded open to everything. There were a few details to be ironed out. Hassabis and Suleyman should resolve these with Drummond.

Hassabis and Suleyman now suffered a repeat of their November experience. Drummond showed up to meet them the next day and announced that the DeepMinders had failed to understand Pichai: He was entirely against Alphabetization. According to a DeepMind document, Hassabis and Suleyman offered "every single mechanism" to assuage Google's concerns. But Drummond was unmoved.
The talks were at a standstill. A few days later, Hassabis and Suleyman emailed Pichai. "The Alphabetization process has been dragging on for far too long (more than a year now), and it is really starting to impact our ability to manage the company," they told him. The two DeepMinders proposed to return to Mountain View "to finally resolve this"—they were willing to cancel their plans to attend the World Economic Forum in Davos. They proposed that Pichai and Drummond, as well as Larry Page and Sergey Brin, should attend the next meeting. They were tired of the good cop/bad cop seesaw.

[Photo: Suleyman in Paris, May 2018. Bloomberg via Getty Images]

When the two sides met again, the conversation underscored the gulf between them. Hassabis and Suleyman argued that DeepMind did not fit under Google's umbrella: Its mission was AGI, not consumer-internet products. Pichai objected that AI was central to his vision for Google, and that he would not allow his scientific bench to be depleted. Hassabis had hoped that Larry Page would weigh in on his side and push the Alphabet plan to a conclusion. But Page showed up for the meeting two hours late, and Sergey Brin was even later. Their version of what later came to be known as "founder mode" was that they were nowhere to be found, disproving the Silicon Valley mantra that founders deserve the right to control their companies indefinitely. With Page and Brin effectively checked out, Pichai was the man DeepMind had to deal with.

The following week, Pichai tried to break the deadlock. His goal was to preserve Google's lead in AI; alienating AI leaders was a bad way to do that. At a one-on-one dinner at his home in Silicon Valley, Pichai served Suleyman a vegetarian curry and a tasty proposal, perhaps hoping to drive a wedge between his guest and Hassabis. Rather than having all of DeepMind become a semi-independent bet, the company should split in two, Pichai now suggested. Hassabis could spin out his research operation and go after AGI—who knew if that would work, Pichai remarked, somewhat dismissively. Meanwhile, DeepMind's Applied team, which was building immediately useful algorithms in healthcare, should be folded into Google. As part of the shake-up, Suleyman would run all Google's applied AI from California.

Through the spring of 2017, Pichai's plan made grinding progress. It had its appeal: Hassabis could pursue AGI as the leader of a semi-independent spin-out; meanwhile, Suleyman could deploy practical AI, leveraging Google's global empire to distribute it. Every few weeks, Hassabis and Suleyman made the 11-hour plane trip from London to San Francisco and sat through another interminable meeting: "We would push back on stuff, they would push back on stuff," Suleyman said later. Then they would head back to the airport to re-scramble their body clocks. Small wonder that, in his dealings with his lieutenants in London, Suleyman could seem distracted and preoccupied. Small wonder that, when the transformer architecture appeared that summer, Hassabis was less alert to its potential than he might have been.

In the first week of June 2017, just about everyone on DeepMind's 500-strong staff left London on a pair of chartered jets, bound for the Scottish Highlands. The company had outgrown the conference centers in easy reach of London, so the organizers had sought out a venue with abundant space, settling on a resort called Aviemore, not far from the royal castle of Balmoral.
"If you want a lot of accommodation, there's Scotland and there are private islands," the chief planner explained. "Private islands are a bit much, I think." Notwithstanding that expression of sobriety, Aviemore's vast banquet hall was decked out with trees and foliage, like the enchanted forest of Narnia. Hassabis and Suleyman led an expedition to a go-karting racetrack. It was hard to say which founder was the more competitive.

The go-karting was not the riskiest event. At one point in the proceedings, Suleyman appeared onstage to lay out his vision for DeepMind's Applied side. He surveyed the real-world problems that AI would tackle: In addition to health, there was climate change. Suleyman had recently hired Jim Gao, a Google engineer who had come up with an AI system that cut electricity consumption at data centers. By harnessing DeepMind's reinforcement-learning know-how, Gao now planned to take his innovation to the next level, ushering in an era of intelligent buildings—structures that learned for themselves to conserve energy.

Suleyman got to the climax of his presentation. He put up a slide on a large screen. The title said, "DeepMind: A Global Interest Company." In the weeks leading up to Aviemore, Google had seemed to indicate that it was ready to sign off on some version of the Pichai plan. The company off-site was the moment to break the news to employees.

The several hundred onlookers were taken aback. Rumors of a DeepMind spin-out had circulated for months, together with speculation about the amount of stock the staff might get in the new entity. But the slide on the screen showed an org chart with two boxes, and these suggested something different. The first box, labeled "Alphabet/Google," showed Suleyman and Applied AI at the heart of the mothership in Mountain View. This was not a spin-out; it was a spin-in. The second box, labeled "DeepMind," showed an independent Global Interest Company, focused on AGI research and connected to Google only by a dotted line representing a technology licensing agreement. Apparently, the plan was for a spin-in and a spin-out. People's heads were spinning.

Ten days later, DeepMind's leaders felt equally dizzy. Google sent back its latest negotiating position, consisting of an updated document with red lines all over it. Pichai was clearly nowhere near approving the plan announced at Aviemore. Hassabis and Suleyman faced the prospect of having to walk back the vision that had been laid out to the entire company.

The crisis hit at an important time. That same week, OpenAI chief scientist Ilya Sutskever leapt out of his chair. He had just read the transformer paper.

Hassabis did his best to push Pichai into rethinking his position. To signal his anger about the red-lined document from Google, he canceled his next call with the chief executive. Pichai pinged Hassabis at once. He wanted to chat as soon as possible. After keeping the boss hanging, Hassabis eventually agreed; then he hotly emphasized his disappointment. Four days later, Suleyman piled on. He emailed Drummond and canceled another meeting.

The relationship between Google and DeepMind had hit bottom. Google saw too much commercial potential in AI to let it slip out of its control. DeepMind saw too much existential risk to let commercial priorities dictate AI's deployment. Each side recognized that it needed the other. A fractious dialogue continued.

Unbeknownst to Hassabis and Suleyman, a parallel fight was playing out over OpenAI's future.
By the summer of 2017, the upstart's leaders, realizing that they needed far more capital than could be raised as a nonprofit, began discussions about grafting on a for-profit structure. It was the mirror image of the DeepMind conundrum. DeepMind existed as a for-profit but wanted to wrap nonprofit governance around powerful AI. OpenAI existed as a nonprofit but needed some capitalist machinery to raise money. Both saw salvation in a capitalist/post-capitalist hybrid.

Like Hassabis and Suleyman, OpenAI's leaders were discovering that restructuring talks led quickly to quarrels. A month or so into the discussions, OpenAI's day-to-day leaders, Ilya Sutskever and Greg Brockman, fell out with the chief business visionaries and fundraisers, Elon Musk and Sam Altman. At the same time, Altman wanted to become chief executive of OpenAI, and was maneuvering to get Musk out of the way—even though it was he who had drawn Musk into the project in the first place. The sheer potential of artificial intelligence discouraged compromise. Altman whispered to Brockman that Musk was too erratic to be entrusted with AGI. Brockman relayed that to Sutskever. Sutskever worried that both Musk and Altman wanted absolute control of AGI. To add to the climate of mutual suspicion, Musk poached one of OpenAI's key scientists to run Tesla's AI division.

On September 20, 2017, Brockman and Sutskever emailed Musk and Altman with what sounded like an ultimatum. "This process has been the highest stakes conversation that Greg and I have ever participated in," Sutskever declared, writing on behalf of both himself and Brockman. If OpenAI succeeded, "it'll turn out to have been the highest stakes conversation the world has seen," he added.

Addressing Musk, Sutskever observed, "The current structure provides you with a path where you end up with unilateral absolute control over the AGI.

"You stated that you don't want to control the final AGI, but during this negotiation, you've shown to us that absolute control is extremely important to you.

"You are concerned that Demis could create an AGI dictatorship," Sutskever went on. "So it is a bad idea to create a structure where you could become a dictator if you chose to."

Next, Sutskever addressed Altman. "We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it.

"Is AGI truly your primary motivation? How does it connect to your political goals?" Altman's stated desire to lead OpenAI and his simultaneous dalliance with a California gubernatorial run struck Sutskever as contradictory.

"There's enough baggage here that we think it's very important for us to meet and talk it out," Sutskever declared. "Can all four of us meet today?" It was not just Hassabis and Suleyman who wanted to resolve an internal governance fight urgently.

Musk was less emollient than Pichai. "Guys, I've had enough," he responded brusquely. "I will no longer fund OpenAI until you have made a firm commitment to stay or I'm just being a fool who is essentially providing free funding for you to create a startup."

Two days later, Sutskever and Brockman caved. The discussion of a for-profit mechanism was shelved, leaving OpenAI to soldier on with its nonprofit structure, which Musk dominated. Altman quickly cozied up to the big man, deftly ensuring that his own role in inciting the rebellion went unsuspected. "I remain enthusiastic about the non-profit structure!" he announced in an email.
He threw Brockman and Sutskever under the bus, telling Musk's trusted lieutenant, Shivon Zilis, that their remonstrations had been "childish."

The truce would be only temporary. To build AGI, OpenAI still needed to restructure itself in order to raise money. Altman explored three possible solutions—two of which precisely matched the parallel deliberations at DeepMind. He called Reid Hoffman and asked for money. He considered turning OpenAI into a public-interest corporation. Venturing giddily off trail, he thought about funding OpenAI with a cryptocurrency.

Sure enough, on the last day of January 2018, the calm ended. Musk sent Brockman, Altman, and Sutskever a dispiriting chart, showing that DeepMind and Google Brain generated the lion's share of AI research. "OpenAI is on a path of certain failure relative to Google," Musk declared. The startup had to change what it was doing.

Brockman emailed back the same day. He objected that conference papers were a poor measure of OpenAI's progress. "Our biggest tool is the moral high ground," he went on. "AI is going to shake up the fabric of society, and our fiduciary duty should be to humanity." Pushing back against Musk's obsession with the race against Google and DeepMind, Brockman added, "It doesn't matter who wins if everyone dies."

Musk responded the next morning at 3:52 a.m. He confronted Brockman with a proposal that recalled Pichai's pitch: OpenAI should spin into Tesla. Initially, OpenAI's team could accelerate Tesla's development of autonomous vehicles. Next, it could use the profits from self-driving cars to fund its AGI moonshot. "Tesla is the only path that could even hope to hold a candle to Google," Musk declared. "Even then, the probability of being a counterweight to Google is small. It just isn't zero."

Back in 2014, Musk had Skyped Hassabis from a closet in LA, proposing that Tesla or SpaceX should absorb DeepMind. Almost exactly four years later, the new version of this proposal played into Altman's hands: It proved Musk's power hunger. With little difficulty, Altman now persuaded Brockman and Sutskever to take his side. Together, the three told Musk that OpenAI would not attach itself to Tesla.

At an all-hands meeting on the top floor of the converted truck factory that housed OpenAI, Musk announced to the employees that he was quitting the lab, scornfully adding that OpenAI would have to sprint faster to stay relevant. Hoping to lure away some researchers, he declared that there was a much better chance of building AGI at a strong business like Tesla. Showing courage, or perhaps just youthful innocence, an intern asked Musk if speed might be reckless from a safety perspective. Besides, wasn't developing AI at a for-profit company like Tesla the same as creating it at a for-profit company like Google? "Isn't this going back to what you said you didn't want to do?" the intern demanded. "You're a jackass!" Musk retorted. Then he stormed out of the meeting.

At the beginning of 2018, DeepMind's version of this governance battle seemed to reach a resolution. The company's leaders presented a thick slide deck to the Alphabet board, stressing that "an unprecedented technology requires an unprecedented structure." To light a fire under the Alphabet directors, one slide quoted rival tech leaders on the awesome potential of AI, while another cited Russia's Vladimir Putin and China's Xi Jinping—"The one who becomes leader in AI will be the ruler of the world," Putin had said ominously. The presentation also warned of a gathering storm.
"Technology has crossed over to the dark side," a columnist for The New York Times had written. "It's coming for you; it's coming for us all, and we may not survive its advance." The two-part message to Alphabet was clear. You better empower DeepMind to sprint for AGI. And you better create a governance structure that is robust enough to withstand skeptical public scrutiny.

In the weeks after the presentation, the two sides finally converged on a fleshed-out version of the Pichai plan. Suleyman would lead DeepMind's Applied side from within Google, while Hassabis would run Research as an independent global interest company. For Suleyman, this was a triumph: Google had finally signed a complex term sheet granting most of what he wanted. Hassabis was equally pleased. The plan guaranteed him an astronomical $15 billion in Google funding to sustain AGI research over the next decade, and it would put an end to the meetings on corporate structure, which he found screamingly boring. After two years of negotiations, he had hit his limit. "I don't want this part of my brain to grow," he often said, when asked to get his mind around another legal document.

Then, all of a sudden, the hope of resolution shattered. In April 2018, in yet another demonstration of how the prize could slip away, Apple poached a senior Google executive named John Giannandrea, who had supported the idea of Suleyman moving to Mountain View. In the ensuing commotion, Jeff Dean was promoted, eliminating the space in the org chart that Suleyman thought he would occupy. Repeating the Aviemore debacle, Suleyman was forced to un-promise what he had promised: He had already told his deputies to prepare their move to California.

Some Suleyman lieutenants remember this as the moment when their leader lost his footing. After the setback of Aviemore, and persistent confusion and uproar at DeepMind Health, he couldn't juggle any more, and the balls crashed all around him. The metaphors came thick and fast. "I remember being with Moose and he was like, what do I do now?" one colleague recalled. "That's when he ended up wearing no clothes. He was up the cloud in a banana.

"And so then he goes back to Demis and he's like, oh, well actually I think we'll just stay here," this person went on. "And at this point, Demis says, no way."

The true story is subtler, and more revealing about Hassabis. Despite his differences with Suleyman, one side of him remained loyal to his co-founder. He valued long-term friendship, not just with Suleyman but with everyone: DeepMind employed multiple figures from his past, stretching back to his time at Cambridge and Elixir Studios, the computer-game developer he founded in the late 1990s. It was partly that he wanted to do right by his comrades: The desire to be good was lodged deeply inside him.

But there was something else as well. For DeepMind's research operation, Hassabis hired the world's most dazzling scientists from the most celebrated PhD programs. But when it came to non-technical hires, he was leery of recruiting managerial stars—in a scientific culture like DeepMind's, non-scientists had to be humble. Rather than hiring outsiders, Hassabis relied on internal comrades. Suleyman was undoubtedly the most capable of them.
After Suleyman canceled his move to Mountain View, Hassabis doubled down on his relationship with his co-founder. Together, they revived the idea of a walk-away option, inviting the Hong Kong tech mogul Joe Tsai to match Reid Hoffman's offer of a $1 billion investment. When Tsai politely waved them off, the two pivoted back to Pichai's plan for a spin-out of DeepMind Research, and Hassabis encouraged Suleyman's efforts to wrestle the talks to a conclusion. Hassabis also forged a pact with Suleyman to avoid recriminations during meetings of DeepMind's executive committee, not least because colleagues couldn't get a word in edgewise when the top dogs started going after one another. Most Sunday evenings, Hassabis kept up an old tradition of meeting Suleyman at a pub. The comrades avoided alcohol, preferring mint tea. They ordered food at the last moment, right before the kitchen closed, and talked into the evening.

At the start of 2019, Suleyman's troubles took on a new dimension. A handful of DeepMind employees alleged that he reduced subordinates to tears with capricious and bullying behavior. The complaints involved no claim of physical violence or sexual harassment. But he was said to have used harsh language, to have fired off intimidating text messages, and generally to have frightened people.

Hassabis faced a dilemma. Of course, he abhorred bullying. But it was hard to know whether Suleyman was egregiously tough, or whether DeepMind employees were too sensitive. Besides, Hassabis's feelings of loyalty to Suleyman remained. This was his younger brother's friend. This was his poker companion. This was the talented kid who had been fed and housed and occasionally employed by Hassabis's own parents. With his fierce insistence on social justice, Suleyman may even have felt to Hassabis like a voice in his head—the voice that ensured that, as he chased AGI, Hassabis remained tethered to the values of his North London upbringing.

There were limits to loyalty, however. Hassabis liked to say that the worst thing in the world was to control someone. The insistence was sincere: Unlike some leaders, who become intoxicated with celebrity or power, Hassabis took no pleasure in dominating people. But his hatred of domination also ran the other way: He was determined not to be dominated. By challenging Hassabis's control over the direction of DeepMind, Suleyman repeatedly crossed the boss's red lines. DeepMind was Hassabis's creation, his identity. Moreover, Suleyman's grand experiment in building governance around AI was failing. And whatever the merits of the bullying allegations, there was clearly a faction within the company that wanted Suleyman out. Perhaps this might be a convenience?

Hassabis made his decision. He wanted more than anything to focus on science; he was tired of Suleyman's machinations. But his dislike of confrontation—his self-image as a person who did not control others—led him to express his decision indirectly. A more forthright company founder might have informed his junior co-founder that it was time to part: This is a fairly standard event in the maturation of startups. Instead of having that conversation, Hassabis allowed his lieutenants to look into the allegations of bullying. The chief operating officer and the chief counsel took charge. An outside lawyer was retained. An investigation was opened. As Suleyman's old friend, Hassabis was told to recuse himself.
After three months, the outside lawyer produced a report that ran to about 25 pages. It concluded that Suleyman's management style amounted to misconduct. Some complainants said that they had been humiliated in front of their peers. Others alleged that Suleyman had told them to communicate with him only via non-Google channels. Later, many of Suleyman's colleagues would say that his behavior had been standard for a mission-driven startup founder. But the charge sheet was still serious.

Suleyman was summoned to a review meeting. Precisely what transpired at this session is disputed, but Suleyman says he understood that, if he accepted the complaints, he could keep his reputation and a role at DeepMind. He would take a voluntary sabbatical, reflect on his managerial shortcomings, and work with a coach to fix them. If all went well, he could return to a new position at DeepMind. He would not be managing a big team; he might be a company ambassador. On the other hand, if he disputed the complaints, he understood that DeepMind would move from an "informal fact finding" about his conduct to a formal procedure. He would probably be found guilty of bullying, in which case he would be fired. And he would forfeit compensation.

Suleyman was granted a couple of hours to make his decision. He left the office and paced furiously around Coal Drops Yard, the trendy development of restaurants and boutiques at the heart of King's Cross. Later, he would wish that he had used that time to call a lawyer. Instead, he phoned Marilyn, his girlfriend from the time when his best friend had been George Hassabis, Demis's kid brother. Around noon, Suleyman walked back into the review meeting and said he accepted the charges.

A few days later, he sent out an all-staff email, announcing that, after a decade of relentless efforts at DeepMind, he was taking some time out to recharge his batteries. Many colleagues replied with messages of good wishes. At this point, almost nobody inside DeepMind knew of the investigation.

On August 21, 2019, DeepMind's communications director, Ruth Barnett, was on a French beach. Her phone rang. It was a journalist at Bloomberg. The news site was about to publish a story about Suleyman's departure. Well-placed sources were saying that Suleyman had been "placed on leave." The story would go out in an hour or so.

Barnett rushed to notify her colleagues. A hasty conference call ensued with DeepMind's other top lieutenants. "We need to agree on a strategy—do we want to fight this?" Barnett wanted to know. Not that Bloomberg seemed likely to change its story. The voices on the call went back and forth without answering Barnett's question. On the one hand, "placed on leave" was not quite true, since Suleyman was taking a sabbatical. On the other hand, it wasn't quite untrue, since there had been an investigation and he hadn't been given much alternative. From a tactical viewpoint, if DeepMind failed to defend Suleyman, it might be damaging itself, given that Suleyman was supposed to be returning to the company. At the same time, if DeepMind did defend Suleyman, it might also harm itself. More details of the internal investigation might come out, making its defense look dishonest. "They couldn't decide whether they had or hadn't placed Mustafa on leave," one person recalled.
"Nobody said let him burn, take him down. No one briefed against him. There just wasn't a plan, and they were caught with their pants down."

[Photo: Pichai and Hassabis at the India-AI Impact Summit in New Delhi, February 2026. Hindustan Times via Getty Images]

On the other side of the world, Suleyman was at work in a conference room in Mountain View. The handover of his health projects to Google was underway, and he was there to coordinate the details. A message popped up on his screen. It came from a colleague in London, alerting him to the Bloomberg article. "Google DeepMind Co-Founder Placed on Leave from AI Lab," the headline stated. Suleyman put his face in his hands and walked out of the meeting.

He couldn't believe it. He had thought that if he accepted the complaints against him, his reputation would survive intact. The Bloomberg headline shredded that implicit contract. There was no way that this could have happened, Suleyman reckoned, without Hassabis's approval. Bloomberg had gone with the story either because DeepMind had planted it, or because it had failed to deny it convincingly. The first suspicion was false, but the second one was accurate.

Suleyman spent the next few months recharging his batteries. He threw himself into his management coaching with the same intensity he brought to everything. He grappled with how he could lead team members through encouragement rather than pressure. But after the humiliating Bloomberg story, he felt there was no way he could return to the company that he had helped to build, and with the health group gone, there was not much left of Applied anyway. In the summer of 2019, Jim Gao added to the exodus, quitting with his climate team and launching a startup. It was the close of an experiment. DeepMind might still supply commercial applications to Google, but it would no longer aspire to market its own products.

At the end of 2019, Suleyman went back to work, but not at DeepMind. Google's top managers in Mountain View had reviewed the details of his conduct and decided that it fell in the gray zone—somewhere between tough and unacceptable. Perhaps as a way of placating Suleyman, and to ensure that he wouldn't start an ugly public fight, Google made him a vice president, and he moved at last to California. But Suleyman was now a prince without a court. Despite his grand title, he was not allowed to manage others.

Looking back, the marathon governance talks held an ominous lesson for AI oversight. Hassabis and Suleyman had pushed for the safety meeting at SpaceX. The experiment had failed because of the participants' skewed incentives. They had also spent three years pushing for various iterations of a 3-3-3 DeepMind oversight board. Those efforts had hit a wall, partly because Google's leaders foresaw that the independent directors' incentives would be equally suspect. If you couldn't negotiate safety mechanisms inside one company—a company that, because of its extreme profitability and unconventional founders, was more open to governance experiments than most—what chance would there be to negotiate common safeguards among multiple labs in multiple countries?

It was hard to imagine a counterfactual history with a happier ending. When Google absorbed DeepMind Health, it had shuttered Suleyman's initial experiment in post-capitalist transparency: an Independent Review Panel that had watched over its work. The dispiriting truth was that Pichai had good reason to close the review panel.
Even though Suleyman had done everything possible to stock it with reputable experts, their incentives had proved to be distorted. In June 2018, for example, the panelists had issued a report at a time when DeepMind had long since bulletproofed its data-sharing contracts; when all patient data was known to be shielded from Google; and when DeepMind was well on its way to producing multiple lifesaving diagnostic algorithms. But rather than celebrating DeepMind's achievements, and reassuring the public that artificial intelligence would benefit Britain's National Health Service, the panelists felt obliged to demonstrate their independence by dinging the tech sector. "It is hardly surprising that the public should question the motivations of a company so closely linked to Google," the panelists declared, bowing meekly to the technophobic zeitgeist.

At an Alphabet board meeting a little while later, Sergey Brin rounded on Suleyman. The panel's behavior had been predictable, he said. If you gave outsiders a platform, they would use it for their own ends: to burnish their careers, to bolster their own reputations. Google's projects, no matter how virtuous, would not be their priority.

Evidence from beyond DeepMind also removed the space for optimism. In 2019, Google tried to set up its own Advanced Technology External Advisory Council to guide its choices on AI ethics. To achieve political diversity, Google included the president of the conservative Heritage Foundation, Kay Coles James, who had doubts about the advance of rights for trans people. As soon as her appointment was made public, a chorus of social media critics swooped in; the attacks quickly drove other advisory council appointees to withdraw their participation. The public square was dominated by activists who were out to crush opponents, not encourage broad debate. Google's understandable response was to disband the advisory group.

The story of OpenAI offered another cautionary lesson. Following Musk's ouster, in 2018, OpenAI appeared to show that a nonprofit/for-profit hybrid might be workable. The company retained its original nonprofit governance board, while Altman leveraged its for-profit structure to raise billions of dollars. But in 2023, when the governance board tried to assert its authority by firing Altman, its weakness was exposed. Altman rallied OpenAI's financiers to his side, staging a countercoup in which three of the nonprofiteers were defenestrated.

The failure of company-level safety oversight was especially dispiriting given the bleak prospects for government regulation. Reid Hoffman had been correct in 2017. It was worth risking his fortune on corporate-governance experiments because governments were unlikely to take action.

Reflecting on this saga in 2024 and 2025, Hassabis and Suleyman attempted to draw lessons. By now they both occupied new jobs. Hassabis was the chief executive not just of DeepMind but of Google DeepMind: He had absorbed Google's AI researchers in Mountain View along with multiple related teams, greatly expanding the army that reported to him.
For his part, Suleyman had quit Google after two years, launched an AI startup with Reid Hoffman, then become the chief executive of Microsoft AI, overseeing teams in Seattle, Silicon Valley, London, and Zurich. Two North Londoners with immigrant parents headed AI operations at two American tech giants.

Although they were now rivals, with bitter memories of their parting, the two men delivered similar verdicts on the governance negotiations with Google. The exercise, they both agreed, had been futile. The negotiations had achieved nothing; they had been bound to achieve nothing; they had consumed vast quantities of energy and goodwill, making them positively harmful.

With their faith in governance mechanisms shattered, Hassabis and Suleyman had come to see salvation, paradoxically, in their own personal power. They believed in their capacity to shape AI for the good. Their new safety agenda therefore involved securing personal influence within their companies.

"When we were negotiating with Google, we wanted to ensure safety in a way that would be trustless," Hassabis said. "That's actually very difficult to do in reality.

"Safety isn't about governance structures," he went on. "I mean, even if you have a governance board, it probably wouldn't do the right thing when it came to the crunch.

"Same thing with a safety charter. You can try to negotiate one. But it's not realistic to create bright-line principles years in advance because you'll probably draw the lines in the wrong places.

"So discussing these things didn't really help," Hassabis continued. "It made it harder to build useful trust, because when you are negotiating a trustless structure, it implies that you can't trust the other person.

"So then I thought, why don't I go the other way? Take the energy that was going into the trustless negotiation and put it into creating real trust—trust that was actually useful. Try leaning into Google rather than leaning out.

"And then of course two things happen. First, you are now at the table, so when a safety issue comes up, you can help to decide it. Second, you get to know the Google people and you rack up successes together. You can't just talk about trust. You have to earn it.

"And I think for me, and maybe for Mustafa, too, it's about us growing up," Hassabis mused. "We went through those negotiations and we matured. Things aren't black and white, especially when you are dealing with a technology with unknown consequences.

"So you have to be adaptable. You have to move from idealist to realist, but hopefully still with your values."

I thought and thought about this verdict. On the one hand, Hassabis and Suleyman clearly had compromised their original values, adjusting their thinking as the world changed around them. In selling DeepMind to Google, they had extracted a promise that their technology would never be used for weapons or surveillance; by 2025, Google, like Microsoft, was eager to supply AI to the national security complex. But on the other hand, Hassabis was right—his youthful ideals had indeed been unrealistic. A technology as transformative as artificial intelligence was never going to be the product of a singleton effort, and once multiple labs in multiple countries joined the race for powerful AI, it would be impossible to resist the rush to deploy it in both civilian and military settings.

The notion that a well-meaning individual had a seat at the table offered a flimsy scaffolding of reassurance to an alarmed world. But perhaps it was the best comfort available.
From THE INFINITY MACHINE: Demis Hassabis, DeepMind, and the Quest for Superintelligence by Sebastian Mallaby, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2026 by Sebastian Mallaby. Available March 31, 2026.

Sebastian Mallaby is the author of several books, including the bestselling More Money Than God. A former Financial Times contributing editor and two-time Pulitzer Prize finalist, Mallaby is the Paul A. Volcker Senior Fellow for International Economics at the Council on Foreign Relations.