AI, Nuclear and Crypto: The Big Power Question
An update on how the technology sector is overhauling its infrastructure
PREVIOUSLY: Inside Big Tech’s nuclear dream
Here’s a simple equation: more powerful generative AI tools need more compute, and to get more compute you need more resources. Got it? Right.
That’s the core reason big technology companies are backing nuclear power, sometimes for the first time. The energy (and water) demands of large language model (LLM) infrastructure are massive. Funnily enough, it’s a problem another frontier technology faced recently.
Crypto miners were consistently criticised in the media and by politicians for the energy consumption of their data centres. The industry, particularly in North America, decided to get together, form the Bitcoin Mining Council in 2021 and promise to use renewable energy sources.
At least two things have happened since then. First, the crypto industry has increasingly turned to proof-of-stake projects (Ethereum, Solana and so on). Without getting into the technicalities too much, proof-of-stake differs from Bitcoin’s proof-of-work consensus mechanism and is seen as more sustainable. Second, generative AI took off after the pandemic, from around March 2023, thanks to OpenAI’s ChatGPT tool.
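To make the energy contrast concrete, here’s a deliberately simplified Python sketch – a toy, not real blockchain code. Proof-of-work grinds through hashes until one meets a difficulty target, burning compute on every attempt, while proof-of-stake makes a single stake-weighted draw to choose the next validator. The difficulty level, validator names and stake amounts below are illustrative assumptions.

```python
import hashlib
import random

def proof_of_work(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Grind nonces until the hash starts with `difficulty` zeros.
    On a real network, every one of these attempts costs electricity."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

def proof_of_stake(stakes: dict[str, float], seed: int) -> str:
    """Pick the next block proposer with probability proportional to stake:
    a single weighted draw, so the energy cost is negligible."""
    rng = random.Random(seed)
    validators = list(stakes)
    return rng.choices(validators, weights=[stakes[v] for v in validators])[0]

if __name__ == "__main__":
    nonce, digest = proof_of_work("block 1: alice pays bob 5", difficulty=4)
    print(f"proof-of-work took {nonce + 1} hash attempts: {digest[:16]}...")
    proposer = proof_of_stake({"alice": 32.0, "bob": 16.0, "carol": 8.0}, seed=7)
    print(f"proof-of-stake picked validator: {proposer}")
```

Even in this toy, the proof-of-work loop does tens of thousands of hashes for a modest difficulty, while the proof-of-stake function does one random draw – which is the crux of the sustainability argument.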
There has been very little thinking, at least publicly, from policymakers about the knock-on effects of this technology. But the corporates, as ever, are well ahead of the game. They are ploughing billions of dollars of CapEx into generative AI infrastructure, R&D and development teams. They have also long known about the resource demands of the technology.
As I previously wrote, this has led OpenAI’s Sam Altman and others to support nuclear fusion companies. Unlike nuclear fission, fusion is unproven and still being trialled and tested. Perhaps that’s why big technology companies and leaders, including Altman, have also invested in nuclear fission.
Some of the latest announcements include:
Microsoft’s plan to revive the Three Mile Island nuclear power plant (link)
Amazon’s decision to back a small modular reactor initiative (link)
Google’s agreement with Kairos Power on small modular reactors (link)
Interestingly, and unlike the crypto industry, there’s little discussion about alternative generative AI technologies beyond the current form of LLMs. These might be more energy efficient; they might also be more accurate. It’s a conversation worth having.
It’s important to remember that Geoff Hinton, who’s often described as the ‘godfather of AI’, has argued that LLM ‘hallucinations’ are a feature of the technology, not a bug. That’s because LLMs are trying to mimic the neural networks of human brains, which also make things up.
Yann LeCun, chief AI scientist at Meta and a former postdoctoral researcher in Hinton’s lab, has also argued that LLMs won’t help us reach human intelligence or the ill-defined “artificial general intelligence”. In fact, he thinks autoregressive LLMs are doomed (full slide deck here).
Ilya Sutskever, OpenAI’s former chief scientist, has argued that LLMs shouldn’t be underestimated today because they were wrongly underestimated in the past (video link). But on the apparent limitations of the technology, he has conceded:
“These neural networks do have a tendency to hallucinate. But that’s because a language model is great for learning about the world but it’s a little bit less great for producing good outputs.”
That’s why LLM developers have used reinforcement learning – which mimics the human trial-and-error process – and supervised learning – training LLMs on labelled data so the model can spot patterns – to improve the technology’s outputs.
But LeCun has claimed that reinforcement learning is not practical because it would require “an insane amount of trials”, while supervised learning requires too many samples. He thinks a form of self-supervised learning – effectively filling in the blanks yourself, as humans and animals do – is the way forward.
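To ground those terms, here’s a toy Python sketch – a word-bigram counter, nowhere near a real LLM, which would use gradient descent on a neural network – showing where the training signal comes from in each case: self-supervised learning gets its ‘labels’ from the raw text itself, supervised learning from human-labelled completions, and reinforcement learning from trial, error and a reward. The example text, reward function and update rule are illustrative assumptions.

```python
from collections import defaultdict
import random

# model[word][next_word] = weight for predicting next_word after word
model = defaultdict(lambda: defaultdict(float))

def self_supervised(raw_text: str) -> None:
    """Fill in the blanks: raw text supplies its own 'labels' --
    each word is the training target for the word before it."""
    words = raw_text.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1.0

def supervised(prompt: str, labelled_completion: str) -> None:
    """Labelled data: a human-written completion is the target."""
    words = (prompt + " " + labelled_completion).split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1.0

def reinforcement(prompt: str, reward_fn, steps: int = 4, seed: int = 0) -> None:
    """Trial and error: sample a continuation, score it with a reward,
    then reinforce the word transitions that produced it."""
    rng = random.Random(seed)
    word, sampled = prompt.split()[-1], []
    for _ in range(steps):
        nexts = model.get(word)
        if not nexts:
            break
        word = rng.choices(list(nexts), weights=list(nexts.values()))[0]
        sampled.append(word)
    reward = reward_fn(" ".join(sampled))
    prev = prompt.split()[-1]
    for w in sampled:
        model[prev][w] += reward  # zero reward leaves the model unchanged
        prev = w

if __name__ == "__main__":
    self_supervised("the cat sat on the mat the cat slept on the sofa")
    supervised("the cat", "sat on the mat")
    reinforcement("the cat", reward_fn=lambda text: 1.0 if "mat" in text else 0.0)
    print(dict(model["the"]))
```

LeCun’s complaint, in these terms, is that the reinforcement function only learns from whole sampled attempts (so it needs enormous numbers of trials), supervised learning needs someone to write the labelled completions, while self-supervised learning can feast on raw text for free.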
This debate is fundamental to the future of AI and it’s still unresolved.
In the meantime, there have been some advances in understanding why LLMs can be so bad at instruction following. Check out this paper (link) from researchers at Apple, the University of Cambridge and the University of Pennsylvania.
It’s GOTV, stupid. There seems to be a total lack of reporting on Trump’s and Harris’s get-out-the-vote (GOTV) campaigns. US and international media are obsessed with the ‘horse race’, speeches and interviews.
But the presidency will ultimately come down to the big issues, the ground games and digital targeting. Some people are asking why Trump appeared at a McDonald’s. The answer is jobs, jobs, jobs: the economy is a top concern for US voters, alongside the state of the government and immigration. Here’s what Gallup has to say on the matter (link).
Elsewhere, Semafor has looked at Democrats' flirtations with Fox News after Harris did a rare sit-down interview for the network.
A failed revolution. I spent last week in beautiful Budapest, the Hungarian capital, which has seen its fair share of atrocities and totalitarianism. Nowadays, it’s clean, safe and everyone seems happy to see you – other European capitals can take note.
As I was reading through the extensive materials in the House of Terror Museum, located at the old AVH (the secret police of the ‘People’s Republic of Hungary’) headquarters, it dawned on me that the 23rd of October was approaching – the 68th anniversary of the start of the Hungarian Revolution.
Students had had enough of Soviet domination of the country and marched on Budapest’s main radio station to broadcast their 16 demands, including calls for freedom of expression and freedom of the press. When the AVH tried to stop the revolt, Hungarian citizens took up arms and formed militias.
The uprising lasted 12 days before being crushed by the Soviets as part of ‘Operation Whirlwind’, which saw thousands of tanks and 200,000 troops swarm into Hungary. It wouldn’t be until 1989 that Soviet troops began to leave the country – the last one left in June 1991. The ‘Evil Empire’ even tried to make the Hungarians foot the bill for the withdrawal.
Disney’s succession planning. Bob Iger’s new replacement as CEO of Disney will be named in 2026, the company has announced. The development comes after Iger fought off an attempted boardroom coup by activist investor Nelson Peltz, who won late backing from Elon Musk.
Iger, who led Disney between 2005 and 2020, came out of retirement in 2022 to take the helm of the entertainment giant once again. Even before Peltz’s failed campaign, questions surrounded Disney’s succession planning after Iger’s return. The other news is that James Gorman, Executive Chairman of Morgan Stanley, will become Chairman of Disney in January 2025.
🎙️ The Political Press Box
Find my latest long-form audio interviews here.
🎥 Video essays
📖 Essays
How disinformation is forcing a paradigm shift in media theory
Welcome to the age of electronic cottages and information elites
Operation Southside: Inside the UK media’s plan to reconcile with Labour
📧 Contact
For high praise, tips or gripes, please contact the editor at iansilvera@gmail.com or via @ianjsilvera. Follow on LinkedIn here.
202 can be found here
201 can be found here
200 can be found here
199 can be found here
198 can be found here