Why AI Doesn't Deserve Its Own Moral Status
How policy-makers are losing sight of the wood for the trees when it comes to technology laws
Sir Keir Starmer and Anthony Albanese have at least one thing in common: they’re very unpopular at the moment. Both UK and Australian voters have a strong case of buyer’s remorse (UK polling here, AUS polling here).
At least Starmer has around three years to rectify his administration’s reputation, while Albanese faces a federal election later this year. With only around three points between Labor and the Liberals on a two-party-preferred basis, perhaps that’s why Australian workplace relations minister Murray Watt is hoping to win over the country’s powerful trade unions, some of which launched a broadside against Albanese’s government late last year (link).
Watt was quoted in The Australian (link) over the holidays demanding that employers consult their workers before rolling out powerful generative AI tools. He also said the Australian government was considering whether the Fair Work Act, first introduced in 2009, was keeping pace with technological developments.
If the Australian government did promise to update the legislation as part of the federal election campaign, there’s no doubt the EU’s AI Act would provide a template for such a move. Like Brussels, Canberra has been strict on technology companies, most recently becoming the first country in the world to ban children from accessing unfiltered social media platforms (link).
Amongst other things, the EU’s AI Act bans the use of AI systems considered harmful or discriminatory in the workplace. The bloc’s AI Pact, a voluntary code unveiled to complement the Act, also suggests that employers should inform their workers before deploying new AI technologies.
In contrast, the UK, much like the US, has taken a pro-innovation approach to AI regulation. Rishi Sunak’s Conservative government specifically said it wanted to regulate the frontier technology through existing bodies and institutions. The Financial Conduct Authority, for example, would keep an eye on AI technology deployed by financial services companies, while Ofcom would do the same for the media.
Starmer’s administration, however, has begun to move away from this transatlantic approach, and his plans are looking more European in design. The UK’s Secretary of State for Science, Innovation and Technology, Peter Kyle, has promised that the country will legislate against AI risks in 2025 (link), and businesses are still awaiting Labour’s promised AI Bill, which didn’t feature in last year’s King’s Speech setting out legislative priorities.
As it stands, there is very little public detail on what the legislation will look like. The expectation is that it will focus on the regulation of advanced AI models and that the draft law will formalise the voluntary codes big technology companies have signed up to at the Bletchley Park summits. It’s also expected that the AI Safety Institute, currently a directorate of the Department for Science, Innovation and Technology, will become an arm’s-length body.
Clauses related to workers’ rights could hypothetically make their way into the AI Bill, but it’s all rather vague at the moment (much like Labour’s manifesto) as Starmer seeks to reset his premiership. His party’s promised New Deal for Working People, a flagship policy of Deputy Prime Minister Angela Rayner, is still making its way through Parliament, with further amendments possible, as the Employment Rights Bill (link).
Meanwhile, as we wait for more clarity from the UK government, an AI-focused Private Member’s Bill has been tabled in the House of Lords. These types of Bills have a low chance of passing into law unless they gain government sponsorship, but they are good for setting the narrative inside and outside of Westminster on any given subject.
Put forward by Conservative peer Lord Christopher Holmes, the draft legislation (link) proposes to create a government-backed body called the AI Authority, which would assess AI-related risks, ensure that relevant regulators take account of AI and support testbed and sandbox initiatives “to help AI innovators get new technologies to market”.
Another life peer, Liberal Democrat Lord Clement-Jones, has tabled his own Bill (link). He’s focused on the public sector in the wake of the Post Office Horizon scandal. Amongst other requirements, the draft legislation would force public authorities to publish impact assessments of any automated or AI algorithms they use to make decisions.
All of these actual and proposed laws come as the hype around artificial general intelligence (AGI) cools and policy-makers are able to re-think the true impact of generative AI, especially amid the ongoing debates about the limitations of LLMs (link).
For the past couple of years the common argument for specific AI legislation ran like this: if we reach AGI, we will need sufficient measures and controls in place to avoid catastrophes; we are reaching AGI sooner than we initially thought; therefore we need to introduce those safeguards via legislation now.
The end result was that AI was given a moral status separate from other frontier technologies. You don’t hear the same arguments being made about IoT, for example, do you? In one way this was a good thing, in that policy-makers were getting ahead of the game. But in other ways it was bad.
We are only starting to understand the impact of mainstream generative AI technologies, and we risk snuffing out further innovations by legislating from a position of ignorance. Furthermore, by giving AI its own special status, we are failing to update our existing codes, practices and laws for other emerging technologies.
In other words, a great deal of time, resource and energy is being devoted to just one frontier technology, rather than taking a more holistic approach. What does quantum computing + AI + spatial computing look like, for instance?
My own argument for broad technology-focused legislation would run like this: we need sufficient safeguards and pro-innovation policies to accommodate the next generation of technologies, promoting economic growth whilst protecting consumers.
We don’t have these measures in place because the environment is so fast-moving. To meet this challenge at pace, policy-makers should update the remits, powers and personnel of existing regulators, building in flexibility and interoperability from the start.
This is one of the few things Rishi Sunak got right in government, but others, I fear, are getting lost in the AI hype.
Cleggix. Nick Clegg’s transformation from ex-Deputy Prime Minister to one of Silicon Valley’s most powerful public affairs operators has been impressive. The former Liberal Democrat leader is now planning to leave Meta after almost seven years. Clegg’s announcement comes as Donald Trump, who is close to Elon Musk (link) and Clegg rival Nigel Farage, prepares to take office.
The now outgoing President of Global Affairs for Meta has been a very effective communicator for Mark Zuckerberg. That’s been especially true in Europe, where Clegg used to serve as an MEP. But a new administration in the White House means a changing of the guard at the top of Meta.
Clegg will be replaced by Republican Joel Kaplan, who is Clegg’s current deputy and who previously served as deputy chief of staff in the White House during President George W Bush's administration.
It’s unknown where Clegg will go next, but he hinted at "new adventures" as part of his resignation announcement.
Bluesky Bots. I’m enjoying my time on Bluesky (follow me here), the X competitor which feels a lot like old-school Twitter in all the best ways. But despite the platform’s success — it now has 25 million users — it can’t seem to avoid the bots that plague all other social sites. I’m not the only one who has noticed them. With success come problems (link).
Flat Earth News. What percentage of your newspaper is wire content? I remember when The Guardian’s Nick Davies commissioned Cardiff University to look into this in the UK way back in 2008. The average was around 60%. Since so much has changed in the media industry since then, perhaps it’s as high as 70% or even 80% now? It’s a great shame, as original reporting is becoming a rarity in certain areas of the news media.
But the phenomenon does present an opportunity for others, especially smaller and more sustainable operations which rely on subscription-generated monthly revenues rather than advertising. Mill Media and Jim Waterson’s London Centric are two good examples on this platform.

🎙️ The Political Press Box
Find my latest long-form audio interviews here. The latest edition includes a candid conversation with Reform’s former Director of Communications, Gawain Towler.
📧 Contact
For high praise, tips or gripes, please contact the editor at iansilvera@gmail.com or via @ianjsilvera. Follow on LinkedIn here.
210 can be found here
209 can be found here
208 can be found here
207 can be found here
206 can be found here
205 can be found here