The Personhood Data Dilemma is Here
Should we hand over more personal info to prove we're not AI bots?
More of the world’s powerful are taking the deepfake threat seriously, just not publicly elected politicians. Since my last post on the issue (a primer on US, UK and EU policy can be found here), we’ve had some substantial developments. The first comes from Meta and the company’s president of global affairs, Nick Clegg.
Clegg and Meta are now promising to label AI-generated audio-visual content “in the coming months”. Such a move actually goes beyond what the EU AI Act asks for, which is that creators label such content themselves, not the platforms. But Meta’s collaborative approach does have its limitations. The key quote from Clegg: “...it’s not yet possible to identify all AI-generated content...”.
This limitation is also reflected in the deepfake accord coming out of the Munich Security Conference this week. Some of the world's biggest tech companies, including Microsoft, Meta, Google and Amazon, have signed the voluntary agreement to combat deepfakes and other 'deceptive AI content'.
Here’s what the framework actually commits the businesses to:
Set expectations for how signatories will manage the risks arising from Deceptive AI Election Content.
A recognition of a "whole-of-society" response to deepfakes.
A commitment to uphold human rights, including freedom of expression and privacy.
The framework includes seven principles: Prevention, Provenance, Detection, Responsive Protection, Evaluation, Public Awareness and Resilience.
In pursuit of these principles, the companies will "work towards" attaching machine-readable information, as appropriate, to realistic AI-generated audio, video, and image content (a rough sketch of what that could look like follows below).
They will also seek to provide transparency to the public on their policies relating to the issue and "explore pathways" to share best-in-class tools and/or technical signals about Deceptive AI Election Content.
Since this is a broad voluntary agreement amongst 20 different companies, the accord doesn't go into great detail about the technical aspects of such work, and no timelines are given for delivering on its goals.
From an organisational perspective, there are no further details on how the companies might collaborate, or on whether an independent body should be established to help such a cause. Likewise, there is no mention of potential penalties for failing to meet these goals.
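The accord leaves "machine-readable information" undefined, but the underlying idea is simple: bind a disclosure label to the exact bytes of a media file so that platforms can detect and surface it automatically. Here is a minimal sketch in Python, loosely modelled on C2PA-style provenance manifests; the field names and the sidecar-file approach are my own illustration, not any signatory's actual implementation, and a production system would embed a cryptographically signed manifest inside the file itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_manifest(media_path: str, generator: str) -> dict:
    """Build a simplified, C2PA-inspired provenance record for one media file."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "claim_generator": generator,   # the tool that produced the asset
        "asset_sha256": digest,         # binds the label to these exact bytes
        "ai_generated": True,           # the machine-readable disclosure itself
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def attach_sidecar(media_path: str, manifest: dict) -> str:
    """Write the manifest next to the media file as a JSON sidecar."""
    sidecar_path = media_path + ".provenance.json"
    with open(sidecar_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return sidecar_path

def verify(media_path: str, sidecar_path: str) -> bool:
    """Re-hash the media and check it still matches the manifest."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    with open(sidecar_path) as f:
        manifest = json.load(f)
    return manifest["asset_sha256"] == digest
```

The sketch also makes the obvious limit visible: edit the file, or simply strip the label, and the link is broken, which is part of why Clegg cannot yet promise to identify all AI-generated content.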
The biggest development in the global regulatory space comes from the US’s consumer protection watchdog, the FTC. It has promised to update existing rules so that impersonating individuals, and not just governments or businesses, could be outlawed. It has made the move in light of a “surge” in complaints around impersonation fraud.
“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan.
“Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
Amongst other things, the new rule would also see the FTC seek monetary relief in federal court from scammers that:
Use government seals or business logos when communicating with consumers by mail or online.
Spoof government and business emails and web addresses, including spoofing “.gov” email addresses or using lookalike email addresses or websites that rely on misspellings of a company’s name.
Falsely imply government or business affiliation by using terms that are known to be affiliated with a government agency or business (e.g., stating “I’m calling from the Clerk’s Office” to falsely imply affiliation with a court of law).
But it’s unclear how these proposed rules will match up against the US First Amendment. Where do you draw the line between illegal impersonation and satire, for instance?
The announcements from Munich and the FTC came in the week that OpenAI unveiled Sora. The latest product from Sam Altman’s company turns text prompts into videos of up to 60 seconds. The tech has so far been limited to a small group of early testers. The videos look great, but, as with DALL·E, let’s see how it performs in the wild.
Speaking of Sam, I was reminded by journalist-turned-tech founder Shiv Malik to check in on the Worldcoin project. The plan is to create a global digital ID and financial network. Why? Because in an AI age you will have to continually prove you’re not a machine (proof of personhood) and you will want to interact with decentralised economies.
So far an impressive 3.4 million ‘unique humans’ have signed up to the project across 120 countries since its official launch last July. But only 2,000 orbs, the iris-scanning devices through which people become verified, have been manufactured, and they are available only in a limited set of locations, including the US, Spain, Portugal, South Korea, Germany, Mexico, Chile, Argentina and Singapore.
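Mechanically, the proof-of-personhood idea boils down to a one-person-one-ID registry. The sketch below is a heavy simplification of what the orb enables: Worldcoin’s real pipeline turns an iris scan into an ‘iris code’, matches it against previous enrolments in that fuzzy code space (exact hashing would fail, because two scans of the same eye differ slightly) and uses zero-knowledge proofs so apps can check uniqueness without seeing the biometric. The class and hashing here are illustrative stand-ins, not Worldcoin’s implementation.

```python
import hashlib

class PersonhoodRegistry:
    """Toy one-person-one-ID registry.

    Illustrative only: it stores a one-way hash of the biometric rather
    than the biometric itself, and rejects duplicate enrolments.
    """

    def __init__(self) -> None:
        self._enrolled: set[str] = set()

    @staticmethod
    def _commitment(iris_code: bytes) -> str:
        # One-way hash, so the registry never holds the raw biometric.
        return hashlib.sha256(iris_code).hexdigest()

    def enrol(self, iris_code: bytes) -> bool:
        """True for a new human, False if this biometric was seen before."""
        commitment = self._commitment(iris_code)
        if commitment in self._enrolled:
            return False
        self._enrolled.add(commitment)
        return True

registry = PersonhoodRegistry()
assert registry.enrol(b"alice-iris-code") is True   # first sign-up succeeds
assert registry.enrol(b"alice-iris-code") is False  # duplicates are rejected
```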
On the software side, the project has recently rolled out its World ID 2.0 upgrade. The “human passport for the internet” introduces a new Worldcoin App Store, including integrations with Reddit, Discord, Shopify, Minecraft and Telegram.
The perennial question for Digital ID apps like Worldcoin is whether they can (or should) do multiple things at once or just stick with doing one thing, such as providing verification, and do it very well.
Here’s what Cambridge University’s Ross Anderson told MPs way back in 2004, when Tony Blair was exploring a government-backed ID card (something he was partially successful with following the introduction of the 2006 Identity Cards Act, only for David Cameron’s first government to scrap it):
“The smart card industry has had over the last 15 years a number of projects to persuade people that a multi-function smart card might be a good thing.
“I have been involved peripherally in one or two of these, for example, trying to design a system that was simultaneously a banking card and a card for prepayment of electricity meters. The experience of these attempts and pilots was almost uniformly negative.
“Technically it is usually not a big deal to have a card with two applications on it but from the administrative point of view and the point of view of legal liability and issues such as whose logo is on the card, who is liable when something breaks, things are very much more difficult. If you are a banker the last thing you want to do is to be held liable for a power cut or for somebody being unable to get electricity if they suffer as a result.
“For these reasons the experience of industry is that everybody wants their own card, they want their own customer database and they want control of their own mechanisms to access that database.”
The debate has obviously moved on since Anderson gave his evidence, not least because Web 2.0 companies have extracted our metadata and now own and use digital profiles of us.
That’s why the idea of ‘digital self-sovereignty’, where individuals have more control over their online dealings, seems so attractive. Blockchain technology makes a decentralised (aka non-government-run) Digital ID possible, à la Worldcoin, and the rise of AI arguably makes it a priority.
But assurances are clearly needed on whether such schemes are effective and safe. For all of their metadata harvesting, Web 2.0 companies shouldn’t have your most sensitive biometric data, yet that is exactly what some Web 3.0 organisations now want.
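To show why the self-sovereign model appeals, here is a minimal sketch of its core mechanic, assuming the third-party ‘cryptography’ package. The did:example identifier format and the challenge text are illustrative, not a full W3C DID implementation; the point is that the identifier is derived from a keypair only the user holds, and verification is a signature check rather than a lookup in someone else’s profile database.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The user generates a keypair locally; no authority issues it to them.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
did = "did:example:" + public_bytes.hex()  # identifier derived from the key

# 2. A service checks control of the DID with a signed challenge,
#    not by consulting a central identity provider.
challenge = b"prove-you-control-this-did"
signature = private_key.sign(challenge)

try:
    private_key.public_key().verify(signature, challenge)
    print(f"{did} verified")
except InvalidSignature:
    print("verification failed")
```

None of this answers the safety question, of course: such a scheme is only as sovereign as the user’s ability to keep that private key safe.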
📖 Essays
How disinformation is forcing a paradigm shift in media theory
Welcome to the age of electronic cottages and information elites
Operation Southside: Inside the UK media’s plan to reconcile with Labour
📧 Contact
For high praise, tips or gripes, please contact the editor at iansilvera@gmail.com or via @ianjsilvera. Follow on LinkedIn here.
Issue 174 can be found here
Issue 173 can be found here
Issue 172 can be found here
Issue 171 can be found here
Issue 170 can be found here