Identity

(Meredith Whittaker) On Signal, Encryption and AI

Wired has an interview with Meredith Whittaker, president of Signal - her stances on surveillance capitalism, Signal’s not-for-profit structure and AI make for very interesting reading.

Yeah. I don’t think anyone else at Signal has ever tried, at least so vocally, to emphasize this definition of Signal as the opposite of everything else in the tech industry, the only major communications platform that is not a for-profit business.

Yeah, I mean, we don’t have a party line at Signal. But I think we should be proud of who we are and let people know that there are clear differences that matter to them. It’s not for nothing that WhatsApp is spending millions of dollars on billboards calling itself private, with the load-bearing privacy infrastructure having been created by the Signal protocol that WhatsApp uses.

Now, we’re happy that WhatsApp integrated that, but let’s be real. It’s not by accident that WhatsApp and Apple are spending billions of dollars defining themselves as private. Because privacy is incredibly valuable. And who’s the gold standard for privacy? It’s Signal.

I think people need to reframe their understanding of the tech industry, understanding how surveillance is so critical to its business model. And then understand how Signal stands apart, and recognize that we need to expand the space for that model to grow. Because having 70 percent of the global market for cloud in the hands of three companies globally is simply not safe. It’s Microsoft and CrowdStrike taking down half of the critical infrastructure in the world, because CrowdStrike cut corners on QA for a fucking kernel update. Are you kidding me? That’s totally insane, if you think about it, in terms of actually stewarding these infrastructures.

So you’re saying that AI and surveillance are self-perpetuating: You get the materials to create what we call AI from surveillance, and you use it for more surveillance. But there are forms of AI that ought to be more benevolent than that, right? Like finding tumors in medical scans.

I guess, yeah, although a lot of the claims end up being way overhyped when they’re compared to their utility within clinical settings.

What I’m not saying is that pattern matching across large sets of robust data is not useful. That is totally useful. What I’m talking about is the business model it’s contained in.

OK, say we have radiological detection that actually is robust. But then it gets released into a health care system where it’s not used to treat people, where it’s used by insurance companies to exclude people from coverage—because that’s a business model. Or it’s used by hospital chains to turn patients away. How is this actually going to be used, given the cost of training, given the cost of infrastructure, given the actors who control those things?

AI is constituted by this mass Big Tech surveillance business model. And it’s also entrenching it. The more we trust these companies to become the nervous systems of our governments and institutions, the more power they accrue, the harder it is to create alternatives that actually honor certain missions.

Just seeing your Twitter commentary, it seems like you’re calling AI a bubble. Is it going to self-correct by imploding at some point?

I mean, the dotcom bubble imploded, and we still got the Big Tech surveillance business model. I think this generative AI moment is definitely a bubble. You cannot spend a billion dollars per training run when you need to do multiple training runs and then launch a fucking email-writing engine. Something is wrong there.

But you’re looking at an industry that is not going to go away. So I don’t have a clear prediction on that. I do think you’re going to see a market drawdown. Nvidia’s market cap is going to die for a second.

On Worldcoin, DAOs and digital identity black markets

Molly White has a great essay on Sam Altman’s iris scanning orb and its purported use cases.

Much of Worldcoin’s promise is predicated on the questionable idea that highly sophisticated artificial intelligence, even artificial general intelligence, is right around the corner. It also hinges on the “robots will take our jobs!” panic — a staple of the last couple centuries — finally coming to bear. Worldcoin offers other use cases for its product too, like DAO voting, but it is not the promise to solve DAO voting that earned them a multi-billion dollar valuation from venture capitalists.

Other use cases that Worldcoin has offered seem to assume that various entities — governments, software companies, etc. — would actually want to use the Worldcoin system. This seems highly dubious to me, particularly given that many governments have established identification systems that already enjoy widespread use. Some even employ biometrics of their own, like India’s Aadhaar. There’s also the scalability question: Worldcoin operates on the Optimism Ethereum layer-2 blockchain, a much speedier alternative to the layer-1 Ethereum chain to be sure, but any blockchain is liable to be a poor candidate for handling the kind of volume demanded by a multi-billion user system processing everyday transactions.
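The scalability point can be made concrete with some rough arithmetic. Every figure below is an illustrative assumption on my part, not a measured number for Worldcoin or Optimism, but even with a generous layer-2 throughput the gap is stark:

```python
# Back-of-envelope throughput check. Every figure here is an illustrative
# assumption, not a measured number for Worldcoin or Optimism.
users = 3_000_000_000           # a hypothetical "multi-billion user" base
tx_per_user_per_day = 2         # everyday use: a couple of transactions each
seconds_per_day = 86_400
assumed_l2_tps = 2_000          # a generous assumed sustained layer-2 rate

required_tps = users * tx_per_user_per_day / seconds_per_day
print(f"required: ~{required_tps:,.0f} tx/s "
      f"(~{required_tps / assumed_l2_tps:.0f}x the assumed L2 capacity)")
# required: ~69,444 tx/s (~35x the assumed L2 capacity)
```

Even under these forgiving assumptions, everyday transactions for a multi-billion user base would need sustained throughput tens of times beyond what any current blockchain delivers.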

What will happen when you promise people anywhere from $10 to $100 for scanning their eyeball? What if that’s not dollars, but denominated in a crypto token, making it appealing to speculators? And what if some people don’t have the option to scan their own eyeballs to achieve access to it?

A black market for Worldcoin accounts has already emerged in Cambodia, Nigeria, and elsewhere, where people are being paid to sign up for a World ID and then transfer ownership to buyers elsewhere — many of whom are in China, where Worldcoin is restricted. There is no ongoing verification process to ensure that a World ID continues to belong to the person who signed up for it, and no way for the eyeball-haver to recover an account that is under another person’s control. Worldcoin acknowledges that they have no clue how to resolve the issue: “Innovative ideas in mechanism design and the attribution of social relationships will be necessary.” The lack of ongoing verification also means that there is no mechanism by which people can be removed from the program once they pass away, but perhaps Worldcoin will add survivors’ benefits to its list of use cases and call that a feature.

Relatively speaking, scanning your iris and selling the account is fairly benign. But depending on the popularity of Worldcoin, the eventual price of WLD, and the types of things a World ID can be used to accomplish, the incentives to gain access to others’ accounts could become severe. Coercion at the individual or state level is absolutely within the realm of possibility, and could become dangerous.

On Facial Recognition and Identity Proofing

Wired has a good piece on the IRS in the US caving to public outcry and ditching its integration with ID.me - a service that was supposed to verify identities by matching video selfies to existing records. It’s understandable why this caused concern, given that facial recognition is rife with false matches, bias and a reputation for invasiveness. With fraud a pressing issue now that a majority of us (at least in Australia) access nearly every civic service online, governments will need to think hard about how they balance policy, privacy and messaging.

Unfortunately, the landscape at the moment is messy, populated by third-party vendors still finding their feet in an area where privacy and policy concerns are outweighed by the sexier draws of usability and convenience.

“The fact we don’t have good digital identity systems can’t become a rationale for rushing to create systems with Kafkaesque fairness and equity problems.” - Jay Stanley, ACLU

It’ll be interesting to see how Australia’s Trusted Digital Identity Framework (TDIF) looks to address some of these inherent problems as its standards continue to expand.