With help from Sam Sutton
Meta has had its fair share of human rights issues, from the company's role in the Rohingya massacre to the Cambridge Analytica scandal.
So it's only natural that the human rights community would be skeptical of its pledge to revolutionize the way we use the internet itself, via the 3D overlay the metaverse would place on the world. Whether Meta can prove those skeptics wrong might come down to what tradeoffs the company and its fellow virtual worldbuilders are willing to make.
For now, the company is, somewhat predictably, keeping its cards close to its chest. Yesterday evening Meta’s director of human rights, Miranda Sissons, shed some light on the subject during a panel discussion disquietingly titled “Human rights and the metaverse: are we already behind the curve?”
Sissons first touted the potential for AR/VR technology to improve quality of life in the real world, through its uses in fields like automotive safety or medical diagnostics. But that’s not the “metaverse.” And when it comes to the rules for the new virtual spaces Meta is building, well… those are still contingent on choices the company has yet to make.
“Many of the salient risks are related to our behaviors as humans,” Sissons said. “And many of those behaviors can be mitigated or prevented through guardrails, standards and design principles, as well as design priorities.”
But what are those principles, exactly? The human rights community has a slew of formal tools for evaluating any given technology’s impact and preventing harms like those in the Rohingya and Cambridge Analytica cases, and Sissons argued that companies should follow the human rights compliance frameworks put forth by groups like the United Nations and the World Economic Forum.
The Electronic Frontier Foundation’s Katitza Rodriguez, who has called for companies to adopt strict limits on the kinds of data their devices can collect and store, including potential “emotion detection,” also attended yesterday’s session virtually. She said Sissons’ vision might require Meta to make some uncomfortable trade-offs.
“You have to educate and train engineers, marketing teams, etc., on the importance of human rights and the consequence of their product to society,” Rodriguez said. “It’s hard, but it’s important… How to mitigate human rights risks? Avoid including face recognition in the product. These are difficult choices to make.”
And there’s no shortage of examples of what happens when these choices don’t get made early on in a tech’s lifespan.
“What we’ve learned from other immersive worlds like video gaming is that the norms that are set early on really define the culture of the space,” Chloe Poynton, the panel moderator and an advisor at the consulting firm Article One, told me afterward.
Daniel Leufer, a senior policy analyst for Access Now, argued passionately on the panel against the frequent refrain that it’s not possible for regulators to keep pace with the development of new technologies, saying “often very basic things like data protection, transparency, access to information, do so much work.”
Brussels, where Leufer is based, has clearly caught on to this notion, passing a raft of data privacy and AI regulations in recent years. As hazy as Meta’s promises might be at the moment, there are signs that regulators stateside might be catching up: this week’s surprise bipartisan draft privacy bill is beginning to provide clarity around who has the power to set and enforce privacy laws.
Today the National Endowment for Democracy released a report on the “Global Struggle Over AI Surveillance” that addresses “both the democracy implications of new technologies and vectors for civil society involvement in their design, deployment, and operation.”
The authors emphasize the threat AI-powered surveillance poses to civil liberties and privacy, homing in on its potential to give autocratic regimes a leg up in repressing societal groups and chilling freedom of expression itself.
The report offers a number of potential remedies, including establishing more concrete rules and regulations around AI development and use (as Europe is doing) and creating a global oversight body.
The report pays special attention to China, whose fearsome high-tech surveillance state is already a model for oppressive regimes across the globe.
“Beijing is moving rapidly to write rules for AI systems,” the authors note. “These efforts will give Beijing substantial sway when it comes to shaping global rules around AI surveillance technology, which could in turn diminish the role of human rights norms in these frameworks” — all the more reason, they argue, to move more quickly in the free(-r) world.
First in Digital Future Daily: The attorney who defended Obamacare in front of the U.S. Supreme Court is getting into bitcoin.
Grayscale Investments has tapped former Solicitor General Don Verrilli to assist in a looming legal battle with the Securities and Exchange Commission over plans to convert its $20 billion Bitcoin trust into an investment fund whose shares would trade on the NYSE.
The SEC has rejected similar bids for a bitcoin-based exchange-traded fund, including a high-profile effort led by former Trump adviser Anthony Scaramucci, on the grounds that such products are too risky for retail investors.
Grayscale is hoping to tip the scales by bringing in Verrilli, who was the Obama administration’s top litigator on landmark health care and same-sex marriage cases, to hone its pitch to the markets regulator and provide backup if it takes its argument to court.
The firm has been laying the groundwork for a legal challenge for months, arguing that a rejection of its bid would be unfair given the SEC’s approval of ETFs tied to bitcoin futures contracts, a more indirect financial instrument that’s regulated by the Commodity Futures Trading Commission. It has also sought to build a grassroots movement around its effort through an aggressive public relations and advocacy campaign that blanketed Washington’s Union Station with ads and inundated the SEC with support letters.
“Hopefully, we won’t need to be going to court because we’re doing everything we can to convince the commission that approval is the right answer here,” Verrilli, a partner at Munger, Tolles & Olson, said in an interview. He added that the SEC is going to have “a great deal of difficulty distinguishing the futures ETF from the spot [market] ETF.” — Sam Sutton