Programming note: Platformer will be off on Thursday.

Call it the invasion of the stream snatchers. In November, Platformer reported on a strange phenomenon taking place on the world’s biggest music platform. Artists who had left Spotify found that, within a few months, some of their tracks and albums had reappeared within the app. At times, the tracks sounded like bad imitations of the original. At others, they sounded nothing like the original at all. Regardless of their fidelity to the original, though, imitation tracks were racking up millions of streams — and generating revenue in the name of the authentic artist.

As I wrote last year, this was particularly annoying to groups like the Australian psychedelic rock band King Gizzard and the Lizard Wizard, which had quit Spotify on principle last year after news that then-Spotify CEO Daniel Ek led a €600 million investment in Helsing, which makes military drones and AI tools for weapons systems. They wanted their name and works off Spotify for good. They thought they had left. How were they supposed to police their presence on the platform, when they weren’t supposed to have a presence there at all?

Spotify removed a host of impostor Gizzard tracks last year after Platformer inquired. Among the challenges, the company told me, is that AI-powered music apps are making it increasingly easy to generate convincing fakes. But the larger question remained. Spotify creates strong financial incentives for this kind of domain squatting. Would the company intervene to stop it?

On Tuesday, we got our answer. The company introduced a new feature in beta called artist profile protection designed to prevent what happened to King Gizzard from happening to anyone else. The company described how it works in a blog post:

For the first time on any music streaming service, we’re giving you the ability to review and approve or decline releases delivered to Spotify from most providers. To protect your artist identity and prevent listener confusion, only the releases you approve will appear on your artist profile, contribute to your stats, and show up in recommendations to your listeners.
Artist Profile Protection isn’t necessary for every artist, but could make sense if you’ve experienced repeated incorrect releases, have a common artist name, or want more control over what appears on your profile. It requires you to actively review releases before they go live, so may delay or block your legitimate releases if you forget to take an action. It's best for those who are comfortable very actively managing their catalog.
The company will also assign artists a unique code, called an artist key, to share with the teams that manage their catalogs. They can include the key when submitting music to Spotify so that it is automatically approved to show up on the artist’s profile.

On one level, this is one of those features that makes you wonder how the platform hadn’t thought to do this already. In hindsight it seems crazy that you could just pretend to be another band and upload songs to their profile without anyone noticing the scam. (Spotify does have systems in place designed to prevent this from happening; the company told me last year that they don’t work as well for artists who have left the platform.)

On another level, though, the problem really does seem to have grown more complicated over the last year. Recording a passable cover of a relatively arcane back-catalog song from a niche artist used to take a modicum of effort. Today, tools like Suno and Udio let anyone create one with a text prompt.

This will not be the last story we hear this year about new AI tools enabling spammers and scammers to overcome defense systems that until now were mostly working fine. At the same time, I’m struck by just how slippery questions about identity are becoming in the AI age. Increasingly, identity is a raw material that scammers, spammers, and even platforms can remix without your permission — unless you show up to claw it back.

A few months after my story about King Gizzard, the British soul singer Jorja Smith found that her voice appeared to have been cloned and then used by the dance act Haven to provide vocals for a song that then went viral on TikTok. Streaming services banned the track after an outcry; when her label asked for royalties, Haven swapped out “her” vocals for someone else’s. The band acknowledged using Suno to create the vocals but said they were based on the voice of Haven’s Harrison Walker and heavily modified. From Smith’s perspective, though, I imagine the experience was surreal and not a little depressing: having a hit song in what sounded like her voice, created without her knowledge or consent, and from which she made no money.

Experiences like these extend well beyond music. I got my own taste of it earlier this month when I was one of dozens of people conscripted into Grammarly’s since-deleted AI editor program. As with the other cases, something AI-generated was pretending to be me, or at least to be “inspired” by me, and any economic value it generated would be claimed by someone else.

There are even darker versions of this, of course, and they can have dire consequences. xAI’s Grok caused outrage around the world earlier this year when it generated an estimated 1.8 million sexualized images of people, including minors. To Grok, even children no longer had any right to their own identity; Elon Musk called it a “free speech” issue. And while Meta actually paid some celebrities to become part of its own chatbot product, actors like John Cena and Kristen Bell still found last year that their bots could be used in sexual roleplay scenarios, including with minors. Even when these systems have the permission of those involved, they can still behave in ways no one involved would want.

Spotify’s move to protect artists is the right one, even if it arrives a bit overdue. The rest of us will need protections of our own. The good news is that at least 45 states have passed laws addressing deepfakes.
Last year Congress managed to pass (and President Trump signed) the TAKE IT DOWN Act, which criminalizes the spread of sexualized deepfakes and forces platforms to remove them within 48 hours of being notified.

The bad news is that, as Spotify has been learning, the threats are proliferating. And while we may have found a way to stop the stream snatchers from colonizing the major platforms, the pods are still opening everywhere else.

You can still maintain a grip on your digital identity. But increasingly, you’re going to have to fight for it.

Sponsored

Remove your personal data from Google and ChatGPT

Have you ever searched for your personal information on Google or ChatGPT? You’d be shocked by what people can find out about you. Your name, phone number, and home address are just the beginning. Anyone deeply researching you can find out about your family members and relationships, SSN, health records, financial accounts, and employment history.

Incogni’s Unlimited plan puts you back in control of your online privacy, keeping you safer from harmful scams, identity theft, financial fraud, and other threats impacting your physical safety.

Exclusive deal for tax filing season: try Incogni here and get 58% off your subscription with code PLATFORMER

Following

Judge in Anthropic case finds the Pentagon’s actions “troubling”

What happened: U.S. District Judge Rita Lin called the Pentagon’s actions against Anthropic “troubling” during a Tuesday hearing on Anthropic’s civil suit against the Department of Defense. The Pentagon’s actions in designating Anthropic a “supply chain risk” over its refusal to amend a contract “don't really seem to be tailored to the stated national security concern,” she said. Instead, this “looks like an attempt to cripple Anthropic," Lin said.

Anthropic is asking the U.S. district court for a preliminary injunction against the administration after their relationship soured over Anthropic’s concerns about domestic mass surveillance and lethal autonomous weapons. The company hopes the judge will reverse three government actions. The first is a directive by President Trump, posted to Truth Social, that government contractors stop using Anthropic. The second is a directive from Defense Secretary Pete Hegseth, posted to X, that DoD contractors cannot “conduct any commercial activity with Anthropic.” The third is the supply chain risk designation, a classification previously used only for foreign adversaries.

During the hearing, the Pentagon’s counsel argued that Hegseth’s directive wasn’t legally binding. Judge Lin said she found that argument “pretty surprising.”

Why we’re following: I, too, found it surprising that the DoD’s lawyer argued a directive from the head of the DoD (which ended “this decision is final”) had no legal effect. I genuinely burst out laughing while listening to the court hearing, as the DoD’s lawyer struggled to explain why Hegseth would post such a command if he supposedly had no legal authority to issue it in the first place. This episode looks like another example of the Trump administration playing fast and loose with the law in order to intimidate its opposition. Tuesday’s hearing offered a sign that this time, it might not work.

What people are saying: Jessica Tillipman, associate dean for government procurement law at George Washington University, told Breaking Defense that the administration’s public statements against Anthropic weakened its case.
If the administration hadn’t publicly attacked Anthropic, Tillipman said, it would have been easier to argue that it had acted on rational grounds based on the law. But “they have statements on the record right now that feel very much like this is a punitive, retaliatory act,” she said. “Trump saying ‘I fired [them] like dogs,’ that’s Exhibit No. 1. [And] when I saw the Secretary’s statement I was like, I’m sure the lawyers for Anthropic could have it framed.”

—Ella Markianos

What happened: A jury in New Mexico found that Meta willfully violated the state’s consumer protection laws and misled users about the safety of its platforms, ordering the company to pay $375 million in damages. The lawsuit, brought by New Mexico Attorney General Raúl Torrez in 2023, followed an undercover investigation in which Meta platforms allegedly inundated a fake profile of a 13-year-old girl with “images and targeted solicitations” from child abusers.

The suit in New Mexico is the first to reach trial in a series of social media lawsuits. Jurors are still deliberating in a separate trial in Los Angeles over whether Meta and YouTube knew the design of their platforms inflicted harm on their younger users. TikTok and Snap settled in that case rather than risk a trial.

Why we’re following: Social platforms have long been able to avoid liability in cases where users experience harms by citing Section 230 of the Communications Decency Act, which shields them from the consequences of what other users post. The New Mexico case represents an effort to target the design of the system rather than the individual posts. It’s not any one Instagram Reel that causes you to develop an eating disorder, this argument goes; it’s that Meta’s recommendation algorithms see that you enjoy watching such Reels and serve you more of them (including via push notifications) until you do. This was a relatively fringe legal theory until recently. The New Mexico verdict suggests it may be a winner — which has huge implications for liability for other platforms that host user-generated content.

What people are saying: “We respectfully disagree with the verdict and will appeal,” Meta spokesman Andy Stone wrote on X. “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.” Stone also noted that the damages are “just a fraction of what the State sought.”

While the New Mexico trial deals with a slightly different issue than the Los Angeles one, the “jury finding for the state across the board is, however, a big moment for the crowd arguing that product liability offers a way around Section 230,” Reuters’ Meta reporter Jeff Horwitz wrote.

—Lindsey Choo and Casey Newton

Side Quests

The State Department launched a new entity, the Bureau of Emerging Threats, to anticipate and respond to advanced cyberattacks from Iran and other US adversaries.

A look at how connections to Palantir have become a bane for political candidates, who are facing pressure not to take donations from the company.

The FCC banned the import of all new foreign-made consumer routers, a move seemingly targeted at China, which controls an estimated 60 percent of the US market.

Senators demanded the Commerce Department suspend Nvidia’s exports of AI chips to China and Southeast Asian countries in response to the Super Micro smuggling scandal.
There was a 260-fold increase in AI-generated CSAM over the past year, the Internet Watch Foundation said.

Baltimore became the first major US city to sue xAI over Grok’s deepfake porn issue.

Finland shelved a plan to move its election platform to AWS amid worries about US stability.

AWS services in Bahrain were disrupted due to drone activity in the area.

OpenAI shut down Sora as part of its pivot back to making enterprise software. (Another one of my 2026 predictions comes true.)

Disney canceled plans for a $1 billion investment in OpenAI tied to a now-useless Sora deal.

Sam Altman will no longer oversee OpenAI’s safety and security teams so he can focus on fundraising and infrastructure; the company has also finished pre-training a new model codenamed “Spud.”

OpenAI asked UK regulators to force Google to offer ChatGPT as an alternative search engine on Android and Chrome via mandatory choice screens.

OpenAI hired former Meta ad executive Dave Dugan to lead ad sales. The company is overhauling the shopping experience in ChatGPT after its Instant Checkout flop. Meanwhile, it’s reportedly nearing a deal to raise about $10 billion as part of a round valuing the company at $730 billion.

OpenAI released a set of open-source teen safety prompts developers can use when training their models.

The OpenAI Foundation announced key hires and committed to investing $1 billion in AI causes. Co-founder Wojciech Zaremba will lead AI resilience; Jacob Trefethen, who previously worked at Coefficient Giving and co-hosts the Hard Drugs podcast, will oversee life sciences work, including an initiative to cure Alzheimer’s disease.

Claude can now take control of your computer and perform tasks, just like OpenClaw.

Anthropic announced a new