US attorneys general call on X to crack down on sexualized deep fakes


It looks like more legal trouble is on the way for X, with officials in several US states taking regulatory action against Elon Musk’s app over its generation and distribution of sexualized images.

The push comes after data indicated that, at one point, Grok was generating more than 6,000 sexualized images per day, all of which were publicly accessible in the app.

That led to a major backlash in several regions, and in some areas even a ban on both Grok and X. X initially stood firm in the face of the criticism, with Musk himself pushing back against the complaints.

But in reality, there is no way to justify generating nude and sexualized images of real people, and there is no need for this functionality to exist, whatever politically charged message you want to attach to it. As a result, amid the threat of further bans and restrictions, X has since moved to restrict the feature.

But that may have come too late. Yesterday, the European Commission announced an investigation into Grok, and into xAI’s safety measures for protecting against misuse of its tools.

And now a group of 37 US attorneys general wants to take action against xAI as well.

As reported by Wired:

“On Friday, a bipartisan group of attorneys general published an open letter to xAI demanding that it ‘immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are overwhelmingly targeted by [non-consensual intimate images].’”

In the letter, the group raised serious concerns about “artificial intelligence producing deepfake, non-consensual intimate images (NCII) of real people, including children.”

And while X has now taken action, the group is calling for more responsibility from Musk and his team.

“We recognize that xAI has implemented measures intended to prevent Grok from generating NCII and appreciate your recent meeting with several of the undersigned Attorneys General to discuss these efforts. […] Furthermore, you claim to have implemented technical measures to prevent the @Grok account from ‘allowing the editing of images of real people in revealing clothing such as bikinis.’ But we are concerned that these efforts may not have fully resolved the problems.”

The attorneys general further suggest that X’s AI tools were effectively designed for this purpose, with built-in features that facilitate malicious use.

“Grok not only enabled this harm on a massive scale, but appeared to encourage this behavior by design. xAI purposefully developed its text models to engage in explicit exchanges and designed image models with a ‘spicy mode’ that generated explicit content, resulting in content that sexualizes people without their consent.”

As a result, the group is calling on Musk and xAI to implement stronger safeguards around Grok, and the attorneys general also want greater transparency into how these incidents are being addressed.

That means more challenges for X: improving transparency, as well as expanding its efforts to implement safeguards and restrictions on Grok use.

Musk, of course, is not a fan of such oversight, and it may take a bigger legal battle to force these changes, which he will no doubt also use as an opportunity to present himself as the face of free speech as government regulators look to crack down.

Musk’s main refrain in this case has been that other apps facilitate the same options, and that regulators are not going after other nudification and AI generation apps with the same vigor.

But the attorneys general also address this:

“While other companies are also responsible for enabling the creation of NCII, xAI’s size and market share make it an industry leader in artificial intelligence. Unique among major AI labs, you connect these tools directly to a social media platform with hundreds of millions of users. So your actions are of paramount importance. The steps you take to prevent and remove NCII will set industry benchmarks to protect adults and children from harmful deepfake, non-consensual intimate images.”

It’s interesting to consider this effort in light of Elon’s own very public, very loud stance against CSAM, with Musk announcing, shortly after taking over Twitter, that fighting CSAM was “Priority #1” during his time running the app.

Musk had criticized Twitter’s former leadership for failing to tackle child sexual exploitation on the app, and X has since made some big strides in addressing it.

But in this case, Musk’s inclination to fight back seems to contradict those claims.

It’s clear that the broader political view of CSAM content has shifted; it was once a primary focus of right-wing voters, many of whom would now prefer to overlook the Epstein files.

Perhaps that has changed Elon’s own position on this issue, although on the face of it, this seems like it should be a major concern for this group.

Either way, we’ll see how Musk responds, and whether further action is taken on this front.
