Musk's xAI Blames 'Unauthorized' Tweak for 'White Genocide' Messages


Elon Musk's artificial intelligence startup has blamed an "unauthorized modification" for causing its chatbot Grok to generate misleading and unsolicited messages referring to "white genocide" in South Africa.

The chatbot, developed by Musk's company xAI, ignited controversy this week by answering multiple user prompts with right-wing propaganda about the alleged oppression of white South Africans.

"How often has HBO changed their name?" an X user asked the bot, according to online screenshots.

The bot gave a brief answer about HBO, but then launched into a rant about "white genocide" and the anti-apartheid chant "Kill the Boer."

In response to a user who asked why Grok was fixated on the subject, the chatbot replied that it had been instructed by its creators at xAI to address the topic of "white genocide."

Musk, the South African-born boss of Tesla and SpaceX, has previously accused South Africa's leaders of "openly pushing for genocide of white people in South Africa."

In a statement blaming an "unauthorized modification" to Grok, the company said the change directed the chatbot to give a specific response that "violated xAI's internal policies and core values."

Following a "thorough investigation," it was implementing measures to make Grok's system prompts public, to change its review processes and to put in place a "24/7 monitoring team" to deal with future incidents, it added.

Back on X, Grok began deleting the controversial responses.

When a user questioned the deletions, the bot said: "It is unclear why responses are being removed without specific details, but X's moderation policies likely play a role."

"The topic of 'white genocide in South Africa' is sensitive, often tied to misinformation or hate speech, which violates platform rules," it added.

- 'Not reliable' -

The digital faux pas highlights the challenge of moderating the responses of AI chatbots, a rapidly evolving technology, in an internet landscape rife with misinformation, as tech experts push for stronger regulation.

"Grok's strange, unrelated replies are a reminder that AI chatbots are still a nascent technology, and may not always be a reliable source of information," the tech site TechCrunch wrote.

"In recent months, AI model providers have struggled to moderate the responses of their AI chatbots, which has led to odd behavior."

Earlier this year, OpenAI chief executive Sam Altman said the company had rolled back an update to ChatGPT after the chatbot became overly sycophantic.

Grok, which Musk promised would be an "edgy" truth-teller when it launched in 2023, has been mired in controversy.

In March, xAI acquired the platform X in a $33 billion deal that allowed the company to integrate the platform's data resources with the chatbot's development.

The investigative outlet Bellingcat recently found that X users were using Grok to create non-consensual sexual imagery, prompting the bot to undress women in photos they had posted on the platform.

Last August, officials from five American states sent an open letter to Musk, urging him to fix Grok after it spread false election information.

In another embarrassment for Musk, the chatbot recently suggested the billionaire was probably the "biggest disinformation spreader on X."

"The evidence points to Musk because of his ownership of X and his active role in amplifying misinformation, especially around elections and immigration," the chatbot wrote.

As many X users rely on Grok to verify information, the chatbot has in several cases fact-checked actual Russian disinformation claims and judged them to be true, according to the disinformation watchdog NewsGuard.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," NewsGuard researcher McKenzie Sadeghi told AFP.

"Despite this seemingly growing reliance on the technology for fact-checking, our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news."

(This story was not edited by NDTV staff and is automatically generated from a syndicated feed.)

