EU AI Act rules for GenAI come into effect this week

It is just over four years since the European Union first proposed legislation to regulate the technology companies that build AI systems and the ways users deploy them. A lot has changed since then.

In November 2022, OpenAI launched ChatGPT, which – as well as being able to write convincing poems – prompted technology companies to wonder how they would make money with the software.

Fast-forward a few years, and almost every enterprise application builder is incorporating generative AI, promising improved productivity. Once it became clear how models such as ChatGPT were trained – including those used for visual and musical content – copyright holders began to wonder whether they were being rewarded for their creations, leading to a series of lawsuits.

At the same time, Europe and the US began to diverge in their approach to tech regulation following the re-election of Donald Trump as US president at the end of 2024.

The EU's AI Act was passed in March last year, making it the first legislation specifically designed to tackle the risks of artificial intelligence, including biometric categorization and manipulation of human behavior, as well as stricter rules for the introduction of GenAI.

The legislation comes into effect in phases. Later this week (2 August), a set of rules takes effect for builders of generative AI models, such as ChatGPT, requiring developers to evaluate models, assess and mitigate systemic risks, carry out adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.

To ease compliance with the new rules, the EU has published a code of practice and guidelines for developers of large language models (LLMs) and other types of GenAI. Instead of complying directly with the articles set out in the Act, GenAI developers can refer to the code in their dealings with the EU AI Office, which was set up to oversee the implementation of the law.

But not everyone is happy or willing to participate. Social media giant Meta, which built the Llama LLM, said the guidelines introduced "legal uncertainties" beyond the scope of the law and refused to sign up to the code of practice.

Joel Kaplan, chief global affairs officer at Meta, said in a LinkedIn post that "Europe is heading down the wrong path on AI."

The guidelines were delayed – the European Commission had planned to publish them in May – and they still need to be approved by the EU member states and the Commission, leaving some developers worried about how little time they have to adopt the code and whether it might still change.

Nils Rauer, partner at Pinsent Masons and joint lead of its global AI team, told The Register that makers of GenAI models are obliged to ensure they comply with the Act, and that they are generally well prepared for it. However, he said there are reservations about the guidance.

"It must be fit for purpose. It must be very practical. If a large number of lawyers work on such an important document, many views come together, and you can see that by reading the code of practice. They have done their best, but it remains quite generic on a number of issues, including copyright problems."

Monika Sobiecki, partner at law firm Bindmans, said that although observers may not have much sympathy for Meta and other AI builders, the fact that the guidance landed on 10 July gave them only about three weeks to digest it before the law enters into force.

"The complaint coming from some of these large general-purpose AI producers is: 'Well, you gave us about three weeks.' Also, the guidance has not yet fully completed the process [discussion between the European Parliament, the Council of the European Union, and the European Commission]. Although we have most of its form, you would have expected it to be approved by the European Commission and the parliament before it is implemented and compliance is expected," she told us.

However, the EU was propelled by a sense of urgency in introducing the AI legislation, seeking to distinguish itself from the US as the technology takes hold.

"There is a sense that two broader trends are taking place at the same time: one is the proliferation of AI tools, especially GenAI tools, and a feeling of a regulatory gap around them. For example, the European Commission has stated that it wants the EU market to be seen as a place for developing safe AI," Sobiecki said.

Last week, the Trump administration introduced an AI action plan that supports a more hands-off approach to regulation compared to the EU.

"When we talk in geopolitical terms, you have the American market, which wants to move fast and break things," said Sobiecki. "Donald Trump's vision of American tech dominance is to deregulate everything and give AI free rein. The EU has prioritized the idea of AI safety."

Although critics claim the AI Act only tackles transparency and bias rather than the broader effects of AI, it at least pushes AI producers to ensure they have fully explained what their AI does, she said.

The EU and the US announced a trade agreement over the weekend. It is expected to impose 15 percent tariffs on European exports of cars, pharmaceutical products, and semiconductors to the US, but it remains a high-level political agreement with the details still to be completed.

While Pinsent Masons' Rauer agreed that the AI Act could be dragged into commercial negotiations between the EU and the US – under the current deal or later arrangements – he said developers should not let this influence their approach to compliance.

"If there is a huge gap in the regulatory framework between the US and the EU, this causes trade barriers, and therefore they will negotiate to somehow water down certain elements [of the AI Act]," he said. "That said, as we stand now, the large American companies dealing with AI have all done their homework, and they [will not] wait until the last minute to see some kind of appeasement [from the EU]. They can't run their companies [assuming] there will be a deal."

At the same time, companies that use AI have been trying to understand how the law applies to them and their supply chains.

Rauer also said that clients have asked his team to produce more tailored guidance on how to comply with the law, down to the level of applying AI to drug discovery or to HR and recruitment. There has also been greater scrutiny of supply contracts.

"You have standard contracts that do not necessarily reflect that AI is used in the supply chain," he said. "We have drafted clauses that ask whether suppliers use AI when they deliver goods and services, and how those AI systems are trained."

Others were worried about whether they could use material produced for them by marketing and advertising agencies with the help of AI.

"For example, big brands in the automotive sector have been confronted with lots of queries from their marketing agencies asking if they can use AI to create campaigns. Video footage created using AI can be fantastic, for instance, and cheaper."

"The tricky thing here is that if you create something by means of AI, it is not a human creation, and therefore you get no copyright for it. There is a huge discussion about how much you have to do in terms of human influence, human input, to ensure that the output of the work product qualifies for copyright … and that is very important and affects industries from pharmaceuticals to automotive."

The AI Act is being introduced in phases. In February it prohibited certain activities, including biometric categorization systems that claim to sort people into groups based on politics, religion, sexual orientation, and race. The untargeted scraping of facial images from the internet or CCTV, and emotion recognition in workplaces and educational institutions, were also forbidden.

After the rules for general-purpose AI come into effect later this week, the next category is high-risk systems. From August 2026, rules will apply to systems with the potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law.

Although the introduction of the GenAI guidance has been welcomed by some – and rejected by others – it is only one milestone in a long journey. With the US having chosen a different path from the EU, organizations that build and use AI can only strive to follow its progress while complying with the law. ®
