The AI Act is set to become the most comprehensive regulation of AI in the Western world.
The European Union has taken a historic step in regulating artificial intelligence (AI) by reaching a provisional agreement on the landmark AI Act. This legislation marks the first attempt by a major regulatory body to establish comprehensive rules governing the development and deployment of AI technology, with significant implications for tech companies operating in the EU and beyond.
The Negotiations
After more than 24 hours of negotiations, representatives from the European Parliament and the 27 member states had made significant progress but remained divided over several issues, including biometric surveillance. Negotiators resumed their discussions, with EU internal market chief Thierry Breton leading the talks until an agreement was reached.
Key Provisions of the AI Act
Risk-based approach: The Act categorizes AI systems based on their potential risks and imposes stricter requirements on high-risk systems, such as those used in healthcare, law enforcement, and critical infrastructure.
Transparency and explainability: Developers and users of high-risk AI systems will be obligated to provide relevant information about their algorithms and decision-making processes, allowing for greater oversight and accountability.
Prohibition of certain uses: The Act bans the use of AI for social scoring, mass surveillance, and manipulative practices, aiming to protect fundamental rights and freedoms.
Enforcement and compliance: The Act establishes a framework for enforcement, including the ability to impose fines on companies that violate its provisions.
Implications for ChatGPT and Similar Technologies
ChatGPT, a large language model capable of generating human-quality text, is covered by the Act's provisions on general-purpose AI, which impose transparency obligations and additional requirements on models deemed to pose systemic risk. Under the AI Act, developers of ChatGPT and similar technologies will be required to:
Conduct risk assessments: Evaluate the potential risks associated with their technology and implement appropriate mitigation measures.
Provide transparency: Disclose information about their algorithms and training data to enable understanding and oversight.
Mitigate bias: Take steps to prevent their technology from generating biased or discriminatory outputs.
Implement safeguards: Ensure that their technology is used in a responsible and ethical manner.
Global Impact of the AI Act
While the AI Act applies directly only to companies operating in the EU, it is expected to have a significant global impact. As the first major piece of AI legislation, it sets a precedent for other countries and regions to follow, much as the GDPR did for data protection. In practice, large technology companies operating globally will likely apply the Act's requirements beyond the EU rather than maintain divergent systems for different markets.
Future Actions
The AI Act must still be formally adopted by the European Parliament and the Council before it can come into force. This is expected to happen in early 2024, with the Act taking full effect two years later. The success of the AI Act will depend on its effective implementation and enforcement: the EU will need the resources and expertise necessary to oversee the development and deployment of AI technology and to hold companies accountable for violations.
The AI Act represents a significant step forward in regulating AI technology and addressing the potential risks associated with it. While the details of the Act are still being finalized, it is clear that the EU is committed to ensuring that AI technology is developed and used in a responsible and ethical manner. The success of the AI Act will be closely watched by other countries and regions considering similar legislation, potentially paving the way for a more global framework for AI governance.
Published weekly on Friday, the Legal.io Newsletter covers the latest in legal, talent & tech