
EU AI Act 2023: Detailed Analysis of Successes and Areas for Improvement

If you are curious to learn more about the AI Act, you are in the right place!

We will provide you with a clear and detailed overview of this groundbreaking legislation. You will explore the various aspects and implications of this regulation, gaining a better understanding of how the EU is shaping the future of AI. Keep reading to immerse yourself in a world where innovation meets responsibility!

Let's start with the basics... what is the AI Act?

The AI Act (Artificial Intelligence Act) is a legislative proposal of the European Union aimed at regulating the use of artificial intelligence (AI) within the member states.

This act represents one of the world's first attempts to create a comprehensive regulatory framework for AI, addressing a range of challenges and risks associated with this rapidly evolving technology.

The AI Act is a fundamental part of the EU's strategy to shape the development and use of AI in a safe, ethical manner, in line with European values and regulations.

Introduction: The European Union and the AI Act

The adoption of the AI Act in the European Union represents a crucial moment for the future of artificial intelligence.

This unique legislation aims to navigate the delicate balance between technological progress and the safeguarding of fundamental societal values.

As AI continues to permeate every aspect of our lives, from national security to everyday life, the AI Act emerges as a beacon of regulation, establishing standards that could shape the responsible use of AI globally.

In this dynamic context, we explore how the EU is at the forefront of this revolution, setting a path that others may follow.

At the heart of the EU's AI Act is its commitment to responsibly and carefully regulate the applications of artificial intelligence, with specific emphasis on those considered high-risk.

This section of the law critically addresses the use of AI in sensitive areas such as law enforcement and the management of essential services, highlighting the need for careful oversight.

It is a focal point that reflects the EU's desire to balance technological progress with safety and ethics, ensuring that AI is used in ways that enhance society without compromising its fundamental values.

EU Priorities: Privacy Protection

The protection of citizens' privacy has always been a priority for the EU, as demonstrated by the well-known GDPR regulation for online data protection.

With the evolution of AI, which potentially threatens the security of personal data, the EU has acted promptly.

One of the most controversial issues concerned the use of AI in surveillance and law enforcement, such as real-time biometric recognition or the use of algorithms to predict crimes.

Ultimately, the ban on biometric recognition was decided, with exceptions for three specific cases:

  1. In the presence of an imminent threat of a terrorist attack.
  2. For the search of missing persons.
  3. For the investigation of serious crimes.

Regulation of Language Models and Deep Fakes

In addition, regulation has been introduced for advanced language models such as OpenAI's GPT-4 and Google's LaMDA, which will be subject to strict requirements in terms of transparency and cybersecurity.

Furthermore, deep fakes – audiovisual content created by AI that realistically imitates human appearance and voice – will need to be marked with a specific digital watermark, along with a text notice informing the user of their non-authentic nature.

As for AI chatbots, they will be allowed to use only sources whose copyright terms permit such use.

Companies that violate the rules risk fines of up to 7% of global annual turnover.

Protection of Fundamental Rights

The creators of the largest general-purpose artificial intelligence systems, such as those powering the ChatGPT chatbot, will face new transparency requirements. According to EU officials and previous drafts of the law, chatbots and software that create manipulated images like "deepfakes" will have to make clear that what people see has been generated by artificial intelligence.

The AI Act aims to safeguard the fundamental rights of individuals, with particular emphasis on preventing discrimination, privacy violations, and other potential harms caused by the improper use of AI.

Although the political agreement on the AI Act has been reached, the process for its full implementation and enforcement is still ongoing.

This includes finalizing technical details and formal approval from both the European Parliament and the European Council. Up until the final moments of the negotiations, there was heated debate among politicians and nations over the terminology to use and over finding a balance between promoting innovation and the need to protect society from potential risks.

The agreement in Brussels came at the end of an intense three-day negotiation phase, including a marathon inaugural session of 22 hours that began on Wednesday afternoon and continued into the following day.

The final text of the agreement was not immediately released, as it was expected that discussions would continue behind closed doors to refine the technical details, which could delay the final passage.

Final decisions will need to be made by the European Parliament and the European Council, which includes representatives from the 27 member countries of the Union.

Let's summarize together and see what the successes have been and what areas need improvement.

Successes

  1. Risk-Based Regulation: The AI Act has introduced a risk-based approach to classify AI systems. This method allows for the identification and proper management of risks associated with the use of AI, ensuring safety and respect for fundamental rights.
  2. Transparency and Accountability: The focus on increasing transparency and accountability of AI systems is a significant step forward. Requirements for clear identification of chatbots and deepfakes increase user awareness of the artificial origin of the content.
  3. Limitations on Facial Recognition Use: The regulation imposes strict restrictions on the use of facial recognition by police and governments, protecting privacy and preventing potential abuses.
  4. Safeguarding Fundamental Rights: The AI Act aims to protect the fundamental rights of individuals from the unethical use of AI, particularly in the areas of non-discrimination and privacy protection.
  5. International Positioning: Europe has positioned itself as a leader in global AI regulation, establishing a model for other countries and regions.
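The risk-based approach in point 1 can be pictured as a tiered classification. The sketch below is purely illustrative: the tier names follow the Act's widely reported four-tier model (unacceptable, high, limited, minimal risk), but the example use-case mappings are our own assumptions for demonstration, not the Act's actual legal classification, which depends on its annexes and final text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers widely associated with the AI Act's approach."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # permitted, but with strict obligations
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping of hypothetical use cases to tiers; the real
# classification is defined by the regulation, not a lookup table.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for one of the example use cases above."""
    return EXAMPLE_TIERS[use_case]
```

The point of the tiered model is that obligations scale with the tier: a spam filter carries almost no duties, while a recruitment-screening system would face conformity assessments and oversight.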

Areas for Improvement

  1. Implementation Timelines: There are concerns about the implementation timelines of the AI Act, with some aspects of the policy potentially not being implemented for 12-24 months, a considerable period given the rapid development of AI.
  2. Clarity and Definitions: There is a need for greater clarity in the definitions and terms used in the AI Act, particularly regarding risk classification and technical definitions.
  3. Balancing Innovation and Security: There is ongoing tension between the need to promote innovation in AI and the necessity to protect society from its potential harms.
  4. Coordination and Enforcement: The need for coordination among member states for the effective application of the AI Act remains a challenge. Additionally, the lack of clear mechanisms for enforcement and oversight could limit the effectiveness of the regulation.
  5. Impact on Small and Medium Enterprises (SMEs): There is concern that the requirements of the AI Act may be burdensome for SMEs, limiting their ability to innovate in the field of AI.

Impacts on Technological Development

This new regulation, called the AI Act, has elicited mixed reactions in the technology sector.

Some of the largest tech companies have expressed concern, fearing that it could significantly slow down progress in the field of AI.

However, companies now have a period ranging from 6 to 24 months to comply with these new rules.

In conclusion, the EU's approach highlights a strong commitment to protecting the privacy and security of its citizens, even at the cost of a potential slowdown in technological development. This decision, although it may seem like a brake, could actually represent a measured and necessary step towards a safer and more controlled future in the field of artificial intelligence.

We hope that this overview of the EU AI Act has been clear and helpful. We have sought to provide a balanced view of its successes and areas in need of improvement, highlighting its significant role in regulating artificial intelligence.