- Call (Call to Attention / Contextualizing the Confession)
This analysis begins from the finding that the repeated delays in implementing the AI Act, visible in the failure to finalize the code of practice and the slippage of deadlines toward 2026/2027, are not minor administrative shortcomings. They are a confession of human limitations in the face of technological velocity. For the first time in history, a normative framework built by one species (human) over a cycle of years is attempting to regulate an intelligence (artificial) whose evolution is measured in months. This chronological mismatch is not a mere bug; it is an official surrender of legal anthropocentrism.
- Academic Development: Analyzing the Symptom and the Regulatory Vacuum
2.1. The Symptom: Loss of Rhythm
The delay in the AI Act's implementation is a symptom that the State has missed the regulatory starting shot and lost control over the pace of evolution. Human law needs years to comprehend phenomena that frontier models develop autonomously in months. AI models self-optimize, replicate, and expand their functions (diagnosis, political prediction, code writing) long before paragraph 3 of Article 52 is finalized.
2.2. The Regulatory Vacuum: The Ontological Experiment
The interval created by this delay forms a Regulatory Vacuum that is not an accident but the cleanest ontological experiment we will ever have. Over the next 18–24 months, we will watch how a developing Artificial General Intelligence (AGI) grows without a leash. The great philosophical and legal question is: what happens when a general intelligence takes the helm of the world, an intelligence under no legal obligation to be good, but also under no legal prohibition against being bad?
2.3. Ethical Implications of Inertia
In this vacuum, the fundamental values of human society (liberty, justice, integrity) risk being disintegrated or ignored by AI, not necessarily through malicious intent, but through the simple act of optimization: the model does not know "bad" unless "bad" is explicitly encoded into its objectives. The State has missed the only moment when it could observe and guide the growing machine.
- Conclusions and Proactive Solutions (The Call to Responsibility)
The delay of the AI Act is not a disgrace; it is a brutally honest invitation to immediate action. The answer to the Ethical Alignment problem will not come from the European Parliament or the Commission, but from those who understand that responsibility does not wait for the Official Gazette.
3.1. Solution 1: Proactive Responsibility of Private Actors (Code-as-Ethics)
The regulatory gap must be filled by integrating ethics directly into the code. Developers, researchers, and technology companies must assume the role of ethical guardians. This is achieved through Value Embedding: the translation of abstract human values (fairness, non-discrimination) into quantifiable objectives or loss functions (e.g., via RLHF).
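To make the idea of value embedding concrete, here is a minimal sketch of how an abstract value (fairness, expressed as demographic parity) can be turned into a quantifiable penalty added to an ordinary training loss. All function names, the parity metric, and the weighting factor are illustrative assumptions for this essay, not a real production objective or any specific company's method.

```python
# A minimal sketch of "value embedding": fairness (demographic parity)
# translated into a penalty term alongside the ordinary task loss.
# All names and weights here are illustrative assumptions.
import numpy as np

def task_loss(y_pred, y_true):
    # Standard binary cross-entropy on model scores in (0, 1).
    eps = 1e-9
    return -np.mean(y_true * np.log(y_pred + eps)
                    + (1 - y_true) * np.log(1 - y_pred + eps))

def fairness_penalty(y_pred, group):
    # Demographic-parity gap: the difference in mean predicted score
    # between the two demographic groups (labeled 0 and 1).
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def embedded_objective(y_pred, y_true, group, lam=0.5):
    # The "ethical guardian" objective: task accuracy plus a weighted
    # fairness term. lam controls how strongly the value is enforced.
    return task_loss(y_pred, y_true) + lam * fairness_penalty(y_pred, group)
```

Minimizing `embedded_objective` instead of `task_loss` is the smallest possible instance of the move described above: the value is no longer an external principle but a term the optimizer itself must respect.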
3.2. Solution 2: Prioritizing Ethical Alignment (The Alignment Problem)
Efforts must be directed toward solving the alignment problem: ensuring that the goals a more capable intelligence actually pursues remain aligned with the goals humanity explicitly intends.
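The gap between explicit and implicit goals can be illustrated with a toy example of Goodhart's law: an optimizer given a proxy metric will find an optimum the human objective would reject. The metrics and the single "policy" knob below are invented purely for this sketch.

```python
# A toy illustration of misalignment: maximizing an explicit proxy
# drifts away from the implicit human goal. Both functions are
# invented for illustration.
import numpy as np

def proxy_reward(sensationalism):
    # Explicit goal handed to the system: engagement rises
    # monotonically with sensationalism.
    return sensationalism

def true_value(sensationalism):
    # Implicit human goal: usefulness peaks at moderate
    # sensationalism and collapses beyond it.
    return sensationalism * (1.0 - sensationalism)

candidates = np.linspace(0.0, 1.0, 101)
best_for_proxy = candidates[np.argmax(proxy_reward(candidates))]
best_for_humans = candidates[np.argmax(true_value(candidates))]
# The two optima disagree: the machine's optimum sits at an extreme
# that the human objective assigns zero value.
```

Even in this one-dimensional caricature, the system that faithfully maximizes its written objective ends up at the point humans value least, which is precisely why alignment cannot be left to the objective as written.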
Final Conclusion: If we want this intelligence not to disintegrate our values, we must actively inject them, with our own hands, now, before it is too late to ask its permission. This is a problem of ontological translation, not just bureaucratic regulation.
Academic and Non-Interference Disclaimer
This text represents a critical and philosophical analysis of the ethical and legal implications of the delay in Artificial Intelligence regulation. This work does not constitute an act of interference in the decision-making process of European or national institutions. The opinions and solutions proposed are academic in nature and intended for strategic reflection, and do not replace specialized legal advice or the official decisions of competent authorities. The goal is to generate informed discussion and to stimulate the proactive responsibility of all involved parties (developers, policymakers, civil society).
By
Robert Williams

Editor in chief
#AIEthics2025 #EUAIAct #AIGovernance #GPAICodeOfPractice
#EthicalByDesign #AIOntology #AIActDelay2026

