The AI Act, which entered into force on 1 August 2024, provides for a phased application of its obligations. Following the implementation of the first obligations, the second phase under the AI Act came into force on 2 August 2025. The final phase is scheduled to begin in August 2026.
In our previous Newsflash, we discussed the first obligations under the AI Act: the promotion of AI literacy within organisations and the prohibition of certain AI practices. The second phase of obligations covers provisions on penalties, requirements for general-purpose AI models, and rules on governance.
1) Penalties
As of 2 August 2025, the provisions on penalties and enforcement measures in case of infringements of the AI Act became applicable. The AI Act requires EU Member States to establish national rules in this respect. Such penalties may include administrative fines, warnings, and non-monetary measures, which must be effective, proportionate and dissuasive. In laying down these rules, Member States must also take into account the interests and economic viability of SMEs and start-ups. To date, the Belgian legislator has not yet adopted specific measures.
The AI Act provides for different (maximum) administrative fines depending on the nature of the infringement (see the illustrative calculation sketch after this list):
- Non-compliance with the prohibition on AI systems posing an unacceptable risk: an administrative fine of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover, whichever is higher (for SMEs and start-ups, the lower amount applies).
- Non-compliance with certain obligations of providers, importers, distributors, or users of high-risk AI systems: an administrative fine of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover, whichever is higher (for SMEs and start-ups, the lower amount applies).
- The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in response to a request: an administrative fine of up to EUR 7,500,000 or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover, whichever is higher (for SMEs and start-ups, the lower amount applies).
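To make the cap mechanics concrete, the short Python sketch below illustrates how each tier listed above combines a fixed amount with a percentage of total worldwide annual turnover, taking the higher of the two for most undertakings and the lower of the two for SMEs and start-ups. The thresholds are taken from the tiers above; the function name, parameters and example turnover figures are our own illustrative assumptions and are not part of the AI Act.

```python
# Illustrative sketch only: the thresholds reflect the fine tiers listed above;
# the function and example figures are hypothetical.

def fine_ceiling(fixed_cap_eur: float, turnover_share: float,
                 worldwide_annual_turnover_eur: float,
                 is_sme_or_startup: bool = False) -> float:
    """Maximum administrative fine for a given infringement tier."""
    turnover_based_cap = turnover_share * worldwide_annual_turnover_eur
    if is_sme_or_startup:
        # For SMEs and start-ups, the lower of the two amounts applies.
        return min(fixed_cap_eur, turnover_based_cap)
    # For other undertakings, the higher of the two amounts applies.
    return max(fixed_cap_eur, turnover_based_cap)

# Prohibited-practice tier: EUR 35,000,000 or 7% of total worldwide annual turnover.
print(fine_ceiling(35_000_000, 0.07, 2_000_000_000))     # 140000000.0 (7% of turnover is higher)
print(fine_ceiling(35_000_000, 0.07, 50_000_000, True))  # 3500000.0 (lower amount for an SME)
```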
When determining the amount of an administrative fine, the authorities must consider all relevant circumstances, including:
- The nature, gravity and duration of the infringement and its consequences.
- The number of persons affected and the level of damage suffered by them.
- The size, annual turnover and market share of the operator committing the infringement.
- The degree of cooperation with national competent authorities in order to remedy the infringement and mitigate the possible adverse effects.
- Whether the infringement was intentional or negligent.
2) General-purpose AI models
Specific obligations apply to providers of general-purpose AI (“GPAI”) models. These models are designed to perform a wide range of distinct tasks, are trained on large datasets, and are highly versatile. Due to their broad applicability, they serve as the basis for many AI systems: while they are not standalone AI systems themselves, they form a foundational component of such systems.
Providers of GPAI models:
- Must maintain technical documentation detailing training and testing processes, as well as evaluation results.
- Must provide up-to-date information to the developers of AI systems that use their models, ensuring that these developers understand the capabilities and limitations of the model and can comply with their legal requirements.
For GPAI models that entail systemic risk, additional obligations apply. Providers must, inter alia, conduct regular evaluations to identify vulnerabilities and document incidents. Providers must also ensure that adequate cybersecurity measures are in place.
On 10 July 2025, the AI Office published a Code of Practice providing practical guidance to assist companies in complying with the GPAI rules. Enforcement will begin on 2 August 2026 for new GPAI models, and on 2 August 2027 for models placed on the market before 2 August 2025. These measures aim to ensure that GPAI models placed on the European market are transparent and safe.
3) Governance rules
By 2 August 2025 at the latest, each EU Member State was required to designate at least one notifying authority and one market surveillance authority. These bodies are responsible, respectively, for assessing, designating and notifying independent conformity assessment bodies, and for overseeing AI systems on the market. At present, there is no Belgian legislation further elaborating this supervisory framework.
At the European level, oversight is coordinated by the AI Office and the European Artificial Intelligence Board (“AI Board”). An Advisory Forum and a scientific panel of independent experts have also been set up.
Key message
On 2 August 2025, the second phase of obligations under the AI Act came into force. For companies, provisions on penalties and obligations regarding general-purpose AI models (GPAI) are particularly significant.
The Belgian legislator has not yet adopted rules further specifying penalties or enforcement measures. Likewise, no national competent authorities have been designated.