The copyright trap: will the AI Code of Practice fix the problem?
The EU’s AI agenda leans toward technological optimism. At the AI Summit in Paris, Ursula von der Leyen announced a plan to mobilise €200 billion for AI investments. Yet the lack of focus on AI safety did little to inspire confidence in an AI future that respects human rights, human creativity, and intellectual property.
Tensions between rightsholders and AI providers are reaching a breaking point as the European Commission’s AI Office refines the second draft of the Code of Practice. This document is intended to guide AI providers in complying with the AI Act. Initially, there was hope that this process would clarify obligations and offer real solutions. However, the latest draft falls far short of these expectations.
The deadlock is embedded in the AI Act itself. By default, the Act treats data collection for AI training as falling under the text and data mining (TDM) exception, which does not require prior authorisation from rightsholders. Creators’ calls for stronger safeguards face opposition from big tech, which argues that the Code of Practice should not impose obligations beyond the AI Act.
Major European rightsholder organisations have voiced their dissatisfaction with the second draft of the AI Code of Practice.
Culture Action Europe member the Spanish Federation of Dance Companies (FECED), along with other Spanish cultural organisations, has issued a Manifesto on Culture and AI. The manifesto calls for excluding AI training from the TDM exception, granting authors the right to know if their works have been used in AI development, and banning AI-generated content from public tenders, grants, subsidies, and competitions.
Culture Action Europe breaks down the key sticking points
- From legal obligations to reasonable efforts
First, the Code misrepresents EU copyright law and reduces it to the Copyright Directive while ignoring complementary national laws. It portrays copyright compliance as simply respecting exceptions and does not even clarify that exceptions come with strict conditions—particularly the prerequisite of lawful access.
Rightsholders argue that the Code must explicitly require providers, including SMEs, to document lawful access. Currently, the Code only asks AI providers to make ‘reasonable efforts.’ By framing legally binding copyright obligations as mere reasonable efforts, the Code weakens enforcement and creates ambiguity where the law is already clear.
- The SME loophole
The draft Code exempts SMEs from key KPIs (documenting lawful access and copyright compliance assessment of external datasets, keeping track of piracy websites excluded from crawling, handling copyright-related complaints, etc.). Rightsholders argue that SMEs should not be entirely exempt, as many of the most active AI providers engaged in large-scale data scraping are SMEs. The AI Office insists that obligations apply to all, but without KPIs for SMEs. But if KPIs are optional for SMEs, how will compliance with the obligations be monitored?
Rightsholders questioned how AI providers can comply with EU copyright law without keeping full records of their training data. The Chair admitted that strict data retention is challenging but noted that the AI Act requires providers to have policies preventing copyright infringement. The emphasis, he said, should be on proactive measures to stop unlawful data use from the beginning. However, the key question—what happens if a provider does infringe?—remained unanswered.
- More than robots.txt
Rightsholders argue that the robots.txt protocol is outdated and ineffective for opting out, yet it’s the only option mentioned in the draft. The AI Office downplays it as a minimal obligation, but rightsholders worry that without alternatives, it could become the default standard for copyright compliance. They propose expanding the list to include methods like C2PA, the ISCC standard, and opt-outs via website terms. Some suggest the AI Office should maintain a list of widely used rights reservation standards that AI providers must follow.
There is also a call for the Code to explicitly mention asset-based metadata and require signatories to preserve it at every stage of data collection and processing. This is especially important for creators who have little control over where their works—photos, texts, music, or images—end up online.
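The weakness of robots.txt as an opt-out mechanism is easier to see in a concrete sketch. The protocol only lets a site exclude crawlers it can name, and compliance is entirely voluntary. Below is a hypothetical opt-out file; `GPTBot` and `Google-Extended` are real AI crawler user-agent names, but the file is illustrative only, not a recommended configuration:

```
# robots.txt — each AI crawler must be excluded by name;
# the protocol has no generic "no AI training" directive.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Crawlers not listed above, and crawlers that simply ignore
# robots.txt, are unaffected: compliance is voluntary.
User-agent: *
Allow: /
```

This is why rightsholders argue for machine-readable alternatives such as C2PA and ISCC metadata, which travel with the work itself rather than depending on where it is hosted.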
How do tech associations respond?
They argue that the Code of Practice should not reinterpret EU law or impose new obligations but simply clarify existing ones. This is their main defence against measures such as requiring AI providers to assess the copyright compliance of third-party datasets, ensuring that robots.txt exclusions don’t negatively impact content visibility in search engines, enforcing lawful access to copyrighted content, and prohibiting copyright-infringing uses of the model.
Big tech opposes the requirement to prohibit copyright-related overfitting—where AI models excessively replicate their training data. AI providers claim the term is too vague and that some level of overfitting is necessary for proper model performance. Meanwhile, the Chair stated that similar outputs are not necessarily or automatically considered copyright infringement.
Will these issues be addressed in the third and final draft of the Code of Practice?
We expected it this week, but the AI Office announced a delay. Given the importance of this draft, the Chairs have requested additional time to ensure it accurately reflects stakeholder feedback and is legally robust.
Hope is the last to die, but the cultural sector is far from optimistic. Possible responses were discussed at the AI ‘counter-summit’ at Théâtre de la Concorde in Paris, which gathered cultural professionals, journalists, and educators—many of whom have firsthand experience of AI’s negative impact on their fields.
Marco Fiore from Michael Culture Association, CAE member and co-facilitator of the Action Group on Digital and AI, attended the event and shared his reflections.
‘The Anti-Summit reflected widespread unease among representatives of the French cultural sector who spoke on stage. There was a strong call for unity and for raising awareness—both within and beyond one’s organization—about the risks faced by cultural workers. Speakers agreed on two key points. First, any compensation for the use of data in machine learning would be mere crumbs compared to the losses suffered by artists. That’s why authors should educate themselves on opt-out options and push for an opt-in model. Second, a call for internal sabotage and resistance against fatalism.’
Sabotage and resistance could take many forms: refusing to ‘clean up’ AI-generated content; using Nepenthes, a ‘tarpit’ that traps crawlers ignoring opt-outs; initiating legal action against AI providers who infringe copyright; or going on strike and building a broad interprofessional movement to counter unbalanced AI adoption.
Under Articles 50(7) and 56(9) of the AI Act, the Commission can also deem the Code inadequate and adopt an implementing act specifying common rules for compliance through the comitology procedure, which would give more weight to Member States.
The cultural sector is mobilising and pushing for stronger protections. With explicit statements and formal demands on the table, the ball is now in the EU institutions’ court. The third draft will reveal whether the EU safeguards creativity or erodes the foundation of intellectual property.