
European Parliament Adopts the AI Act: Implications for Culture

The European Parliament has approved the AI Act, setting the stage for the world’s first comprehensive regulation of artificial intelligence (AI). After three years of deliberations, the Act will undergo final linguistic checks and technical procedures and enter into force later this spring. Since the European Commission presented its proposal back in 2021, the world has witnessed the exponential growth of AI. ChatGPT co-authored five Pulitzer-nominated works this year, the Writers Guild of America strike opposed the use of generative AI, and Midjourney faced a lawsuit for allegedly using artists’ works to train AI without their consent.

The Act’s implications extend far and wide. Although its relevance to the cultural sector may not seem obvious, the Act contains articles on marking AI-generated content and on copyright requirements for the data used to train AI. Both issues matter to creators, some of whom have already endorsed the AI Act. Commissioner Thierry Breton commented, ‘We are regulating as little as possible, but as much as needed.’ Let’s explore the new rules introduced by the AI Act and how they might apply to the cultural sector.

  1. Labelling AI-Generated Content

According to Article 50, providers of AI systems that generate audio, image, video, or text content must ensure that it is marked in a machine-readable format. This rule doesn’t apply when an AI system merely edits your input without substantially changing it: your Grammarly edits will remain unnoticed.

The provision will make it easier to recognise whether a work was created by a human or by AI. No more embarrassments like an AI-generated photo winning a prize at a photography competition!

How will this obligation be implemented? Most likely through watermarking techniques. However, watermarking AI-generated content is problematic, especially for text, since a watermark is easy to remove or circumvent by paraphrasing the output so it no longer resembles ChatGPT’s syntax and lexicon. The challenge now falls to AI providers, those who develop an AI system and place it on the market; the future will show which techniques emerge to comply with the AI Act.
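To see why text watermarks are so fragile, consider a deliberately naive toy scheme (a hypothetical illustration, not any provider’s actual technique): a marker made of zero-width Unicode characters is invisible to readers and detectable by machines, yet stripping it takes a single line of code.

```python
# Toy sketch of a machine-readable text mark, and why it is fragile.
# This is an illustrative example, not a real provider's watermarking method.

ZW_MARK = "\u200b\u200c\u200b"  # zero-width space / non-joiner: invisible to readers


def mark(text: str) -> str:
    """Append an invisible zero-width marker to AI-generated text."""
    return text + ZW_MARK


def is_marked(text: str) -> bool:
    """A machine can detect the marker even though readers cannot see it."""
    return ZW_MARK in text


def strip_mark(text: str) -> str:
    """Removing the mark is trivial, which is exactly the problem."""
    return text.replace("\u200b", "").replace("\u200c", "")
```

Robust watermarking research instead tries to bias the statistics of the generated text itself, but even those schemes weaken under paraphrasing, which is the difficulty the Act’s drafters leave to providers.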

The Act also imposes obligations on deployers (essentially users, but acting in a more professional capacity) when they produce deep fakes with the help of AI. Deep fakes are content that mimics real people, objects, and events but is, in fact, not real. In these cases, deployers must disclose that the content has been artificially generated or manipulated.

If the deep fake is part of an evidently artistic, creative, satirical, fictional or analogous work or programme, then the ‘disclosure of the generated or manipulated content must be done in an appropriate manner that does not impede the display or enjoyment of the work.’ So, if you’re planning an exhibition of AI-generated caricatures of your local politicians cutting funding for culture, or a video of Jay-Z rapping ‘To be or not to be,’ make sure you non-invasively indicate the use of AI. The authors clarify in recital 134 that this approach safeguards our ‘freedom of expression’ and ‘the right to freedom of the arts and sciences.’

Interestingly, earlier drafts excluded ‘creative’ deep fakes from the transparency obligations. European authors’ and performers’ organisations advocated for the labelling of all deep fakes, and even for obtaining consent from the person depicted or otherwise concerned. The nearly final text strikes a middle ground between the first version and the community’s demands.

  2. Copyright Protection and Data Sources

Artists have raised concerns about the unauthorised use of their work by AI developers, often without acknowledgement or compensation. In principle, the AI Act does not prohibit AI developers from using datasets, including creative works, to train their models. It refers to the Copyright in the Digital Single Market Directive, Article 4(3), which allows for the use of works for text and data mining.

However, Article 53 of the AI Act also reminds AI providers that, according to this Directive, rightsholders can ‘expressly reserve the use of works’. If they do so via ‘machine-readable means in cases where content is made publicly available online’, AI providers must not use this data. If providers wish to conduct text and data mining on such works, they must secure permission from the rightsholders. And negotiate the price.

If you are a photographer with a beautiful portfolio posted online and don’t want it used to train AI systems, you should explicitly state on your website that you forbid the use of your works for text and data mining.
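What counts as a ‘machine-readable’ reservation in practice is still settling, but two common approaches are blocking known AI-training crawlers in robots.txt and declaring a reservation under the W3C community group’s TDM Reservation Protocol (TDMRep). A sketch follows; the crawler user-agent tokens shown are real published ones, but the list is illustrative and not exhaustive, and site owners should check each AI provider’s current documentation.

```
# robots.txt — opt out of known AI-training crawlers (illustrative list)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Alternatively, under the TDM Reservation Protocol, a file at
# /.well-known/tdmrep.json can declare a site-wide reservation, e.g.:
# [ { "location": "/", "tdm-reservation": 1 } ]
```

A human-readable notice in your terms and conditions is still worth adding, but a machine-readable signal like the above is what crawlers can actually act on.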

Another copyright provision requires AI providers to publicly disclose a ‘sufficiently detailed’ summary of the data used to train general-purpose AI models. The AI Office will set a standardised format for such summaries. But if you expect to see the titles of all your poems in an AI provider’s technical documentation, don’t be too quick to celebrate: the descriptions will most likely be high-level, outlining datasets rather than listing every work. The lawmakers’ intention was to facilitate the verification of data used for AI training and ensure compliance with EU copyright rules.

  3. Next Steps

The AI Act outlines the main rules for AI providers and deployers. More practical guidance on implementing them will follow from the European Commission’s new AI Office. The Act is a Regulation, directly applicable in all member states, but it will take approximately two years to become fully applicable (except for some provisions on prohibited practices, codes of practice, and general-purpose AI rules, which will apply earlier).

The EU is often criticised for overregulating, which can stifle creativity and innovation. This time, the lawmakers have opted for a comparatively balanced approach. Over the next two years, AI will develop even faster, and new cases and practices will puzzle artists. Meanwhile, it would be good to learn not just how to restrict, label or avoid AI but also how to use it to enhance the arts’ visibility, accessibility, heritage protection, and co-creation. That’s one of the tasks of the Digital Action Group CAE is launching—we’ll keep you posted about our findings.

* The text is purely informational and should not be regarded as legal advice.