Joint Statement on Artificial Intelligence and the Draft EU AI Act
Authors’, Performers’ and Other Creative Workers’ Organisations Joint Statement on Artificial Intelligence and the Draft AI Act
We represent several hundred thousand professional authors, performers, and other creative workers and artists, who rely entirely on their ability to license and control the use of their work, as well as their voice, likeness, and other personal data, to make a living. We all share a common concern as generative AI rapidly spreads in a legal environment which is poorly enforced and lacks adequate safeguards regarding the use of our members' works and personal data for AI training purposes. Equally problematic are the numerous unauthorised, abusive, and deceptively transformative uses of our members' protected works and personal data by AI-powered technologies.
Our eyes are on the EU AI Act, which represents the first attempt by a major regulator to establish a legal framework for the advancement of this technology, while safeguarding fundamental societal and individual rights. As the negotiation of this Proposal enters its final “trilogue” stage, we must reiterate our position and insist on the absolute need for a human-centric approach to regulating generative AI. This approach should recognise, secure and enforce the right of our members to control the use of their artistic creations during the machine-learning process. To make sure it protects human artistry and creativity, it must be built upon principles of informed consent, transparency, fair remuneration and fair contractual practices.
We acknowledge that AI represents an extraordinary technological advancement with immense potential to enhance various aspects of our lives, including in our sectors. However, it is crucial to recognise that alongside these benefits, there exists a darker aspect to this technology. Generative AI is trained on large sets of data and huge amounts of protected contents scraped and copied from the internet. It is programmed to deliver outputs that closely mimic, and are able to compete with, human creation. This technology poses several risks to our creative communities:
Firstly, the protected works, voices, and images of our members are often used without their knowledge, consent, or remuneration to generate content. Some of these uses may harm their moral and personality rights and prejudice their personal and professional reputation. Additionally, there is a risk that their own work may become displaced, forcing them to compete against their digital replicas, with dire economic consequences. There is also a broader societal risk, as people may be led to believe that the content they encounter—whether in text, audio, or visuals—is a genuine and truthful human creation, when it is merely the result of AI generation or manipulation. This deception can have far-reaching implications for the spread of misinformation and the erosion of trust in the authenticity of digital content.
AI cannot be permitted to develop in a manner that disregards fundamental rights, such as authors’ and performers’ rights, image rights, and personality rights, and it should not be employed in ways that may deceive the general public. As the AI Act approaches its final stage of negotiations, the creative professionals we represent request that absolute transparency be prioritised. This is essential to ensure that informed consent and fair remuneration can be agreed upon, effectively implemented and enforced in relation to both the input (protected contents and data used by machine-learning) and the output (results generated).
Authors, performers and other creative workers should be informed and have accessible means to give or withhold authorisation when their protected contents or personal data are used, or are planned to be used, to train AI. This is essential for them to be able to engage on fair terms with those using and benefiting from their creative contents and their value, determining aspects such as the scope, purpose and length of usage and how they may be remunerated for such use. At present, neither the CDSM Directive (and in particular Article 4 and its so-called “opt-out” mechanism) nor the GDPR is adequately enforced in this radically new technological environment. It is crucial to acknowledge that none of the protections built into these legal instruments has the slightest chance to work if strict transparency requirements are not placed upon developers of generative AI. We welcome the European Parliament’s proposals to include specific transparency requirements for AI foundation models, but it is paramount to further enhance these safeguards by encompassing the reproduction of any protected works and any personal data for the purposes of training these models. Scraping and mining to train AI were initially permitted for research and trend-analysis purposes; today, they have become an integral part of generating content: legislation must reflect this change in the use of protected works and personal data.
The AI Act should also impose strict visible and/or audible labelling obligations on all deployers of generative-AI-powered technologies, warning the general public that what they are watching, listening to or reading has been altered or generated by AI. While these obligations may be adapted to the nature of the content in order not to hinder its exploitation, we firmly reject broad exceptions that would render labelling obligations practically meaningless, such as when it is deemed “necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU”, or “where the content is part of an evidently creative, satirical, artistic or fictional work”.
We urge the European institutions to agree on a balanced regulation that not only furthers the advancement of AI technologies but also promotes original human creativity in our societies and preserves the rights and livelihoods of the authors and artists we represent.