Joint letter to Members of the European Parliament on the impact of Artificial Intelligence on the European creative community
On 23 July 2024, a coalition of 13 international and European organisations representing creators, authors,
artists, performers and creative workers, including ECSA, sent a letter to newly elected Members of the European Parliament on the impact of Artificial Intelligence on the European creative community. Find the letter below.
Brussels, 23 July 2024
Dear Member of the European Parliament,
We are writing to you on behalf of a coalition of organisations representing the collective voice of hundreds of thousands of writers, translators, performers, composers, songwriters, screen directors, screenwriters, visual artists, journalists, and other creative workers. First of all, we would like to congratulate you on your election as a Member of the European Parliament.
During this next five-year mandate, you will have to make key decisions on pivotal EU policies related to Artificial Intelligence (AI).
For our members, AI represents an extraordinary technological advancement with immense potential to enhance various aspects of our lives, including in the cultural and creative sectors. However, as AI technology is deployed more and more widely, it is also important to acknowledge a darker aspect of this technology: all generative AI models in existence today have been trained in full opacity on enormous amounts of copyright-protected content and personal data, scraped and copied from the internet without any authorisation or remuneration for the creators we represent. In addition, the use of AI-generated deep fakes and other AI-manipulated content is a real threat to our democracies, to our members’ reputations and to citizens’ trust in the veracity of digital content.
So far, the relevant EU legal framework (in particular the 2019 Directive on Copyright in the Digital Single Market, the General Data Protection Regulation and the AI Act) governing the relationship between artificial intelligence, authors’ rights and data protection has been misinterpreted or is simply incomplete. Where rules do exist, they are either not yet applied, not enforced or simply ignored by generative AI providers. In a nutshell, the EU legal framework does not sufficiently protect the rights of our creative communities and the value of their cultural works.
In sharp contrast to current practices, we firmly believe that authors and performers must be able to decide whether their works may be used by generative AI and, where they allow such use, be fairly remunerated. As a new EU policy cycle is about to start, we urge you to support a clearer legal framework preserving the rights of creators and the integrity of their works.
As representatives of the creative community in Europe, here is where we stand on this existential issue:
1. Time to question the flaws and misinterpretation of the current EU Copyright framework in relation to generative AI
In 2019, years before the sudden rise of generative AI technologies, the European Parliament and the Council of the EU adopted the Directive on Copyright in the Digital Single Market (CDSM Directive). Article 4 of the Directive introduced an exception to copyright for the purposes of text and data mining (TDM), on the condition that creators and other rightsholders have not expressly reserved their rights.
Article 4 neither mentions nor defines “Artificial Intelligence” or “Generative AI” and was not conceived with large-scale generative AI models in mind. However, generative AI providers have interpreted this exception a posteriori in an expansive manner to cover the systematic and large-scale use of creators’ protected works and performances without authorisation.
As AI-generated outputs enter the market and compete with human creations on unfair terms, it is hard to conceive how this exception could satisfy the three-step test, a fundamental safeguard enshrined in EU and international law. That test is intended to strike a fair balance between rightholders and users of content by limiting exceptions to copyright and neighbouring rights to certain special cases that do not conflict with the normal exploitation of the works or other subject matter and do not unreasonably prejudice the legitimate interests of rightholders.
In the absence of any relevant court decisions, this broad interpretation of the TDM exception does not meet any public policy objective and appears to retrospectively justify the massive scraping of our members’ works and performances, to the sole benefit of AI companies. In addition, five years after the adoption of Article 4 of the CDSM Directive, none of our members has been able to reserve their rights in an effective manner, and there remains great uncertainty about the reservation of rights and how authors and performers can exercise it. Last but not least, since generative AI providers exploit copyrighted works without any transparency, it is virtually impossible for creators to take legal action against them.
As a result, generative AI providers have largely taken advantage of this exception and of the legal uncertainty surrounding the reservation of rights, without even giving creators a chance to provide their consent and exercise their right of reservation. Such a situation is not acceptable for the authors and performers we represent. In our view, their consent must be required for any AI-related use of their works and personal data.
Therefore, we urge you to question the applicability of this TDM exception to generative AI and promote consent, transparency and remuneration at the heart of all EU initiatives related to the use of AI.
2. The AI Act is a step in the right direction but still needs to be properly enforced
Over the past two years, our organisations have been advocating for a human-centric approach to regulating AI and have been closely involved with the European AI Act, which was published on 12 July 2024 after three years of negotiations.
We welcome the AI Act, in particular the requirement for providers of general-purpose AI to comply with EU copyright law and publish sufficiently detailed information about the data used. We also strongly support the strengthening of transparency obligations around deep fakes and stress the importance of developing technical tools that may reliably and accurately differentiate authentic content from AI-generated, or manipulated, content. However, the AI Act must now be implemented in an effective way to preserve fundamental rights, promote the highest level of transparency, and enable authors and performers to exercise their rights.
In this context, we urge all MEPs to remain vigilant about its implementation and ensure that the AI Office puts transparency at the heart of its future initiatives, such as codes of practice and templates for AI providers. Without ambitious and detailed transparency obligations, it will be impossible for our members to know if their works have been used and avail themselves of the protection provided by EU law related to intellectual property, the protection of personal data or other relevant provisions.
However, even with a proper implementation, the AI Act will only serve as a temporary fix for a much larger problem unless ambitious policies are designed to ensure informed consent and remuneration for authors and performers.
3. The way forward
In a context where the current EU legal framework is unbalanced, does not protect creators, and ignores the specificities of the cultural and creative sectors, we call for determined action by MEPs to face head-on the profound disruption caused by the uptake of generative AI in the cultural and creative sectors, and its potential impact on creation and cultural diversity.
As a new EU policy cycle is about to start, we urge MEPs to engage in a comprehensive and democratic debate leading to a clear legal framework preserving the rights and the integrity of the works of creators, addressing the numerous open issues linked to the TDM exception today, and clarifying the terms of its extension to generative AI. In such a debate, it is essential to consider that using works and performances in the context of AI models is radically different from traditional forms of exploitation. As the integrity of their work and their personal reputation may well be jeopardised by generative AI, authors and performers should retain the ability to consent or refuse such usage of their work. As such, they must be involved in the exercise of their rights reservation, the design of the technical protocols used to this end, as well as any policy discussions regarding generative AI.
To the extent that AI-generated content draws its value from human creations exploited on a large scale, it is also appropriate to consider effective and enforceable mechanisms to remunerate the creative community for the AI-generated output. Such mechanisms, however, must not operate to normalise or unduly encourage the supplanting by generative AI of the work of individual human beings.
In conclusion, we urge all MEPs to place transparency, consent and the remuneration of authors and performers at the heart of all their initiatives related to the use of AI. We also look forward to working with the European Parliament to build a framework that advances AI technologies in ways that serve and enhance human creativity, whilst continuing to promote original content and protecting the hundreds of thousands of authors and performers we represent, whose livelihoods depend on the recognition, and fair reward, of their creative work.