Joint statement: For an innovation and creator-friendly AI Act, Europe’s creative community urges EU policymakers to put transparency back at the heart of the EU AI Act
We represent the collective voice of hundreds of thousands of writers, translators, performers, composers, songwriters, screen directors, screenwriters, visual artists, journalists and other creative workers whose human artistry lies at the heart of the creative works our societies cherish and enjoy every day.
As the AI Act enters its final round of negotiations, we urge all policymakers to prioritise maximum transparency on training data and artificially generated content, so that citizens, authors and performers have guarantees that their rights are respected: transparency is a prerequisite for both innovation and creation to continue to grow for the benefit of all.
Today, our members’ protected works, voices and images are used to generate content without their knowledge, consent or remuneration. Such uses may harm their moral, economic and personality rights and prejudice their personal and professional reputation and livelihood. They also present a broader societal and political risk, as artificially generated or manipulated content can play a significant role in spreading misinformation and eroding trust in the authenticity of digital content.
We therefore strongly feel that the datasets used by generative AI should be subject to the highest level of transparency and that the deployers of these technologies should prove that the training of their AI has been carried out in compliance with applicable EU and national law, whether related to intellectual property, the protection of personal data or other relevant provisions. None of the protections built into the GDPR (General Data Protection Regulation) and the CDSM Directive (Directive on Copyright in the Digital Single Market) has the slightest chance of working unless appropriate transparency requirements, including strong record-keeping and transparency obligations regarding the use of copyright-protected content by generative AI models, are included in the EU AI Act. This is the precondition for authors, performers and other creative workers to avail themselves of these protections, should they become aware of an unauthorised use.
We do believe, however, that the AI Act should also withstand the test of time and, for this reason, that it should not explicitly mention specific mechanisms, such as the one in Article 4 of Directive (EU) 2019/790, that have yet to prove their legitimacy and effectiveness in this evolving technological field.
The AI Act should also impose strict visible and/or audible labelling obligations on all deployers of generative-AI-powered technologies, warning the public that what they are watching, listening to or reading has been altered or generated by AI. While these obligations may be adapted to the nature of the content in order not to hinder its exploitation, we firmly reject broad exceptions that would render labelling obligations meaningless in practice.
We therefore welcome and support the efforts of the European Parliament and the Spanish Presidency to address these key issues, both with respect to the training of generative AI and the mandatory labelling of all artificially generated or manipulated content. By contrast, we are deeply alarmed by attempts to replace meaningful obligations with light-touch self-regulation for foundation models and general-purpose AI.
As time is of the essence, we urge the European institutions to agree on a balanced regulation that not only fosters the development of AI technology and related businesses, but also guarantees a human-centric approach to creation that protects the rights and livelihoods of the authors and artists we represent.