
Posted on Friday September 5, 2025

Updated on Monday September 8, 2025

Europeana Network Association Cross-Community Workshop on Culture for AI

This workshop invites 30 Members of the Europeana Network Association to share their perspectives and contribute to a sectoral position on the responsible use of AI. It is organised by EuropeanaTech, in partnership with SMK - the National Gallery of Denmark, AI4LAM, Sound and Vision and the Europeana Initiative.

Banner for the Cross-Community Workshop on Culture for AI. Image: Wire Bound, by Hanna Barakat & Archival Images of AI + AIxDESIGN.

27-28 October 2025, 14:00-13:00 (CET)
SMK - the National Gallery of Denmark, Copenhagen, Denmark

As part of developing the common European data space for cultural heritage, the Europeana Initiative and the Netherlands Institute for Sound & Vision launched the Alignment Assembly on Culture for AI in May 2025. This collective intelligence and participatory exercise engaged around 400 professionals, highlighting areas of consensus, friction and uncertainty, and identifying topics for further exploration. Its outcomes will help to shape a shared vision for the use of AI in the data space and the wider sector, guide decision-making and promote the responsible development and use of AI in cultural heritage.

To build on the insights of the Alignment Assembly, the EuropeanaTech community of the Europeana Network Association, in partnership with SMK - the National Gallery of Denmark, AI4LAM, Sound and Vision and the Europeana Initiative, is holding a cross-community workshop. The workshop will bring together 30 ENA members to review emerging insights from the Assembly, share perspectives from their areas of expertise and contribute to a sectoral position on the responsible use of AI. Together, participants will explore desirable and undesirable AI use cases, exchange experiences and practices, and strengthen collective knowledge around AI in our sector. The workshop is open to ENA members exclusively.

This workshop will help position the Europeana Network Association and cultural heritage institutions and professionals as proactive, values-driven voices in the European and global AI debate. It will sustain the momentum initiated through the Alignment Assembly, gather specialised input, and lead to the development of a paper summarising key insights and putting forward actionable recommendations. The outcomes will also inform ENA community work plans, support future advocacy work within the Europeana Initiative and the data space, and contribute to ongoing policy conversations.

Join us in shaping the future of culture for AI and AI for culture. Be one of 30 forward-thinking participants helping to define how the data space and the Europeana Initiative engage with AI, and contribute to building our shared vision. Your experience matters - let’s shape the future together!

You can apply for the programme through the link below. Please add a ticket to your cart, and you will be able to fill out the application form when you 'check out'. The deadline for applications is 18 September. 

All submissions will be evaluated by a Selection Committee made up of members of the ENA Management Board, the EuropeanaTech community Steering Group, the Europeana Foundation and the Netherlands Institute for Sound & Vision.

You will be notified of the results shortly after the deadline. The Europeana Network Association will cover travel, accommodation and subsistence costs of all participants, up to a total of €500 per person.

We aim to bring together a diverse group of ENA members from different fields and communities, with varying levels of expertise and practical experience in AI, including both critical voices advocating caution and enthusiastic voices supporting its uptake. What unites us is a shared commitment to work towards responsible AI futures.
