Composing trustworthiness #2: Artificial Intelligence and European values

In this second episode of the miniseries entitled Composing trustworthiness, we ask our experts from different cultural and diplomatic fields: How are European values reflected in the EU’s regulatory approach? Is this approach to AI compatible with other cultures, and does it sufficiently consider the cultural and creative sector?

The miniseries explores the impact that Artificial Intelligence has on art, language, and culture more broadly, as well as on international relations. We examine the EU’s aspiration for trust and excellence in AI in light of new developments and growing debates around diversity, fairness, values and risks.

Methodology, sources of inspiration, and transcript

Methodology:

This podcast miniseries is based on a research project that included nine interviews with cultural practitioners and researchers from different fields, including cultural institutes, the arts and diplomacy. Their names and positions have been pseudonymised as follows:

  • Adam, Researcher specialised in multimedia
  • David, Representative of a national cultural institute
  • Kevin, Researcher and practitioner specialised in digitalisation of culture
  • Lucas, Representative of the private sector working on AI for language
  • Marta, Representative of a national cultural institute
  • Rose, Researcher specialised in linguistics
  • Sophia, Artist and practitioner
  • Tania, Representative of the private sector working on AI for privacy
  • Tom, Diplomat

The content of the 2021 interviews was transcribed, translated where necessary, carefully curated and re-recorded with the help of AI tools: 

  • Transcript (speech-to-text): trint.com
  • Translation: deepl.com 
  • Audio (text-to-speech): animaker.com 
  • Image generation: openart.ai
  • Music generation: aiva.ai

Sources of inspiration and references:

Damien Helly: You are listening to the Composing trust podcast, by culture Solutions – a series on European cultural action with the world. Is Europe still attractive? How is it perceived outside the EU? How do Europeans promote culture together in the world, and with which partners? What have they learned, what is their experience? Our Composing Trust podcast series will address these issues.

Welcome to you all! My name is Damien Helly, the co-author of this Composing trust series, by culture Solutions. Today’s podcast is a special one, we are hosted by the artificial voice of our colleague Ina Koknova in a mini-series of several episodes on the relations of Artificial Intelligence (AI) with culture and trustworthiness.

Annie: Welcome back to the second episode of the podcast miniseries entitled Composing trustworthiness, which examines the interaction between Artificial Intelligence and International Cultural Relations with a focus on the EU’s role as a global norm-setter. After pondering the mutual and multifaceted influence that AI and culture exercise on each other in the first episode of the series, we now continue by asking our experts from different cultural and diplomatic fields: How are European values reflected in the EU’s regulatory approach? Is this approach to AI compatible with other cultures, and does it sufficiently consider the cultural and creative sector?

Before digging in, let’s start by quickly going over the prolific timeline of EU action on AI. While the EU is still lagging behind in technical terms, that is, in the development and sale of AI products, it has been rather active in the legal arena. In the last five years multiple policy documents have been adopted and working groups set up. The first steps were taken in April 2018 with the Declaration of cooperation on AI and the Communication Artificial Intelligence for Europe. In 2019 and 2020 the EU produced the Ethics Guidelines for Trustworthy AI and a White Paper on the European approach to excellence and trust in AI. As you may have noticed, this self-proclaimed focus on trustworthiness is the origin of the title of this podcast miniseries. The most significant development came with the release of the Commission’s legislative proposal for an AI Act in April 2021. However, two years later, the regulation has not yet been adopted, in contrast to the Digital Markets Act and the Digital Services Act, which were proposed around the same time. The Council agreed on its negotiating mandate in December 2022, and the European Parliament has drafted several reports and held a couple of debates without arriving at a final position. This highlights the elevated complexity of finding the EU way to regulate an emerging technology like Artificial Intelligence. Only after the general approach is clearly set can the EU turn its attention to normally lower-priority areas like culture and creativity, which are currently absent from the Commission’s documents.

With this in mind, we open the episode with Kevin, a practitioner specialised in digitalisation of culture, in order to find out if the EU has developed a specific model on AI, different from those of other global players.

Kevin: Obviously there is a lot going on in this area. I would say that this indeed holds true because there is a policy-based approach. There is a consideration of European principles, for example, for access and participation, as well as data sovereignty. So that’s, I think, something which is specific. Also, for instance, adhering to the European laws regarding data protection. So in that sense, I think there is a distinct model that is maybe more policy-driven than market-driven. And that tries to bridge the gap between AI development and European values or regulations. 

Annie: Continuing in the same vein, Adam, researcher specialised in multimedia, speaks about the existence of a distinctly European view on Artificial Intelligence.  

Adam: Yes, indeed. I think the EU has attempted to replicate what has happened around GDPR by trying a different approach to the technological sector. What might make it more difficult than GDPR is that there was a kind of long tradition in the area of data protection, while with AI, a lot of things are happening in other places of the world and Europe is still catching up. And now it is trying to find an approach that sets it apart from other stakeholders. It probably can’t catch up anyway in all respects. 

I think the EU approach reflects both the positive and the negative aspects of the European attitude in this area. So in the positive sense, being more aware of the impacts that technology may have and trying to have safeguards in place, including more awareness around privacy and human-centred aspects. But on the negative side, we see Europe being more hesitant towards new technologies and slower in adopting them.

Annie: Hear more on the attractiveness of the EU’s cautiousness from Tania, a representative of the private sector working on AI for privacy.

Tania: I think the European approach isn’t yet finished. So it will most likely develop. I also believe probably areas of the Global South will also begin thinking about, you know, regulation. I do think that the cautious approach of the EU towards some of these technologies and their effects on our world, on our privacy and so forth, is an important step and hopefully one that others will follow as well.

The EU approach includes this ability for people to define what the boundaries are for their rights and for their safety, privacy and security, as many European countries have these rights as a default in their constitutions. I think that if we do not ask these questions regarding, let’s say, the use of data and the use of machine learning systems now, then we could end up in a quite different future where we might not want to be. 

Annie: Moving on to Rose, a researcher specialised in linguistics, with the same question – has the EU defined its approach to AI?

Rose: Not yet, but the task itself is incredibly difficult and I appreciate the effort. I also appreciate the hesitation of the EU, as well as the emphasis on citizens’ privacy and digital safety in Europe. We have to keep in mind that not everything which could be implemented will be implemented, as would be the case in other states like China or Saudi Arabia. The reasons for this hesitation by the EU are European values and a separation between the public and the private spheres, with individual freedom as a core principle in humanist philosophy itself. Silicon Valley AI solutions or Chinese governmental ones stand in harsh contrast to these principles.

To increase AI solutions for the EU, we have to address, in my opinion, two main problems. First, the data infrastructure: where are the servers located? Who owns the servers and who owns the data, in the context of e-government and intellectual property law? The other aspect is what part AI plays in a future European society. Not the question of citizenship for an AI, like the robot Sophia in Saudi Arabia, but AI as an agent in the public and private sphere. This has to be discussed first, and only then citizen rights or legal rights.

Annie: And now David, a representative of a national cultural institute, touches upon the fact that AI policy reflects specific European worldviews and values.  

David: Yes, because there is a European idiosyncrasy that is more or less clearly distinguishable from what China or the United States postulate for the development of these technologies. In the case of the Union, it is also embedded in a very specific legal framework, within general principles linked to citizenship, such as ethical considerations and data protection. Other countries, where perhaps these values are not so present in social development, will have a different approach.

Annie: The diplomat Tom offers us an insight into the connection between these European values and the EU’s global ambitions. 

Tom: Clearly, in terms of regulating frontier technologies like artificial intelligence, the EU has become a global standard-setter. It has developed stances earlier than other similar or like-minded countries. We can see that in the case of artificial intelligence, there have been several documents and even pieces of legislation that the European Union has come forward with. The General Data Protection Regulation in 2018 has some references to artificial intelligence already. 

I think the approach is very much based on principles, it’s based on human rights. Also, it is based on a desire by the European Union to exploit the uptake and the opportunities of artificial intelligence. It is a fact that the European Union, as an economic area and as an ecosystem of innovation, is lagging behind other players, namely the United States, but also China and other countries when it comes to AI in terms of research, but more importantly in terms of applications and particularly platforms. Therefore, its regulatory approach is different from, for example, the one of the United States.

Annie: Although we can see there is no consensus on the finality of the EU’s AI approach, it is clear that it is conditioned by a distinct ethical and legal context, with data protection, individual freedom and sovereignty coming up several times. The Commission’s proposed AI Act defines it as a human-centric and risk-based approach, with a focus on safety and fundamental rights. At the same time, the promotion of cultural and linguistic diversity is one of the aims of the EU, as listed in Article 3 of the Treaty on European Union, and its motto is United in Diversity. But does the European model of AI regulation allow for diversity and respect for other value systems? We ask David whether the EU’s approach is compatible with different cultures.

David: Yes, I think so. Of course, these technologies will always have to be adapted to the local legal framework, such as the one defined by European legislation. In principle, it should not be a problem to apply it to other countries. Certain things would obviously have to be restricted in some cases and extended in others. But it is possible with clear rules of the game and legal certainty.

Annie: We turn to Adam for an explanation of the exportability of EU AI regulation, reminiscent of the so-called Brussels effect.

Adam: I don’t think it’s fully compatible with other systems and it probably doesn’t have to be. What we’ve seen with GDPR is that having a certain policy in a market as large as the EU can have an impact beyond it. I think that can also work for AI, at least in part. In other areas, if you think of what’s happening in China, there may be a different story. The impact of Europe there might not be that large because they are following their own path, and maybe it will only impact some companies that try to be active on the European market. Thus, the reach beyond Europe will be very different depending on how tight economic relations are with Europe and how important Europe as a market is for companies from these countries. 

Annie: Marta, a representative of a national cultural institute, calls for increased respect for diversity and listening effort by the EU. 

Marta: I think it’s a start, but I think it has to be much more. I feel there’s definitely potential also in our work to be more inclusive and to allow in more perspectives. That could be scary, because there are critical voices that are becoming stronger and more vocal, since there’s more education available and there are more connections available. I sometimes sense a kind of fear about letting these voices in and hearing them because it’s obviously, you know, putting the status quo in danger. But I believe we have to make that happen right now. I don’t think we can keep running from it. The EU can only learn and win from it if we let those voices in right now, if we listen more. To a certain extent, we still follow certain structures that were put in place a while ago and that actually emphasised the power dynamics. So we have to be super aware of this and ask: How can we break them down? How can we invite different voices in? Why are some voices present and others not? That kind of openness and approach would be quite useful.

Annie: This is a good moment to recall the need for ensuring diversity in AI which was discussed in episode #1 of the podcast series. Listen carefully to Rose.

Rose: The problem of AI in the context of diversity management, in the EU or beyond, is that the technologies themselves have a strong bias towards the white male worldview. Not only because the developers mainly have this social background, but because the technology itself can only function through oversimplification and stereotypes. AI relies on keywords or networks of keywords, not on cognitive concepts as in human cognition, which are sensitive to context and to change, and adapt dynamically in an ever-changing cultural context. The technology is oversimplified and still deterministic.

Annie: Lucas, a representative of the private sector working on AI for language, reflects on the concept of different cultures in opposition to country borders.  

Lucas: I think that most researchers don’t really split themselves up into countries because you see that a lot of the AI research that comes out is from collaborative teams that are multinational. I don’t see national divisions in academia or research. In the private sector as well, I don’t believe that people think in terms of borders when they are building products and innovating. In marketing you have to build for the customer. But even within one country you have many different kinds of customers. So people don’t think in terms of borders, they think in terms of, well, what language do I need to communicate with the customer first of all. So companies market to these kinds of people differently, but you’ll find the same people in every country. It’s a psychological difference rather than a border that separates people. The need to adapt to the different cultural characteristics is what marketing is about –  marketing personas usually carry across many different countries. 

Annie: This traditionally collaborative working method, characteristic of non-governmental actors, is increasingly in tension with rising global competition. The EU’s approach to AI is one example of its search for a middle ground, as explained by Tom.

Tom: I think that the attempt of the European Union is not to create a world, a bipolar world, where the European Union will need to choose between an American-based world and a Chinese-based world. If it had to choose, I think we are far more aligned in terms of values and human rights with our transatlantic partner, the United States.  But for the moment, the European Union has absolutely no interest in this increasing polarisation. I think the European Union has very clear principles and is very clear in human rights, but it is also not interested in a geopolitical confrontation. 

Annie: We would like to quickly remind our listeners that the interview with Tom was recorded before the outbreak of the war in Ukraine. Moving on, Kevin sees promotion of diversity and respect for differences as a unique selling point of the European model on Artificial Intelligence. 

Kevin: This is probably something that we’re just starting with. This could, however, be one of the main benefits of a European strategy on AI compared to other nations because, thinking for example of language, this is a very unique thing. Most of the development is coming from the Anglo-Saxon countries, and these tools only work for English-language materials. And then there is recent work coming out of Chinese universities. But in the EU we have more than 20 officially supported languages, and we also have information and resources in all these languages. That’s a bit of a reflection of European culture. There is consideration also of smaller languages and smaller numbers of speakers, and generally more diversity in terms of the data and available resources.

Annie: We already identified this potential, but also a need, in the first podcast episode of the series. Still, Rose’s view on the use and support of multilingualism through AI in the EU is not rosy.

Rose: The EU is doing less than it could do to promote language processing. Since it works in different languages, translation technologies are very important. They do function quite well on a very basic level, but a more nuanced translation that captures all the details of a cultural context is still lacking. The EU again has this hesitant attitude also towards translation tools and using them on a larger scale. As I previously said, I appreciate the EU’s hesitant attitude because the systems are not as good as Silicon Valley companies or corporations want to make us believe. That’s the problem. 

Annie: And now we are back to Kevin for a quick idea on getting funding for the development or research of AI tools.

Kevin: I agree we could establish machine translation services for EU languages to increase efficiency. There are some initiatives in the Connecting Europe Facility, for example, to apply machine learning. 

Annie: Such programmes are crucial to the cultural sector, which is usually in dire need of financing for its ongoing activities and thus unable to invest in new projects like Artificial Intelligence. David talks about it first-hand.

David: We are in very preliminary stages of the digitalisation process of our institution. So we have not yet matured for those kinds of issues related to artificial intelligence, and we are still looking at what are the needs and possibilities in the future to work along these lines. We see future lines of collaboration in the AI framework, which could be very, very fruitful, but we are not ready for it yet. We have to be much more aware of the European initiatives in order to be able to get involved in them.

Annie: One such initiative is being promoted by the Europeana Tech Community. Adam briefly introduces the importance of knowledge sharing among digital cultural professionals.

Adam: Among cultural organisations across Europe the level of the use of AI is quite diverse. So some are at the forefront and are very, very active in trying new technologies, and others are just struggling with their more day-to-day problems. And thus the Europeana idea is also to help the community by sharing best practices, tools, resources, as well as to make sure that organisations can make use of AI in a way that fits them. This is done through different publications and webinars. Probably the tools and the processes they will need will be quite different, but we are creating a European community of cultural practitioners working with AI.

Annie: Another network in the field is the AI4LAM community focused on artificial intelligence for archives, libraries and museums. For some examples of cultural institutions engaging with AI, we hear from Kevin.  

Kevin: Yes, so I want to bring up the German Research Center for Artificial Intelligence, the UK project Living with Machines, which involves the British Library, and then there is a large project in France at the National Library of France. And in the Netherlands there is a network called the Cultural AI Lab, which I think is a very interesting initiative because it’s a network of universities and cultural heritage institutions, like museums, archives and libraries, and it has set out to explore the gap between cultural heritage institutes, the humanities and informatics.

Annie: I would also like to share two instances of EU-funded projects in the cultural and creative sector. Firstly, AI4Media is funded by Horizon 2020 with the aim of becoming a Centre of Excellence delivering AI advances and training to the media sector. Notably, it is tasked with ensuring that the European values of ethical and trustworthy AI are embedded in these developments. The project has a large partner network and has resulted in several reports and papers, as well as datasets and use cases.

Secondly, a recent research project investigated the relationship between AI and the videogames, visual arts, performing arts, and museums and heritage sectors, with a special focus on opportunities and challenges.

And to finish off, Sophia, as an artist and practitioner, highlights an additional European action nurturing AI for culture. 

Sophia: The S+T+ARTS initiative of the European Commission presents itself as the innovation at the nexus of science, technology and the arts. It supports projects and people that have the potential to make meaningful contributions to the social, ecological and economic challenges we face. Naturally, some of them include Artificial Intelligence. This programme is managed by Ars Electronica, which also organises an annual Festival for Art, Technology and Society, and a worldwide competition for CyberArts.

Annie: And so we can see that there is a multitude of initiatives regarding Artificial Intelligence in the cultural sector, both grass-roots and supported by public funding. However, at the policy level there is no specific focus on AI in culture, as urgency is given to other sectors like banking, biometric identification, medicine or autonomous vehicles. In fact, the only EU document dealing with AI and culture is the resolution adopted by the European Parliament on AI in education, culture and the audiovisual sector. Notably, it proposes that manipulative content and deep fakes be prohibited as contrary to EU values.

So, why is this omission of culture in AI regulation a problem? As we discussed in the last episode, all the data that any AI learns from is cultural, including images, text and sound, and AI in turn rebuilds culture. Bias can easily be traced back to a lack of cultural diversity, which it then also reinforces. It therefore remains to be seen how the European values-based approach to AI will impact International Cultural Relations and the promotion of worldwide cultural richness, given that it does not specifically address culture. This is precisely what we will tackle in our next podcast episode. Stay tuned!

Damien Helly: Thank you for listening to today’s episode of our Composing trust podcast by culture Solutions! If you liked it, you can subscribe and follow us on your favourite podcast platforms, and contact us at culturesolutions.eu. 

The views expressed in this podcast are personal and are not the official position of culture Solutions as an organisation.
Musical creation credits: Stéphane Lam