Composing trustworthiness #3: Artificial Intelligence in International Cultural Relations

In this third episode of the miniseries entitled Composing trustworthiness, we dive deep into our core topic: the impact of Artificial Intelligence on International Cultural Relations, and the way this disruptive technology drives or impedes relations with partners around the globe. 

The miniseries explores the impact that Artificial Intelligence has on art, language, and culture more broadly, as well as on international relations. We examine the EU’s aspiration for trust and excellence in AI, in light of new developments and growing debates around diversity, fairness, values and risks.

Methodology, sources of inspiration, and transcript


This podcast miniseries is based on a research project that included nine interviews with cultural practitioners and researchers from different fields, including representatives of cultural institutes, artists and diplomats. Their names and positions have been pseudonymised as follows:

  • Adam, Researcher specialised in multimedia
  • David, Representative of a national cultural institute
  • Kevin, Researcher and practitioner specialised in digitalisation of culture
  • Lucas, Representative of the private sector working on AI for language
  • Marta, Representative of a national cultural institute
  • Rose, Researcher specialised in linguistics
  • Sophia, Artist and practitioner
  • Tania, Representative of the private sector working on AI for privacy
  • Tom, Diplomat

The content of the 2021 interviews was transcribed, translated where necessary, carefully curated and re-recorded with the help of AI tools: 

  • Transcript (speech-to-text):
  • Translation: 
  • Audio (text-to-speech): 
  • Image generation:
  • Music generation:

Sources of inspiration and references:

  • Council Conclusions on Digital diplomacy and EEAS page on Digital diplomacy: reaffirming commitment to multilateralism and promotion of universal human rights, fundamental freedoms, the rule of law and democratic principles. 
  • The International outreach for human-centric artificial intelligence initiative aims to promote the EU’s vision of sustainable and trustworthy AI through the mobilisation of strategic communication and technology diplomacy actions.
  • Technology or AI as a tool, topic and environment for diplomacy: there are three fields of digital transformation that profoundly affect diplomacy.
  • Freedom Online Coalition: partnership of 37 countries working for internet freedom. 
  • Global Partnership on Artificial Intelligence (GPAI): multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI.
  • UNESCO Recommendation on Ethics of AI: first-ever global agreement on the ethics of AI reached by the 193 UNESCO Member States.
  • OECD AI Principles: focus on how governments and other actors can shape a human-centric approach to trustworthy AI.
  • Council of Europe work on Artificial Intelligence: while a binding treaty is being negotiated, there are more than 50 texts so far.
  • GDPR as an example of the Brussels Effect: EU legal traditions being echoed in the legal systems of many countries around the world, especially in the domain of digital markets.
  • TTC Joint Roadmap for Trustworthy AI and Risk Management: outcome of the EU-United States Trade and Technology Council that informs approaches to AI risk management and trustworthy AI on both sides of the Atlantic.

Damien Helly: You are listening to the Composing trust podcast, by culture Solutions – a series on European cultural action with the world. Is Europe still attractive? How is it perceived outside the EU? How do Europeans promote culture together in the world, and with which partners? What have they learned, what is their experience? Our Composing Trust podcast series will address these issues.

Welcome to you all! My name is Damien Helly, the co-author of this Composing trust series, by culture Solutions. Today’s podcast is a special one: we are hosted by the artificial voice of our colleague Ina Koknova in a miniseries of several episodes on the relations of Artificial Intelligence (AI) with culture and trustworthiness.

Annie: In this third episode of the miniseries entitled Composing trustworthiness, we dive deep into our core topic: the impact of Artificial Intelligence on International Cultural Relations. Just a quick recap of the previous two episodes: we heard from our cultural practitioners and experts that AI and culture are intrinsically tied in a variety of ways, and that there are persisting issues with representativeness, diversity and bias in data and decision-making, but also numerous opportunities. In the second episode we traced the EU regulatory approach to AI, which can be defined as values-based, and gave multiple examples of specific projects at the intersection of AI and culture.

Going one step further, today we look at the way this disruptive technology drives or impedes relations with partners around the globe. In foreign policy, AI can be seen as a tool, a topic or the surrounding environment or context. The EU’s ambition to establish itself as a global leader in AI as well as to collaborate internationally is spelled out in documents like the Digital Compass, the AI Act itself, as well as the Council Conclusions on Digital diplomacy. 

The European Commission Directorate-General in charge of AI policy, DG CNECT, has recently set up a Unit on policy outreach. Similarly, the European External Action Service now has a Unit specialised in Digital Diplomacy and a dedicated page on its website. However, where does culture fit into this increasingly geopolitical picture?

Annie: When asked how AI is affecting relations with other countries, the artist Sophia talks about Artificial Intelligence and power dynamics.

Sophia: AI just undermines existing power structures. None of these questions can be disentangled from geopolitical considerations and from how AI will profoundly unravel the existing international system. Supremacy in technology means being superior in the future. And so that goes hand in hand with every cultural manifestation, every aspect of our society.

Annie: In the same vein, Rose, our researcher specialised in linguistics, highlights the threat to democracy, but she also sees huge potential for citizen-led AI initiatives, potentially across borders.

Rose: Data infrastructure is a prerequisite for digital safety and democracy, as well as for the democratisation of data capitalism. AI retrieves and analyses user data for online marketing in a very sneaky way in Silicon Valley companies. The problem with commercial social bots is that they were perfected by companies as virtual carriers that advertise and retrieve user data at the same time, and on a huge scale never seen before. So every single digital move of a user is not only monitored and archived together with the IP address, but also analysed and interpreted by the AI, which is, at its core, a biased system. No government of a free democratic state should rely on this kind of data infrastructure; it needs development, and it needs a shift towards a democratised infrastructure. We need huge server parks and maybe something like citizen data unions, founded by the citizens themselves. AI is becoming a social and intersectional agent in modern society. Thus, investigating the actual conditions in which human-computer interaction takes place is a must for any discussion on the pros and cons of AI applications in democratic societies.

Annie: Continuing with the topic of democracy and safety, we have the reflections of Tania, representative of the private sector.

Tania: I feel like AI is affecting international relations in warfare more than in anything else. I wouldn’t call it espionage, but, you know, social warfare. So when we look at something like Cambridge Analytica, and when we see misinformation as a portion of warfare, we can definitely see that there’s an increase in state actors or state-sponsored actors across the world, as well as computer-vision drones. So I think it’s creating surreptitious as well as overt ways for people to manipulate and maliciously influence other systems: not only by being able to essentially bomb places where you have no risk of losing a soldier, but also by being able to viciously influence political thought in other countries.

On the other hand, there’s probably room for cooperation. Machine translation is something that is getting faster and better. I would imagine that it could indeed help things over time. I also think that there are ways we could use AI responsibly to improve and increase production, supply, medicine, research, and cooperation.

Annie: Cooperation is the word I want to focus on now, with David’s view on the positive use cases of AI in international relations from the perspective of a national cultural institute.

David: Yes, it will obviously affect everyone and in all possible aspects of cultural relations. Technology has to be used to facilitate, especially in our European environment. We have to try to somehow create more common awareness. These technologies can contribute to that, if they are used in the right way, to bring easier or better communication between citizens of different nations. 

Annie: Staying within the realm of cultural institutes, Marta ponders on AI as a tool as well as the surrounding uncertainties. 

Marta: I do see in departments at work that they really want to hear about the impact of AI on international cultural relations. But we don’t know yet how to actually make best use of, for example, big data sets. They can be used by a certain community, or by a government agency. I feel it’s actually interesting to look into: how would an artist work with a big dataset? How would they visualise it, what categories would they look into, and what influence does that give?

So, it is a big question right now and everyone is looking: what can we do? What are the questions we should be asking? What is happening? What are other countries doing? And I think the more we break down the silos between different approaches (like splitting into one that is academic, another that is the tech companies, and yet another for politics), the more we try to talk to each other, the more chance we have to make this a really good AI development. Talking is the only way to become aware of the risks and the blind spots.

Annie: Lucas, from a private company, shares his experience of living outside Europe and brings some additional examples of positive uses of AI. 

Lucas: In the gym that I attend here in the city, they use face recognition for entering the facility. And that’s pretty normal for everybody, I suppose. Even if there was surveillance, I think it would probably be for the benefit of society. It’s just keeping society safe.

But for national security, like cyber issues, I guess for such reasons AI would be important in government relations. 

As I previously mentioned, people from academia don’t care about borders, they cooperate to produce research together. And businesses develop products for their clients, regardless of the country.  

On the other hand, AI applications in languages can facilitate communication between speakers of different languages. Other AI tools assist in making decisions, including decisions that humans are sometimes not able to make very well.

Annie: In more concrete terms, the diplomat Tom tells us how the relations between different countries in the particular domain of AI unfold.

Tom: So I would first of all say that international cultural relations is a term that we use in foreign policy, and we mean something similar to cultural diplomacy by it. And so the question here is, if I look at it from a mere policy side, does Artificial Intelligence play an important role in international diplomacy? And I would say it plays in many ways an important part.

First of all, increasingly, as a tool for treaty negotiations. And of course, as a topic in international relations, it is omnipresent in geopolitical terms and clearly also in terms of the ethical principles underlined. And they need to be negotiated on all sorts of levels, you know, between like-minded countries for sure, but not only.

And we need to rally around our common values. When we talk about human rights, we have different organisations like the Freedom Online Coalition, or forums like UNESCO, in which to negotiate with countries from the global South or countries that maybe do not share the exact same values as we do. Since UNESCO is responsible on behalf of the United Nations for education, journalism, freedom of speech, but also culture, it is the organisation where you can look at international cultural relations on a global scale, and the type of documents and resolutions UNESCO produces are evidence of this becoming an important topic of diplomacy, and in a sense of cultural diplomacy.

Annie: So how likely is it to actually reach an international agreement on Artificial Intelligence, that suits the ethical and legal systems of all states? Hear from Tania. 

Tania: I would love it if global regulation on AI was feasible, but I’m just not sure. I think even minimally having it on the list of things that the UN wants to approach or think about would be a really nice first step. And I think that the steps that Europe is making now towards kind of leading the discussion on what would a more ethical, fairer and just system look like is a very good conversational starting point. And I believe hopefully that will influence more countries to also look and maybe collaborate with one another on what’s appropriate. With the Brazilian General Personal Data Protection Law in response to GDPR, we can see that sometimes the EU can lead the thought on how one should probably regulate.

Annie: Next, the practitioner specialised in digitalisation of culture Kevin shares his view on the need and desirability of global EU leadership on AI regulation.

Kevin: I think, yeah, definitely the EU should continue to promote solutions that follow the kind of laws and principles that we’ve established here in the European Union. These include data protection and making sure, for example, that information used in systems does not involuntarily escape to other platforms or third countries. It should also encourage the business sector to continue investing in AI that is transparent and that the public is willing to accept. There is strong resentment in the public against AI applications and the usage of AI in the public sphere where it is not transparent how this actually works. And this is an area where I think the EU is probably better suited than other global players. And then obviously there is an interest from my side to invest more heavily into AI for culture, because I think the EU has a unique asset in this diversity, and in leveraging this diversity of languages and cultures. And this will, or should, allow the EU to create better AI solutions, because they are based on thinking more inclusively and widely.

Annie: Taking on board the idea of cultural diversity, we go back to the question of drafting a universal AI treaty with Marta.

Marta: There have been different thoughts around, you know, how to go about this. Do we try to create a new one? Do we look at the different cultures and their understandings of ethics, of what is morally right and what is not? Do we try to throw that all in a pot and try to get an idea that can then guide us? Probably not.

Do I think that there can be a global approach? I don’t. It’s really hard because I’m a cultural anthropologist at heart. And to me, there’s always a lot of meaning in the various cultures and their own perspective. So certain understandings, even when it comes to things you would think are universal, like human rights, for example, are in fact different.

People and countries claim: “My view is the one that everyone should have because I’m morally right and this is how everyone should do it”. I think that’s super dangerous. But I understand why there’s this kind of longing for a global approach.

I think we have to invite those different voices and we have to talk about it because maybe we will come to a certain kind of agreement of what we want to accept as a kind of regulation. However, it’s really tricky because so many times when we say global is actually Europe or it’s Western, you know, so we really have to be very, very aware of that in order to prevent that from happening again.

Annie: Speaking of the Western perspective, understood as mainly led by the United States and the European Union, in the case of AI governance we can clearly see that there is no consensus between the two partners despite their shared values and interests. We inquire into the reasons behind this transatlantic divide, starting with the analysis of Tania.

Tania: The American approach has essentially been to continue to fuel research and support for AI systems, with little sense of regulating their usage in most cases, including the military. In the US, AI systems are used by the government but also by private companies. Systems used in deporting children and their families, or separating them at the U.S. border, are an extreme example of a lack of privacy and a lack of focus on human rights. That is fuelled a lot by venture capital and by the kind of capitalism that shapes the US system. That is obviously quite different from Europe, where I think there is a lot more questioning of how AI systems might affect things; there are even suggestions of a ban on certain types of systems. Europe is concerned about prejudices in systems and unfair treatment of different individuals, as well as the erosion of privacy.

Annie: Tom confirms this inherent divergence between the two approaches, but introduces also the geopolitical aspect, which could serve as a uniting factor in the face of a perceived common enemy.

Tom: So the United States, when it comes to Artificial Intelligence, has a structure that is a little bit the opposite of the European Union’s. There’s very little legislation on the federal level. But there’s also, for the moment, a lack of a coherent national strategy when it comes to Artificial Intelligence. At the same time, the United States is, of course, leading the world in many aspects of Artificial Intelligence. It is, in some way, in a competitive race with China over who sets the standards, and who can reap the benefits and keep the technological superiority in the area of Artificial Intelligence. AI is increasingly framed in geopolitical terms when it comes to China and the rivalry between those two superpowers.

Annie: In times when the world seems to be heading towards a Second Cold War or a de-globalisation, cultural relations offer a safe space for dialogue and trust-building. Hear from Marta how listening to other countries’ perspectives can contribute to richness and cross-fertilisation of ideas.

Marta: I think the U.S has a different model, but as I said, I’m always trying to stay a bit more in the realm of civil society and cultural relations, to keep an arm’s length. I think there’s a strong force coming from civil society. And what I find very important in this whole context and idea of cultural relations is that it’s multilateral and that it’s not a one-way street.

In the host country we usually present it like: this is the European idea, this is the European model, and it’s wonderful, so please have a look at it. But I think especially now, we are at a point where we have to be open and learn from each other, whether in the U.S., Africa, or elsewhere. Everywhere I felt: oh gosh, there are so many things we, coming from Europe, still have to learn, and perspectives to take in.

But having said that, I see from our partners in the U.S. that there is an interest in the lessons learnt when it comes to government regulation of tech companies. For example, there is an interest in the kind of strong focus Europe puts on inclusion, participation, and trying to reduce the possibilities of bias. We are at a certain point in time right now where there is an interest also in the U.S. that maybe a few years ago wouldn’t have been like this. In the last few years, the actors in the U.S. have become more open to the idea of technology regulation.

If we come from a cultural relations standpoint and from a cultural perspective, this is actually a value added. At the same time, when Europe and the US talk, Europeans may come up with ideas that might constrain the American innovation mindset or their growth. So one has to be mindful of these things. What is important to a certain culture, like that mindset, depends on the environment you’re in. Then we should try to understand: okay, what can we contribute? It’s not that everything is brilliant about privacy policies in the EU, but we do have experience with them and we would love to share.

Annie: Kevin tells us more about how such interactions between the EU and other partners, namely in the United States, unfold in reality. 

Kevin: We had a conference in Germany where we were trying to contrast the European approach to AI, specifically from the cultural sector, with that of, for example, American businesses.

At least there is a possibility to collaborate on these issues, say, for example, on diversity and ethics. In the cultural sector, we see the forming of European infrastructures and consortia that try to exchange views online. And we see similar networks emerging in the U.S. So there have been some initial attempts in recent years to bring these communities together, which obviously has been somewhat negatively impacted by COVID-19.

On the other hand, there is, especially in the research and culture sector, very little opportunity for funding that allows implementing initiatives that work across the ocean. So it usually depends on a personal or business relationship.

Annie: Rose is not so optimistic though. She advocates in favour of European strategic autonomy as a path towards democratic AI. 

Rose: First of all, the EU must not rely on Silicon Valley players for the most basic data infrastructure, but develop its own democratic infrastructure, such as decentralised server parks or citizen data unions. I understand the European hesitant attitude towards AI as an advantage.

Once this first step is accomplished, we can implement everything from e-government to e-learning. The EU has to implement laws accordingly, for example, higher taxes on European content. The EU cannot control a globalised world, of course, and ideally we would need a worldwide institution for the democratisation of AI capitalism, like a UN government or a world-wide movement in some way.

Annie: Adam, researcher specialised in multimedia, shares a similar view on Artificial Intelligence companies, laws and opportunities. 

Adam: AI stakeholders include, of course, providers of technology, and here things start getting interesting, because a lot of the technology is coming from outside the EU. So in the end it will also be a question of who controls the technology and who controls the data. One thing you hear quite often in Europe is: because of all the regulations we have, we can’t do that with our data. But in fact, you can do a lot with your data; you just have to be careful and think about how you do it. And that’s also maybe something that could influence external relations, because I think there are other regions in the world that would follow such a model and that would maybe not want to just have technology from third parties coming in, especially those in the closer neighbourhood of Europe. Thus this could be a big opportunity.

Another opportunity, I think, is making use of the rich resources we have in Europe and making sure they get properly digitised and become usable. We see a clear dominance of US and Chinese technology providers, and I think that’s not so easy to change. We’ve seen examples in the past where trying to do this in a very top-down manner in Europe failed, because you just can’t invent another big company that replaces someone who’s already well established in the market. It’s better to find niches and have them grow from small organisations up; it’s more effective. What this would require is just more networking, better and easier access to and sharing of resources, and also, in the end, awareness in the broader public. We can also try to have a more structured or more informed debate around it – not somewhere between either hype or fear, but a more balanced debate.

Also we should try to involve some stakeholders who might be willing to join on this path. Like, for example, Japan has taken up some ideas around data protection and I could imagine they would also be more fond of following a European approach on AI than a Chinese one. So there could be allies out there.

Annie: David also weighs in on the key stakeholders in the field of international cultural relations and their interests and challenges in the face of the imminent AI surge. 

David: As for the main stakeholders in the education field, the AI ecosystem includes the citizens, the cultural centres, and also the large technology companies because they are the first to make large investments.

The cultural centre will have its importance – we must think about how to introduce this phenomenon of Artificial Intelligence, not only into the intellectual debate, but also into our way of acting, our way of being and evolving, and in the future into our own activity. It seems clear that it is going to be a very important part of social life in the next 20 or 30 years, and therefore cultural institutions have to evolve alongside it.

I am still afraid that we are perhaps at a rather preliminary stage in this, and we do not know how to integrate AI, or we are not even reflecting on how to integrate it. There is a need for the different cultural institutes to think about the future in a more systematic way, taking into account that this, whether we like it or not, is going to be incorporated into our activity, into our way of functioning and probably into our international relations. Yes, there is already intellectual reflection on the matter, but perhaps there are not yet concrete programmes to integrate the technologies themselves into the activity.

Although we are at a much earlier stage of the digitisation process, we cannot get lost in the evolution of technologies. As a cultural entity, we are also interested in library activity and information management. There are institutes whose main activity is not so much linguistic as cultural. English, French or Spanish attract more interest among students. On the other hand, other institutes that have very strong cultures, but languages that are not so international, are probably more interested in their cultural production than in their language teaching.

Annie: It is clear that cultural institutions are at risk of lagging behind, specifically when there are no sufficient resources to be dedicated to all the new technologies that keep emerging. But at the same time, cultural initiatives have the potential to imagine, react to or even spearhead change themselves. Sophia reflects upon her experience in bringing different stakeholders together to discuss important societal issues thanks to the power of art. 

Sophia: The work we’re doing here is not just work on one side of the problem, not just work with cultural organisations, but elevating them and including them in discussions in ecosystems where they usually don’t have access. First, we need to break open those silos, think differently, have different experiences, and approach a problem from a very different angle. This bipolar world that is now splitting between those geopolitical superpowers is all triggered through a different understanding of reality, a different self-image that you ultimately project to the world.

The way an artist approaches technology, and the way they view the potential and the pitfalls, is so inherently different from the way a policymaker would approach that problem. And it’s very, very different from how a user with limited media literacy would approach it. So if we don’t create this common understanding of what we’re actually dealing with and what it means to be a digital human – when our avatars online basically make up almost 50% of our living and breathing day – then ultimately this will lead to the fragmentation of our society and just create parallel structures. That’s unsustainable for democracy, that’s unsustainable for society. What we need is a common set of values, a shared sense of reality.

Technologists working in Silicon Valley have been groomed in a very different ideological setting, which we call transhumanism. And what we are advocating for, which in our understanding is a very European approach to technology, is called digital humanism. So what we want is to actually try to evolve in a human-centred way alongside technology that elevates us, that amplifies us, our humanity, our capabilities, our potential as humans. 

And it’s not about evolving to a point where the human becomes obsolete – that’s what we’re trying to avoid. But ultimately, if you don’t align those two, they will just diverge and lead to a catastrophe. So this is the moment where we really need to align on strategic issues. And this is why everyone has stressed the importance of the strategic partnership between the U.S. and the EU. The common thread between all of us is technology and how we want to view it and use it.

So far, we haven’t really encountered resistance, but mainly admiration. It is an original idea and people are thirsty for ideas. On the other hand, art has the advantage of actually changing culture massively. But for a lot of people, it sometimes seems that it’s just art and it’s harmless. So you can experiment, you can do stuff. It’s a safe space, but it’s not, you know, it’s not as impactful. It has a massive impact on our reality, but sometimes even under the radar and in a much more subtle way. Everyone is allowed and welcome. And there are no wrong answers. There is no judgement. You don’t embarrass yourself, you’re allowed to criticise, you’re allowed to think outside the box. 

We had some technologists actually say, “I’m sorry, but I think I don’t see it”. And then you’ve seen other people who are very open to this sort of experimentation, and really something unfolds and opens up in their mind. I think this is a very, very novel idea: to bridge policymakers with artists and bring them to talk in a very novel manner, because they’re used to a certain vocabulary. The policymakers we’ve engaged thus far have seen it almost like a vacation from their work, really a moment to take a break, to reflect on a problem in a different manner and to just be surrounded by inspired minds that ask all the right questions all the time, that push them forward constantly to think differently, and ultimately to go to a much more profound level of investigation.

Art, as always, goes one step further and asks: “But why?” It gives you the opportunity to rethink things that we as a society have sort of set in stone, and then realise that they’re not; that they’re changeable, that they’re just momentary cultural manifestations, that everything changes. I think the moment is really here to rally our allies and like-minded people around a common cause.

Annie: That common cause, around which the EU and its partners have to rally, is the prosperity and well-being of humanity. To achieve this with the help of Artificial Intelligence, the Union can: first, strengthen its regulatory cooperation, for example under the auspices of UNESCO, the Council of Europe which is currently negotiating a binding treaty, or its Free Trade Agreements; second, enable people-to-people cultural relations through increased funding for projects exploring AI and culture, international exchange programmes and co-productions in the area, as well as capacity building and development of AI literacy; and third, reach out to diverse stakeholders, not only governmental and not only coming from like-minded countries, and address the issues of Artificial Intelligence in an equal and fair way. In the next episode of our miniseries, we will direct the gaze towards that future and the way forward.

Damien Helly: Thank you for listening to today’s episode of our Composing trust podcast by culture Solutions! If you liked it, you can subscribe and follow us on your favourite podcast platforms, and contact us at 

The views expressed in this podcast are personal and are not the official position of culture Solutions as an organisation.
Musical creation credits: Stéphane Lam