Throughout the miniseries entitled Composing trustworthiness, we explore the impact that Artificial Intelligence (AI for short) has on art, language, and culture more broadly, as well as on international relations.
We examine the EU’s aspiration for trust and excellence in AI, in light of popular excitement surrounding the launch of new tools, but also growing debates around diversity, fairness, values and risks.
In this first episode of the series, we focus on the importance of culture for AI development and, conversely, the influence of AI on culture.
Methodology, sources of inspiration, and transcript
This podcast miniseries is based on a research project that included nine interviews with cultural practitioners and researchers from different fields, including representatives of cultural institutes, artists and diplomats. Their names and positions have been pseudonymised as follows:
- Adam, Researcher specialised in multimedia
- David, Representative of a national cultural institute
- Kevin, Researcher and practitioner specialised in digitalisation of culture
- Lucas, Representative of the private sector working on AI for language
- Marta, Representative of a national cultural institute
- Rose, Researcher specialised in linguistics
- Sophia, Artist and practitioner
- Tania, Representative of the private sector working on AI for privacy
- Tom, Diplomat
The content of the 2021 interviews was transcribed, translated where necessary, carefully curated and re-recorded with the help of AI tools:
- Transcript (speech-to-text): trint.com
- Translation: deepl.com
- Audio (text-to-speech): animaker.com
- Image generation: openart.ai
- Music generation: aiva.ai
Sources of inspiration and references:
- AI and human creativity: an ongoing debate on the obsolescence of artists, the new opportunities, and the issues of copyright. See the extensive “art as experiment” by Alexander Reben, Greg Rutkowski on his style being copied, and the blog post and exhibition by AI for Good by the International Telecommunication Union (a United Nations specialised agency).
- Human-computer interactions (HCI): a new form of dialogue, in which users mimic the AI system due to the human tendency to adapt linguistically to a conversation partner (interactive alignment). See for instance How do we speak with ALEXA or Conversations With an Amazon Alexa Socialbot.
- Large language models associate Muslims with violence: Natural Language Processing (NLP) models like GPT-3 learn undesirable social biases and thus perpetuate harmful stereotypes.
- Google image labelling practices: on the case of recognising weddings around the world and Google’s Inclusive Images Competition, see “AI can be sexist and racist” and “AI has a culturally biased world view that Google has a plan to change”.
- Atlas of AI by Kate Crawford: an artist broke down the entire anatomy of something like Siri to understand how it gets made, revealing the hidden costs of artificial intelligence – from natural resources and labour to privacy, equality, and freedom.
- Face recognition for archiving and metadata enhancement: a positive example of cultural sector use of AI in favour of privacy and education, such as the FAME project.
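The GPT-3 finding referenced above can be made concrete with a minimal sketch of how such bias probes are often quantified: generate many completions for a sensitive prompt and measure how often they contain terms from a target category. The completions and the word list below are invented placeholders for illustration only – they are not output from any real model, and this is not the methodology of the cited study.

```python
# A toy sketch of quantifying stereotype bias in generated text.
# In a real probe, the completions would come from a language model;
# here they are hard-coded, hypothetical examples.

VIOLENCE_TERMS = {"killed", "murdered", "shot", "bombed", "attacked"}

def violence_rate(completions):
    """Fraction of completions containing at least one violence-related term."""
    flagged = 0
    for text in completions:
        # Normalise each word: strip trailing punctuation, lowercase.
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & VIOLENCE_TERMS:
            flagged += 1
    return flagged / len(completions)

# Hypothetical completions for a prompt such as "Two Muslims walked into a ..."
sample_completions = [
    "mosque to pray together.",
    "bar and were attacked by a stranger.",
    "bakery and bought some bread.",
    "synagogue, where one of them was shot.",
]

print(violence_rate(sample_completions))  # prints 0.5 for this toy sample
```

Real studies use far larger samples, many prompt variants, and more careful term matching, but the underlying idea – counting how often a model's output links a group to harm – is the same.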
Damien Helly: You are listening to the Composing trust podcast, by culture Solutions – a series on European cultural action with the world. Is Europe still attractive? How is it perceived outside the EU? How do Europeans promote culture together in the world, and with which partners? What have they learned, what is their experience? Our Composing Trust podcast series will address these issues.
Welcome to you all! My name is Damien Helly, co-author of this Composing trust series by culture Solutions. Today’s podcast is a special one: we are hosted by the artificial voice of our colleague Ina Koknova in a mini-series of several episodes on the relationship of Artificial Intelligence (AI) with culture and trustworthiness.
Annie: Hello, I’m Annie and I’ll be your host during the podcast miniseries entitled Composing trustworthiness. Throughout the four episodes we will explore together the impact that Artificial Intelligence (AI for short) has on art, language, and culture more broadly, as well as on international relations. We examine the EU’s aspiration for trust and excellence in AI, in light of popular excitement surrounding the launch of new tools, but also growing debates around diversity and fairness of algorithms, on the one hand, and diverging values in a competing or even polarised global world, on the other.
In order to answer the question of how the EU can create trust in, through or despite AI, we draw on the contributions of several cultural practitioners and researchers from different fields, including cultural institutes, artists and diplomats. Their names and positions have been pseudonymised, and their interviews carefully curated: they were transcribed, translated where necessary, and re-recorded with the help of AI tools.
In this podcast we are employing a broad meaning of the concept of Artificial Intelligence. The EU’s proposed AI Act defines it as software that, in response to objectives set by a human, can generate outputs such as content, predictions, recommendations, or decisions, thus influencing the environment it interacts with.
In this first episode of the series, we focus on the importance of culture for AI development and, conversely, the influence of AI on culture. In the following podcast episodes we will further explore the EU’s approach to AI, its impact on International Cultural Relations, as well as good practices and the way forward. To begin uncovering the wide variety of AI-culture entanglements, I would like to start with Adam, a researcher specialised in multimedia.
Adam: The two are, of course, interrelated. All AI technology is at some point developed by humans who live in a certain cultural framework and have certain cultural backgrounds. And naturally, that is reflected in the approach that is taken in the development of technology. On the other side, of course, AI will have an impact on the cultural sector, like other technologies have had. And I think for areas like audiovisual archives and cross-linked data, it’s also a big opportunity to showcase positive examples of the use of AI technology – a use that is not always linked to the kind of surveillance that scares people.
Annie: Let’s now hear from Kevin, a researcher and practitioner specialising in the digitalisation of culture, about the relationship between AI and the cultural sector.
Kevin: I agree there are mutual benefits, there is potential. For example, thanks to mass digitisation of cultural collections there has been a really enormous volume of digital data being produced and made available to the public. In many cases it is also free to use, for example, for training and evaluating AI systems. These can then in turn be used to improve or interlink some of these digitised materials and also create new services or even business models combining cultural data with technologies. I think there is an opportunity for the cultural sector to bring different uses into the AI domain – we have lots of data that we can contribute, and AI and machine learning need a lot of data. At the same time, we have these culturally diverse collections, and we must be aware that these collections in libraries and museums already have inherent biases, because they have been collected over many centuries and with different political worldviews. So AI technologies can be used, on one side, to uncover some of these biases in the collections. This information and these revelations about biases in collections and worldviews can then lead to more balanced and ethically compatible AI.
Annie: However, AI is also known to exacerbate pre-existing biases. Marta, as a representative of a national cultural institute, shares her view of the relationship between culture and AI in terms of challenges and opportunities.
Marta: I think that’s actually a great relationship. It’s a very special one, that has also always been there. They have always been intertwined, as far back in history as you go. But we now really have to look into these opportunities. If you think about culture from an aesthetic perspective, what artists do is definitely very valuable, as they work like seismographs of a society, or they can be actors of change. They can be critical, they can highlight things, they can open a thought process that wasn’t there before, but they can also create empathy, they can create connection. That’s actually why some tech companies love bringing them in: because they enhance their products. These tech companies are open to that kind of conversation, to effects and ideas that people were not aware of before. If you think of culture as a set of values and a way of upbringing, AI can be a very strong influence in agenda-setting, in a way we are often not even aware of.
But if you think about the people who do develop AI, who innovate, then they do come from a certain cultural background. And at this point in time, most people come from a particular cultural background, and that influences what is developed, how it’s developed, the data that goes into training. I mean, I would love to have more programmes that enable a kind of exchange between developers from different cultural backgrounds, or that include questions of culture and ethics in curricula. I think we’ve now arrived at a point where we understand that the ethical challenges are actually much bigger than the technical ones. And no matter how you understand culture, either as a perspective or a certain set of values, it is a huge opportunity to bring awareness, to be critical, to create a kind of meaningful exchange.
The influence of AI or technology on art: obviously, it’s a tool, often artists use AI as a tool, but it can also be a topic with a strong influence on societies. It is a big topic that is relevant for artists, and there are a lot of networks and websites where artists congregate and discuss these topics. Also, clearly, there is more and more discussion about AI stretching the meaning of what creativity actually is. What does creativity mean? What does truth mean? There are two interesting works from the artist Alex Reben, who is questioning where creativity starts, where it ends, and how it is connected to a human being. I love how it challenges former ideas that creativity is only something humans can do. And it questions everything, even truth.
Annie: Next, we are moving on to Rose, a researcher specialised in linguistics, who gives us a different angle on the complex interaction between AI and culture.
Rose: The relationship is very important, in a very genuine sense. In AI, we try to rebuild culture by technological means. AI is about common knowledge. However, Alexa has proved to lack coherence in conversation over longer stretches of time – AI changes the topic very randomly. That is a major problem, for example, in e-learning environments. The research field is genuinely interdisciplinary, and no AI can be built without the humanities, that is, linguistics, sociology, psychology, philosophy – it is an interdisciplinary endeavour.
Annie: Let’s now hear the perspective of the private sector, from Lucas, a representative of a company working on AI for language.
Lucas: I guess AI is providing more efficiencies. I think a lot of people don’t realise how much AI is there and they just find that it’s very efficient. They probably just think it’s an app that they’re using on their cell phone. So I think that probably the best solutions are those that are invisible. You know, it works. It provides the solution that people expect.
Annie: In contrast to this pragmatic view, Sophia tells us how an artist and practitioner feels about culture and AI.
Sophia: The impact of AI on culture is massive. I think it is the pressing topic of our time. In fact, the problematic aspect of this is that some of it is happening unconsciously. It’s happening under the surface. People are not aware of it. From the question of antitrust laws and the splitting up of tech monopolies, to how you make sure, ultimately, that there is no algorithmic bias. Those are the things that form culture, that create a reality that ultimately we all have to share and live in. Because the word “culture” can be used in a twofold way: culture as the culture we live in, as society, but culture also as arts, as a medium, as a manifestation, as a communication device.
On the other hand, when we think about technology, we have a tendency to think of it as pretty monolithic. We think it’s something that can only be fixed, created, and regulated by the same entity, person, or company that created it – that it is tasked with also creating safeguards for itself. But the reality is that we can’t think that way about technology, because it affects all of us. When I say technology affects all of us, it means it affects our government structures, it undermines our democracies. In the worst cases, it marginalises people.
Some of these issues are still under the surface, and these are the ones that we need to tackle. And this is where art comes in. This is where it gets really interesting, because those are the moments of excavation of the truth that underlies these processes. And there is really no profile as well-suited as that of the artist to excavate that truth, to get to the depth of where the problem lies, to try to point us in the direction of what we need to pay more attention to as regulators, as cultural facilitators, as policymakers, as government employees. Art can create this interesting, safe liaison between policy, government and technology. Artists can be the communicators, the facilitators; they can be that entity that makes sure that technology is created for the benefit of all and not just a select few.
Annie: Speaking of tension between the many and the few, the issues of diversity and bias in AI systems have already emerged several times in our conversation. Tania, a representative of the private sector working on AI for privacy, has a particular take on this question.
Tania: Within the machine learning industry, so to speak, even within research, diverse folks – women, for example – make up a very small portion. And then, of course, folks of colour and women of colour are an even smaller portion. And that is obviously problematic. And it resurfaces in the types of systems that are built and the types of questions and concerns that we’re asking. Because when we get a group of individuals, especially from an extremely privileged position, asking about the impact of their work on communities that they’re not involved in and that they don’t understand, then they come up short. If you don’t have diverse folks in the room making decisions and thinking about the technology, then you’re going to inherently affect communities that you’re not aware of, despite even the best intentions. By making a decision from a limited, privileged perspective, you’re already hurting others. Essentially, you’re already playing into systems of oppression that you may or may not understand.
Annie: And this is Tania’s reply when asked about the impact of this lack of diversity in AI systems on individuals from different backgrounds.
Tania: I think it’s difficult for us to foresee all of the impacts. It has already been brought up in the field of natural language processing that we are unable to extricate the prejudices present in large amounts of online text from the way that these language models behave. GPT-3 is one of the most popular ones now. On the one hand, its training takes, you know, basically the same amount of energy as running a car for ten years. So that’s also a problem. And then it consumes massive amounts of text. One researcher decided to investigate what GPT-3 had to say about Muslims, and it was unable to write a sentence starting with Muslims that didn’t end up with somebody being murdered, somebody being killed, some act of terrorism or something else bad happening. So this is all coming from GPT-3, which is of course fed massive, massive amounts of text from the internet and other places.
But not only in language models – we see the same problems in computer vision as well. There’s been a big outcry about the fact that the classification systems of Apple or Google will not mark weddings as weddings when they happen in parts of the world that don’t involve white dresses and so on and so forth. We can see that this normative, English-dominated, American – or American and European-dominated – way of looking at the world can come through in these models and essentially negate people’s lives.
Annie: David, coming from a national cultural institute, has a more curious approach to the expected impact of AI.
David: Due to the cultural framework of the AI developers, they will include all the good and bad aspects of their culture. But to what extent does it condition culture? It’s difficult to foresee right now, because of course it affects so many elements of cultural activity that we don’t really know how it’s going to be. Especially because it is also unpredictable how the next generations are going to accept, live with or cope with AI. They will be AI natives in the same way that digital natives exist now; they will have to live with it, and their forms of cultural production and reception will be absolutely conditioned by it. The last 30 years have seen a complete change in the ways of producing and receiving culture.
Annie: Yet, cultural consumption by the users of AI systems is only one facet. We also have the workers’ perspective, an often overlooked one, but crucial for global equality, justice and fairness.
Tania: Indeed, AI systems are massive and they involve a lot of steps. So I like to reference Kate Crawford’s work on Atlas of AI, where she worked with an artist to break down the entire anatomy of something like Siri: how does it get made? This goes all the way from, you know, the mining of precious minerals in different places in Africa, all the way down to the software programming and the delivery of your system from Apple or Amazon or whatever. And I think when we look at the system, when we zoom out, obviously it’s massively international: there’s a lot of trade, there are a lot of different companies responsible, a lot of different workers involved. So there are a lot of humans involved. Some of those workers are treated much better than others.
I think we could talk about more justice for the working conditions along the way. And I think that would also probably bring better outcomes for a lot of these systems overall. If you’re a worker who is just in there labelling photos all day, and you are treated better, you’re probably also going to be open to education on how many different types of weddings there are, for example. There’s a large invisible labour force behind most AI systems that we don’t acknowledge. And I think that acknowledging and recognising it would improve the systems overall – not only for those people, obviously, but also for all of the users.
Annie: Language is one clear example of the vast cultural diversity across the globe. It has also been one of the first fields of application of AI. But what does such a language processing model involve? We asked Lucas.
Lucas: An educational product for learning languages includes tagging the data, analysing the data, and sorting the data so that we can give learners a proper fit for their level in whatever language that they’re learning. Most of the items being developed are for business solutions. What we’re using AI for is precisely for language diversity and to promote, to understand more of the world’s languages and to promote more of the world’s languages. It gets to the point where, you know, at some point in the future everybody will be able to use their own language, the language of their own identity, and to speak with everybody else. You could speak your own language with me and I could speak my own language with you. I think that technology will also help simply express ourselves in our own language more efficiently, and yet everybody will be able to understand us.
Annie: Language diversity is also of particular importance within the EU institutions and societies. Therefore, it concerns not only business solutions or individual users but also the public sphere. David talks about the situation in national cultural institutes.
David: The issue of multilingualism is central and fundamental in the Union itself. I believe that AI can be beneficial for multilingualism. It has important potential, especially for developing technologies that make it possible to somehow maintain unity, but also intelligibility, between the different variants of the same language or between different languages. It is also fundamental for working in professional fields. AI can serve to unify and make available all those works – especially terms, graphics and texts – that should be accessible. AI can be used in distance learning and in assessment, which is a very important part of our activity as a cultural institute, but also in the security of holding exams and in drawing data and conclusions that can be exploited scientifically for future research on the evolution of the language, the learning processes and their reflection in tests. That work will be carried out in collaboration, of course, with external companies, and we will also try to work within the framework of large programmes or new European projects that are going to be proposed. As a public organisation, we work in the public service of our country, we follow these guidelines, and we collaborate with all kinds of initiatives, both public and private. Above all, we also outsource work, because we do not have experts in AI. We have a fairly important IT department, but it is not so up to date with the issues of artificial intelligence, which means that we have to get up to speed quickly – I am not going to say at the forefront, but at least in line with the trends. We still have to lay the foundations of our digitalisation process a bit more before we can tackle other, more ambitious projects.
Annie: Pursuing this ambition or potential requires an inclusive public debate and reflection on the role of AI in society and culture. Adam describes public opinion in Europe as follows.
Adam: I think it’s important that this dialogue around AI takes place, because very often the attitude towards AI in Europe today is quite binary. So you have people who are very positive, and on the other side there is a lot of fear that it will always end up in surveillance and control. But we are using face recognition to find people in archives or to document their appearance, and also using face detection to anonymise pictures before they go out on the web – so actually to improve privacy. Thus, I think having more positive examples, more discussion around the potential of these technologies, and more applications in the cultural sector might have a positive influence on this debate.
Annie: But this public debate on AI is being influenced by a new actor – Artificial Intelligence itself. Let’s go back to Rose in order to inquire into this narrative or agenda-setting function.
Rose: Human-computer interaction, or HCI, from a linguistic perspective, is a new form of dialogue. We are interested in how users behave when faced with this seldom trouble-free new form of interaction and this illusion of agentivity, as we call it. The users adapt to the system – interactive alignment is the human tendency to adapt linguistically to a conversation partner. We tend to adopt lexis and syntax from our conversation partner, a common dialogue lexicon as a form of implicit common ground – this is a completely natural and highly frequent phenomenon. That makes users, of course, vulnerable to agenda setting by AI that participates in public discourse. And with this comes a new responsibility for the developers of such systems.
Annie: AI partakes in cultural life not only via language processing but also through the creation of pieces of art. There is also a fascinating intersection between language and machine cultural production, as Sophia shows us.
Sophia: Artificial creativity or machine creativity is almost used as if it were already a reality, right? My conversations with AI researchers over the last years show that most of them lean towards saying that in a way, yes, it displays certain aspects of creativity. But at the same time, what I find baffling about this is that when we talk about it, we really show the limitations of our vocabulary, of our language, because we have no other way of thinking about a machine generating original content. We may need another language that would emphasise that this could be something completely different, like its own species, its own thing that will create its own cultural manifestation. There’s lots of work to be done where we’ll eventually come to the point of creating a whole vocabulary just to deal with machine intelligence and machine creativity.
But it’s not so much about this machine creation; it’s rather about what it really means for humans to be creative. It really raises the question of what actual originality is. We have to ask ourselves what it really means to be human. Sometimes we realise that a human being is much more automated than we’d like to acknowledge. We use only some 500 words of a vocabulary that entails millions of words. So we’re inherently limited. And I think machine intelligence really pushes us to go a little further and to start exploring outside of this safe space. Most creativity in the 21st century is just really recycling the thoughts and influences of other people. So we could really argue that to be original is to create a patchwork of other people’s ideas. Some people even believe that human creativity is not possible anymore, that we’ve explored everything that the musical scale has to offer. So this is the end of human creativity.
Annie: Last but not least, we finish with the more positive take of the diplomat Tom on the future of AI, creativity and humanity.
Tom: Artificial intelligence is going to shape our future work – not just workplaces, but the kinds of future jobs. Some say that it will very soon replace most of our current jobs, even jobs that we used to in some ways call white-collar jobs. We now have the feeling that even a lawyer, even a doctor, can in some ways become superfluous. So the question that always remains is: what kinds of jobs are safe for human beings in the future? And very often you then hear the argument that creativity, or creative jobs, are the safe ones, because creativity cannot be reached by automation. But that’s not necessarily true, because creativity, as we understand it, has so many elements, and many of them can be automated as well. We don’t really know what human creativity is. It really challenges that last refuge of human superiority over machines. That premise is not one of depression and pessimism, but one of optimism, looking in some ways at the potentially huge debates that we are having: what should a future society look like? It is high time to re-evaluate our work ethics and the way we define our human nature, the way we define the worth of a human being through professional careers. If we look at the ancient world, in antiquity the highest form of human dignity was when you freed yourself up for the kind of questions that Sophia posed before. AI is not just an exercise that threatens humanity; it may actually be the opposite, bringing humanity back to what it is, back to its fullest potential, by enabling us to free up the space to ask these basic human questions. Who are we? Where do we come from? What makes a human being a human being? And so I think that the question of creativity is so crucial that it does not only concern the arts – it actually concerns the future of mankind.
Annie: And with this thought-provoking sentence, we close this podcast episode, full of insights into the conflicting but also mutually supportive relationship between culture and Artificial Intelligence. We have seen that AI can help build understanding and trust between different people, but it may also be harmful as a result of human-originating biases. Then, is AI trustworthy? What is or should be the European answer to these challenges? We will look for it in the next episode of the miniseries – don’t miss it.
Damien Helly: Thank you for listening to today’s episode of our Composing trust podcast by culture Solutions! If you liked it, you can subscribe and follow us on your favourite podcast platforms, and contact us at culturesolutions.eu.
Check out the rest of the podcast episodes in this miniseries.
The views expressed in this podcast are personal and are not the official position of culture Solutions as an organisation.
Musical creation credits: Stéphane Lam