This fourth episode of the miniseries entitled Composing trustworthiness takes a look forward, reflecting on good practices, existing needs and visions for the future: a future where Artificial Intelligence is part and parcel of society and international cultural relations. The speakers share their sincere hopes but also major fears, and offer a series of questions that we must answer as a global community in order to build together an AI-enabled, interconnected, diverse and fair tomorrow.
The miniseries explores the impact that Artificial Intelligence has on art, language, and culture more broadly, as well as on international relations. We examine the EU’s aspiration for trust and excellence in AI, in light of new developments and growing debates around diversity, fairness, values and risks.
Methodology, sources of inspiration, and transcript
This podcast miniseries is based on a research project that included nine interviews with cultural practitioners and researchers from different fields, including staff of cultural institutes, artists and diplomats. Their names and positions have been pseudonymised as follows:
- Adam, Researcher specialised in multimedia
- David, Representative of a national cultural institute
- Kevin, Researcher and practitioner specialised in digitalisation of culture
- Lucas, Representative of the private sector working on AI for language
- Marta, Representative of a national cultural institute
- Rose, Researcher specialised in linguistics
- Sophia, Artist and practitioner
- Tania, Representative of the private sector working on AI for privacy
- Tom, Diplomat
The content of the 2021 interviews was transcribed, translated where necessary, carefully curated and re-recorded with the help of AI tools:
- Transcription (speech-to-text): trint.com
- Translation: deepl.com
- Audio (text-to-speech): animaker.com
- Image generation: openart.ai
- Music generation: aiva.ai
Sources of inspiration and references:
- The Grid: a European Space for Culture in the USA, incorporating art-thinking into the development of new technologies, allowing for conversation between artists, technologists, and policy makers from Europe, Silicon Valley and beyond. Don’t miss the Art + Tech Report.
- Europeana: has had various initiatives on AI, including the AI4Culture platform, a task force on AI in relation to GLAMs and its report, and the EuropeanaTech challenge for AI/ML datasets.
- IMAGE + BIAS: project by the Goethe Institut that critically engages with the cultural realities being increasingly determined by imperceptible technologies.
- Art+Tech Lab: Open Austria’s support for art projects that deal with the blurring border between technology and human creativity.
- Algorithmic Justice League (AJL): a movement towards equitable and accountable AI, which started with the documentary CODED BIAS, winner of multiple awards.
- AI experts’ predictions for 2035: Pew Research Center collected expectations and societal concerns from 305 professionals.
- AI natives: a report that introduces this new type of consumer and plots a path towards engaging them.
- Federated Learning: an approach that decouples the ability to do machine learning from the need to store the data in the cloud.
- Explainable Artificial Intelligence (XAI): a set of processes and methods that allows human users to understand and trust the output created by algorithms.
- AI winters: the hype cycles of optimism and disillusionment around Artificial Intelligence have included several winters since the field’s inception in the mid-1950s. John McCarthy, who coined the term Artificial Intelligence in 1956, defined it as “The science and engineering of making intelligent machines, especially intelligent computer programs”.
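Of the concepts listed above, Federated Learning is concrete enough to illustrate with a few lines of code. The following is a minimal, hypothetical sketch of its core pattern, federated averaging, in plain Python: each client trains on data that never leaves its own device, and only model parameters are shared and combined. Everything here, including the toy data, is invented for illustration; real deployments (for example with frameworks such as TensorFlow Federated or Flower) add client sampling, secure aggregation and differential privacy.

```python
# Federated averaging (FedAvg) in miniature: clients fit y = w*x + b on
# private data; the server only ever sees model parameters, never the data.

def local_update(w, b, data, lr=0.01, epochs=50):
    """One client's private training pass (plain SGD on its own samples)."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def federated_round(w, b, client_datasets):
    """Server broadcasts (w, b); clients train locally; server averages results."""
    updates = [local_update(w, b, data) for data in client_datasets]
    avg_w = sum(u[0] for u in updates) / len(updates)
    avg_b = sum(u[1] for u in updates) / len(updates)
    return avg_w, avg_b

# Three "neighbours" each hold two private samples of the same trend y = 2x + 1.
clients = [
    [(0.0, 1.0), (1.0, 3.0)],
    [(2.0, 5.0), (3.0, 7.0)],
    [(4.0, 9.0), (5.0, 11.0)],
]

w, b = 0.0, 0.0
for _ in range(20):  # twenty communication rounds
    w, b = federated_round(w, b, clients)
print(w, b)  # converges close to w = 2, b = 1
```

Tania’s “cooperatively-owned” neighbourhood navigation idea later in the episode maps onto this same pattern: each participant’s traces stay on their own phone, and only the aggregated model is shared.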
Damien Helly: You are listening to the Composing trust podcast, by culture Solutions – a series on European cultural action with the world. Is Europe still attractive? How is it perceived outside the EU? How do Europeans promote culture together in the world, and with which partners? What have they learned, what is their experience? Our Composing Trust podcast series will address these issues.
Welcome to you all! My name is Damien Helly, and I am the co-author of this Composing trust series by culture Solutions. Today’s podcast is a special one: we are hosted by the artificial voice of our colleague Ina Koknova in a mini-series of several episodes on the relations of Artificial Intelligence (AI) with culture and trustworthiness.
Annie: It’s Annie here again in our fourth episode of the mini-series on AI. So far we have examined the complex and multi-faceted interaction between culture and Artificial Intelligence, the EU’s values-based approach to the technology, and the broader impact of AI on international cultural relations. Today we continue the discussions about potential risks and opportunities, with a rich set of examples on artificial consciousness and creativity, which have been part of human myths since antiquity.
The current hype surrounding Artificial Intelligence is impressive, with companies vying to launch products labelled as AI and share prices going up. Most importantly, AI has captured the public imagination, which is crucial for generating the necessary debate among the different members of society regarding its desirable use. However, AI itself is not a new technology: “the science and engineering of making intelligent machines” has been around since the 1950s. Attention given to AI has fluctuated throughout these seven decades, with two so-called AI winters taking place between the 70s and the 90s, until machine learning took off in the 21st century.
With all this in mind, in this episode we take a look forward, reflecting on good practices, existing needs and visions for the future: a future where Artificial Intelligence is part and parcel of society and international cultural relations. Our cultural professionals share their sincere hopes but also major fears, and offer a series of questions that we must answer as a global community in order to build together an AI-enabled, interconnected, diverse and fair tomorrow.
To start off, Marta, a representative of a national cultural institute, talks about the potential way the EU can engage with AI through its external action.
Marta: I can only say that what the EU should do is try and offer lessons learned from areas where it already has experience, for example privacy policies and the regulation of players in the tech sector. And I think it would be really important to share these experiences. What worked, what didn’t work, what should be changed again. Because there is an openness right now, we see a kind of interest in partners, and even in the industry, to listen more and to learn more than before. So go and have that dialogue and share.
Annie: Adam, who is a researcher specialised in multimedia, also emphasises the importance of communication but from an angle focused on audiences.
Adam: An important aspect would really be how you communicate about the AI and also how you make things transparent. There’s a lot of research now on the explainability of AI technologies, but it’s still very expert-oriented. Making anyone able to understand the implications of using AI technology, and how certain choices or certain data might impact the outcome, is a very, very important thing to understand the technology and also to build trust in that technology. We must be critical about the use of AI technologies when it comes to public authorities – they still feed their data into Facebook without thinking about it.
The media sector deserves more attention. The use of AI can have an influence on public debate, on how opinions are formed in a democratic society. There is a lot of technology that could be suitable for media, but there is very little technology that is specifically tailored to the media sector.
Annie: And next, more opportunities for community participation and fairness, as well as reflections on the societal side of any technological solution from Tania, working in the private sector on AI for privacy.
Tania: I think there’s a lot of topics that are open and being investigated right now where we’ve made some inroads, but it’s not quite where we want it to be. Privacy and security is one of them. And then also, obviously dealing with justice within machine learning systems and what that would mean. I think we have some technical answers on how to do that, but they don’t bring the societal answers so that you can have a totally fair system. The technology is just one piece of the entire system. So we have to kind of question the systems and the way that we design them and how technology can help. We need to better acknowledge the limitations of AI: when we maybe shouldn’t use machine learning, or when something should involve a human as part of the decision making process. I think that’s still a very open question.
I hope in the future we will not only collect all of the data we can in a few small companies that then have the best machine learning in the world because they have the biggest data in the world. One of the technologies I’m really excited about for the future is called Federated Learning, which is the ability for us all to have our data on our own devices and yet to build models collaboratively together. I could imagine a world where I want to get together with the people of my block, and I want to build a navigation system that we can use that shows traffic. So I could see it being like a cool, cooperatively-owned, collective machine learning future, where even individual or small groups feel the ability to design systems like that, which are useful for them, where it’s less driven by venture capital and it’s more driven by actual community needs.
Annie: Kevin, our researcher and practitioner specialised in the digitalisation of culture, focuses on machine-human collaboration and the particular value cultural professionals bring.
Kevin: Currently there has been a strong focus on text and image processing in the cultural heritage sector. I think we still need more attention to the whole topic of how we can get a better balance between human expertise on the one hand and the power of massive machines on the other. First, we can emphasise solutions that create a better interface for humans working with these AI systems and humans involved in a quality control capacity. And second, in our cultural heritage institutions, we have a lot of expertise with curators or subject experts that can be fed into AI systems.
Annie: Marta expands on the cooperation between artists and AI experts, and the opening for the European Union to promote such kind of initiatives.
Marta: First of all, we always think we are already so far along, but we’re actually still at the beginning, even though a lot has been tried. Tech and art came together, and there was what we would maybe call a successful collaboration, because it’s lasting and it had a certain effect on the people involved or on development itself. But it takes a lot of time. If you start from the basics and you put an engineer and an artist together, they have to be together for at least six months until they actually understand each other’s language, their codes. And it is so much harder than you would think. This kind of breaks down the silos, starts a conversation. What can we learn from each other? So it’s much easier said than done.
I think anything that supports that, that development and that kind of dialogue, multilateral dialogue would be super helpful. Look for these voices that are not heard. They are there, and they’re very inventive and they’re very, very interesting. Look to Africa, it’s like the lab of the future. Give space to any project that supports that and puts it into dialogue, and then maybe ask the question about a global framework on AI: is that even anything we could possibly imagine, or are we just too far apart from each other with our approaches and our thoughts of what’s important to us?
Annie: Considering that the academic sphere is another fruitful ground for intercultural collaboration as well as asking questions, Rose, researcher specialised in linguistics, highlights the areas in which further research is still necessary.
Rose: We need more interdisciplinary research on AI, with a higher emphasis on the humanities. The algorithms are computer science, the interface is linguistics, the behaviours of an artificial agent stem from psychology, and the societal impact stems from sociology, and so on and so forth. And without language there will be no interface, no lexical or semantic parser, and no dialogue agents like Siri and Alexa.
The user perspective is the other challenge. And we need more research before systems can be implemented in sensitive areas like medicine or caring for the elderly, things like that which are promoted in the U.S. We need more research on the effects of interactions with AI: how much impact does the AI have on people’s cognition, on their emotions? We can and should build bridges between computer science and the humanities, because this is how we can transfer more abstract social or cultural concepts.
Annie: Staying in the realm of linguistics, Lucas, from the perspective of the private AI company working on language learning, traces the concrete steps forward.
Lucas: AI is in such an early stage that there’s very little that it can actually accomplish. The data sets, for example: a lot of them are not tagged very well yet. And so we spend a lot of time re-tagging data sets; the annotation work is very time-consuming. Even if the AI is only doing its job at 70 or 80% accuracy, that’s not some sort of life-threatening issue. Maybe for GPS directions you need something more like 100% accuracy. But if you’re just tagging data, for example, or marking, you know, where the verb and the noun are in the sentence, maybe the accuracy rate is not so important, depending on the use case. A lot of researchers work on developing better algorithms, but maybe that’s not the right solution. Maybe it’s the data itself that isn’t tagged right. There is a lot of work that needs to be done, a lot of work. And it’s still very early in this process, you know; it’s still going to be another 10 or 20 years until you get AI that’s really performing at a level where we expect it to actually, you know, be working in a very intelligent way.
Annie: New AI tools, be it related to language, culture or other uses, are being developed all over the world, but the level playing field is a persisting challenge. Marta shares her vision of putting innovation on an equal footing and fostering a global exchange of ideas.
Marta: We need to hear more about innovation and AI development in other parts of the world. There’s still not much awareness about what’s going on in other parts of the world when it comes to tech innovation, and I’ve seen it, I know it’s there. It’s just that these innovations don’t have the platform to be heard. They don’t necessarily have the funding. They don’t have structures set up like we have in the West: you have such amazing structures where you can basically go directly from an idea at the university to getting funding, and then being out there in the market. But it also means certain ideas are being financed through funding institutions that have a certain focus, and it doesn’t yet allow for greater diversity. I would be so interested in what would happen if Nairobi, if Johannesburg, if, you know, Brazil, if they had more structures that would enable the people that are so innovative over there, that do have very important questions in regard to tech innovation and ethics. What I really don’t want is for AI and other tech developments to go in the opposite direction, really just increasing the gap and reinforcing the status quo of a certain dominance in this world. That would be very, very bleak.
Annie: Speaking about the future, we ask Rose what the path is to avoiding such a gloomy AI prospect.
Rose: When users follow the bot when it comes to selecting content or forming their opinion, it means that they relinquish some of their autonomy. I hope that in the future we will emancipate ourselves from technology more and more, and this requires a better understanding of how the system works. There is no magic. We need more education in this field. So in my opinion, we need a new age of enlightenment, to overcome our faith in technology and mythically over-exaggerated concepts such as artificial intelligence. These systems are deterministic; they still follow a certain programme and plan. They are not some truly intelligent copy of the human mind. This is not the case at the moment. It’s just the language of advertising.
Annie: We also asked Sophia, an artist and practitioner, what AI would look like 10 or 20 years from now.
Sophia: It is a funny question. I love it, because it has been asked for the last 40 years and everyone has made predictions. So to make predictions is a terribly speculative business. I’m going to tell you what I hope for. I believe that ideally machine intelligence will free up mental space from mechanical work. In this way we can have a couple of hours a day that we could really allocate to our minds, exploring their full potential. This is healthy and conducive to progress for everyone. Time is a luxury, we know that. Giving time to this incredible human brain, of which we only use a fraction… imagine what would happen if we triggered the rest of it!
It is true that leading AI researchers caution against doomsday and call for creating safeguards. Otherwise, this technology is like a kid that hasn’t learned to use tools yet. It will just be out of control. But any mentality or ideology borne out of fear is misguided. I think we’ve really just scratched the surface of our human potential, and AI will be a massive motor to opening up new pathways that are essential to secure the survival of the species.
Annie: Lucas also reflects on the societal attitudes towards AI and their implications in terms of equality.
Lucas: As I said, AI is basically still just a computer programme. So I don’t think it’s going to replace, you know, the human but it may be able to assist us in being able to make decisions. Oh, you want to be able to learn this language and you want to achieve this goal, so what decisions should you make along the way in order to reach that goal? I think that you might end up with two camps of people: one that refuses to use that kind of technology, and another camp that wants to use it. There would be an awful advantage for those people that are embracing the technology to help them make their decisions in life and achieve their educational goals or their career goals.
I think some people could just fear AI, but it comes down to understanding more about how the technology actually works. And it’s just a computer programme, so you can look at past technologies. It’s kind of similar, some people were scared of cell phones. But other people grow up in a world where they’re always connected. In any case, people will need to manage their productivity time.
Annie: David, who is a staff member of a national cultural institute, equally sees the benefits of AI automation, while focusing on the need for public support for education and the labour market.
David: Many of the activities we are carrying out right now will no longer be human-intervention activities. Reporting, translations, IT development and editing. Our work, in many cases, is going to be much more focused on big decisions, with less routine work. I don’t think AI is destroying the job market; I’m not that pessimistic.
What is true is that, yes, there is going to be a fundamental challenge, which is going to be training. But logically, in these technological changes there will always be winners and there will also be many losers. And we must try to minimise the impact on people who may not be able to retrain as quickly as the evolution of technology demands.
And then there will be the artificial natives, so to speak, who, with well-targeted educational policies, will not have a major problem. What happens is that change is indeed very abrupt, it is very rapid in a very short time, and it also generates a great many social problems that we must try to cushion or minimise in some way. The great task of governments will be to keep the labour market as adapted as possible and the education system adapted to all this evolution.
Annie: As a diplomat, Tom extrapolates the problem of inequality into international relations, triggering both competition and collaboration between global players like the EU and nation states.
Tom: Rather than the possible overtaking of human intelligence by AI, I’d like to focus on a practical concern that might be a real problem in 10 years. There is a tendency that might actually favour countries or systems that are authoritarian, or that use these technologies to impact human rights, to surveil or to control their population. In the case of digital platforms, the more they know about us, the more they advance. Those countries that put the fewest limits, ethical or legal, on what you can do with these massive amounts of data on your users and your citizens will have a competitive edge over other countries.
We, Europeans, know what the right thing to do is, and we are very grounded in our values and in our basic rights. But that might not be the kind of structure that makes us the leader. There is an increasing incentive for countries that want to control their populations through fear and quell free speech. I’m a little worried there is a danger that, if we do not get this right in our own countries, our private companies may, because of business and economic incentives, undermine our own systems and our own value structures.
This is a dystopia based on existing technology, not on some technology that still needs to happen. We really need to get it right and rally around a framework, an international framework, that basically disincentivises companies or countries that use Artificial Intelligence for nefarious purposes. In the next years, in the United States, in Europe and among the countries that believe in these values, we need to move together and think it through.
Another area of Artificial Intelligence where we already might lose control is the military sector and the cybersecurity sector, because of the destructive power involved. Artificial intelligence could very well turn our world into a nightmare where one of the most obvious victims would be freedom.
Annie: Turning away from this highly undesirable scenario, we look at current examples of projects involving AI in international cultural relations, in order to identify good practice and lessons learnt, and in this way advance towards positive AI futures. Let’s start with Kevin.
Kevin: Europeana obviously is very well equipped to serve as a good basis for AI development in the cultural sector. First of all, because it’s a big source of data that can be used for creating AI solutions.
Secondly, because it’s a wide network with many active cultural professionals from the main European cultural heritage institutions, but also other stakeholders like users or the education sector. There is a task force that has been working to survey the activities of cultural institutions in Europe with AI, and to summarise some of the main successful activities and also lessons learned in applying AI in cultural heritage institutions.
Thirdly, there is a strategy for making Europeana more suitable for AI. That means, for example, allowing better access to the data for developers who want to use Europeana data to create AI solutions. Europeana launched a call for projects that use Europeana collections to create training datasets for AI, especially highlighting issues that are in these collections and trying to uncover them before they actually end up in algorithms and applications.
Annie: Next, Marta brings up the national cultural institutes.
Marta: The Goethe-Institut has certain spots where it works on the topic of AI, be it in the Bay Area, but also, for example, within Europe, in Australia and Southeast Asia. That normally means that the topic has a relevance in the location – you see it as a discourse going on in the media, and people really have a lot of questions. That’s when we start reacting to that and working around that issue.
We have also formed a very big, so-called EU AI alliance, which is a wonderful network of many European actors and players interested in a kind of exchange on big topics and questions around participation, like: How do we invite other perspectives in? How do we increase participation? And how do we deal with bias? I can give the project IMAGE + BIAS at the Goethe-Institut as an example. Everyone is worried and everyone is thinking: okay, what can we do to avoid a situation that we don’t want, where we have a status quo and a power dynamic at play that is not a healthy one and is not fair? So it’s actually really encouraging to see how many actors and how many countries in this world are trying to grapple with that and tackle that question, yeah.
Annie: On such a positive note, we have more inspiration coming from Tom.
Tom: Open Austria’s Art and Tech Lab focuses on the intersection of human and artificial creativity. It’s really about machine creativity and creating artworks – renowned artists work with creative machines, and they co-create something together, to show the broader population why it’s so important to pay attention to AI. You are responsible for your own digital human, and you need to be in charge of that.
I also want to highlight a collaborative art piece between a famous European writer, Daniel Kehlmann, and a technologist and philosopher in Silicon Valley, Bryan McCann, who worked on a natural language processing algorithm. It received wide coverage in the media and really created waves; people started to pay attention to this topic, and it even received an award.
Annie: In turn, I would like to mention AJL, which stands for Algorithmic Justice League. It started with the documentary CODED BIAS, and continues to combine art and research to illuminate the social implications and harms of AI. The organisation is seeking to generate a wide movement to shift the AI ecosystem towards equitable and accountable AI, in a belief that we can code a better future.
Speaking of initiatives based in the United States of America, we will now zoom into the European project that best exemplifies AI cultural relations – the Grid.
Sophia: The Grid was born in 2019 out of the European Spaces of Culture programme. The vision and the original ideas were developed together with local partners in Silicon Valley. The funding by the European Commission was matched by another 50,000 from partners in the global tech industry in Silicon Valley, and that showed immediately the potential of this idea. And now it is registered as an actual not-for-profit organisation.
The Grid was born out of the desire to have an impact and to give groups, people and individuals access to something that has been pretty secluded or shielded off from certain groups. AI companies work with proprietary information and don’t want to share the secret knowledge. But at the same time, if you don’t make sure that in the development and research phase of these products there is a variety of voices, a diversity of opinions and of understandings of what it means to be human, then you will simply create and perpetuate a very monolithic view of what it means to be a human. In the case of Silicon Valley, that means a 25-year-old white man – this is not the image of what humanity looks like, right? The northern hemisphere is always dominating the southern hemisphere, and these problems are perpetuated through technology.
So how do we make sure that we break the cycle, we break the cycle of toxic patterns? Technologists really need to work alongside regulators to first help regulators understand what the actual problem is. So there is lots of knowledge transfer that needs to happen, we need to really work on educating policymakers in technology. At the same time, different thinkers and artists who have a much more holistic understanding of humanity and also a desire to safeguard the human spirit, need to be introduced into AI development as well. And this is where the Grid comes in because it connects these three worlds, these three silos, and tries to create this platform of communication, of knowledge transfer, of collaborative art works.
The Grid is a multi-stakeholder platform that includes multi-billion dollar companies or people who have the financial capacity to influence elections, together with individual artists that are struggling to make a living. It brings them together in a space where everyone feels seen and heard, working in small groups, and pairing them to see the potential of interaction. And then also, quite frankly, to create chaos. Sometimes this creative chaos is necessary and it’s like an artistic intervention that is needed in order to instigate change, to foster this out-of-the-box thinking, to come to interesting and also perhaps even antagonistic confrontation. And our philosophy is that everyone needs to talk about it, because it is for everyone. When matching artists and technologists so that they can start talking to each other, mind-blowing stuff happens all of a sudden.
Annie: But the Grid was also important for EU dynamics. It is an endeavour where various European Member States and their cultural institutes came together through the EUNIC cluster, ensuring a diversity of approaches and ideas on what should matter in AI topics. Most of the Member States have their own projects, but the discussion in such a group enables a learning process to understand what the European added value actually is in doing this together, as the EU, and not just a German or a French approach. Moreover, this opens up conversations with partners that maybe would not have been possible if each actor had done it alone.
This is precisely what the European approach to AI in cultural relations should look like – collaboration both between EU actors and with local entities. However, the Grid also illustrates the limits of this project-based type of engagement between the EU and partner countries and peoples, in particular in terms of maintaining cultural relations over the long run. As a result, from this mini-series of four podcast episodes we can conclude that European institutions and cultural professionals need to engage with AI in and for cultural relations in a way that is both strategic and sustainable, reflecting on and addressing the common threats and challenges this technology poses to any society, and capitalising on the numerous benefits it can yield in the cultural and creative sector.
Damien Helly: Thank you for listening to today’s episode of our Composing trust podcast by culture Solutions! If you liked it, you can subscribe and follow us on your favourite podcast platforms, and contact us at culturesolutions.eu.
Check out the rest of the podcast episodes of this miniseries.
The views expressed in this podcast are personal and are not the official position of culture Solutions as an organisation.
Musical creation credits: Stéphane Lam