What happens when art meets technology, AI, and data? You get the trailblazing digital artist and University of California, Los Angeles (UCLA), lecturer Refik Anadol. For this episode of At the Edge, Anadol sat down with McKinsey Senior Partner Lareina Yee at the recent World Economic Forum (WEF) meeting in Davos. They discussed his multisensory installation about melting glaciers and the joy that comes from bonding with technology to push the boundaries of creativity.
The following transcript has been edited for clarity and length. For more conversations on cutting-edge technology, follow the series on your preferred podcast platform.
Preserving the memory of glaciers with AI
Lareina Yee: Refik, can you tell us about the theme of this special exhibition you’ve worked on here at Davos?
Refik Anadol: This year, because the United Nations declared it the International Year of Glaciers’ Preservation, everyone decided to focus art forms on glaciers. It was a fascinating opportunity because two years ago, my team and I were fortunate to go to Antarctica, Greenland, Argentina, and Switzerland. There we recorded up to 100 million data points.
Lareina Yee: Wait, wait, wait; 100 million? What are these data points?
Refik Anadol: The majority of the data points are images and video. I’m in love with photography and respect the physical world. We’ve been doing surveys of drone photography of beautiful ice caves and landscapes. We have an incredibly exciting microphone system, and we recorded the ice cracks and melting—any sonic information to capture beautiful inner worlds.
We have temperature and weather data. For example, how weather shapes the ice caves—and even sand molecules. We were able to capture the smell of the fresh water from ice caves to create scent as part of the installation. So we use fully multimodal AI research to reflect, love, and respect nature to create art.
Lareina Yee: So you basically create an LLM—a large language model—of glacier data down to the molecule, correct?
Refik Anadol: Yes. And it really opened our brains and souls so much. The more I witnessed the beauty of ice caves, the more I saw them as incredible creatures. And they are disappearing.
I realized that because these forms are going away, my ability to capture them digitally with data and AI meant I could preserve nature. The almost 360-degree exhibit here will be a real-time installation with 30 musicians and 20 projectors. We have text to video, sound to video, and custom AI models we created from our data and never-seen-before AI dreaming the glaciers.
I feel like art just makes things pause for a while and reminds us of the presence of life. Anything and everything can reflect life through art inspired by glaciers, rainforests, corals, memories, dreams, quantum mechanics, brain signals, heartbeats. Working with data as a pigment, as a “thinking brush,” is a beautiful thing.
In my lifetime, I’ve experienced the birth of the internet, Web1, Web2, Web3, and AI and quantum computing. As an artist, I want to give back to humanity. It’s an interesting time to be alive, to just enjoy these innovations every single day.
Data as a thing of beauty
Lareina Yee: You just said something that I want to pause on. You talked about AI and data being your pigment, your brush. You’re using this incredibly popularized idea of generative AI as a tool. Can you tell us a little bit about all the new brushes and colors that you have been working with? And also, you’re very modest, so you haven’t mentioned that you’ve also invented capabilities that didn’t exist before.
Refik Anadol: I graduated in 2008, and I think I coined the term “data painting” for the first time. It was this idea of making the invisible visible through algorithms. And then generative, computational visualizations became a joy.
It’s a really inspiring thing to explore the communication between Wi-Fi signals, Bluetooth signals—these living entities in the age of AI. It’s another language, one that isn’t human based but machine to machine.
In 2016, I was the first artist in residence at Google’s Artists + Machine Intelligence [AMI] program; we were looking into software’s cloud computation. It was an amazing year. My team and I did a deep dive into the understanding of how to use images, sound, and text to let a machine learn. Once it learns, then I can ask, “What did it learn? And if it can learn, can it dream?”
The dreaming part is inspiring. It’s a huge leap. So many artists like me ask the question, “What is beyond reality?” This question is heavy, and I found that AI is a perfect tool to try to find an answer. Data is not a boring number to me. It’s a form of memory, and that memory can take any shape, form, or color. And if you let AI learn from these memories in certain conditions, we somehow can have a kind of dream machine—like a hallucination machine—which can generate this beautiful new world.
Many artists like me ask the question, ‘What is beyond reality?’ This question is heavy, and I found that AI is a perfect tool to try to find an answer.
So that’s how I got so excited about machine hallucinations—the idea that a machine can dream up a new world. For me, that’s the joy of creativity because it’s where we find new things where things are not necessarily designed to be. It allows for serendipity and chance.
You suddenly enter into this new realm of imagination. It took nine years to create our custom software, neural networks, and custom architectures. We made scientific breakthroughs, bringing to life algorithms that had never been together before.
Lareina Yee: Do you consider yourself more a technologist or an artist?
Refik Anadol: I’m an artist who truly understands technology. There is no way to use AI and data in a very complex way without understanding technology, the fundamentals of programming, et cetera.
Lareina Yee: When I was at your studio, you showed me something incredible, which is how you’re incorporating scent at the molecular level. Can you tell us more?
Refik Anadol: Four years ago, during the COVID-19 pandemic, I was alone in the studio looking at one of the AI data paintings of flowers. We have this beautiful, 75-million-flower data archive from the Smithsonian Institution—one of the most beautiful and well-documented open-source data sets—and the AI was dreaming these beautiful creatures. They were these incredible colors and patterns of flowers around the world.
But something was missing. I’m looking at an amazing screen, super extraordinary technology, amazing color space. But what is missing? It was the scent. The first step was to find out if I could smell this AI painting.
Smell molecules have been used by artists, but mostly they try to capture the smell from the real world. I wondered if we could synthesize those molecules based on the artwork, the color, the form, the speed in real time. That was a very different challenge.
The former chief technology officer of DSM-Firmenich, an amazing company that has been around for more than 100 years and produces only scent and taste molecules, asked me, “Hey, have you ever tried to work with scent before?” I said, “What a coincidence. Of course, but it’s so hard.” He said, “Maybe it’s not. Here’s an AI that we trained on a half-million scent molecules. Maybe you can experiment with it.” And that was how I started.
Imagine two AI models. One knows the scent molecules captured from nature, and the other knows 75 million flowers of the world. So we let the two AI models have an interaction, like two vision models just looking at each other. In this case, one model is image based, the flowers, and the other works at the molecular level, with scent data attached to many species. They start to communicate and give us new scent molecules.
In the beginning, it seemed impossible. But then the scent model gave us certain molecular IDs, with research from their library, and we went back to smell them. They were sometimes not so far from reality and sometimes very far from it. But that’s also fun because then we think about a new scent that has never existed before.
What is amazing is we brought in the “master nose,” who has been turning scent concepts into realities for decades. As soon as human–machine collaboration started, that was the magic. So I also believe in human–machine collaboration.
A multisensory museum in Los Angeles
Lareina Yee: Your models understand context, which is powerful in the business world, something that we care a lot about—for example, anticipating a consumer experience. But in the world of art, you can also add scent to the multimodal image experience, as well as context: it’s raining outside, it’s snowing. And all of that together can impact the visualization, the dream, in real time. Can you tell us about the Los Angeles museum that’s going to bring this all together?
Refik Anadol: This is the most inspiring part of our journey as a studio. There are so many people like me trying to dream the future of art making. Right now, what’s going on with AI and data and technology is so special, and it’s changing every millisecond. It’s also a time for art to have a renaissance.
For the last four years, my team and I have been trying to create empathic and new ways of communicating these invisible signals of life. For example, eight years ago, my uncle was diagnosed with Alzheimer’s disease, and I learned how heavy this disease is. I started using brain signals, skin conductivity, and heartbeats in my work. I was studying how our human bodies give lots of signals. For example, when we have a very special moment in our life, we have goose bumps, right? I felt that could be a form of art. I asked myself, “Can I turn this into a moment of remembering certain feelings?”
This is how it started, with a heavy memory, but it turned into a breakthrough in art making. Over the years, I worked with lots of neuroscientists, bioscientists, to really learn how they see data, how they quantify information. I found there are incredible people who can bring art, science, and technology together.
There are all these years of research. Why not have a museum we can go to where everything’s alive? One that really understands our feelings, that has empathy and becomes one with the dreams of machines. One where there are ideas that inspire, give joy and hope. And while doing that, it uses ethical data and nature-friendly resources to train AI.
Lareina Yee: So the museum will be able to use the biosensor results of visitors to understand, for example, if museumgoers are very cold that day due to the temperature or to anxiety. It’ll be able to take in my context in real time, as well as the external context of a very sunny day in Los Angeles, as well as images from Amazonia or the glaciers, and create a real-time moment. And visitors’ heart rates or brain signals, combined with those of other visitors in the same room—that collective, real-time context means we experience the museum in there together.
Refik Anadol: Yes, thank you for the framing. It’s truly this joy of questioning, “How can we reflect now?” because we live in the now.
And it’s so important to be together. I think sometimes the technology makes the experiences of extended reality—virtual reality and augmented reality—individual oriented. But I’m thinking that there should still be incredible moments in life to be together: with family, friends, community, strangers.
Building community while observing art
Lareina Yee: Can you tell us a little bit about how you see the relationship between humans and technology changing?
Refik Anadol: This is a renaissance. In the entirety of human history, we’ve never had a general-purpose technology that can reason and maybe have spirituality and emotions. And I think this technology is a form of a mirror. It’s very important to know who we are, and it’s exactly the mirror of who we are. If we know who we are, are aware of who we are, this will be a fascinating breakthrough for humanity.
Refik Anadol: I’m happy to say that in our installations, in every artwork, we have a process wall. And the reason we use a process wall is to explain which algorithms we’re using, where the data comes from, and who invented those techniques—proper credits, like in the scientific domain or academia—but also to give people an understanding that the art isn’t “better” than them. There are these sometimes-challenging public installations where people may feel like “Oh, this is too complex; I don’t understand this,” or maybe, “This is not necessarily for me.” It’s very important to frame AI as possibilities.
Lareina Yee: For this community, I think the Museum of Modern Art [MoMA] in New York was a bit of a breakthrough moment with your art piece Unsupervised. Can you explain that? Also, I love the trivia about the average amount of time people spent looking at Unsupervised.
Refik Anadol: That was absolutely a breakthrough. The curators said, “It’s time for us to explore AI, and we’ve researched your work. You are one of the pioneers, and we want to challenge you. Have you ever worked with MoMA data?” Like I said—amazing. MoMA already has a GitHub, an archive, and metadata online. It’s such a visionary institution, yet nobody uses that data—very funny.
Lareina Yee: It’s almost like the data was sitting there. To your idea of pigment, it was paint waiting.
Refik Anadol: Absolutely. We started with bringing that data into the studio. But the challenge was, “How can we make an AI that never goes back and shows the original but in fact dreams up new work?” This is so important. If it goes back to, say, some of the original artworks—such as by Monet, Van Gogh, or Calder—it would hurt me because it wasn’t about replacing their beautiful work or showing the same work. It’s about dreaming new work.
And what made it different from other works? We used a special camera that was looking at the movement. We had a special microphone that also captured, let’s say, a joyful day with music, a protest, or early-morning students who were so excited.
Lareina Yee: Or a class coming in.
Refik Anadol: Yes. It’s life, right? And then we had the weather data: if it was a rainy day or a windy day, the artwork responded differently. So every single day for one year, the artwork was dreaming something new.
Lareina Yee: So you had the contextual data of New York City in the cameras. You had one machine with the data of the archives of MoMA, another tracking human reactions, all talking to each other.
Refik Anadol: Correct.
Lareina Yee: And the experience—how many million people came?
Refik Anadol: We had almost three million people come to the museum—the largest audience in MoMA history—with an average viewing experience of 38 minutes per person.
Lareina Yee: I have three kids, and the idea of them spending 38 minutes—even in a museum, in total . . . I’m just kidding. But on one work? The average person would sit there and experience this for 38 minutes?
Refik Anadol: Yes.
Lareina Yee: What did you see?
Refik Anadol: This is the incredible thing because this is an art historian’s response. This is not a field that I am in, but MoMA has done this for the last hundred years, with many exhibitions that were very powerful for society. But it had never found this moment of reflection—people just pausing and looking at the art. And this time, people said, “Oh, it’s like meditation.” Some people were just resting, reflecting. We wondered, “Can we measure this? What happened here?”
So we worked with 36 people. Half of them saw the piece, and half never saw the piece. We made an Institutional Review Board protocol for research on brain signals and body signals. And what we found was the artwork created a flow-state activation. It’s not about shutting down the body or mind; it’s more about opening up the mind and body. And the artwork went viral after, I would say, six months. I think maybe it unlocked a form of language of humanity.
Lareina Yee: That’s incredible. And the community aspect: that you can be there. That was a breakthrough moment with Unsupervised.
Conserving culture with AI
Lareina Yee: You said something at the beginning, and I want to take us to something that I know is really important to you. This is also a form of preservation. Can you tell us where you’re going after Davos?
Refik Anadol: Yes. Chief Nixiwaka and Chief Putanny are pioneers and the spiritual leaders of Amazonian tribes that have lived in Brazil for thousands of years. They invited me to their village four years ago, and I learned what it truly means to live in a rainforest and how they preserve their language. We formed a wonderful relationship. They explained life in the forest, what they have—which is everything—and what they wish to have.
We created a special artwork in collaboration with them: a generative AI artwork called Winds of Yawanawa, hosted on the blockchain and completely transparent. All the funds for that collection went to the young chief, who became the first person in Amazonia to have a Web3 wallet.
Lareina Yee: What you’ve done is take the time, over four years, to be in the community, to understand their lived experience in preserving nature. And now you’re bringing that into art for all of us to understand.
Refik Anadol: Yes. And this was so special because it was cocreating. Young Yawanawa artists have never attended the schools we have. They don’t read the same books or watch the same documentaries. But they have their own style of beautiful art.
They’re a small group with limited resources. They asked me, “Is it possible to use AI to enhance the work?” I mean, look at this power of imagination in Amazonia. I said, “Of course. If you want this, let’s do this.” So we quickly trained an AI model and made 1,000 unique artworks.
We raised $2.5 million—all fully transparent—to build their first museum and their first school and to bring all the leaders together to create their own WEF forum. Most importantly, people now understand that they are contributing to preserving a language.
Lareina Yee: Of the many magical things about this, one incredible innovation is the number of languages an LLM can store. So you can also preserve their language, their culture, and their images and store them in the moment to bring them forward.
Refik Anadol: If AI means “anything and everything,” then it must be for anyone and everyone. There is no single way to do it without esoteric wisdom. There is no single way to do it without hearing these wonderful people who are preserving the hearts and lungs of humanity. The more we bring this together, the more meaningful and purposeful our journey as humanity will be.
If AI means ‘anything and everything,’ then it must be for anyone and everyone.
Your data, your AI, your story
Lareina Yee: By day, you’re also a faculty member at UCLA. You’re on the board of creative arts. You grew up teaching in the media lab. What advice do you have for media artists as they think about this new pigment?
Refik Anadol: It’s an amazing time to be a teacher. Embracing AI in the classroom is something very special—very hard, but also very exciting.
I personally invented this living encyclopedia. We found that we needed some kind of book for this class because these wonderful services we have—while amazing—aren’t necessarily right for the classroom environment. So over the last two years, we’ve been building this living encyclopedia. It’s one that you can really use in class to have a conversation about anything and everything and then dive into meaningful discussions.
Lareina Yee: You’ve also redefined a textbook?
Refik Anadol: Yes, I think so. And what students always ask me is, “Am I sure that this AI is good for me?” One answer to that question is my hope that every artist can collect their own data and train their own model. My ultimate goal as a teacher, until that happens, is to explain safely and mindfully how these models work. If the majority of artists or creators understand that it’s possible to use their own data and create their own models, they own the narrative.
While current AI tools are amazing, they are just tools. To make a breakthrough in art, artists will need to make their own tools. I hope we can bridge that gap through collaboration and support, make this one of the positive impacts of AI, and ensure that every artist, young or from any culture or background, can own their own narrative with their own data and AI.
Lareina Yee: That’s incredible. So one lightning-speed question. If you were to meet one historical artist who’s not with us anymore, who would you want to sit down and have a conversation with?
Refik Anadol: I would like to go back to da Vinci’s mind. I keep imagining what he could do today. There are all these amazing possibilities. What would his dreams be? And one day, I hope to imagine his mind and soul—to bring his dreams to life.
Lareina Yee: Incredible. Refik, thank you so much for your time, for your creativity, and for giving us so much to think about. In the spirit of your work, we all have new pigments to paint our own communities and interactions. What I really love is that, even though your art is generated by AI, it’s ultimately about the connectivity of people.