Whale communication helpful in understanding AI
Gašper Beguš, a researcher at the University of California, Berkeley, combines research on human speech, sperm whale communication and artificial intelligence (AI) to understand how AI learns, and in this way answer the question of whether humans are truly unique.
Dr. Beguš first studied comparative linguistics and Slovenian at the University of Ljubljana and now directs the Berkeley Speech and Computation Lab. He also hosts Slovenian students and is a mentor or co-mentor to two PhD students at the University of Ljubljana.
His group at Berkeley works on a combination of different, seemingly incompatible topics - human language, AI, and animal language or communication - which, when put together, open up new horizons, he told the Slovenian Press Agency in an interview.
"Through the study of these three topics, we are trying to understand how artificial intelligence works, how artificial intelligence learns. This is an extremely exciting area of research which has the potential to yield immense insights and new discoveries in science and in other spheres."
The question they explore in their research is what it is that makes a person human. "We know that we have a language that other species do not have, but we do not know whether there is something uniquely human in human language, and what that is. This is a very important and challenging question, and in order to answer it, we are building AI models that, unlike GPT, learn in a more human-like way.
"In doing so, we are looking for answers to two fundamental questions, which are essentially two sides of the same coin: are we humans just smarter animals and is language just a consequence of increased intelligence, or did something else have to happen for language to emerge and for it to be something uniquely human? And, on the other hand, are Chat GPT 4 and other models of artificial intelligence, if we scale them up, at some point capable of becoming better or on a par with humans?
While much of the AI field deals with the visual world or text, your research focuses on speech. Why?
There are many important reasons for this. First, we know that babies do not learn language from text, as GPT-4 does. A child starts reading at the age of three or later, but by that time they already have a good grasp of the language and are also quite intelligent. Large language models like GPT-4, on the other hand, learn primarily from extremely large databases, larger than a human could ever read in a lifetime.
Moreover, the problem with the visual world is that it is extremely complex, while language, in particular speech, is simpler. Speech is also the only human faculty that contains a generative principle. We only receive the visual world, we do not transmit it with our bodies, whereas speech is the only thing we innately generate ourselves. Speech therefore has excellent properties for understanding artificial intelligence. In our lab, we use speech to better understand how artificial intelligence learns.
Why is this question important? What insights can it provide?
The whole world is amazed at how well large language models answer questions and perform tasks, but we don't know how they get there. The next big thing that will happen is that we will understand the thought processes inside these models. In this way, we will gain insight into the causal structure, or the answer to the question of why something is the way it is.
This has great potential for science, for the discovery of new medications, and there are also major breakthroughs in mathematics. In short, I think this is a major new milestone. But it is true that these techniques for understanding artificial intelligence are still very much in their infancy.
You have mentioned that you are trying to model more realistically how people learn language. You have built a model in your lab that mimics how a child learns, to better understand how AI learns. Can you tell us more about this?
From a regulatory point of view, one thing that will be important in the future is being able to decide or understand when AI works in a similar way to humans and when it works differently. We do not understand what is different, and what you do not understand you cannot control, or at least find much harder to control.
That is why, as a counterbalance to the big language models, we are developing smaller models in our lab that learn in a similar way to how babies learn language. We give the models some idea of the body - for example, of the lips and the position of the tongue during speech - which means they learn language by listening to it and trying to move their mouths to produce sound. That is very interesting.
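As a rough, hypothetical sketch of that listen-and-produce loop - not the lab's actual models - the following PyTorch example pairs a "listener" that maps audio onto a handful of articulatory-like parameters (a stand-in for lips and tongue positions) with a "producer" that tries to regenerate the sound from them. All class names, dimensions, and data below are invented placeholders.

```python
# Hypothetical sketch of a listen-and-produce loop: the model hears speech,
# maps it to a few articulatory-like parameters, and tries to say it back.
import torch
import torch.nn as nn

class Listener(nn.Module):
    """Maps a chunk of audio to a few articulatory-like parameters."""
    def __init__(self, audio_dim=256, articulator_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim, 64), nn.ReLU(),
            nn.Linear(64, articulator_dim), nn.Tanh(),  # bounded "positions"
        )
    def forward(self, audio):
        return self.net(audio)

class Producer(nn.Module):
    """Generates audio from articulatory-like parameters (a toy 'mouth')."""
    def __init__(self, articulator_dim=8, audio_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(articulator_dim, 64), nn.ReLU(),
            nn.Linear(64, audio_dim),
        )
    def forward(self, articulators):
        return self.net(articulators)

listener, producer = Listener(), Producer()
optimizer = torch.optim.Adam(
    list(listener.parameters()) + list(producer.parameters()), lr=1e-3)

# Placeholder "speech" the model hears; in reality this would be real audio.
heard_speech = torch.randn(32, 256)

for step in range(100):
    articulators = listener(heard_speech)      # listen
    produced_speech = producer(articulators)   # try to say it back
    loss = nn.functional.mse_loss(produced_speech, heard_speech)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```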
For example, we are currently doing a study with brain data that we record directly from the brain during surgery. While the patient is awake, they listen to speech, and we send that same speech through our artificial baby. In this way, we learn how linguistic features are encoded in the brain and how they are encoded in AI.
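One common way such a comparison can be made concrete is an encoding-model analysis: fit a mapping from the model's internal activations to the brain signal evoked by the same speech, then test it on held-out data. The sketch below is only a self-contained illustration with synthetic placeholder arrays, not the study's actual pipeline; the variable names and sizes are assumptions.

```python
# Minimal sketch: how well do a model's activations predict brain responses
# to the same speech? All data here are synthetic placeholders.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

n_timepoints = 500          # time samples of the shared speech stimulus
n_electrodes = 64           # hypothetical brain channels
n_model_units = 128         # hypothetical model-layer activations

# Placeholders standing in for real recordings and model activations.
brain = rng.standard_normal((n_timepoints, n_electrodes))
model_activations = rng.standard_normal((n_timepoints, n_model_units))

# Fit a linear mapping from activations to each electrode on one half...
half = n_timepoints // 2
weights, *_ = lstsq(model_activations[:half], brain[:half], rcond=None)

# ...and evaluate how well it predicts held-out brain activity.
predicted = model_activations[half:] @ weights
actual = brain[half:]
corr_per_electrode = [
    np.corrcoef(predicted[:, e], actual[:, e])[0, 1]
    for e in range(n_electrodes)
]
print(f"mean held-out correlation: {np.mean(corr_per_electrode):.3f}")
```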
With the help of your 'artificial babies', you were also the first to show in a high-profile paper in Scientific Reports that signals in the brain and artificial neural networks are similar.
Our group was among the first to show that not only do our models that learn like infants share similarities with humans when learning language, but that there are also similarities at the neural level. This is also very important from the point of view of emerging technologies, namely brainwave-decoding technologies. Since AI and humans organise thoughts and perceptions of the world in similar ways, we can use AI to scan the human brain and decode what is going on inside it.
This is already delivering extremely good results. For example, my colleagues created a technology in which brain electrodes were implanted in a patient who had lost her ability to speak due to locked-in syndrome, allowing her to regain her speech, even in her own voice. They did this by training an AI model on a recording of her voice from a wedding a few years earlier.
On the other hand, this of course raises a new danger to neuroprivacy. Just today, I ordered a book which argues that a new battle is coming over the privacy of our minds and our brains. It will become increasingly possible to decode the brain, perhaps even to read minds. In the future, we may be wearing brainwave scanners to control all sorts of things - from telephones to perhaps cars. On the other hand, this means that companies or governments will have access to data that is currently hidden. So, because of these similarities between AI and the human brain, very good things are happening, but at the same time we have to be careful not to create technologies that will be detrimental to humanity.
One of the three key fields of your research is animal communication. Tell us about CETI, the major and fascinating project in which you study how sperm whales, the largest of the toothed whales, communicate.
Sperm whales are a very interesting species that we know very little about, because they are difficult to track, diving for 45 minutes to depths of a kilometre or more. It is fascinating to learn about them, because they are a species of superlatives: they live 70 or 80 years, they have the biggest brains of any animal, and they have very complex and interesting social structures. Females look after each other's calves, families form clans, and calves stay with their mothers for 8, 10, sometimes even 13 years - things you very rarely see in other animal species.
So, they are very similar to us, but their world is totally different from ours, which makes them particularly fascinating animals for me, but with a tragic history. During the Industrial Revolution, they were horrifically hunted to extract the oil from their heads.
We now know more and more about them thanks to this major project to record their communication, which is taking place in Dominica in the Caribbean. The aim of the project is to listen to them, to try to understand and learn from them, and ultimately to decode their communication.
How do you monitor their communication?
We have non-invasive microphones that are dropped onto the whales by underwater drones, we have small artificial submarines in the shape of fish that follow the whales, and we have a huge underwater cable, a kilometre and a half deep, fitted with microphones. So we are trying to capture their click-based communication non-invasively and to be as passive as possible as listeners or observers. The recording technology is being developed by two of the most prestigious laboratories, at Harvard and MIT, so it is really state-of-the-art robotics.
In collecting data, you say you happened to witness an exceptionally rare event.
Yes, the birth of a sperm whale, which had not been recorded before, so we are analysing the footage now. It is interesting because we know how they die, whereas a birth had never before been captured with cameras and microphones, so it is fascinating that the team recorded it.
What is your role in the project?
I'm the head of linguistics, but I also work with AI. Linguistics is vital to understanding animal communication, although for many years linguists frowned upon studying animals. I find it important that we learn as much as we can from animals, so we are developing approaches in which we first try to understand how AI learns, then give AI the task of learning whale communication, observe it, and try to figure out what it finds meaningful - what the model has seen that I, as a human, have not. AI cannot give us concrete answers at the moment, but it can give us clues - information we would not have noticed ourselves - which we can then analyse with linguistic tools.
Some scientists will say that animal communication is completely different from human language, but I think there are certain similarities that are very useful to explore. I think it is, in a way, an exciting time to be studying animal communication again, especially because we now also have AI that can help us do that.
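To give a very rough, hypothetical flavour of what "giving AI the task of learning whale communication" can mean in its simplest form, the toy example below fits a next-coda prediction model on symbolic coda sequences and reports which transitions it finds most predictable. The coda labels and sequences are invented placeholders, not Project CETI data, and the project's actual models are far more sophisticated.

```python
# Toy illustration: which coda tends to follow which in a conversation-like
# exchange? The sequences below are invented placeholders.
from collections import Counter, defaultdict

# Hypothetical exchanges, each a sequence of coda-type labels.
exchanges = [
    ["1+1+3", "1+1+3", "5R", "1+1+3"],
    ["5R", "5R", "1+1+3", "4R"],
    ["1+1+3", "5R", "5R", "4R", "1+1+3"],
]

# Count bigram transitions between consecutive codas.
transitions = defaultdict(Counter)
for seq in exchanges:
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev][nxt] += 1

# Turn counts into conditional probabilities P(next coda | current coda).
for prev, counts in transitions.items():
    total = sum(counts.values())
    probs = {nxt: c / total for nxt, c in counts.items()}
    best = max(probs, key=probs.get)
    print(f"after {prev!r}: most likely next coda {best!r} ({probs[best]:.2f})")
```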
Your research aims to answer whether human language is merely a continuation of animal communication, or whether there is something unique about it. Recent results from your research on whales, which have not yet been peer-reviewed, suggest that we may be more similar to animals than we think. From a more philosophical, existential point of view, what would it mean if it turned out that human beings are in some way not special, not unique?
Let everyone answer this question for themselves. The Renaissance basically put man at the centre of the universe as something very special, but I think it will turn out that animals have more intelligence than we might have thought until now. Of course, it also depends on how we define intelligence.
I think we are now, as I said at the beginning, in a world where there are three intelligences at work: two are biological - animal and human - and the third is artificial. In fact, in a sense, for the first time in history we are at a point where artificial intelligence is being placed alongside animal and human intelligence, or where the relationship between them is changing.
What does this mean for the development of AI? Does it mean that, by becoming more capable, it could reach or even surpass humans?
This is a futuristic view that scientists disagree on. While some think this will never be a problem, that AI will never achieve greater capabilities than humans have, others point out that AI has important advantages over biological intelligence that are so great they must be taken seriously. One of these advantages is, as AI pioneer Geoffrey Hinton put it, immortality.
You can take GPT, clone it or copy it, train it indefinitely, or upgrade it on completely different hardware, whereas you cannot take my brain and put it into another body. Humans have to relearn everything, and it takes a very long time to learn everything we know. So in a way this is a really important issue, because it may come to a point where AI becomes smarter than humans, if it is all down to the size of the brain.
I do not think that in 10 years computers will have power over humans, but it is true that the only thing we have not encountered in history is an entity that is more intelligent than us, and we have no tools to deal with that. This danger, even if it is unlikely, is a big one, and it must be taken seriously.
In addition to the dangers that we may face in the future, AI entails other challenges that we are already witnessing today. You have mentioned neuroprivacy, and in a recent lecture at the Ljubljana Institute of Criminology, you also highlighted loneliness. Why?
Loneliness is the great pandemic of our world and time, but it is true that chatbots can also be a substitute - although not the best one - for human contact. In some sense, technology is alienating us from each other. In America, in addition to ChatGPT, another popular tool is Character, an app that allows you to talk to a chatbot that mimics a desired personality, with which you can then establish a relationship. I don't know where this will lead, but I assume it will not be psychologically good for mentally healthy people. There are a lot of ethical and psychological issues here, and that is what the psychological profession will have to deal with.
It is clear that AI will become a tool that we will all use and with which we will in some way coexist. Therefore, one of the key challenges in the future will be how to stay mentally and physically healthy in this increasingly virtual environment, which is unnatural and perhaps not the most favourable environment for humans psychologically.