By Ruby Ajilore
Transcribed by Tiffany Trinh
Cognition refers to human thought, and knowing how your mind works allows for better control of it. Cultural activities, like language for instance, greatly influence the way we process our environments and interact with our learning processes and connections. All of these factors play an important role in how we choose to live our lives and make day-to-day decisions.
In this interview, VIBE TALKS Correspondent Ruby Ajilore speaks with Professor Arnon Lotem, a behavioural ecologist at Tel Aviv University. He has conducted research on the evolution of cognition and the major role culture plays in shaping both cognition and memory.
His research was done in partnership with Professor Joseph Halpern and Professor Shimon Edelman of Cornell University, New York, as well as Dr. Oren Kolodny of Stanford University, California. He discusses his thoughts on human memory, thinking, artificial intelligence programs and much more.
Prof. Lotem: We tried to think about the possibility that the evolution of language involved new requirements from the brain. Most people understand that language is something very complex, and no other animal has language at the level that we have in humans. Usually, we know that language is complex and we expect the brain to have developed in certain ways to acquire language. The thing is that most people expect that learning a language means using more of your brain, but we don’t specify what “more brain” means. If you ask about memory, for example, it would seem that if you have more memory, or faster or better memory, then you can do better at learning languages. The big problem is that humans don’t have a very good memory; we now know that our working memory and our ability to recall what people say to us are very limited.
Ruby: Google’s artificial intelligence program was able to teach itself how to walk without any human instruction. How do you weigh in on how the program was able to do that?
Prof. Lotem: Most computer programs and artificial intelligence systems that have to learn language use a lot of memory and a lot of computational power, and that is how they manage to learn it. This brings us to a paradox: we know that in order to solve this problem with computers, we usually need strong computers with a lot of memory and computation, so it is a bit surprising to find that humans actually don’t have a lot of memory. This means that the human brain doesn’t exactly work like a computer; it works very differently. I’ll give you a simple example: some things that, from a computational point of view, are very simple for computers are very difficult for people. For example, we make shopping lists because we cannot even memorize twenty items, while from a computational point of view storing a list is something very easy to do. Simple calculations are something any computer can do, but before computers they were very difficult for us, because you had to keep track of all the numbers. You have to write things down because our memory is very limited. So, our brain somehow evolved to be very different from a computer.
Ruby: What is thought, when it comes to artificial intelligence and how greatly does it differ from human thought?
Prof. Lotem: There are differences in the way things are programmed or the way things can work, but there are also some similarities. In order to plan moves, solve problems or make decisions, you need some representation: a representation of reality, of the game, or of what happened lately. We study animal behaviour and we think about how things evolve. Humans didn’t start from nothing; we use brains that have evolved over millions of years, since the times of our animal ancestors, and animals also need to solve many problems. In many ways, whether you use a lot of memory or a little memory, you have to build some sort of representation of the environment, or of reality, and then try to work with it. When we think about this representation, we think about a network of associations: it associates items in memory, which could be words or objects. The problem is how to build this network, and once you have such a network there are many ways to use it. Computers take all the data and start finding rules in it. Animals and humans don’t take in all the data at once, and they don’t remember everything; they are very selective about what they take in. They have sloppy memory, and this sloppy memory is adaptive. It’s like the brain saying, “If you don’t convince me that this is important, I’m not going to remember it.” Things that are repeated again and again are deemed important. If you hear some gibberish, you can’t remember most of it, but if within that gibberish you hear some repeated elements, then your brain says maybe these are important objects, or maybe these statistical patterns are significant and I should learn them.
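The repetition-driven, “sloppy” memory Prof. Lotem describes can be sketched as a toy program: count every short chunk in a stream of syllables and keep only the chunks that recur often enough to seem important. This is only an illustration of the idea, not the researchers’ actual model; the chunk length and the repetition threshold are arbitrary assumptions.

```python
from collections import Counter

def find_repeated_chunks(stream, chunk_len=2, threshold=3):
    """Count every run of `chunk_len` consecutive syllables and keep
    only those repeated at least `threshold` times."""
    counts = Counter(
        tuple(stream[i:i + chunk_len])
        for i in range(len(stream) - chunk_len + 1)
    )
    # "Sloppy memory": anything not repeated enough is forgotten.
    return {chunk for chunk, n in counts.items() if n >= threshold}

# Gibberish stream with one hidden "word" ('ba', 'ku') repeated.
stream = ["ba", "ku", "ti", "go", "ba", "ku", "re", "mo",
          "ba", "ku", "la", "fi", "ba", "ku"]
print(find_repeated_chunks(stream))  # → {('ba', 'ku')}
```

Everything heard only once is discarded, while the recurring pair survives, mirroring the brain’s “convince me this is important” filter.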
Ruby: What are the drawbacks of having these ways of thinking and functioning in the brain?
Prof. Lotem: If you only need to know approximately what you see, then you don’t really care about the exact order of items; you can just know a few features and you’ll recognize what you see. The fact that we have a smaller memory space makes the system a little more fragile: longer learning time is required, and in some cases more memory space is needed.