Baylor College of Medicine

Reverse Engineering the Brain’s Perceptual Inference and Decision-Making using Artificial Intelligence


Transcript


[Intro melody into roundtable discussion.]

Erik: And we’re here. 

Juan Carlos: Here we are again. 

Erik: Yes. So, uh, this is the Baylor College of Medicine Resonance podcast. I am one of your hosts, Erik Anderson.

Juan Carlos: I am another host and the lead writer for this episode, Juan Carlos Ramirez.

Kiara: And I am another host, Kiara Vega.

Juan Carlos:  Along for the ride. (All laugh). 

Kiara: Along for the ride, yes. 

Erik: Kiara is going to educate us about any neuroscience that we get wrong in this.

Juan Carlos:  Fill in the gaps for today.

Kiara: I hope my PI is listening – (laughs) – so he can be proud. 

Juan Carlos: Yeah, 'cause today we'll be talking about the advent of using methods from artificial intelligence to understand the brain. Dr. Tolias will discuss his journey into artificial intelligence research and how he and his lab develop cutting-edge algorithms informed by neuroscience to better understand the brain's perceptual inference and decision-making.

Erik: Man…yeah…

Juan Carlos: Ooo!

Kiara: Yeah, that’s a lot.

Juan Carlos: Yeah, so I guess it would make sense to start off by defining artificial intelligence and when it began. It's like a buzzword in everything nowadays. Artificial intelligence is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. The field came into being to tackle these kinds of problems: the first electronic computer was developed in 1941, followed by the first stored computer program in 1949. A man by the name of Norbert Wiener was the first to theorize that all intelligent behavior arose from feedback mechanisms. After that, the Logic Theorist, designed by Newell and Simon in 1955, was considered the first AI program ever created, and subsequently Dr. John McCarthy, an American computer scientist who is well known in the computer science and machine learning world, coined the term 'artificial intelligence' and held its first conference in 1956. So, we're going way back in time.

Kiara: Yeah.

Juan Carlos: So, it's nothing new, really. It's just that now we're figuring out ways to use artificial intelligence, especially in the biomedical space, where it shows a lot of cool, promising techniques. So, how has artificial intelligence changed the way we approach biomedical research and life in general? Well, hopefully Dr. Tolias can answer a lot of that from the cutting edge, but in general it shows up in machine and computer vision, speech recognition, language translation, monitoring tools, robotics, healthcare, financial investing. It's, I mean, it's everywhere.

Kiara: Everywhere.

Juan Carlos: But recently, we’ve made lots of strides in the biomedical space.

Kiara: And another area that I know AI is being groomed for is diagnostics, like, to play a very big part.

Juan Carlos: Yeah, especially when nowadays we have so much data, you know, so much data we don't know what to do with. We have to parse through all this data, and it could take lifetimes for one sole human being to accomplish in one PhD, but data analysis in biomedical research has really accelerated the pace at which we analyze data, and it really has fostered discoveries unattainable by previous methods. Dr. Tolias' lab utilizes a subset of artificial intelligence called machine learning to decipher the network-level principles of intelligence. He is part of a collaboration of neuroscientists, physicists, mathematicians, and computer scientists focused on brain research for machine intelligence, to engineer a less artificial intelligence. This group is called Neuroscience-Inspired Networks for Artificial Intelligence, or NINAI, right?

Erik: So, it sounds like artificial intelligence, you're saying, is just, you know, a computer. Like a logic machine that is artificial.

Juan Carlos: Yeah, artificial intelligence is kind of an umbrella term for using machines, right? Computers, right? To perform tasks that otherwise humans would normally do, right, but to a much better degree.

Erik: Yeah, because, I mean, that was in here when you were talking – you were talking about Turing, and I imagine we're talking about one of the first computers – because, yeah, I think I always just had an amorphous idea of "Oh, yeah, I know artificial intelligence. I've seen Steven Spielberg's A.I. films." (Laughs)

Kiara: The robots!

Erik: Yeah, robots! Exactly. That's what I think. 

Kiara: That’s what I thought.

Erik: Yeah, totally but – 

Juan Carlos:  Sky Net!

Erik: But it really is just any computer, it sounds like, and then deep learning is just, yeah, I guess, a feedback mechanism that you can just feed.

Kiara: You said that they use machine learning? Yeah, to reverse-engineer. Right? 

Juan Carlos: Right, right. So, to create algorithms that are going to – 

Kiara: – kind of, to extract the principles from the biology and try to –

Juan Carlos: – understand how the brain makes everyday decisions, important decisions –

Erik: - with the idea being, I mean, this is obvious, but anybody… I guess I'm not trying to come at anybody that hasn't made this connection, but the idea being that – (laughs) I guess I am – that the brain is a giant computer. (laughs) So, you can study it likewise. 

Juan Carlos: Yeah, and he'll tell us a lot about his background, what brought him here, but I will have to preface the interview with this: there's a lot of deeper learning – no pun intended – that one would have to, sort of, read up on to fully understand what he's trying to convey, and it's an action-packed episode for sure. It'll keep you on the edge of your seat. And it's also very positive, right, for the future. So, there are a lot of good takeaways.

Kiara: It’s amazing, I mean, it's been called a ‘moonshot’ what they're trying to do in his lab. 

Juan Carlos: Really?

Kiara: Yeah, because, well, besides the projects where they're trying to reverse-engineer principles, biological principles, and then instruct machine learning algorithms, they're also trying to image the biggest volume of the visual cortex in mice that has ever been imaged. But yeah, that's why what they're doing is so amazing and so ambitious, and they have the competency, the materials, and the people to just kind of make it work, or that's the goal, right? It's very, very hard, what they're trying to do.

Erik: Yeah, there is an MD/Ph.D. in the lab that I think just recently left, but he presented his data at my seminar, and they call it like the 'million-dollar mouse' because –

Kiara: Platinum mouse. 

Erik: Yeah, the Platinum Mouse, is that it? 

Kiara: Yeah, that's it. 

Erik: (laughs) Okay. I'm thinking of the million-dollar man! 

(All laugh)

Kiara: Close enough!

Erik: I need to check my references 

Juan Carlos: in the future though. 

Erik: Yeah, in the future. 

Kiara: Yeah, Platinum Mouse. Yeah, that's the mouse that they used to image the cortex. And they're actually, like, reconstructing it. They have these collaborations with Princeton and other labs where they're trying to reconstruct a 3D model using electron microscopy, and then also 3D modeling these areas of the cortex.

Juan Carlos: Makes me want to join his lab now.

(All laugh)

Kiara: Yeah, his lab is amazing.

Juan Carlos: Well, without further ado, let's prepare to get our minds blown by some very fascinating work in neuroscience. Let's go to the episode. All right.

[Interlude melody]

Juan Carlos: Welcome everyone. My name is Juan Carlos Ramirez. I'm your host of the Baylor College of Medicine Resonance podcast. I am here today with Dr. Andreas Tolias. Welcome, Dr. Tolias.

Dr. Tolias: Nice to be here.

Juan Carlos: I have to be honest. I've been kind of looking forward to this moment for a long time, and I've been telling people "we're going to have Dr. Tolias come on the show for the second season," and when they ask me about you – you've done so many things in such a short amount of time that it's kind of difficult for me to tell people in a few sentences who you are. So, if you wouldn't mind, could you explain a little bit about your background, or, yeah, where you went to school?

Dr. Tolias: Yes, right. So, I did my undergraduate degree at Cambridge University in Natural Sciences, then my Ph.D. at MIT in Systems and Computational Neuroscience, and then some post-doctoral training in Germany at the Max Planck Institute in Tübingen, also in Systems and Computational Neuroscience. And I study how the brain works, how the brain computes information, and how it gives rise to intelligent behavior. Particularly, I study visual perception, or visual inference: how is it that the brain is capable of doing all the wonderful things that enable us to see the world? Although this sounds like a very simple problem to us, it's the exact opposite. It's actually a very complex computational problem. It involves large chunks of our neocortex – in some primates up to 50 or 60% of the neocortex is dedicated to processing visual information. So, what it is trying to do is mathematically very complex. In my lab, with my collaborators, we're trying to understand what the algorithms of the brain are and how it implements them to give rise to intelligence. We think that these, if you want, algorithms that learn are going to be generic. There will be similarities, with maybe some differences, but similarities with other things that we do in cognition, decision-making, and other sensory perceptions. In parallel, we're trying to build models of the brain. These are sort of machine learning models, or AI models, that try to mimic the brain's capabilities. For example, we're trying to build a model that's capable of doing visual recognition of objects on par with what a human is capable of doing. So, that's sort of the aim, if you want: to understand it at the systems level and computational level, but then test this understanding by building models that deal with real-world complexity in trying to solve, you know, these interesting problems.

Juan Carlos: Okay.

Dr. Tolias: So, that sort of, I think, describes it… and of course, you know, I work at different levels of the system. We go all the way from individual cell types – What is their function? How are they arranged? How many cell types are there in the brain? – down to the molecular level: trying to identify them transcriptomically, how they're assembled into circuits, and how these circuits function, you know, when animals either see visual images or make perceptual decisions on those images or movies.

Juan Carlos: Wow. So, I'm more familiar with the wet-lab side of research, and, you know, most of the time when people go into the lab, they have their experiment and a certain readout that they're looking for – so, like, short-term and long-term goals. What does that look like in a day in the life for you in your lab?

Dr. Tolias: Yeah, so in my lab there is a wide range of expertise, and it's a very collaborative lab. There are people who have expertise at the molecular level, other people at the cellular and systems level, the circuits level, you know, doing, uh, very complex in-vivo recording experiments where we can record on the order of 10,000 neurons simultaneously from an awake, behaving animal using neural imaging methods. And also people with very strong machine learning and computational skills to help build models and help analyze this data. So, it's a very collaborative lab, and usually every person in the lab, whether a student or post-doc or senior researcher, has their own particular project they're working on, but they collaborate with other people to get this kind of bigger pipeline in place so we can really finish a particular question, or address it to a certain extent. So, it's not that people only work individually on their own projects – some people are more on the experimental side, others are more on the computational side or machine learning side or AI side, if you want, but really most of these projects involve a lot of collaboration back and forth; people sort of help each other.

Juan Carlos: So, it sounds like you have almost every capability.

Dr. Tolias: Well, not all, but we also have an extensive network of collaborators here at Baylor, but also in Europe, in Canada, and other places here in the United States. We work together because we do not have all the expertise locally, so that works very well. Some people have more expertise in machine learning and deep learning or AI, other people in microscopy and stuff like that.

Juan Carlos: So that's in your lab, but outside of your lab you work with physicists, mathematicians, computer scientists.

Dr. Tolias: Yeah, a lot of computer scientists and physicists, as you said. We have a very strong collaboration in Tübingen in Germany and others in New York at Columbia University, the University of Toronto, and Cornell, where there are some physicists either working on the technical side of things or the imaging side, or more on the computer side or deep learning side.

Juan Carlos: So, you have all these people coming together to focus on cutting-edge brain research projects. And is this, I think I've looked it up, called NINAI?

Dr. Tolias: Yeah, so we have an organization that we formed a few years ago called NINAI, which is basically the umbrella organization for our very extensive team. It involves people from the Allen Institute in Seattle, Princeton, Cornell, here in Houston at Baylor and Rice University, Tübingen, Columbia University in New York, and the Vector Institute in Toronto, and we collaboratively work together on understanding how the visual cortex gives rise to inference. By inference I mean how it enables the brain to effortlessly recognize things like objects, or compute the depth of objects – basically what allows you to see the world.

Juan Carlos: So, basically, it’s making these innate decisions that we kind of do very effortlessly.

Dr. Tolias: Unconsciously! That's right! Experts call this unconscious inference. There is something called Moravec's Paradox, which is about things that are very difficult for us, like memorizing things or, let's say, playing chess or solving math equations, to some extent.

Juan Carlos: Playing Go.

Dr. Tolias: Playing Go, you know. Computers now, AI systems, are much, much better than any human at these, and they can learn very fast. For example, you can take a computer, even, let's say, the size of your laptop, program it using deep learning to learn how to play chess, and it can beat the world champion after a few hours of training, which is quite impressive.

Juan Carlos: Garry Kasparov was the first victim. 

Dr. Tolias: But now with deep learning it's even more impressive. Yet even more interesting: if you show an image of, let's say, a banana versus an apple, you can effortlessly say 'this is a banana' or 'this is an apple,' but modern AI systems are very easy to fool, in ways that for you would be like a joke – you would never be fooled. This is Moravec's Paradox. It seems that things that are innately easy for us, or for children – how to walk, how to hold this bottle and drink from it without putting too much or too little pressure and having it fall – are very easy for us but very difficult for robots or AI systems, while things that are more difficult for us, like playing Go, are very easy for them. This is a paradox, and why is it like that? Some of it is really what you were saying earlier. We think that in the course of evolution we had millions of years to evolve a very good neocortex that does very advanced vision in order to survive, because if you're in the jungle and you mistake a tiger for food or something, you'll get eaten. So, you won't survive. You don't have time to think about it; it happens very fast. So, we evolved to be visual geniuses, if you want, and we can effortlessly solve what are really very complex mathematical problems. But things that are more modern – like Go, which came out of China around 3,000 years ago – are not intuitively obvious to us, so we need to think about them, probably using parts of our brain that evolved more recently, like the frontal lobe. So, in some ways we've perfected some aspects of our intelligence, this innate intelligence, and some animals, when they're born, can immediately do some of these tasks. So, I think that is one hypothesis for why there is this difference.

Juan Carlos: Oftentimes we take for granted how difficult these tasks are for a computer to do, because we've been doing them effortlessly for so long, and sometimes we don't understand how difficult it is, or what an accomplishment it is, for something like AlphaGo, or the stuff that's going on at DeepMind, or even a Canadian group that published that they had created an algorithm that can detect cats in YouTube videos – you know, for us that's very easy.

Dr. Tolias: Exactly. You know, it seems like in the last few years, especially since 2012, there has been a renaissance of artificial neural networks, or deep learning, which, at a very crude level, tries to approximate how the brain solves these problems. You know, it's built from artificial neurons connected by synapses, and these synapses undergo plastic changes driven by some objective function – maybe not exactly the way the brain works, but in some ways it's similar – and that enables engineers and computer scientists to build these remarkable algorithms that are already influencing our everyday lives. When we do a Google search, or we talk to Siri, or we talk to Alexa, in the background there is some sort of artificial neural network running.
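[Editor's note: for readers curious what "artificial neurons connected by synapses, whose strengths change to reduce an objective function" looks like in code, here is a minimal, illustrative sketch. It is not code from Dr. Tolias's lab; the network size, data (the toy XOR problem), learning rate, and number of steps are all made up for illustration, and it simply trains a tiny two-layer network by gradient descent.]

```python
# Minimal sketch: "artificial neurons" connected by weighted "synapses" whose
# strengths are nudged to reduce an objective function (mean squared error),
# here on the toy XOR problem. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden "synapses"
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output "synapses"
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1)              # hidden-layer activity
    p = sigmoid(h @ W2)              # network output
    loss = np.mean((p - y) ** 2)     # the "objective function"
    # Backpropagate the error and move each synapse downhill on the loss.
    dp = 2 * (p - y) / y.size * p * (1 - p)
    dW2 = h.T @ dp
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    W2 -= 2.0 * dW2
    W1 -= 2.0 * dW1

print(f"final loss {loss:.4f}, outputs {np.round(p.ravel(), 2)}")  # outputs should approach [0, 1, 1, 0]
```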

Juan Carlos: Yes, algorithms are everywhere! They run our lives.

Dr. Tolias: Exactly.

Juan Carlos: We can't really survive without them nowadays, but I think it's more interesting to understand the power of algorithms, AI, and machine learning in biomedical research – such as AI in neuroscience, and how they're kind of driving each other: we understand the brain, which helps us build better algorithms, which in turn help us better understand the brain.

Dr. Tolias: Yes. Exactly. It's kind of a circle. Our lab and our collaborators work at the interaction between neuroscience and AI. So, on the one hand, we're using deep learning, which is, let's say, the most successful form of AI right now – that may change – to model the brain and get insights. We use the same tools that people use, let's say, to program a computer to play Go; we use them to model the brain and build predictive models of how the brain works, and then try to understand how the brain works using these tools. Don't forget that with modern neuroscience we're capable of collecting very large data sets, and in deep learning the success, if you want, is really the ability to analyze and make inferences when you have large data sets, especially with a lot of labels. So, for example, we can do an experiment where we show a lot of stimuli to an animal, record the neural activity, and model the transfer function. We use this method to model the brain and gain some insight, and the hope is that by understanding something about the computation of the brain, we can build smarter AI algorithms that can then maybe not just analyze our data better but also be used, for example, to recognize faces, or do voice recognition, or do more robust analysis of all other sorts of data. So that's kind of the goal here. But at the very basic scientific level, you know, we're trying to learn the basic principles, if you want, reverse engineer the brain to get at those principles, and then put these principles into some demonstration that the information we learn from the brain can be used to successfully advance machine learning.
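[Editor's note: a toy illustration of what "show a lot of stimuli, record the activity, and model the transfer function" can mean. A fake neuron with a hidden receptive field stands in for recorded data, and a ridge-regression model of the stimulus-to-response mapping is fit and then tested on held-out stimuli. Everything here is simulated and purely illustrative; the lab's actual predictive models are far more elaborate.]

```python
# Sketch: fit a predictive model of a (simulated) neuron's transfer function
# from stimuli to responses, then score it on held-out data.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli, n_pixels = 2000, 64
stimuli = rng.normal(size=(n_stimuli, n_pixels))      # "images" shown to the animal

true_rf = rng.normal(size=n_pixels)                   # the neuron's hidden receptive field
responses = np.maximum(stimuli @ true_rf, 0)          # rectified responses
responses += rng.normal(scale=0.5, size=n_stimuli)    # recording noise

# Fit a ridge-regression approximation of the transfer function on a training
# split, then check how well it predicts responses to new stimuli.
train, test = slice(0, 1500), slice(1500, None)
lam = 10.0
A = stimuli[train].T @ stimuli[train] + lam * np.eye(n_pixels)
w_hat = np.linalg.solve(A, stimuli[train].T @ responses[train])

pred = np.maximum(stimuli[test] @ w_hat, 0)
corr = np.corrcoef(pred, responses[test])[0, 1]
print(f"held-out prediction correlation: {corr:.2f}")
```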

Juan Carlos: So, aside from enhancing studies of the brain and learning how the brain works, do you suspect that we can use these algorithms to, uh, further enhance, or at least accelerate, research in other domains, like cancer?

Dr. Tolias: Yeah! Yeah, for example, you know, I think that's the goal, right? Umm, okay, there are two things I want to say there. One is that, let's say you look at cancer research. We have a lot of technologies now to collect a lot of data, including, let's say, single-cell sequencing, or large studies where we do DNA sequencing in humans and look at their different types of phenotypes. So, basically – and this is my personal view – biomedical research is very successful at developing technologies that are relatively cheap and collect a lot of data, but we do not know how to analyze this data, and we don't know how to understand it. The whole field of, let's say, genetics and molecular biology is based on this idea that we think very serially: causal manipulations are done one gene at a time. We don't even know how to think about the high-dimensional gene or protein interactions that happen, and most diseases are very multifactorial, based on many genes, on combinations of genes and environment, in ways that our brains did not evolve to analyze – and maybe we don't have the right mathematical tools. But what we do have is the capability to collect a large amount of data. So now, with deep learning, if we have a lot of labels in these data – for example, let's say we sequence everybody, we record their phenotypes, and we know in 10 years who is going to get cancer or not, so we collect this chronically – it's possible to build a model, from now, that's going to predict who's going to get what: some sort of black-box model, if you want. Old-age Alzheimer's is a good example, right? If we could record everything that someone eats now, how much they exercise, what their DNA is and their relatives', who they talk to – if we monitored them 24/7 and collected all this data for 20 or 30 years and asked, 'okay, who is going to deteriorate at what rate?' – maybe we could build a predictive model, from now, that's going to predict: 'this is gonna happen to you, this is gonna happen to you, this is gonna happen to you.' But the problem is that it's very complex data, and to do this very chronic data collection, we haven't built a society that can do it. Now, there are countries like China trying to do stuff like that, because there is a very top-down system, so it is easy for someone to give an order and have everything done systematically.

Dr. Tolias: Now, the brain, though – in vision, we did not evolve to look at DNA sequencing letters and understand the relationship between that and Alzheimer's or cancer, but we're very good at recognizing images. And we don't need a lot of labeled data: we do a lot of unsupervised learning, and we make inferences by understanding something deeper about the data, not just this brute force with simple input/output labels. Let's say a child can learn the concept of an apple from a few examples, whereas a machine with a modern deep learning algorithm needs thousands of examples to really become good at it. There are some recent developments, but generally speaking, these systems are not very robust. Should we understand how the brain learns vision from a few examples, then maybe we could come up with ways where, instead of the input being images, it is DNA sequences, but based on the same principles of learning – of drawing inferences in a more causal way, maybe even suggesting experiments, the way a child may learn the concept of an apple because it knows something about curvature, because it has touched that apple. That's a more causal manipulation. The brain is wired to learn robustly from a few examples, and maybe we can translate those algorithms to other domains. Instead of images and a robot touching things, it would be, if you want, an AI scientist – an algorithm – that looks at this data, but instead of having a retina to look at images, it reads sequences of DNA and looks at behavior in some other space, you know, phenotype, how much people eat, or what their habits are. Then maybe it even does some perturbations, the way the child touches the apple – it may make suggestions or change your lifestyle – and it sort of tries to learn at a population level and then builds an understanding of what causes cancer, let's say, that would be impossible for a human to build, just because we did not evolve to process that kind of information. Then, once we have these predictive models that train on few examples, maybe we can analyze these models and gain insight – to have what's called explainability or interpretability of the models. So, that's a very long-term vision that may take decades to achieve, but I think we may see it happening within our lifetime: you know, doctors and scientists that are machines, basically, that are much more capable of learning about this data and gaining insights, and then they tell us.
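[Editor's note: a toy illustration of the data-hunger contrast Dr. Tolias draws. Nothing here models how a child learns; it only shows that a simple statistical learner gets more accurate as the number of labeled examples grows, on completely synthetic data with invented parameters.]

```python
# Sketch: test accuracy of a simple nearest-centroid classifier as the number
# of labeled training examples per class grows. Synthetic Gaussian data.
import numpy as np

rng = np.random.default_rng(2)
dim = 50
mu_a, mu_b = np.zeros(dim), np.full(dim, 0.3)            # two class means

def sample(mu, n):
    return rng.normal(size=(n, dim)) + mu

test_a, test_b = sample(mu_a, 1000), sample(mu_b, 1000)   # held-out test sets

for n_per_class in [1, 5, 50, 500]:
    # "Training": estimate each class centroid from n labeled examples.
    ca = sample(mu_a, n_per_class).mean(axis=0)
    cb = sample(mu_b, n_per_class).mean(axis=0)
    # Classify held-out points by the nearer centroid.
    def closer_to_a(x):
        return np.linalg.norm(x - ca, axis=1) < np.linalg.norm(x - cb, axis=1)
    acc = 0.5 * (closer_to_a(test_a).mean() + (~closer_to_a(test_b)).mean())
    print(f"{n_per_class:4d} examples per class -> test accuracy {acc:.2f}")
```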

Juan Carlos: That's something that, uh – aside from kind of blowing my mind a bit – I've been reading this book called Deep Medicine by Eric Topol, and it touches on some of those ideas: that we just don't have the brain power or the speed, but we can design models that do this for us, and from there we can create these predictive models, as you were saying. It also talks a lot about the challenges, though. In your opinion, what do you feel is the greatest challenge?

Dr. Tolias: I think one immediate challenge is organization of, and accessibility to, the data. You know, for example, if you go into a hospital, or even a big research organization, or even a big biotech company or firm, I'm not sure they have the data organized in ways that are easily accessible to computer scientists to be able to build these models. And this is a particular problem right now because these algorithms – not the ones I was talking about before, which are more a vision for the future, but current algorithms – work very well with very large data sets, and only with very large data sets. So, let's say you want to build a system that predicts from a mammogram, maybe better than any radiologist, who is going to get cancer, and that is much more accurate – it has a much lower false-positive rate and false-negative rate. Now, that algorithm, trained on maybe 10 million data points, may reach that performance, but trained on a million, it may not. And then, what is the quality of this data? Because it's going to have to be annotated. For example, we have to have chronic data where we say: this person did this test at this point in time and the doctor decided it was negative; next year they came back, the doctor wasn't sure and did a biopsy, and it was still negative; but this other patient was positive. So, you need this chronic data, a very well-organized system of data, well annotated by humans and organized in the right way. And we're not talking about data sets that are necessarily very large by computer science standards – even in my lab, we collect much larger data sets experimentally – but the way hospitals have been built, the way their software works, the way their data systems work, they were not built for this type of deep learning, if you want, or modern AI analysis. They were built for doctors or individual scientists or statisticians to download some of the data, for you as a doctor to find this patient and his history – it was a human-based interaction; it wasn't for large-scale science. And I'm not sure what's going to solve this problem. I think that's a challenge that everybody, including here at Baylor, is worried about and thinking about, and as I said, it will be interesting to see which country or which organization does it; it may have to happen at a very large scale, at the federal level, where someone gives incentives or motivation for a very large-scale project like that. The NIH has projects like that, you know. The other thing is that people understand the importance of this, so there is a push for open accessibility to the data. For example, some hospitals now, as long as the data is de-identified according to the regulations you have to abide by, want to make it available to the world, because this could save lives. Many people are very protective of their data – and protectionism, too, can be bad – while other people are not, and if they work together, they're going to win, right?
So, it will be interesting to see how these things evolve. And also, maybe, if you want to have impact, especially in a country like the United States that is very multicultural, you have to be careful that there are no biases built into the model because of different genetic backgrounds or different nutrition in different people. So, it needs to be done in a very organized way, I believe, but at a large scale, and I think that right now that is a challenge, as far as I can see. Now, I think there has been improvement in the last few years: I think a group from NYU did something with mammograms that had many thousands of images, and Google, I think, had something like that recently, collaborating with I don't know how many hospitals. You know, DeepMind is working on that with the NHS.
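[Editor's note: the false-positive and false-negative rates mentioned above are easy to make concrete. The screening numbers below are entirely invented; the snippet only shows how the two error rates are computed.]

```python
# Made-up screening scenario: 1,000 patients, 50 of whom truly have disease.
true_positive, false_negative = 45, 5       # outcomes among the 50 true cases
false_positive, true_negative = 80, 870     # outcomes among the 950 healthy patients

sensitivity = true_positive / (true_positive + false_negative)           # fraction of real cases caught
false_negative_rate = 1 - sensitivity                                    # real cases that are missed
false_positive_rate = false_positive / (false_positive + true_negative)  # healthy patients wrongly flagged

print(f"sensitivity         : {sensitivity:.2f}")
print(f"false-negative rate : {false_negative_rate:.2f}")
print(f"false-positive rate : {false_positive_rate:.2f}")
```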

Juan Carlos: I think this has kind of spawned an area called 'medical intelligence': gathering as much data as possible about people's health in the moment, like with fitness trackers, you know. For me, I'm obsessed with my fitness tracker – my heart rate, how much I sleep, when I'm stressed, all these things – but when I look at it, all I can really draw from it is "Well, I didn't get enough sleep that day and maybe that's why I was tired and cranky," all that stuff, and in the long term it may lead to stress or whatever. But an algorithm gathering all this data and analyzing it could tell me, "Oh, you're at an increased risk for diabetes" or something. So, I think that's spawning a whole new enterprise, if you will. But as you mentioned earlier, there is some friction. You mentioned that there are algorithms that could be really good at reading patients' radiological images and could tell them, much better than a human could, that they have a certain diagnosis. This kind of instills fear in some people, you know. I'm surrounded by medical students here at Baylor; they even have electives that focus on how to adapt to the uses of technology in the clinical setting. So, how would you respond, or can you comment on people having this fear?

Dr. Tolias: It's actually not just for the medical students. I think there's a fundamental problem with the educational system, not just in the United States but around the world. For example, if you study biological sciences or medicine, you do not get trained the way people do in, let's say, the mathematical or computer or physical sciences. You know, you don't learn programming – and I'm not saying that every doctor should be able to program these things – but you need to be educated enough to understand the limitations and the expectations, because these things are not perfect. I mean, if they were perfect, you would just press a button and it would be game over; nobody would need to go to medical school because everything would be integrated. Maybe that's what it will be some time from now, but it's not going to happen soon, right? So, I think right now it is a scarier situation, because people need to be educated, and from undergraduate through medical school, as you said, there is a lack of programs to educate people, do you know what I mean? And it is an issue – I'm not going to say it's not – because eventually, let's say you are a doctor, and someone like you who is interested in this thing reads, gets educated, and understands it, then does this as their Ph.D., and then, say, becomes a radiologist – now you'll be able to adapt to these things and understand them better, versus someone who is used to the old-fashioned way of doing things, right? So, these will be assistive technologies in the beginning, like assistive driving, like Tesla or something like that, but I think it's very important for doctors to have an understanding of how these things work, so they also know how much to trust them, but even more importantly, so they can actually contribute more effectively to the research that's being done. And not just doctors – the problems are similar for administrators. An administrator in a hospital, I'm not sure they're trained like that, so they don't really understand, or even systems administrators or data scientists in hospitals, right? What does it take to organize your data? What should I do as the CEO of a hospital so that it gives me a competitive advantage for my patients by bringing in AI technologies? So, it's not so much a fear of "oh, people are going to lose their jobs." I don't think that's the problem right now. The problem right now is that some people may not be getting trained in the right way to be able to fully function in this great new world.

Dr. Tolias: (Laughs) Do you know what I mean? I don't know what Baylor is doing particularly, but there is worry in many places. You need, for example – maybe you should have courses in deep learning. Like how people take statistics, which is more traditional statistics, and learn about analysis of variance or epidemiology, you know – this is a different form of statistics, and I think you need to get trained in this stuff, so you have some understanding of what these technologies can and cannot do. But it also enables people, especially people who are interested in research, to contribute to this, because it's not like the doctor who doesn't know will be able to do this kind of research – that's impossible. People who are experts in deep learning or computer science are the ones doing it. But you need this synergy between the various fields.

Juan Carlos: Yeah, 'cause I kind of shared the same fear before, you know, before I started reading more, but I guess I'm a little different. I wanted to be a software engineer when I was a kid, and all that went away, life happened, and now I'm kind of back in a position where I realize that it's becoming more important.

Dr. Tolias: Oh, yeah. You have an advantage now. You know, even if you want to be a doctor, your software engineering background is going to give you an advantage over other people, because you are the one who is going to be capable of being more in the driving seat to make these decisions – "What should be useful? What should be used?" – which can contribute to research, because, you know, we're entering a new era. I feel the educational system – even if you go to, like, engineering schools and you take biology courses, I'm not sure how much computer science is in the curriculum, or whether they even have to know how to program, or whether they understand, you know, the fundamental things about data science. So, these are important, right?

Juan Carlos: I think the moment it kind of, uh, cemented in my mind that I was going to pursue this – even just on a personal side, learning on my own – was when I was a student. I was doing some graduate coursework at Hopkins, and I was also in a breast cancer research lab that collaborated with a bioinformatics lab. We had lab meetings once every two weeks or so, sometimes informally, and I think half of the time we spent just trying to understand each other from the wet side – the biological side – around "how do we use these algorithms to better understand this RNA-seq data?" So, from then on, I realized I have to learn. Not just because I want to be an M.D./Ph.D., but just as a researcher, I want to have the capability to do at least some part of that analysis in-house.

Dr. Tolias: Exactly. You as a biologist – or, let's say, scientists who are biologists – have the intuition to set up the right problem. Someone in computer science, because of their background, does not, and that intuition can take many years to develop, longer in some cases than learning technical skills like how to write software, right? So, if you and that engineer don't talk the same language, you have a problem, because it's not like you give someone data to analyze and something is just going to come out. It matters how you set up the problem and whether you know what the limitations are and what kinds of problems can be solved from the technical side. So, you and the engineer have to sort of come together: you need to understand a bit about what the tools they use are and are not capable of, and what the limitations are, and they need to understand what the biological data is and what its format is. I think that is lacking. As you said, when you were at Hopkins, you would meet with engineers every week or every few days, discuss, and read each other's papers. You need that kind of synergy to get things happening; otherwise it will be very hard – and not only hard, it can also lead to errors, where you give someone the data, they don't know the limitations, and they discover stuff that is just artifacts, and then you don't understand the result, and you don't understand the limitation or the technical issue that gave rise to it, you know, why it could have been overfitting the data, for example.

Juan Carlos: Right.

Dr. Tolias: You know, it's dangerous too. The lack of people who can understand both fields and talk to each other is not just slowing down progress; it can also be dangerous. You can read papers that are just wrong because of this problem.
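[Editor's note: the overfitting failure mode mentioned above can be shown in a few lines. A very flexible model fit to a small, noisy dataset reproduces its training data almost perfectly yet predicts new data worse than a simpler model; the data and the polynomial degrees below are made up purely for intuition.]

```python
# Sketch: overfitting a small noisy dataset with an overly flexible model.
import numpy as np

rng = np.random.default_rng(3)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(scale=0.3, size=n)   # true signal + noise
    return x, y

x_train, y_train = make_data(15)     # small training set
x_test, y_test = make_data(200)      # fresh data the model never saw

for degree in [3, 12]:
    coeffs = np.polyfit(x_train, y_train, degree)        # fit a polynomial of this degree
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The flexible (degree-12) fit chases the noise: low train error, worse test error.
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```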

Juan Carlos: Wow. I certainly see some medical schools making that change – here at Baylor we have a strong focus on it, more of a technological push. But when I was doing my interviews, I interviewed at Sinai in New York, and it was explained to me that part of the curriculum there for every medical student is to learn how to code a little, and I was like, "Wow" – I was pretty impressed.

Dr. Tolias: Yeah, some places are pushing more in that direction, I guess. I don't know the details, but yeah, that's impressive.

Juan Carlos: Well, for me it was something I saw as a pro, something positive, but not all medical students would have shared that view, because AI has either been glamorized or used as a scare tactic. I'm not sure – maybe it would help to listen to this podcast, specifically this episode, over and over, to sort of learn that, yeah.

Dr. Tolias: Yeah, and I think people like you should be given the opportunity to sort of, follow it, especially if you’re very interested in the research.

Juan Carlos: I'm excited to see what happens in the future, because, in essence, all of this is to improve the quality of human life – the use of AI and machine learning in medicine is to improve the quality of life for patients and for ourselves too. So, the idea of having that fear, rejecting it, and creating friction against its incorporation into our medical system seems counterproductive. For our listeners who have had that fear, or who have a further interest in enhancing your computational skills and how that could improve even the way you look at problem solving, I think just looking into what the positive impacts of AI are is a good start. And that's all we can share about that today.

Dr. Tolias: Yeah, it’s great to be here. 

Juan Carlos: So, there you have it. This has been a revelation in AI with Dr. Tolias. You've heard from the expert's mouth what is going on and what the limitations are in the field, and I highly encourage everyone to pick up the book Deep Medicine. It breaks it down very simply and helps you understand a little bit more about the use of AI in health care and in research. That's all we've got for today. It'll take some time for my brain to unpack all of this information, but I really appreciate you attending.

Dr. Tolias: Thanks a lot. It was my pleasure.

[Outro melody.]