Roles of Neuroscience & AI in Solving Complex Healthcare Challenges


In this exciting episode of Impetus Digital’s Fireside Chat, I sat down with Dr. Phillip Alvelda, CEO and Co-Founder of Brainworks, to discuss all things artificial intelligence and machine learning. Among many other topics, we explored the evolution of AI, the concepts of whole-brain AI systems and ambient biometrics, as well as how AI-enhanced vital sign tracking through a mobile device can be used to monitor for COVID-19 and other conditions.

You can preview our discussion below:

Q: Can you talk about the evolution of this nebulous concept of artificial intelligence compared to when you first got started and what it looks like today?

A: The interesting thing is that the dream has never really changed. The dream has always been, “Can we make an artificial system that can think and do the things a human can do?” Whether it’s to reason, infer, or predict, it really is about navigating the world we live in and solving the problems that currently need people to solve them. Whether it’s picking fruit in the field, treating disease in the hospital, or managing a car as it’s driving down the road, these are all things that, today, humans have to do.

The story of technological advancement has always been, “How do we get machines to do more and humans to do less?” Or, phrased another way, “How can a human use a machine to do more than they could without it?” That has been a constant throughout. I think what’s changed is our growing understanding of how complex the brain really is. Back in the ’50s and ’60s, we had people like Turing, and Licklider at DARPA and a few other places, setting out what the goals were, and those are, dead-on, still the goals today: to create systems that capably solve these problems. But I think at that time there was a lot of hubris that said, “Okay, we understand some bit of circuitry; if we model the human brain like this circuit, then we’ll have AI.”

The AI they were thinking about in that sense was general intelligence: “How can I have something that’s indistinguishable from a human?” You may have heard of the Turing test, where you have a conversation with a system and see how long it takes to figure out whether you’re talking to a computer or a human. Then, of course, you had great fictional retellings like Blade Runner, where Deckard has to figure out who is a replicant and who is not. All of that is really quite cool. I think the real puzzle was that the complexities of the brain were beyond our capability to understand back then. We just didn’t have instruments that could look at the brain in meaningful ways and figure out what was really going on.

Fast-forward to today, and our computing power and our ability to simulate things have grown tremendously. What you’re seeing is that every year, we understand a little bit more about some aspect of the brain, we advance how we build a computer simulation of it, and we capture the power of more and more of the brain. If you were to ask me, “How much have we captured so far? How close are we? Is there really a threat of exponential growth in technology today, where the Terminators are going to take over and we’re at risk in the near term?” No, I don’t think so.

The way I would phrase it is: the closer you are to actually working in machine intelligence and AI, the less worried you are, because you know how shitty the systems really are today. They’re not capable of plugging themselves in and turning a wheel on their own. There’s no connection between what the computers are doing and solving general complex problems. Let me give you an example: the type of system that we’ve captured in the brain is equivalent to a cubic centimeter of our visual cortex that is dedicated to identifying faces in pictures. We now understand that piece of the brain so well that we’ve got artificial systems that outperform humans at that task. Great, we can identify faces in pictures. We’re not quite at steering a car yet. We’re at extracting sentences from text and speech, but we’re not very good at extracting what they mean; sarcasm is beyond us.

So we’ve only replicated little bits. Imagine that what we have solved is like a cubic centimeter at the back of the head, in the visual cortex, for face identification, and maybe a cubic centimeter of the auditory cortex, so that when we hear a noise, we can parse a word out of the stream of speech. Now suppose a doctor tells you what your diagnosis is and asks whether you’d like to be treated. There are economic considerations, ethical considerations, complexities of family management, contagion, and all of these things implicit in that discussion, all part of the framework that any human knows by the time they’re 18 or older. We have not built any of that…

For more of our discussion, you can watch the whole Fireside Chat with Phillip Alvelda, or listen to the podcast version, below.

To check out previous Fireside Chats and to make sure you don’t miss any future updates, subscribe to our newsletter or follow us on YouTube, LinkedIn, Twitter, Facebook, or our podcast. If you enjoyed this episode, kindly leave a review on iTunes.

About Impetus Digital

Impetus Digital is the spark behind sustained healthcare stakeholder communication, collaboration, education, and insight synthesis. Our best-in-class technology and professional services ensure that life science organizations around the world can easily and cost-effectively grow and prosper—from brand or idea discovery to development, commercialization, execution, and beyond—in collaboration with colleagues, customers, healthcare providers, payers, and patients.
