Sept. 25, 2024

Will AI save Healthcare? - Neil Naik: Physician, Leader and Entrepreneur

In my conversation with Dr. Neil Naik, a family physician from Waterloo with a strong interest in tech, we explored the exciting future of healthcare. He shared his journey and his belief in AI's power to personalize medicine in ways we haven't seen before.

We discussed AI's potential in surgery and mental health, and also tackled the complex ethics of AI consciousness and rights. Dr. Naik highlighted the real-world hurdles of regulations and payment models for AI in healthcare. What was particularly fascinating was his perspective on how AI might see medical patterns we can't, potentially leading to new ways of diagnosing illnesses.

Overall, our chat painted a picture of a future where technology and human doctors work together to create a more tailored and effective healthcare experience.

It was a pleasure speaking with Neil. We talk about:

1. Introduction

2. AI chatbots for Healthcare

3. The liability problem with AI and medicine

4. The reimbursement problem with AI and medicine

5. Should we require explainability for AI clinical adoption?

6. Can AI be conscious and should it have rights?

7. Should human creations be valued more than AI creations?

8. Future of physicians and AI in healthcare

Neil Naik: / neil-naik

Rishad Usmani: / rishadusmani

1
00:00:00,000 --> 00:00:01,400
I'm really excited for this, Neil.

2
00:00:01,400 --> 00:00:02,560
Thanks for joining me today.

3
00:00:02,960 --> 00:00:03,400
For now.

4
00:00:03,400 --> 00:00:03,960
Thank you.

5
00:00:04,600 --> 00:00:05,520
Yeah, of course, man.

6
00:00:05,520 --> 00:00:09,520
And if you can, give us a bit of an intro for our audience, and then we'll take it from there.

7
00:00:10,080 --> 00:00:10,560
Absolutely.

8
00:00:10,800 --> 00:00:11,800
Well, my pleasure to be here.

9
00:00:11,800 --> 00:00:19,080
Um, I've heard a lot of things about your podcast and, uh, I'm privileged to be able to talk to you and find out more about myself.

10
00:00:19,120 --> 00:00:25,120
That's what I love about these sorts of interviews: I get to learn something new about myself and walk away thinking, huh, I really messed up.

11
00:00:25,120 --> 00:00:31,800
But, uh, so, uh, I'm a family doctor who practices in Waterloo, Ontario.

12
00:00:32,120 --> 00:00:34,640
Um, I'm originally from Scarborough.

13
00:00:34,960 --> 00:00:47,800
Um, and just took this interesting journey of self-discovery along with med school, residency, and finding my place in the health tech world.

14
00:00:48,280 --> 00:00:52,240
Um, so, you know, from med school, I went over to Ireland.

15
00:00:52,240 --> 00:00:56,560
My sister went to med school in Ireland and I was like, all right, that's pretty cool.

16
00:00:56,560 --> 00:00:58,440
I, uh, I don't want a desk job.

17
00:00:58,480 --> 00:00:59,920
Funnily enough, I have a desk job now.

18
00:01:00,320 --> 00:01:01,720
Uh, I don't want a desk job.

19
00:01:02,240 --> 00:01:06,480
Um, I want to, I want to do something that has impact.

20
00:01:06,480 --> 00:01:08,240
And so I was like, yeah, let's go be a doctor.

21
00:01:08,240 --> 00:01:16,440
And so I followed my sister over to Ireland, did my med schooling there, got into residency in Newfoundland, where I did something called the NunaFam program.

22
00:01:16,440 --> 00:01:21,480
So it's six months in Nunavut, uh, and the rest of it in the Maritime provinces.

23
00:01:21,480 --> 00:01:33,360
Um, did my residency out there and then had a choice between coming back to town here in Waterloo, going out to Kingston, staying up in Iqaluit, or going over to St.

24
00:01:33,360 --> 00:01:35,960
John's, and chose Waterloo for the tech.

25
00:01:36,000 --> 00:01:37,560
And here I am with tech.

26
00:01:39,320 --> 00:01:41,360
Tell me about your journey with tech.

27
00:01:41,400 --> 00:01:47,720
How did you get involved with tech and how do you see tech influencing healthcare in the future?

28
00:01:47,720 --> 00:01:53,360
That was good. Uh, so I was a chicken-pecker on the keyboard, like, not the ten-finger typing.

29
00:01:53,360 --> 00:01:56,000
I was very much like two fingers tapping away.

30
00:01:56,360 --> 00:01:59,880
And, you know who noticed? My sister again.

31
00:02:00,400 --> 00:02:02,680
Uh, she saw that and she's like, what's wrong with you?

32
00:02:02,680 --> 00:02:05,120
Go do a typing class, but I was a lazy kid.

33
00:02:05,560 --> 00:02:07,960
And so I was like, nah, the two fingers work great.

34
00:02:07,960 --> 00:02:09,880
I'm typing at 20 words per minute.

35
00:02:09,880 --> 00:02:11,480
Why would I ever need to go faster than that?

36
00:02:11,480 --> 00:02:17,560
Um, so she convinced my mom, and I was pissed off at the time, but now, looking back, it was one of the best things she did.

37
00:02:17,800 --> 00:02:23,520
She convinced my mom to send me to a computer course where they teach you how to program,

38
00:02:23,520 --> 00:02:34,680
teach you how to become Microsoft C++ certified, a troubleshooter for computers, and how to build, disassemble, and fix them.

39
00:02:35,000 --> 00:02:37,640
Did that for three years.

40
00:02:37,640 --> 00:02:43,560
It was mind-boggling how different computers are from us, and yet how similar they are.

41
00:02:43,960 --> 00:02:55,000
And one of the basic tenets is, one, okay, stop and restart the computer, but two, if the computer goes wrong and you've checked all the parts, the problem isn't the computer, it's you.

42
00:02:55,480 --> 00:03:01,360
And it really taught me about user experience and user interface: was the process wrong?

43
00:03:01,360 --> 00:03:03,120
I mean, obviously there could be a problem

44
00:03:03,120 --> 00:03:07,640
in the machine itself, but for the most part it's: was the process wrong, and why am I making a mistake along the way?

45
00:03:07,640 --> 00:03:10,640
And that's really changed the scientific methodology of how I think.

46
00:03:11,040 --> 00:03:23,400
So then that led me into the whole world of programming and automation. You know, back when I was 12 or 13, I was automating stuff on my computer.

47
00:03:23,400 --> 00:03:29,360
I was doing robotic process automation through the mouse, but I was doing a lot of things.

48
00:03:29,360 --> 00:03:39,360
So, robotic process automation through mouse movements, back when I was 14, because I was getting bored of having to convert all my photos over to a different format.

49
00:03:39,360 --> 00:03:47,360
And throughout med school and residency, I was continuously the tech guy.

50
00:03:47,360 --> 00:03:51,360
When I got back from residency, it was like, what do I want to do with my life?

51
00:03:51,360 --> 00:03:58,360
Um, and I totally saw the innovation, the fast-paced nature, as being the foundation of the future.

52
00:03:58,360 --> 00:04:09,360
Um, AI had been around for a while, but it was just coming back into the public sphere after going through that moment, the ten years of silence.

53
00:04:09,360 --> 00:04:14,360
I forget what the term is, but ten years where nobody heard anything about AI, and then it came back up.

54
00:04:14,360 --> 00:04:17,360
Um, I was like, you know what?

55
00:04:17,360 --> 00:04:23,360
What's medicine going to be? Do I memorize everything from that textbook from med school? Absolutely not.

56
00:04:23,360 --> 00:04:31,360
I think computers are just smarter and they keep getting smarter. And I think we have this tendency of thinking, well, we're at the peak of our civilization, we can't get smarter. No, no.

57
00:04:31,360 --> 00:04:46,360
We have a long way to go. And that's actually one of the reasons why I have this image in my background of a Dyson ring, because that's, you know, a Kardashev Type II civilization of the four Kardashev levels there are.

58
00:04:46,360 --> 00:04:55,360
We are nowhere near that; we're a Kardashev Type 0. And science fiction has really shown us that we're still continuously innovating.

59
00:04:55,360 --> 00:05:06,360
And technology is going to be a part of that. And so as a physician, where am I going to be? Am I going to be that clinician who says, nope, technology is where it's at, it's not going to get better?

60
00:05:06,360 --> 00:05:17,360
It's all about that human-to-human interaction. Or is it going to be all technology, where it's, hey, we don't need humans anymore? Or is it going to be a hybrid version in between?

61
00:05:17,360 --> 00:05:25,360
I think the truth is it's most likely going to be a hybrid version. And I want to be a part of that journey and help figure out what that pathway looks like.

62
00:05:25,360 --> 00:05:38,360
And so I was like, all right, I want tech. Where do I go for that? My sister, again, was living in Waterloo. And she's like, why don't you come to Waterloo, check out the scene, figure out what you want to do.

63
00:05:38,360 --> 00:05:49,360
And I considered moving here. She had her other reasons too for wanting me to come to town and stuff like that. So I came here and loved the entire ecosystem.

64
00:05:49,360 --> 00:06:02,360
And from there, I made myself a couple of business cards, different from my office business cards, and went out to hackathons and conferences and stuff. And it just took off from there.

65
00:06:02,360 --> 00:06:08,360
Oh, and I became an advisor for a couple of not-for-profit organizations.

66
00:06:08,360 --> 00:06:20,360
And I think what's interesting there is that the future of tech, and of healthcare in general, may incorporate tech-to-human interactions versus human-to-human interactions.

67
00:06:20,360 --> 00:06:33,360
I think most of us would say that AI now can pass the Turing test, especially as a chatbot. It's very difficult to differentiate whether you're talking to a human or not.

68
00:06:33,360 --> 00:06:42,360
So, here's my question.

69
00:06:42,360 --> 00:06:57,360
What are your thoughts on a completely AI-guided mental health counselor? Do you think it's a net positive or a net negative, and how do you navigate a future where there is no human interaction in healthcare for specific disease processes?

70
00:06:57,360 --> 00:07:07,360
You really hit on it; there's a few different parts to this one. Let's take the basic and easy one.

71
00:07:07,360 --> 00:07:11,360
If I were to have high blood pressure.

72
00:07:11,360 --> 00:07:14,360
There are five different classes of medication.

73
00:07:14,360 --> 00:07:23,360
Each class of medication has anywhere from two to 15 candidates within it all over the world.

74
00:07:23,360 --> 00:07:31,360
What a typical clinician does is they memorize one to two drugs per class.

75
00:07:31,360 --> 00:07:42,360
And rinse and repeat for every patient. And we have an order, right? We hit the ACEs and ARBs first, then we'll throw in a calcium channel blocker, then we'll throw in a beta blocker, then we'll throw in an alpha blocker.

76
00:07:42,360 --> 00:07:44,360
And we just rinse and repeat.

77
00:07:44,360 --> 00:07:49,360
But how do we know that's the right drug for the right person?

78
00:07:49,360 --> 00:08:01,360
You know, for me candesartan may be an okay drug, but losartan would have been way better; but for you it would have been completely different. We don't know that, because we don't do phenotyping, we don't do genotyping.

79
00:08:01,360 --> 00:08:10,360
I think that's where the value of the human-to-AI, or human-to-technology, piece comes into play.

80
00:08:10,360 --> 00:08:23,360
So I would throw my genotype and phenotype into pattern recognition that goes beyond what a human clinician can do, and actually do the best evidence-based medicine we could possibly do.

81
00:08:23,360 --> 00:08:32,360
So that instead of just giving candesartan to every single patient, and if that fails giving telmisartan, we're going to say, hey, you know what, Neil, you need olmesartan.

82
00:08:32,360 --> 00:08:39,360
And Rishad, you know, just avoid that entire class of medication; we're going to jump over to ACE inhibitors and go for ramipril for X, Y, and Z.

83
00:08:39,360 --> 00:08:48,360
Because studies show that patients in this category with your phenotype had these issues. That's something I can't do mentally.

84
00:08:48,360 --> 00:09:01,360
And not just me; I don't think any human can do that mentally, it's just pattern recognition. So we're going to get to this point where the common conditions that we know a lot about, we're going to be able to do this over and over again.
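
An aside for readers: to make this pattern-recognition idea concrete, here is a minimal Python sketch of genotype- and phenotype-guided drug selection. Every marker name and drug mapping in it is a made-up placeholder for illustration, not clinical guidance; a real system would learn these associations from outcome data rather than a hand-written table.

```python
# Toy sketch of genotype/phenotype-guided antihypertensive selection.
# All markers and mappings below are HYPOTHETICAL, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Patient:
    name: str
    genotype: set = field(default_factory=set)   # hypothetical pharmacogenomic markers
    phenotype: set = field(default_factory=set)  # observed traits / comorbidities

# Rules a model might learn from outcome data (hand-written here as a stand-in).
RULES = [
    ({"MARKER_A"}, set(), "olmesartan"),           # hypothetical ARB responder
    ({"MARKER_B"}, {"cough_risk"}, "amlodipine"),  # hypothetical: avoid ACE inhibitors
    (set(), {"diabetes"}, "ramipril"),             # hypothetical default ACE inhibitor
]

def suggest_drug(p: Patient) -> str:
    """Return the first rule whose markers and traits the patient matches."""
    for genes, traits, drug in RULES:
        if genes <= p.genotype and traits <= p.phenotype:
            return drug
    return "candesartan"  # the one-size-fits-all fallback the speaker critiques

print(suggest_drug(Patient("Neil", genotype={"MARKER_A"})))  # olmesartan
print(suggest_drug(Patient("Rishad", genotype={"MARKER_B"},
                           phenotype={"cough_risk"})))       # amlodipine
```

The point of the sketch is the shape of the decision, a lookup conditioned on the individual rather than a fixed first-line order; the real version would be a learned model, not three if-statements.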

85
00:09:01,360 --> 00:09:11,360
Then that's going to translate over to, well, what about surgeries and the hands-on approach of things? Well, we have anatomical variants in the body.

86
00:09:11,360 --> 00:09:18,360
We have different conditions, and the way they spread and how they affect things. So it's a lot more complex.

87
00:09:18,360 --> 00:09:25,360
But again, as we learn more and as we start building video- and image-to-knowledge generators,

88
00:09:25,360 --> 00:09:46,360
that's really going to start peeling away as well. And I'd even argue that we might be able to move away from most surgeries through newer and newer medication discoveries. Like, we've heard about the FDA approving organs-on-chips as an alternative to regular animal trials

89
00:09:46,360 --> 00:10:00,360
and human trials. We've heard about AlphaFold folding new proteins, and, you know, both of us know the newest trend right now is monoclonal antibodies and all these specific receptor-targeted medications.

90
00:10:00,360 --> 00:10:05,360
Let's look at semaglutide: it specifically hits GLP-1 receptors.

91
00:10:05,360 --> 00:10:19,360
And when we look at tirzepatide, it hits GLP-1 and GIP receptors. And so we're starting to hit receptor-level accuracy without causing all the additional side effects, as we start building these proteins and being able to fold them better.

92
00:10:19,360 --> 00:10:34,360
Virtually, digitally, we can actually start interacting with them. And so now we're starting to say, hey, do we really need bariatric surgery, or can we deviate from that and go towards medication? Do we really need to do a rectal cancer excision?

93
00:10:34,360 --> 00:10:37,360
Well, there was a study from,

94
00:10:37,360 --> 00:10:48,360
don't quote me on this, I think it's like 2019 or 2020. It represents 4% of all rectal cancers with a specific type of mutation, but they showed 100% remission.

95
00:10:48,360 --> 00:10:50,360
Phenomenal.

96
00:10:50,360 --> 00:11:06,360
Now, when we hit mental health: mental health is so weird, and not that the conditions are weird, I'm saying it's weird from a tech point of view, because now you can actually feed it your entire email history, calendar, everything, and it can start to learn more about you.

97
00:11:06,360 --> 00:11:13,360
Is it better, is it not? Well, some people get disturbed by it. Or have we just discovered our forever doctor?

98
00:11:13,360 --> 00:11:28,360
You know, ChatMD is the joke I go by, which is: here's a doctor that understands your cultural issues and your personal issues, can age with you, and is a face that you trust.

99
00:11:28,360 --> 00:11:42,360
So I'm probably going to say, yeah, you know what, tech will probably be better than what I can do as a human. I think, like any good beard, it's going to be ugly as it grows, but once it gets nice and big and bushy, it looks gorgeous.

100
00:11:42,360 --> 00:11:46,360
I have, I think about 30 questions here.

101
00:11:46,360 --> 00:11:57,360
I'll start with an easier one. If you had a one-way ticket to Mars, for yourself and your loved ones, to start a healthcare system there, would you go?

102
00:11:57,360 --> 00:12:13,360
Me, yes. Wife, no. She's already said no; I've asked her this question. My first love is not medicine; my first love is actually being an astronaut. That's always been my journey in life. And, you know, when I started dating my wife,

103
00:12:13,360 --> 00:12:24,360
I told her that on, I think, the first or second date, being like, look, this is my passion; if I can go to space, I'm going. It's still a point of contention between us.

104
00:12:24,360 --> 00:12:36,360
The problem of AI in medicine, of personalized medicine and some of the things you mentioned, isn't a technical problem at this stage. It's a regulatory problem and a reimbursement problem, from my perspective.

105
00:12:36,360 --> 00:12:47,360
Who do you think will be liable if this AI makes a mistake? Will there be shared liability between the AI and the patient? And what do you see reimbursement looking like?

106
00:12:47,360 --> 00:12:56,360
And you know what, I think we make mistakes as well. I know I have before.

107
00:12:56,360 --> 00:13:02,360
I think, from a reimbursement point of view, that's one of the harder ones right now.

108
00:13:02,360 --> 00:13:17,360
AI makes my life easier, so I'm more likely to pay for it at this point. And do I expect somebody to pay for my use of an AI? No, because as a clinician in the public sphere right now,

109
00:13:17,360 --> 00:13:37,360
we're actually more contract-based, and so anything that makes me more efficient gives me that advantage. If I think about being salaried, and you're trying to improve my efficiency, well, then I don't really have to worry about reimbursement; I'm more worried about just being replaced at that point.

110
00:13:37,360 --> 00:13:56,360
So, regulatory aspects. Yeah, that's probably the only thing that's protecting many clinicians. Don't get me wrong, I don't think AI has reached that point of accuracy just yet; we do see mistakes. I've seen it make mistakes in prototypes that haven't hit markets yet, and, hey, you know what, the

111
00:13:56,360 --> 00:14:03,360
ambient AIs I use still can't parse some of the natural language that we use.

112
00:14:03,360 --> 00:14:07,360
And so when we look at the regulatory piece.

113
00:14:07,360 --> 00:14:17,360
My personal opinion, as a reflection of what's happening around the world: we're most likely going to see the introduction of AI in areas where they don't have clinicians at all.

114
00:14:17,360 --> 00:14:20,360
And it's just pure scalability.

115
00:14:20,360 --> 00:14:34,360
So someone can log on and, for 20 cents, analyze whether or not they have tuberculosis versus a bacterial pneumonia versus a viral pneumonia. Which, by the way, the technology exists; there are plenty of papers published on this

116
00:14:34,360 --> 00:14:37,360
in Nature.

117
00:14:37,360 --> 00:14:38,360
Then.

118
00:14:38,360 --> 00:14:46,360
Yeah, I'm going to do that rather than having to have somebody listen to my lungs with a stethoscope, and get an X-ray and get exposed to radiation.

119
00:14:46,360 --> 00:14:57,360
And then, you know what, it's just a VPN away. There are tons of VPNs available. And so if you have somebody who comes to a country where they don't have access to primary care,

120
00:14:57,360 --> 00:15:09,360
but, you know, you have this cough, you work an hourly job, you don't have time to take time off to go see a clinician who only works nine to five and can't do it after hours, and you're living paycheck to paycheck.

121
00:15:09,360 --> 00:15:25,360
And, you know, another country has this technology and we put geographic limitations on it because of regulations. Well, what stops me from just VPNing into a server in that country, downloading the app, coughing into the cough analyzer, and it saying, you know what, it's viral, you're good?

122
00:15:25,360 --> 00:15:31,360
I don't need to do this anymore, knowing that that's more sensitive and specific than some of our tests are.

123
00:15:31,360 --> 00:15:36,360
And so why wouldn't I be using the technology if it exists?

124
00:15:36,360 --> 00:15:44,360
I don't have an answer for you. I think there's a couple of problems here, right? In Canada our population is so small; at 30 million people, we're not a big enough market.

125
00:15:44,360 --> 00:15:54,360
What we have in value is more so the fact that we are, you know, a developed country with a public healthcare system that will pay for this stuff.

126
00:15:54,360 --> 00:16:06,360
But we do have regulations around our healthcare products and stuff like that, which is good, don't get me wrong; we need those regulations, because there are also a lot of technologies out there that don't work that we need to protect ourselves from.

127
00:16:06,360 --> 00:16:19,360
And so part of the question is, what does that process look like? Do we allow for easy entrance for the right companies, while making sure we weed out the ones that we don't want, that don't pass muster?

128
00:16:19,360 --> 00:16:31,360
So, going back to the example, I could easily VPN into a country and use an app that doesn't give me the right analysis, and that puts my health at risk.

129
00:16:31,360 --> 00:16:44,360
So I think we protect ourselves a lot more with those regulations, but the question is that balance: are we actually slowing down innovation and adoption, versus are we protecting humans at the same time?

130
00:16:44,360 --> 00:16:59,360
I work in urgent care; I'm sending people for X-rays and doing TB tests all day. It seems like if a cough-detection tuberculosis technology exists, that's an easy yes. But let's table that discussion for a later date.

131
00:16:59,360 --> 00:17:14,360
Do you think we have a soul, or are we millions and millions of synapses interacting in specific patterns?

132
00:17:14,360 --> 00:17:30,360
Okay, so don't get me wrong, I do believe we're synapses that interact in specific patterns, and consciousness is a combination of our entire universe working together, or our entire brain working together with the universe. Whether or not we have a soul, well,

133
00:17:30,360 --> 00:17:45,360
what is a soul? That is more of an ethereal idea. I think there's more to the universe than what we perceive in our four dimensions. We know mathematically that there are 11 dimensions to our universe.

134
00:17:45,360 --> 00:17:58,360
M-theory shows that, string theory shows that. We know we don't interact in these other seven dimensions; we do length, width, height, and time, but not the other seven.

135
00:17:58,360 --> 00:18:07,360
So the question then is, if there is or is not a soul, then the question also is, is there or is there not living? Does living exist, does death exist?

136
00:18:07,360 --> 00:18:21,360
Does death just mean you move on to another dimension? Are we all just living in a matrix? Am I actually just the byproduct of a dream, and all of you guys are just characters in my other dimension's brain, right?

137
00:18:21,360 --> 00:18:35,360
I was just reading the other day about black holes, and a theory that each black hole you pass through is a dimension lower. So if I were to fall into a black hole, am I going to a two-dimensional space? Am I dead at that point, or am I alive at that point, or different in

138
00:18:35,360 --> 00:18:43,360
other ways? And so I think the idea of a soul, and whether or not there are dimensions, is a whole different concept.

139
00:18:43,360 --> 00:18:56,360
And do I think there's more to the universe than we know? Absolutely; we know that mathematically. Have we discovered it yet? No, and that's what makes it so much more exciting.

140
00:18:56,360 --> 00:19:08,360
But I'm not going to wait around for it. There are people who are suffering and dying right now. My chemical, organic brain can make things happen, can do good.

141
00:19:08,360 --> 00:19:15,360
I don't know what the soul is, but I'm still going to just go out and do what I think is right by the people.

142
00:19:15,360 --> 00:19:26,360
There's a word here which I cannot define, and I'm hoping you can define: consciousness.

143
00:19:26,360 --> 00:19:31,360
Self awareness.

144
00:19:31,360 --> 00:19:43,360
The best quote I've heard about consciousness is the universe itself trying to figure itself out. I think that's how I see consciousness: us trying to figure out, consciously, what the universe is.

145
00:19:43,360 --> 00:19:48,360
Do you think AI can possess consciousness? Yes.

146
00:19:48,360 --> 00:19:57,360
It's the same reason why I think an octopus has consciousness too. It's not a human-specific trait. It's a combination of synapses.

147
00:19:57,360 --> 00:20:05,360
Yeah, sorry. Is it a trait specific to a carbon life form, or can a silicon life form possess consciousness?

148
00:20:05,360 --> 00:20:14,360
I think a silicon life form can possess consciousness; scientifically, it's the same group of the periodic table. We can keep going down that entire group.

149
00:20:14,360 --> 00:20:27,360
It doesn't have to be silicon. Can it be gold? I don't know. It depends on how DNA gets formed and what consciousness really is. Could it be plasma? All it is is just synapses working together.

150
00:20:27,360 --> 00:20:39,360
Okay, so in our reality as humans, our rights and laws apply generally to conscious beings or living things.

151
00:20:39,360 --> 00:20:52,360
How do you think about the rights of AI as we build these machines, or these systems, which quote-unquote possess consciousness and maybe have emotions?

152
00:20:52,360 --> 00:20:55,360
Yeah, how do you think of AI rights?

153
00:20:55,360 --> 00:21:09,360
I think we probably need to tackle that sooner rather than later. You know, about 10 years ago I remember reading an article about India recognizing dolphins as conscious and sentient.

154
00:21:09,360 --> 00:21:24,360
A really interesting article, and it kind of got me thinking, reflecting back on a Star Trek episode about Data, the android, and whether or not he's conscious and alive and should have his own self-determination and free will.

155
00:21:24,360 --> 00:21:43,360
Essentially, there was a scientist who asked Starfleet to let him disassemble Data. The argument was: no, you're actually killing a unique life form. A life form that wants to live, to discover, that has curiosity and wants to, you know,

156
00:21:43,360 --> 00:21:48,360
move forward in life, do something new with its life, make an impact.

157
00:21:48,360 --> 00:22:04,360
And the ruling was that, yeah, you know what, Data, this android, is alive and has consciousness, and therefore has a right to self-determination and free will, and is not the property of any person. And we're about to hit that.

158
00:22:04,360 --> 00:22:09,360
Maybe in five years, maybe 10 years, maybe 20 years, but I think in our lifetime we're going to hit that.

159
00:22:09,360 --> 00:22:31,360
And that's a scary proposition for humans as a species, because for the first time ever we're starting to see something of almost equal intelligence, potentially more intelligent than we are, that we created, that we've never experienced before.

160
00:22:31,360 --> 00:22:43,360
We haven't experienced aliens; we haven't met another sentient being in living memory. I don't know about our predecessors in the Homo genus, but in living memory we don't have that.

161
00:22:43,360 --> 00:22:48,360
And so I think that's what scares people: did we just create our replacement?

162
00:22:48,360 --> 00:22:59,360
And so the future is hybridization. That's why I'm a big fan of Neuralink; it is the merger of technology, of silicon life, with carbon life.

163
00:22:59,360 --> 00:23:08,360
And I think AI hasn't seen the adoption in medicine that it has the potential to because we limit it to things that make sense to us.

164
00:23:08,360 --> 00:23:13,360
We say, look at the heart rate variability and see if this patient is sick.

165
00:23:13,360 --> 00:23:18,360
But perhaps what it needs to look at is the back of their heel.

166
00:23:18,360 --> 00:23:21,360
And by that it can say the patient is sick there.

167
00:23:21,360 --> 00:23:27,360
There are things which don't make sense to us that AI will discover. It'll look at an X-ray and diagnose diabetes.

168
00:23:27,360 --> 00:23:33,360
We'll have no idea how it did that, and we cannot understand it; maybe we're limited by our senses.

169
00:23:33,360 --> 00:23:43,360
Maybe it's looking at the seventh dimension, who knows what it's doing, but it will analyze things in a way which just does not make any sense to us.

170
00:23:43,360 --> 00:23:49,360
Now our colleges have come out with a framework for explainability in AI.

171
00:23:49,360 --> 00:24:10,360
They're saying, you know, you must be able to somewhat hypothesize; you don't have to exactly explain, because AI is a black box by definition, but you must be able to say, okay, from an X-ray of your first metacarpal joint, this AI is saying your triglyceride is eight.

172
00:24:10,360 --> 00:24:20,360
You know, how do you think about the future here, with AI coming up with things like that? And how do you see regulation fitting in there?

173
00:24:20,360 --> 00:24:24,360
And do you think we should kind of just see what AI does?

174
00:24:24,360 --> 00:24:36,360
And if, you know, AI is saying we have diabetes based on an X-ray, and we do an HbA1c and we have diabetes, and we do 20,000 of these and the sensitivity and specificity is 99.9%,

175
00:24:36,360 --> 00:24:42,360
then the new test for diabetes is an X-ray and not an HbA1c. How do you see that evolving?
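
An aside on the numbers quoted here: the 99.9% figures are hypothetical, and even a test that good has to be read against prevalence. A quick sketch of the standard Bayes calculation, with the prevalence values assumed purely for illustration:

```python
# Positive predictive value of a hypothetical 99.9% sensitive/specific test.
# The prevalence figures are assumptions for illustration, not real estimates.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.10, 0.01):
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.999, 0.999, prev):.3f}")
# prevalence 10%: PPV = 0.991
# prevalence 1%:  PPV = 0.910  -> roughly 1 in 11 positives is still a false alarm
```

So a 20,000-patient validation with those operating characteristics really would make the X-ray a credible test, which is the non-inferiority logic the conversation turns to next.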

176
00:24:42,360 --> 00:24:51,360
I agree with you. It's non-inferiority. The moment you can show that an AI can do it in a way that's non-inferior to what we have, and potentially even a superior methodology,

177
00:24:51,360 --> 00:24:59,360
it replaces it. And that's my love of the scientific method. It has nothing to do with the human attachment of 'this is the human way of doing it.'

178
00:24:59,360 --> 00:25:02,360
The question is more so, do we have a better way of doing it?

179
00:25:02,360 --> 00:25:11,360
If we have a better way of doing it, fantastic. It doesn't matter if I'm hurt that I can't look at an A1c or a cholesterol panel anymore and we're using an X-ray instead.

180
00:25:11,360 --> 00:25:16,360
Science is science and it makes no judgment one way or the other.

181
00:25:16,360 --> 00:25:31,360
What I think is really interesting here, though, as I said earlier in regards to how we see the adoption of AI: if we start protecting ourselves because we're a human-based interaction rather than a computer-based interaction,

182
00:25:31,360 --> 00:25:44,360
we're losing. The fact that we have to justify a position because we're humans is a losing battle. It's always about what the best science is. And that's it.

183
00:25:44,360 --> 00:25:52,360
All right. So when we look at, like, hey, what does AI see that we don't see? I love that. Yes.

184
00:25:52,360 --> 00:26:11,360
There's a great experiment called the double pendulum experiment. Essentially, what we did, and by 'we' I mean humans, the scientists, was feed it all the known physics laws that we had and ask it to create a formula, and it created a formula very similar to what we have to explain a double pendulum.

185
00:26:11,360 --> 00:26:26,360
But then what they did was say, all right, cool, we're going to reset the whole thing. We're actually going to remove all the variables that humans have described, and we want the AI to predict how many variables would explain this double pendulum's swing.

186
00:26:26,360 --> 00:26:30,360
And it described more variables than we know.

187
00:26:30,360 --> 00:26:44,360
And the problem with the black box, though, is we can't ask it what it found. But what we do know is it found more than the laws of physics we could describe. They reran the test and they're like, yep, it's consistently finding new variables that we can't explain.

188
00:26:44,360 --> 00:26:47,360
So it sees something that we don't see.
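
An aside for readers: the experiment described sounds like the work on having a neural network rediscover the state variables of a double pendulum from raw observations; the conversation doesn't name the paper, so what follows is only a loose sketch of the flavor of the question. It simulates a double pendulum, observes bob positions the way a camera would, and asks how many linear components the data needs. Real studies use neural networks and nonlinear intrinsic-dimension estimates; PCA is a deliberately crude stand-in.

```python
# Simulate a double pendulum (unit masses/lengths), observe bob positions,
# and ask how many linear components explain the trajectory. A crude proxy
# for the "how many state variables?" question discussed above.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81

def deriv(t, y):
    t1, w1, t2, w2 = y
    d = t1 - t2
    den = 3.0 - np.cos(2 * d)  # shared denominator for m1 = m2, l1 = l2 = 1
    a1 = (-3 * g * np.sin(t1) - g * np.sin(t1 - 2 * t2)
          - 2 * np.sin(d) * (w2**2 + w1**2 * np.cos(d))) / den
    a2 = (2 * np.sin(d) * (2 * w1**2 + 2 * g * np.cos(t1)
                           + w2**2 * np.cos(d))) / den
    return [w1, a1, w2, a2]

sol = solve_ivp(deriv, (0, 60), [2.0, 0.0, 1.5, 0.0],
                dense_output=True, rtol=1e-8)
t1, _, t2, _ = sol.sol(np.linspace(0, 60, 5000))

# What a camera sees: (x, y) of each bob.
obs = np.column_stack([np.sin(t1), -np.cos(t1),
                       np.sin(t1) + np.sin(t2), -np.cos(t1) - np.cos(t2)])
obs -= obs.mean(axis=0)
var = np.linalg.svd(obs, compute_uv=False) ** 2
print("cumulative variance by component:", np.round(np.cumsum(var) / var.sum(), 3))
```

The physical state genuinely needs four variables (two angles, two angular velocities); the surprise in the study the speaker seems to be recalling was the network settling on a consistent variable count without being told what the variables were.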

189
00:26:47,360 --> 00:27:01,360
And one of the most common things I've heard from AI skeptics is, well, have you heard that story about when they analyzed wolves versus dogs, and what the model did was look at the snow behind them and use that to decide? That's a 2012 study.

190
00:27:01,360 --> 00:27:04,360
We've moved so far past that now.

191
00:27:04,360 --> 00:27:10,360
I don't think that's a valid argument unless you're using a really poor AI model.

192
00:27:10,360 --> 00:27:21,360
I think we are moving towards that place where AI does see things and know things that we don't; whether or not it can describe them is a whole different story.

193
00:27:21,360 --> 00:27:30,360
But there are papers out now where they actually have an AI self-interrogating, or interrogating another AI, to ask it about those black boxes.

194
00:27:30,360 --> 00:27:41,360
And there's a hypothesis that our organic brain is actually just a multitude of different interrogation processes: we hypothesize,

195
00:27:41,360 --> 00:27:57,360
we test, we reformulate. That testing piece gets us to self-criticize our own models, then rinse and repeat, and we refine it over and over again. We see that in our kids as well, right, when somebody's learning how to walk.

196
00:27:57,360 --> 00:28:07,360
They trip, they fall, then they refine it, and the child gets better and better, and then it becomes second nature, and then, great, we move on to another skill.

197
00:28:07,360 --> 00:28:11,360
And that's what we do with all things.

198
00:28:11,360 --> 00:28:24,360
It's the same in our daily life, what we went through in med school, what we learned through writing. Okay, that's not how you write the letter; ABC is not CBA. Refine, rinse, and repeat.

199
00:28:24,360 --> 00:28:30,360
So do you think we should value things more if a human made them?

200
00:28:30,360 --> 00:28:43,360
Oh, this is funny. It's funny you raise that; I had this debate with a friend of mine yesterday about what value is, and whether we need to be afraid of AI disturbing that value.

201
00:28:43,360 --> 00:28:48,360
And also, what is money, and why do we value money?

202
00:28:48,360 --> 00:28:59,360
And what I realized at the end was it's all subjective; value is based off of what we want. At one point there were those non-fungible tokens, NFTs, where people were buying and selling art.

203
00:28:59,360 --> 00:29:07,360
At one point, during the pandemic, they were considered valuable, and we realized the value was always variable.

204
00:29:07,360 --> 00:29:13,360
And whether it's human-based versus non-human-based, I think that's determined by society.

205
00:29:13,360 --> 00:29:19,360
And I think it's like how we value the Olympics because it's a drug-free

206
00:29:19,360 --> 00:29:25,360
competition of the best athletes in the world. But there is a

207
00:29:25,360 --> 00:29:34,360
drug-full, hyper-enhancing Olympics as well that exists, and people are starting to catch on to that.

208
00:29:34,360 --> 00:29:49,360
So, talking about value, and specifically: what if an AI makes a medical device, versus an AI makes a song or a painting?

209
00:29:49,360 --> 00:30:01,360
Yeah, you know what, I don't think the human-based one would be any more valuable. In fact, there's an argument here that some people may make: an AI-designed device is better than a human-designed device.

210
00:30:01,360 --> 00:30:10,360
An AI-designed device, structural or whatever, could use an alloy that we haven't discovered yet, or a ceramic we haven't discovered yet.

211
00:30:10,360 --> 00:30:19,360
We already see it in advanced material manufacturing. I subscribe to a newsletter that talks about all things AI material science.

212
00:30:19,360 --> 00:30:26,360
Oh my God, the stuff it's creating. Mind you, it's in the laboratory; we haven't hit mass manufacturing. But that's where it all starts.

213
00:30:26,360 --> 00:30:40,360
We're going to see that as well in our battery technology. We're seeing unique anodes and cathodes using ceramics being produced that are hitting that near-manufacturing level, and it's going to change the way we run our practice, or sorry, our world.

214
00:30:40,360 --> 00:30:42,360
Those are AI designs.

215
00:30:42,360 --> 00:30:59,360
So that's why I don't think human-made foundations should be valued more. And somebody's going to say, well, all the way down it was humans. Yeah, but before us there were other species in the Homo genus, and before that there were whole different species; all the way down, it actually wasn't even us, all the way down to amphibians.

216
00:30:59,360 --> 00:31:01,360
Let's come back to healthcare.

217
00:31:01,360 --> 00:31:21,360
We are both physicians in our day jobs. Talk to me about the future of physicians in healthcare. How do you see AI playing a role, and how do you see private and government innovation playing a role in healthcare?

218
00:31:21,360 --> 00:31:35,360
So, I see AI starting off as what you see right now: taking over the tasks we don't want, writing notes, billing, referral letters, all the stuff that annoys us.

219
00:31:35,360 --> 00:31:53,360
Where I see it moving towards, and this is where it goes from like a Class I to a Class II healthcare device, is where it starts making recommendations. It's not going to start prescribing things, but it starts to say, hey, you know what, have you considered this drug, this medication, this laboratory test?

220
00:31:53,360 --> 00:32:12,360
It just takes it to the next level. And I see that as a win from a government point of view: a healthier population is a more engaged population. It also means less cost on the system as a whole.

221
00:32:12,360 --> 00:32:26,360
And so I don't see that being the barrier. I think the bigger role the government plays in all of this is: how do we let the innovation happen while putting up some guardrails that ensure privacy and security exist?

222
00:32:26,360 --> 00:32:38,360
I have a thing on privacy and security. I don't believe privacy exists; I think security exists. So, as an example, I want my healthcare team to have my data.

223
00:32:38,360 --> 00:32:51,360
And in that team there might be one person, five people, 100 people, in which case privacy for me does not matter. What I want is security: that only the people who need to see my records are seeing them.

224
00:32:51,360 --> 00:33:02,360
You know, it's been brought up in healthcare all the time that just because we close the curtains in the emergency room, it doesn't mean the person in the next bed can't hear the diagnosis.

225
00:33:02,360 --> 00:33:09,360
So privacy, that's not privacy, right? That's just poor security.

226
00:33:09,360 --> 00:33:14,360
What we need to do is augment security. Now, again, personal opinion on all of that.

227
00:33:14,360 --> 00:33:28,360
But yeah, we need to tackle security first, and privacy is a byproduct of that. And so that's where government comes into play: regulations, putting some guardrails up so that we don't veer off course, but also some recommendations. You know how we have a nutrition

228
00:33:28,360 --> 00:33:33,360
label on

229
00:33:33,360 --> 00:33:44,360
food and drinks and stuff like that. Well, why can't we have an AI nutrition label that says, by the way, this was tested on Southeast Asians in the age group of 20 to 40-year-olds, and should not be used outside of that?

230
00:33:44,360 --> 00:34:00,360
And what we're doing is putting into plain language what I as a clinician can do, and that AI nutrition label is mandated by a government. And so, as a result, we have that innovation and we have all those perfect things, but now we know, hey, you know what,

231
00:34:00,360 --> 00:34:16,360
this is the wrong AI model to use for detecting skin cancer, because my type of people, you know, brown-skinned South Asians, were not in that training data or test subject group, so therefore this is not a reliable AI for me.
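
An aside for readers: this "AI nutrition label" idea is close to what the machine learning literature calls a model card. A rough sketch of what such a label might contain, with every field name and value invented for illustration (no regulator currently mandates this exact format):

```python
# Hypothetical "AI nutrition label" for a diagnostic model, in the spirit of
# model cards. All names and values below are invented for illustration.
ai_nutrition_label = {
    "model": "skin-lesion-classifier-v3",
    "intended_use": "triage of suspicious skin lesions in primary care",
    "training_population": {
        "region": "Southeast Asia",
        "age_range": [20, 40],
        "skin_tones_fitzpatrick": ["III", "IV"],
    },
    "not_validated_for": ["children", "Fitzpatrick V-VI skin tones"],
    "reported_performance": {"sensitivity": 0.94, "specificity": 0.91},
    "last_evaluated": "2024-06",
}

def applicability_warnings(label: dict, patient_traits: set) -> list:
    """Plain-language warnings a clinician could act on at the point of care."""
    return [f"Model not validated for: {t}"
            for t in patient_traits & set(label["not_validated_for"])]

print(applicability_warnings(ai_nutrition_label, {"Fitzpatrick V-VI skin tones"}))
```

The mandated part the speaker describes would be exactly the `not_validated_for` line: a plain statement of who the model was and wasn't tested on.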

232
00:34:16,360 --> 00:34:32,360
So I think, for the government, from a healthcare perspective, it's scalable. You can get it to the masses; you can diagnose a whole bunch of things. Could you imagine if we could suddenly get rid of all tuberculosis, or get rid of all, you know, STDs, sexually

233
00:34:32,360 --> 00:34:42,360
transmitted diseases, in the community, because we could do at-home tests that tell us immediately whether or not we have a condition? We would be at a whole different level as a civilization.

234
00:34:42,360 --> 00:34:52,360
So, go back in time 10 years and give yourself one piece of advice. What would you tell yourself?

235
00:34:52,360 --> 00:34:55,360
Stop listening to your own voice.

236
00:34:55,360 --> 00:35:04,360
Slow down. Listen to what others have to say. I think not doing that was the dumbest thing I ever did as a kid.

237
00:35:04,360 --> 00:35:18,360
There are smart people around, don't get me wrong, and there are people who just think they're smart. And it took me time to realize that just because you are older, or have a position or something, it doesn't mean you automatically are smart; that's not true.

238
00:35:18,360 --> 00:35:26,360
But there are smart people around, and so, one, age makes no difference. You can have a smart 12-year-old; you should probably listen to them.

239
00:35:26,360 --> 00:35:41,360
You can also have a smart 90-year-old; you should probably listen to them as well. But equally so, just like how there are, you know, good clinicians and bad clinicians, there are good and bad in everything else.

240
00:35:41,360 --> 00:35:51,360
Listen first, then speak. Don't try and jump into a conversation. That was a big lesson I learned about eight years ago.

241
00:35:51,360 --> 00:36:14,360
A huge part of the personal development was living in Nunavut and interacting with people who aren't scientific, who aren't in it for the glory; they're there to simply live, be human, and enjoy life. Living in Nunavut was one of the best moments of my life.

242
00:36:14,360 --> 00:36:23,360
And there's something beautiful about, you know, the satellite being in the wrong position relative to the sun, and therefore the entire city's internet gets cut out for two hours.

243
00:36:23,360 --> 00:36:34,360
Right. And, you know, your cell phone signal being so poor that when you ask me, hey, pass me your number so we can, you know, hang out later on, it's like, no, just see you at wing night on Wednesday.

244
00:36:34,360 --> 00:36:53,360
At the same pub, at the same time. There's something beautiful about the social interactions, just talking to somebody. And there's no hierarchy; there's no, hey, you're a lawyer, so you're scary, or, hey, you're a venture capital lead, and therefore you must be super smart and scary.

245
00:36:53,360 --> 00:37:06,360
No, we're all equal. We're all really, really good at what we do, and we're all really, really bad at what we don't know how to do. And the beauty of language and learning is that we should just listen to one another.

246
00:37:06,360 --> 00:37:14,360
Yeah, so if I could go back I'd probably say, you know, shut up and listen, because you're, you're pretty dumb.

247
00:37:14,360 --> 00:37:20,360
That's well said. Thank you, Neil. It's been great to have you on the podcast, and thanks for coming on.

248
00:37:20,360 --> 00:37:23,360
Pleasure. Thanks for having me.