Ask Mike Reinold Show

Using ChatGPT and AI in Physical Therapy


The use of ChatGPT and other AI models has exploded in recent years, and you’re starting to see this in most professions, including the medical profession.

We’ve been using AI quite a bit at Champion for a variety of things and wanted to share our experiences. We go over some of the use cases, as well as different models that we’ve tried with varying success.

We do think you should be using AI. It’s an opportunity for growth.

We dig into how AI actually helps us treat, teach, and stay current without drowning in papers. From OpenEvidence to Gemini vs ChatGPT, we share real clinic workflows, movement apps we like, and guardrails to avoid hallucinations.

• why we trust OpenEvidence for fast, linked citations
• how AI reduces bias during eval prep
• when to compare ChatGPT, Gemini, and Grok
• prompts that demand sources and confidence levels
• movement analysis apps for golf and bar velocity
• ambient note tools to draft SOAP notes
• ethical guardrails on privacy, consent, and hallucinations
• practical routines for morning research sprints
• using AI to generate screeners, red flags, and alternatives
• model strengths, weaknesses, and staying literate

To view more episodes, subscribe, and ask your questions, go to mikereinold.com/askmikereinold.

#AskMikeReinold Episode 383: Using ChatGPT and AI in Physical Therapy

Listen and Subscribe to Podcast

You can use the player below to listen to the podcast or subscribe. If you are enjoying the podcast, PLEASE click here to leave us a review on iTunes; it will really mean a lot to us. THANKS!


Show Notes

OpenEvidence

Transcript

Brian Santos:
So Sean from North Carolina is asking, “Hey Champion, I have a question regarding the use of AI as a physical therapist. I’m a couple years out of physical therapy school, and I feel like I missed the boat on using AI while I was in school. It’s hard to keep up with it as it seems to keep evolving every month. How are you guys using AI in your current practice? Are there any specific apps or websites that you recommend, and why?”

Mike Reinold:
Awesome. Thank you, Brian. And I think it’s kind of cool that Sean from North Carolina was read by Brian from North Carolina. This is like a connection. I feel like you two bonded right there, so really cool. But I’m going to start this episode and say that this is probably going to be the fastest outdated episode we’ve ever recorded of all time. By the time you hear this or by the time you’re looking at this in three months, we’re probably going to have different opinions, because AI right now is bonkers in terms of how fast it’s progressing, what’s new every day. Every model is accelerating at such a rapid pace, it’s crazy. So whatever you think you’re doing now is going to be probably different in three months, but maybe we can talk some general terms and stuff like that. So Sean, first off, I hear your point on missing the boat.

I think everybody on this podcast did not grow up using AI. We didn’t use it in high school. We didn’t play around with it, trying to figure out how to cheat on our homework. So we’re probably not as good at it as some of the younger generation. But the more you play with it, the better you get. Now, let’s break this episode into two things, guys. And so I’m going to throw it at you guys. One, what are we using for AI models? Because there’s ChatGPT, there’s some others, and I’d like to hear some thoughts. I know most of us are using them fairly often now. But two is, what AI apps are we using? Because we actually have a few questions on this. I tried to put them together. But what AI apps? I know there’s some running apps. We know there’s some baseball apps, but what apps using AI have we tried?

So who would like to start? Let me see. Anyone? Jump in. Anthony, you want to jump in? What have you been trying in there? Anthony’s probably… Are you the youngest of us, technically?

Anthony Videtto:
Yeah, I think you could say that. Technically, yeah.

Mike Reinold:
But correct me if I’m wrong, you missed the boat on AI technically, right?

Anthony Videtto:
Yeah, definitely wasn’t around even just three years ago when I was in school, so I had no background for like, “Oh, I forgot to do a homework assignment. Can ChatGPT write it up for me real quick?” So I missed that boat, too. Recently, I think most of us have come across something called OpenEvidence, which has been an awesome resource and platform. I think for myself, just to quickly search, “Hey, I have a question about literature on return to hitting,” or whatever the case may be. You plug that in OpenEvidence, and it gives you a list of really great resources, articles, that kind of shows what might answer your question. Click on those links, it takes you right to the article. So that’s something that I’ve been using quite frequently if I have a question about something in the literature and I don’t really want to do a full lit review or lit search. So that’s something that I’ve been using quite frequently.

Mike Reinold:
Yeah, I completely agree. I think OpenEvidence is one of the rare ones that I trust, and maybe that’s going to be my downfall, but I actually feel like the hallucinations just aren’t quite there. But what you could do in OpenEvidence is really cool, is you can just ask a clinical question like, “What’s the evidence behind lasers for wound care?” for example. I’ve been looking into that. What’s the evidence behind special tests to differentiate lateral epicondylitis? You don’t have to be a prompt engineer, which is like a new field coming up, where you have to make a fancy prompt. You can just ask a clinical question, and it nails it. And most importantly, because I usually put the same searches in multiple chatbots just to see the difference, it gives you references that have links, and you go right to the PubMed article, and you read it. And man, OpenEvidence is number one so far for me. So I agree, Anthony. Brendan, did you have one that you wanted to talk about?

Brendan Gates:
Yeah. I like OpenEvidence a lot, and I’m with you. I think my favorite thing about it is it just lists all those references. So you can go back, and you can check those papers just to make sure that it’s not hallucinating. So I like that a lot. I don’t know about you guys, I use ChatGPT a little bit, mostly for fixing stuff at my house, but OpenEvidence has definitely been the one that I use most clinically. But I wanted to share a little story that I thought was appropriate for this. A couple years ago, I had the opportunity to present at the APTA Massachusetts Annual Conference, and the keynote speaker was actually an ED doctor from I think BMC, and his whole talk was about AI’s place in healthcare. And going into that talk, I didn’t think that there was going to be as much use case for AI in our field versus something like finance, but he proved me very wrong, very quickly.

And he gave us this story about how they’ve been using their own proprietary ChatGPT model. And the staggering example he gave was how to use it as a tool. So he told us a story about how a patient came to the ED. He evaluated him, he ran tests on him, he made a diagnosis, and his words were, long story short, he got the diagnosis wrong. Unfortunately, the patient passed away. And then after the fact, he took all of his objective data, his subjective interview, and he put it into his ChatGPT or their ChatGPT version and said, “Come up with a diagnosis.” And that ChatGPT actually spit out the correct diagnosis. And I thought that was pretty staggering. He said, “Would the outcome have been different? Would the person still be alive?” Probably not. He was pretty sick, but maybe. And I think it was interesting to hear that he said on that day, ChatGPT, or their equivalent, was better at his job of diagnosing a patient than he was.

So he told the rest of the crowd that now, when the residents and everybody else do their rounds, they actually have two screens. So they have one screen with their chart, so they can chart review their patients for the day. And then their other screen is this ChatGPT model. And so they’ll use it as a tool to plug in basically all of their assessment or their evaluation, and then they can almost double-check their thinking and bounce it off of them as if it’s like a colleague who just has access to a lot more info a lot quicker. So I think where people in healthcare can use it and be successful is maybe it’s not going to replace us… It won’t replace a sound assessment, because it can only output what you input. So you need to be able to give it good information for it to help you. But I think the clinicians that can use OpenEvidence or things like that as a tool and use it properly, it’s probably going to be pretty helpful.

Mike Reinold:
Yeah. And I think that’s such a great rationale, Brendan, why we should all be embracing this and giving it a shot. And I could tell you from even my experience, I just started using it with evaluations. A new evaluation comes in. I try to prep it with a bunch of stuff. And I’ll be honest with you, as you get older in our medical professions, there’s so much bias in our heads that… Just experiences that we’ve been through, that sometimes you lead yourself to a conclusion because your brain wanted to, versus having an open mind and being a little bit more thorough. And I’ve definitely had a couple cases where I was like, “Oh man, geez, I don’t think I would’ve looked for A and B. I just would’ve looked for C and D.” And that kind of opened me up to not just being stuck in my patterns of trying to find that. So yeah, I agree, that’s pretty good. Dave, what do you got?

Dave Tilley:
Yeah, so off of Gates, there’s a really interesting story here. So one of the big models, Stability AI, was founded by a guy called Emad, and he was a hedge fund manager who did very well in the finance world, and then essentially was kind of just looking for things to do. And it came to be that his son was diagnosed with a very rare form of autism when he was born. And so he had the resources and opportunities to pretty much find the best doctors and all this kind of stuff. And essentially when he went to these different neurologists and doctors, they said that his form of autism is very rare and hard to treat, and that there is an ungodly amount of evidence coming out every day on that field of autism, that it’s really hard for neurologists to read and process and see what’s real, what’s not, the methods.

So think about a thousand papers on autism coming out maybe every few months. It’s hard to keep up with that. So he pretty much said, “Okay, well, I’m going to build an AI model to help process and understand the research very well.” And he essentially did that, and he built Stability AI off the back of helping doctors better diagnose his son with this rare form of autism. And the doctors were essentially saying that the ability, to Gates’ point, to have my own brain, my clinical judgment, but have something analyze 100 to 200 articles for me in 10 to 15 minutes while I’m doing something else and then I can go back and use my clinical judgment or whatever, that helped his son find a very successful treatment method for his type of autism, which the neurologists were still there, everyone was still there, but they can’t read 100 to 200 articles per day.

And so along the lines of OpenEvidence, I think that’s where it maybe summarized a lot of our thoughts, is we have our clinical judgment. And I think the opposite of… My brain is always like, “AI can’t be empathetic, AI can’t probe better questions, AI can’t be…” When someone’s starting to tear up in an eval because they’re going through something really hard, they can’t be there for that, but they can definitely be there. Those models can help us process. Somebody walks in with something you haven’t seen in five to six months, and it’s like, “Give me the current evidence on 18-year-old hip labral repairs with acetabular, whatever this fracture is that I haven’t seen in six to eight months before.” That’s where I think I’m finding the most use cases now, is it still goes through the lens and filter of me, but it helps me process a volume of information that’s impossible for me to do on my own.

Mike Reinold:
And call me crazy, Dave. When you wake up in the morning, and you’re preparing for the day… I know we all kind of do similar… I’ve been preparing for my day all morning. Me jumping in on OpenEvidence and looking up some stuff because I got a new patient coming in, and I’m looking up for recent stuff. It’s refreshing. I feel like I love it. I feel like, “Okay, I feel good about myself.” I feel confident. I learned a couple things this morning. And I’m old. To me, it’s exciting to be able to do that. And before, it was daunting. That’s what everybody says. When we do episodes on how to stay current, everybody says, “It’s too much. I don’t know. There’s too much.” And they’re right. There’s too much crap getting published in so many journals and stuff like that. You don’t know what to read. So yeah, that’s awesome. Brendan, did you have something again?

Brendan Gates:
Yeah, it reminded me when Dave said the empathy piece. This same keynote speaker that I was talking about, that was another part of his talk. They actually ran some studies at BMC, and that was a big concern. Was the AI able to be empathetic? And after their research, which I don’t have, so I can’t prove it, but what he said was that the AI models were actually more empathetic than a lot of the doctors. And I thought that was crazy.

Mike Reinold:
Crazy, depressing, and disappointing. I don’t even know. It’s like that should be the most empathetic field, but you just become a robot, I guess. Len, what do you got? So as the senior, as the elder…

Lenny Macrina:
Senior, right.

Mike Reinold:
…Elder physical therapist without a doctorate degree, just whatever. Tell us about your experience with AI. Is it confusing for you?

Lenny Macrina:
Yeah, I use this new concept called AOL. You dial into this. No.

I love everybody’s responses, but I think we’ve kind of lost what the question was, which is some apps and websites that we’re using. So OpenEvidence is a great one, but I think also… And I think you kind of touched upon it, where you put things in different AIs to test. I think for me personally, ChatGPT obviously, and I’ve subscribed to the $20 per month plan, which is another subscription, because I think it’s a little better. But it is a little crazy at times, so you got to be careful. I think also Google Gemini is a really good one. It has access to everything in Google: YouTube, search, all the information that Google indexes. And I’ve also been playing with Grok, which is X, Twitter, whatever, Elon Musk’s app as well. And I find that one to be pretty reliable as well. As for apps, I use Swing Coach for golf. That one’s maybe more for me personally, but it’s a good one.

Mike Reinold:
I was hoping you were going to talk about this. That was perfect. Yeah, exactly. Tell everybody a smidge about that too, because even as PTs, there are similar things. I don’t want to break your thought, but that’s important. What does that app do, and what do apps like that do?

Lenny Macrina:
So the app basically is watching me move, and it’s detecting certain things that it knows are critical. For example, in the golf swing. But I think there’s also other ones for, and I think Gates, I don’t want to put words in your mouth, but I feel like there’s other ones for like watching people deadlift or watching people squat and watching velocity of movement and stuff like that. And I don’t remember the names off the top of my head, but those are valuable. So it’s watching you move. It’s sensing where the club is in space compared to maybe a pro golfer or the elite golf swing. And it’s giving you feedback like, “Your club is too steep. Your club is too shallow.” And it points you to videos to help you improve whatever it is. For me, it would be my golf swing.

So I think there’s those concepts out there as well that people need to try to find, especially for us. Like we said, we use GymAware for movement and velocity-based movement stuff, but there are apps out there… You just record the video of them moving, and it will tell you how fast they’re moving. It’ll give you a velocity-based kind of input or output of how fast that was, and probably give you feedback on that, too. So I think, again, there’s a bunch of apps out there. This is going to be outdated in a few weeks or a few months, but I think right now that’s what kind of I’m using and what is helping me with putting PowerPoints together with patient care, and also just kind of observing myself and others and how they move. So yeah.

Mike Reinold:
Awesome. Thanks, Len. Pope, you got anything you want to add? I know you’re embracing it quite a bit. And I think we’ve covered a bunch, but what’s your experience?

Dan Pope:
Yeah, I’m definitely afraid of AI taking over my career and my business, so I’m trying my best to stay on top of it. I haven’t looked into this. I’ve heard this from several physicians, and I know you could do it for physical therapy, but having some apps that will listen to your evaluations and just write a SOAP note for you, I think that’s another no-brainer. I haven’t looked into this myself. I think our note-taking is very easy, so I haven’t, but just me being lazy. The other thing I will say is that I use ChatGPT almost exclusively. I haven’t messed around with some of the other ones, but what I’ve found is that I was very frustrated with ChatGPT initially, and what I learned over the course of time is A, ChatGPT is getting better, but I think it’s important that you just start messing with it, meaning that you have to learn how to prompt appropriately.

So you can tell ChatGPT that you’re a physical therapist, you work with patients, the only information I want is from PubMed or randomized controlled trials. How would I treat this specific person? And if it knows all that information, it can give you something very similar to OpenEvidence, where you can say like, “Hey, I want the citations for every single thing you’re saying.” And then if you don’t know whether one of the things it’s saying is accurate, you can ask it, “Where’d you get it from? Why do you think that? How certain are you about this?” And what I will say is that even OpenEvidence will hallucinate or give you wrong information. But the other thing is that I think older folks are getting scammed by AI videos. They’re like, “Oh my gosh, did you see what Michael Jackson did?” “Mom, Michael Jackson has been dead for…” You know what I mean?

Just because you can now make those videos online, it’s hard to tell if you’ve never looked at them. But I think with AI, you have to interact with it, and you can start to figure out where you’re like, “I think this is wrong.” And then you can dig a little bit deeper because that’s one of the fears, is that it’s hallucinating, it’s giving us wrong information, so we can’t use it. But I think that if you use it enough, you start to figure out where this seems like it’s wrong, and then you can try to come up with the right information by asking it closer or looking at the actual sources, so on and so forth. So I think, just try to embrace it. To me, it’s like social media. I guess I’ve been around long enough to kind of go through that period and be frustrated by it because I didn’t like it, didn’t want to learn more about it. But as you learn more and delve into it, I think you get better at it.

Mike Reinold:
Awesome. Thanks, Dan. And I would just wrap it up and summarize with a couple of points on my thing. ChatGPT and Gemini have been going back and forth. Today, Gemini is far ahead of ChatGPT. And it’s crazy, I don’t think I would’ve said that about Gemini’s last model, but Gemini right now is fire. It’s been so much better for me. And I still put a lot of prompts in both just to compare. But Gemini is still winning right now. So I’d say maybe this is bad advice because you should probably go all in on one, just to master one, but gosh, they’re essentially the same in terms of that. But two other tips for you. One is you can set up personal context in both of these. So go into your settings, and two things that I added. One was, it’s called the truth protocol.

I don’t know what that is, but just Google it, you’ll find it. But I have this very long thing in mine that says, “You’re acting under the truth protocol. Don’t guess. Give me verifiable sources,” whatever. “Give me the percentage of how accurate you think this is.” And you put that in, and then every answer it gives you is based off that. And then you can create a custom Gem in Gemini or a custom GPT in ChatGPT and just say, “I’m a sports physical therapist, blah, blah, blah. Only give me answers from verified sources like PubMed,” like Dan said. You can put those into your settings so that way, you don’t have to do it every time, and then it just kind of feeds it out. So that’s what I would say to experiment with. And I would say start slow. In the morning, you got some evals coming in, type it in.

Say, “Hey, I got this person coming in. What should I be looking for? What should I do?” And I actually think it’s like, right now AI is trying to impress you. You’ll ask it like, “Hey, what are some special tests for lateral epicondylitis?” And it’s going to say, “Yeah, but don’t forget, also ask for these things in subjective and here’s some red flags to look out for.” It goes above and beyond your answer. So just try to embrace it and start, and I think that’s where you’ll get going. So hopefully that helped. If you have questions like that, head to mikereinold.com, ask away, and please subscribe. We’ll see you in the next episode.
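For readers who want to take the “personal context” tip from the episode beyond the chat interface, the same idea maps onto the system-prompt pattern most chat APIs use. Below is a minimal Python sketch, assuming a generic chat-completions-style message format; the prompt wording and the `build_messages` helper are illustrative, not the actual “truth protocol” text referenced in the episode.

```python
# Sketch of the persistent-instructions idea from the episode: instead of
# retyping your requirements every session, bundle them into a system prompt
# that travels with every clinical question you ask.

SYSTEM_PROMPT = (
    "You are assisting a sports physical therapist. "
    "Do not guess. Base answers only on verifiable sources such as "
    "peer-reviewed articles indexed in PubMed, and cite each claim. "
    "For every answer, state a confidence percentage and flag anything "
    "you are unsure about instead of inventing details."
)

def build_messages(clinical_question: str) -> list[dict]:
    """Pair the standing instructions with a one-off clinical question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": clinical_question},
    ]

messages = build_messages(
    "What special tests help differentiate lateral epicondylitis?"
)
# `messages` is the payload shape most chat-completion APIs accept; the
# model, client, and endpoint are up to you and not specified in the episode.
```

In the chat apps themselves, the equivalent is pasting that standing text into the custom-instructions or personal-context settings, as Mike describes, so it applies automatically to every conversation.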
