Tartle Best Data Marketplace
June 15, 2021

Deep Machine Learning for Brain Scans. AI Helping Medical Advancements

BY: TARTLE

Computers have helped drive the medical industry to better and faster diagnoses, as well as helping to figure out new treatments for a variety of conditions. Until recently, they have primarily relied on programming that uses a linear – or machine – learning model. This type of model is great if you already know what kind of answer you need: simple questions like “does this person have lung cancer, or is a brain tumor causing these problems?” But what happens when the situation is more complex? What happens when you don’t already know what you are looking for? What happens when you know there is a problem but have no idea what it could be? How does a machine learning model handle that? In short, it doesn’t. A machine learning model can solve for ‘x’ very well so long as it knows ‘x’ is either apples or oranges. When ‘x’ might not even be fruit, you need a different approach.
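
To make the contrast concrete, here is a minimal sketch of the classical approach in Python, using scikit-learn with entirely made-up features and labels: the modeler fixes the variables and the possible answers up front, and the model only ever chooses between the answers it was given.

    # Minimal sketch of classical machine learning: the features and the
    # possible answers are both fixed in advance. All names and numbers
    # here are purely illustrative.
    from sklearn.linear_model import LogisticRegression

    # Each row: [body_temperature_c, smoker (0 or 1), age]
    X = [[36.6, 0, 54], [38.9, 1, 61], [37.1, 0, 45], [39.2, 1, 70]]
    y = [0, 1, 0, 1]  # 0 = healthy, 1 = condition present (predefined labels)

    model = LogisticRegression().fit(X, y)

    # The model can only ever answer with one of the labels it was trained on.
    print(model.predict([[38.5, 1, 58]]))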

That’s why deep learning models have been developed. These models try to mimic the way the human brain takes in and processes information. Instead of looking at a set of predetermined variables and solving for one particular answer (like you used to do in algebra class), a deep learning model takes in all the information at once and looks for correlations and patterns. Think of it this way: you look at a picture of someone, and right away your brain takes in all the information. You can see whether the person is sweating, whether their pupils are dilated, their hair length, freckles, whiskers, and a ton of other details. If you already have a store of knowledge at your disposal, you might be able to make certain deductions about that person’s health. A deep learning model does the same thing. Already trained on a vast amount of medical knowledge about symptoms and their causes, it can make a diagnosis before a human would even see the problem. That’s because the machine can hold more specialized information, recall it faster, and often has much greater attention to detail than most people.
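
As a rough illustration of that difference (a toy PyTorch network, not any model from the research discussed here), a deep learning model consumes the raw pixels of an image and learns its own features instead of being handed a short list of variables:

    # Toy convolutional network that takes a raw 2D image and produces
    # class scores. Layer sizes are arbitrary; real medical models are far
    # larger and trained on thousands of labeled scans.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn low-level patterns
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combine into higher-level features
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 2),                            # e.g., healthy vs. not
    )

    image = torch.randn(1, 1, 128, 128)  # stand-in for one grayscale scan
    print(model(image))                  # raw scores for each class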

A recent study in Nature Communications suggests that deep learning models could be very useful to medical professionals, particularly in brain imaging. The brain is immensely complex, with more variables than a machine learning model can truly account for. Deep models have been held back, though, mainly because they take time to produce the results needed: data may need to be run through the machine several times to train the program, giving it a chance to develop the best ways to analyze the data. Once the deep model is ‘trained’ in this way, its results are typically better than those of the older machine learning models.
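
That training amounts to repeated passes over the same data, often called epochs. A bare-bones sketch of such a loop in PyTorch, with a stand-in model and fabricated scans and labels:

    # Bare-bones training loop: the same data is run through the model
    # several times so it can gradually improve how it analyzes the scans.
    # The model, scans, and labels here are all stand-ins.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    scans = torch.randn(32, 1, 128, 128)   # fabricated batch of images
    labels = torch.randint(0, 2, (32,))    # fabricated diagnoses

    for epoch in range(10):                # multiple passes over the data
        optimizer.zero_grad()
        loss = loss_fn(model(scans), labels)
        loss.backward()                    # nudge the model toward better answers
        optimizer.step()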

Does that mean linear machine learning is strictly a thing of the past? Not at all. Those models still do better on simple tasks with a limited number of variables. In fact, the two can even be used in conjunction: a trained deep learning model can establish the relevant variables, and a machine learning model can then take over and provide the needed solutions.
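
A hedged sketch of that hand-off, with toy models and fabricated data: the deep network is used only to extract features, and those features then become the predefined variables for a classical classifier.

    # Hybrid pipeline: a (pretend pre-trained) deep network turns raw scans
    # into a handful of features; a classical model then does the final,
    # simple classification on those features.
    import torch
    import torch.nn as nn
    from sklearn.linear_model import LogisticRegression

    feature_extractor = nn.Sequential(   # stand-in for a trained deep model
        nn.Flatten(), nn.Linear(128 * 128, 8)
    )

    scans = torch.randn(20, 1, 128, 128)      # fabricated scans
    labels = [i % 2 for i in range(20)]       # fabricated diagnoses

    with torch.no_grad():
        features = feature_extractor(scans).numpy()  # deep model supplies the variables

    clf = LogisticRegression().fit(features, labels) # classical model takes over
    print(clf.predict(features[:3]))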

Where might deep learning take us in the future? Beyond analyzing the brain and enabling early diagnosis of cancer, Alzheimer’s, and other diseases, it could be applied to something like the annual physical. Imagine going to the doctor and, instead of a lengthy examination (preceded by a long wait) and blood work that takes days to come back, you simply walk through a scanner, get a finger pricked, and within a few minutes the deep learning model tells you your level of health and anything wrong with you, complete with recommended treatments. It isn’t that far-fetched.

How can you help make that future a reality a little bit faster? By signing up with TARTLE and sharing your medical data with universities and hospitals working on these models. More data will help them better train those deep learning models, and one day that will make your trip to the doctor’s office a lot easier and faster.

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.


For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 3 (00:07):

Welcome to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path.

Alexander McCaig (00:24):

Okay. Take 20 of this episode.

Jason Rigby (00:28):

A lot of brain damage going on, Alex.

Alexander McCaig (00:30):

Just chaos with all the equipment this morning.

Jason Rigby (00:34):

Yeah. But we're back. We're living life, loving it.

Alexander McCaig (00:38):

I'm going to need a brain scan after this.

Jason Rigby (00:40):

Yes. Luckily that's what we're talking about.

Alexander McCaig (00:43):

Oh, interesting.

Jason Rigby (00:43):

Well...

Alexander McCaig (00:45):

A very very interesting transition you have there.

Jason Rigby (00:48):

A team from the Center for Translational Research in Neuroimaging and Data Science (TReNDS) leveraged deep learning to better understand how mental illness and other disorders affect the brain, through brain imaging.

Alexander McCaig (01:01):

Yeah. So when I take a picture of something, say, of you, albeit you may be quite uninteresting to take a picture of.

Jason Rigby (01:10):

Yeah, we don't want to take a picture of me.

Alexander McCaig (01:11):

But there is a lot of data in that image itself.

Jason Rigby (01:15):

Mm-hmm (affirmative).

Alexander McCaig (01:15):

There's a lot to analyze. Depth, angles, colors, vectors, body positions, pupil dilation. There's just so much going on. Right? [inaudible 00:01:25] of your skin, maybe you're sweating. All this good stuff.

Alexander McCaig (01:30):

Now that is so much information that is all essentially not really interconnected. And when you look at a standard machine learning model, it's not really conducive for image analysis. So, from a medical standpoint, when you look at MRIs and genome sequencing, when it's taking that sort of snapshot that is so data heavy, using a standard machine learning model isn't necessarily the most conducive route for that, because we don't know a lot about what we're taking pictures of, or the sequencing aspect of human genetics. So if we don't know how to analyze it, or where to start, or what to look for, regular machine learning doesn't do us any good. Because we haven't predefined, like we want you to look for these sort of correlations.

Alexander McCaig (02:26):

Now, the difference between machine learning and deep learning, is that deep learning can efficiently take massive amounts of data, and it can analyze it and its complex structures, and then figure out what maybe we should be looking at.

Alexander McCaig (02:41):

So rather than us coming in ahead of time and saying, "This is what we need to look at," it'll analyze huge mountains of it, and then try and see later, by working backwards, to say, "These are kind of the pointers or the key takeaways that we need to look at, in this sort of analysis." So that when you, as a doctor, come back in, you can say, "Okay, I can reference what the deep learning model is telling me we should look at," which was otherwise an unknown, and then I can use that and say, "Okay, now I know what to look for first, when I go to do another scan." And I could go back and put that into a machine learning model.

Alexander McCaig (03:15):

But with the deep learning, it helps for figuring out, well, what is the X that we're trying to solve for, that we didn't know needed to be solved for. Does that make sense?

Jason Rigby (03:27):

Yeah. No, that makes sense. And one of the things that they talk about in the article is that conclusions are often based on preprocessed input. That denies deep learning the ability to learn from data with little to no pre-processing, one of the main advantages of the technology. So pre-processing seems to be the key.

Alexander McCaig (03:44):

Yeah. Pre-processing is a function that we're doing for machine learning, in most standard models. And that's why a lot of people have a preference for it because it's quite quick, and it can handle pretty big datasets. But anything like super massive requires that deep learning, and it doesn't require that pre-processing or saying, "This is what you need to look for." It can actually go back.

Alexander McCaig (04:05):

And so, not only do you have some sort of analysis and output that you're receiving from the deep learning, but you can also go back and say, "Oh, okay, what is the algorithm it decided to use, after deductively going back?" And then you can say, "Oh, that's quite interesting." And then you can also test that specifically. And maybe you can take that, "Oh, great. You learned this algorithm, now let me take that and go put it over into a regular machine learning model."

Jason Rigby (04:28):

Yeah. And I like how they were comparing classical machine learning to deep learning: with single-number measurements, classical machine learning works better. Like patients' body temperatures, or whether patients smoke cigarettes. Those approaches work better, because you're analyzing something that is more linear.

Alexander McCaig (04:49):

Yeah. It's linear. It's a very binary world. It's up or down.

Jason Rigby (04:53):

Like taking five million patients that you have in this health care system, and then analyzing the temperatures they're coming into the emergency room with. A classical machine learning model would be fine that way.

Alexander McCaig (05:05):

It'd be fine.

Jason Rigby (05:05):

It would work more efficiently.

Alexander McCaig (05:06):

But then you start doing where you're imaging a three-dimensional brain.

Jason Rigby (05:09):

Mm-hmm (affirmative).

Alexander McCaig (05:11):

Think about all those different vectors and all the angles you have to analyze it, and say, "Oh, did I miss it somewhere else?" Deep learning is more conducive for something like that.

Jason Rigby (05:16):

Yeah. Complex information.

Alexander McCaig (05:19):

Yeah.

Jason Rigby (05:19):

And then, they're saying... This is what was interesting to me. They were saying that they could analyze complex information, and then it does a better job of answering simple questions. So, would classical machine learning have an issue answering a simple question? Is it going to confuse it? Or does deep learning understand that you're asking a simple question?

Alexander McCaig (05:42):

No, I think deep learning does a better job of understanding, after the fact, a simple question being asked.

Jason Rigby (05:48):

Yes. Yeah.

Alexander McCaig (05:50):

It all goes back to the pre-processing.

Jason Rigby (05:50):

Mm-hmm (affirmative).

Alexander McCaig (05:51):

Who puts in the input. And you'd better hope that your deep learning model is not racist or anything like that.

Jason Rigby (05:56):

Yeah, which could happen.

Alexander McCaig (05:56):

Now, what if it's like, "Oh, interesting." You know? But who knows what that...

Jason Rigby (06:02):

Well, this gets into the... And I thought this was interesting. You're going to love this part.

Alexander McCaig (06:05):

Go ahead. Yeah.

Jason Rigby (06:06):

Another advantage of deep learning is that scientists can reverse analyze deep learning models to understand how they reach conclusions about data. So you can have a deep learning machine checking out... I imagine they have it checking out the classical machine.

Alexander McCaig (06:19):

Why not?

Jason Rigby (06:20):

And then have it reverse engineering and seeing what's happening.

Alexander McCaig (06:24):

It's like a good psychiatrist. You want to break it down over time. And it's like, "Well, how did you come to that conclusion? Tell me."

Jason Rigby (06:33):

Well, they're learning on their own. So I want to know...

Alexander McCaig (06:37):

I want to know what love is. They're not learning, they're deducing.

Jason Rigby (06:42):

Mm-hmm (affirmative).

Alexander McCaig (06:42):

There's no learning being made. If it was learning, then it would be able to think, deduce, and then act again upon this, completely on its own free will. We start and stop this machine. I'm tired of hearing that these things learn. They're not learning, they're deducing. It's like a really good Sherlock Holmes.

Jason Rigby (07:00):

Yeah. But they're finding things that's outside of maybe what a human would deduce.

Alexander McCaig (07:07):

Well, yeah. That's the point.

Jason Rigby (07:08):

But then it's like, "Oh, why is it coming to this conclusion?"

Alexander McCaig (07:12):

Mm-hmm (affirmative).

Jason Rigby (07:13):

And then from there...

Alexander McCaig (07:14):

That's beneficial.

Jason Rigby (07:15):

Yeah.

Alexander McCaig (07:15):

That's the beneficial part.

Jason Rigby (07:16):

And they said this. We can check the data points a model is analyzing, and compare it to the literature to see what the model has found outside of where we told it to look.

Alexander McCaig (07:23):

Yeah. So if doctors are only looking in one place, the group can miss something.

Jason Rigby (07:30):

Right.

Alexander McCaig (07:31):

And this'll be like, "Hey, you missed something." You know how we can become blind to this world and certain truths, if we have a warped perspective? This thing's coming in and is like, "I'm going to analyze it."

Jason Rigby (07:40):

Right.

Alexander McCaig (07:40):

"I don't care how you think about the world or whatever it might be, I'm just going to look at what's here and then drive some sort of output."

Jason Rigby (07:46):

Yeah. And there was an article that was published in Nature Medicine, that demonstrated deep learning's potential. And I think this is the key word-

Alexander McCaig (07:53):

It [crosstalk 00:07:54]?

Jason Rigby (07:53):

Is they're looking at potential. Yes.

Alexander McCaig (07:57):

Yeah.

Jason Rigby (07:57):

Do you want to talk about that?

Alexander McCaig (07:57):

Yeah. You're correct. Keep going.

Jason Rigby (07:59):

Our results point to the clinical utility of AI for mammography in facilitating earlier breast cancer detection, as well as an ability to develop AI with similar benefits for other medical imaging applications. We have developed an approach that mimics how humans often learn, by progressively training the AI models on more difficult tasks.

Alexander McCaig (08:18):

Yeah. So it's like, "Great. You've learned how to deduce over here, with the least amount of error, let's open you up to something more difficult."

Jason Rigby (08:24):

Yeah. And the AI can detect cancer accurately, while also relying less on highly annotated data. Our approach and validation extend to 3D mammography, which is particularly important given its growing use and the significant challenges it presents for AI.
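
A generic sketch of that "progressively harder tasks" idea, sometimes called curriculum learning. This is illustrative only, not the Nature Medicine team's actual code, and the data is fabricated:

    # Curriculum-style training: fit the model on easier cases first, then
    # move on to harder ones. Model, scans, and labels are all stand-ins.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def make_batch(n):  # fabricated scans and diagnoses
        return torch.randn(n, 1, 64, 64), torch.randint(0, 2, (n,))

    easy_cases, hard_cases = make_batch(32), make_batch(32)

    for stage in (easy_cases, hard_cases):  # easier task first, then harder
        for epoch in range(5):
            x, y = stage
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()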

Alexander McCaig (08:36):

Yeah. I don't want anything anecdotal, I want to know specifically what's going on here, directly related to that patient. And if this thing can do a scan, deep learn, and within the hour, tell the person like, "Hey, these cells don't look good."

Jason Rigby (08:48):

Yes.

Alexander McCaig (08:49):

Or, "This actually looks like a malignant brain tumor. We've got to work on this right now." Even if the doctors are looking at it and you've got of them staring at an x-ray or the MRI, "Maybe, maybe, maybe." And this thing is, "Well, I know you guys are waffling on it, but just this looks really bad. Because we've scanned thousands of brains, I've deduced over all of them. We know what the error rates are, and I put this back in, this probably looks like something we should deal with."

Jason Rigby (09:10):

Yeah. And so right now, it's not used in real-world clinical settings, but the technology is going to be the future of care.

Alexander McCaig (09:18):

Yeah. And I think that's phenomenal. If we can start scanning people and bodies and genome sequences, and have these things learn from it very quickly, and we can bring someone to a better quality of life and state of health faster, let's do it.

Alexander McCaig (09:31):

And if we can just... You know some stores, I can walk in and it's got our [inaudible 00:09:37] ID and I walk out of the store and it charges me? Why can't I just walk through a scanner at the hospital, and it says, "You're good to go. See you later."

Jason Rigby (09:43):

Mm-hmm (affirmative).

Alexander McCaig (09:44):

I don't have to see a doctor, nothing. It scans me, and it's a very proactive approach. Once a month, you go through a scanner. "Bye."

Jason Rigby (09:49):

Oh, that would eliminate so much. "I'm not really feeling that well, let me go into this booth and have it scan me real quick."

Alexander McCaig (09:54):

Yeah. You go in-

Jason Rigby (09:54):

This is the future.

Alexander McCaig (09:55):

Mouth gets swabbed. It goes through a genome sequencing thing, goes through that and it also does a full body scan by about-

Jason Rigby (10:01):

"Err, you've got the flu."

Alexander McCaig (10:02):

"Okay. Bye."

Jason Rigby (10:02):

"Next." Yeah. "Here's your medicine. Get out."

Alexander McCaig (10:05):

And you've got a machine, that outputs things. "Here's your stuff." And it pops the pills out right there.

Jason Rigby (10:09):

Yeah. Yeah. Perfect. That's how it's going to be.

Alexander McCaig (10:11):

Oh, that's our new hospital.

Jason Rigby (10:12):

We love it. The future predicted here at T Cast.

Alexander McCaig (10:18):

Just like Alex Jones. I talked about this. We were the ones.

Jason Rigby (10:22):

Now you just screwed up the algorithm. You can't talk about Alex Jones and you can't talk about COVID.

Alexander McCaig (10:28):

[inaudible 00:10:28] kick us off because we were rapping on Alex Jones. Sorry about that.

Jason Rigby (10:31):

We hate Alex Jones.

Alexander McCaig (10:32):

Oh yeah. We didn't mean to say that.

Jason Rigby (10:34):

On Google, we hate Alex Jones. And...

Alexander McCaig (10:37):

It's like, "I don't care what you say."

Jason Rigby (10:38):

Get your COVID vaccine. What else? Go see a doctor.

Alexander McCaig (10:42):

Yeah.

Jason Rigby (10:42):

This is not medical information.

Alexander McCaig (10:44):

Globalism is good. Underground bunkers, you don't need them.

Jason Rigby (10:48):

Yeah, exactly. Okay, perfect. We're out.

Alexander McCaig (10:50):

Yeah, bye.

Speaker 3 (10:58):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby. Where humanity steps into the future and source data defines the path. What's your data worth?