After earning a PhD from Stanford, Russ Greiner worked in both academic and industrial research before settling at the University of Alberta, where he is now a Professor in Computing Science and the founding Scientific Director of the Alberta Innovates Centre for Machine Learning (now Alberta Machine Intelligence Institute), which won the ASTech Award for “Outstanding Leadership in Technology” in 2006. He has been Program Chair for the 2004 “Int’l Conf. on Machine Learning”, Conference Chair for 2006 “Int’l Conf. on Machine Learning”, Editor-in-Chief for “Computational Intelligence”, and is serving on the editorial boards of a number of other journals. He was elected a Fellow of the AAAI (Association for the Advancement of Artificial Intelligence) in 2007, and was awarded a McCalla Professorship in 2005-06 and a Killam Annual Professorship in 2007. He has published over 200 refereed papers and patents, most in the areas of machine learning and knowledge representation, including 4 that have been awarded Best Paper prizes. The main foci of his current work are (1) bioinformatics and medical informatics; (2) learning and using effective probabilistic models and (3) formal foundations of learnability. He recently spoke with AI Trends.
Q: Who do you collaborate with in your work?
I work with many very talented medical researchers and clinicians, on projects that range from psychiatric disorders, to stroke diagnosis, to diabetes management, to transplantation, to oncology, everything from breast cancer to brain tumors. And others — I get many cold-calls from yet other researchers who have heard about this “Artificial Intelligence” field, and want to explore whether this technology can help them on their task.
Q: How do you see AI playing a role in the fields of oncology, metabolic disease, and neuroscience?
There’s a lot of excitement right now for machine learning (a subfield of Artificial Intelligence) in general, and especially in medicine, largely due to its many recent successes. These wins are partly because we now have large data sets, including lots of patients — in some cases, thousands, or even millions of individuals, each described using clinical features, and perhaps genomics and metabolomics data, or even neurological information and imaging data. As these are historical patients, we know which of these patients did well with a specific treatment and which ones did not.
I’m very interested in applying supervised machine learning techniques to find patterns in such datasets, to produce models that can make accurate predictions about future patients. This is very general — this approach can produce models that can be used to diagnose, or screen novel subjects, or to identify the best treatment — across a wide range of diseases.
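To make the supervised setup described here concrete — historical patients described by features, each labeled with a known outcome, used to predict for a new patient — here is a toy sketch. The features, labels, and nearest-neighbour method are all invented for illustration, not the interviewee's actual models:

```python
import math

# Each historical patient: (feature vector, known outcome).
# Features here (age, a lab value) and labels are invented.
history = [
    ((63, 1.2), "responded"),
    ((58, 1.1), "responded"),
    ((71, 3.4), "did not respond"),
    ((69, 3.0), "did not respond"),
]

def predict(features):
    """1-nearest-neighbour: label a new patient with the outcome
    of the most similar historical patient."""
    _, label = min(history, key=lambda p: math.dist(p[0], features))
    return label

print(predict((65, 1.3)))  # prints "responded"
```

Real systems use far richer models, but the shape of the task is the same: learn from labeled historical cases, then predict for future ones.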
It’s important to contrast this approach with other ways to analyze such data sets. The field of biostatistics includes many interesting techniques to find “biomarkers” — single features that are correlated with the outcomes — as a way to try to understand the etiology, trying to find the causes of the disease. This is very interesting, very relevant, very useful. But it does not directly lead to models that can decide how to treat Mr. Smith when he comes in with his particular symptoms.
At a high level: I’m exploring ways to find personalized treatments — identifying the treatment that is best for each individual. These treatment decisions are based on evidence-based models, as they are learned from historical cases — that is, where there is evidence that the model will work effectively.
In more detail, our team has found patterns in neurological imaging, such as functional MRI scans, to determine who has a psychiatric disorder — here, for ADHD, or autism, or schizophrenia, or depression, or Alzheimer’s disease.
Another body of work predicts how brain tumors will grow, based on standard structural MRI scans of the brain. Other projects learn screening models that determine which people have adenoma (from urine metabolites), or models that predict which liver patients will most benefit from a liver transplant (from clinical features), or which cancer patients will have cachexia, etc.
Q: How can machine learning be useful in the field of Metabolomics?
Machine learning can be very useful here. Metabolomics has relied on technologies like mass spec and NMR spectroscopy to identify and quantify small molecules in a biofluid (like blood or urine); this previously was done in a very labor-intensive way, by skilled spectroscopists.
My collaborator, Dr. Dave Wishart (here at the University of Alberta), and some of our students have designed tools to automate this process — tools that can effectively find the molecules present in, say, blood. This means metabolic profiling is now high-throughput and automated, making it relatively easy to produce datasets that include the metabolic profiles from a set of patients, along with their outcomes. Machine learning tools can then use this labeled dataset to produce models for predicting who has a disease, for screening or for diagnosis. This has led to models that can detect cachexia (muscle wasting) and adenoma (with a local company, MTI).
Q: Can you go into some detail on the work you have done designing algorithms to predict patient-specific survival times?
This is my current passion; I’m very excited about it.
The challenge is building models that can predict the time until an event will happen — for example, given a description of a patient with some specific disease, predict the time until his death (that is, how long he will live). This seems very similar to the task of regression, which also tries to predict a real value for each instance — for example, predicting the price of a house based on its location, the number of rooms, their sizes, etc. Or, given a description of a kidney patient (age, height, BMI, urine metabolic profile, etc.), predict the glomerular filtration rate of that patient, a day later.
Survival prediction looks very similar because both try to predict a number for each instance. For example, I describe a patient by his age, gender, height, and weight, and his genetic information, and metabolic information, and now I want to predict how long until his death — which is a real number.
The survival analysis task is more challenging due to “censoring”. To explain, consider a 5-year study that began in 1990. Over these five years, many patients passed away, including some who lived for three years, others for 2.7 years, or 4.9 years. But many patients didn’t pass away during these 5 years — which is a good thing… I’m delighted these people haven’t died! But this makes the analysis much harder: for the many patients alive at the end of the study, we know only that they lived at least 5 years; we don’t know whether they lived 5 years and a day, or 30 years — we don’t know, and never will.
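The standard way survival analysis copes with censoring is illustrated by the Kaplan-Meier estimator (a classical tool of the field, not the interviewee's specific method). Here is a minimal pure-Python sketch: each patient is a `(time, event_observed)` pair, where `event_observed=False` marks a patient still alive when the study ended — such patients still count as “at risk” up to their censoring time, but contribute no death:

```python
def kaplan_meier(patients):
    """Estimate the survival curve S(t) from (time, event_observed)
    pairs.  Censored patients (event_observed=False) remain in the
    at-risk count up to their censoring time but add no death."""
    death_times = sorted({t for t, observed in patients if observed})
    survival, curve = 1.0, []
    for t in death_times:
        at_risk = sum(1 for time, _ in patients if time >= t)
        deaths = sum(1 for time, obs in patients if time == t and obs)
        survival *= 1.0 - deaths / at_risk
        curve.append((t, survival))
    return curve

# The 5-year study above: three observed deaths, plus two patients
# still alive at year 5 (censored).
data = [(3.0, True), (2.7, True), (4.9, True), (5.0, False), (5.0, False)]
for t, s in kaplan_meier(data):
    print(f"S({t}) = {s:.2f}")
```

Note how the two censored patients lower every estimate's denominator without ever being treated as deaths — exactly the information a plain regression model cannot use.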
This makes the problem completely different from the standard regression tasks. The tools that work for predicting glomerular filtration rate or for predicting the price of a house just don’t apply here. You have to find other techniques. Fortunately, the field of survival analysis provides many relevant tools. Some tools predict something called “risk”, which gives a number to each patient, with the understanding that this tool is predicting that patients with higher risks will die before those with lower risk. So if Mr A’s risk for cancer is 7.2 and Mr B’s is 6.3 — that is, Mr A has a higher risk — this model predicts that Mr A will die of cancer before Mr B will. But does this mean that Mr A will die 3 days before Mr B, or 10 years? The risk score doesn’t say.
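The ranking-only nature of risk scores can be made concrete with the concordance index (C-index), the standard way such models are evaluated. This simplified pure-Python sketch (ignoring censoring, for brevity) checks, over comparable patient pairs, how often the patient predicted to be riskier actually dies first — and nothing more:

```python
def concordance_index(times, risks):
    """Fraction of comparable patient pairs in which the
    higher-risk patient has the shorter survival time.
    Simplified: assumes all deaths were observed (no censoring)."""
    concordant = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue  # tied survival times are not comparable
            comparable += 1
            # Reward correct *ordering* only.
            if (risks[i] > risks[j]) == (times[i] < times[j]):
                concordant += 1
    return concordant / comparable

# Mr A (risk 7.2) vs Mr B (risk 6.3): a perfect score of 1.0
# whether A dies 3 days or 10 years before B.
print(concordance_index([2.0, 9.0], [7.2, 6.3]))  # prints 1.0
```

The metric is identical whether the gap between the two deaths is days or decades — which is precisely the limitation described above.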
Let me give a slightly different way to use this. Recall that Mr A’s risk of dying of cancer is 7.2. There are many websites that can do “what if” analysis: perhaps if he stops smoking, his risk reduces to 5.1. This is better, but by how much? Will this add 2 more months to his life, or 20 years? Is this change worth the challenge of not smoking?
Other survival analysis tools predict probabilities — perhaps Ms C’s chance of 5-year disease-free survival is currently 65%, but if she changes her diet in a certain way, this chance goes up to 78%. Of course, she wants to increase her five-year survival. But again, this is not as tangible as learning, “If I continue my current lifestyle then this tool predicts I will develop cancer in 12 years, but if I stop smoking, it goes from 12 to 30 years”. I think this is much more tangible, and hence will be more effective in motivating people to change their lifestyle, versus changing their risk, or their 5-year survival probability.
So my team and I have provided a tool that does exactly that, by giving each person his or her individualized survival curve, which shows that person’s expected time to event. I think that will help motivate people to change their lifestyle. In addition, my colleagues and I also applied this to a liver transplant dataset, to produce a model that can determine which patient with end-stage liver failure will benefit the most from a new liver, and so should be added to the waitlist.
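One simple way to turn an individualized survival curve into the kind of tangible number described above is to read off the median survival time: the earliest time at which the curve drops to 50%. A minimal sketch — the two curves below are invented for illustration, echoing the hypothetical “12 years vs. 30 years” comparison:

```python
def median_survival_time(curve):
    """Given an individualized survival curve as
    [(time, survival_probability), ...] sorted by time, return the
    first time the probability falls to 0.5 or below, or None if
    it never does within the observed horizon."""
    for t, prob in curve:
        if prob <= 0.5:
            return t
    return None

# Hypothetical curves for one patient under two lifestyles.
current = [(1, 0.9), (5, 0.7), (12, 0.5), (20, 0.2)]
quit_smoking = [(1, 0.95), (5, 0.9), (12, 0.8), (30, 0.5)]
print(median_survival_time(current))       # prints 12
print(median_survival_time(quit_smoking))  # prints 30
```

A “predicted 12 years vs. 30 years” summary like this is far easier for a patient to act on than an abstract risk score of 7.2 vs. 5.1.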
Those examples all deal with time to death, but in general, survival analysis can deal with time to event, for any event. So it can be used to model a patient’s expected time to re-admission. Here, we can seek a model that, given a description of a patient being discharged from a hospital, can predict when that patient will be readmitted — eg, if she will return to the hospital, for the same problem, soon or not.
Imagine this tool predicted that, given Ms Jones’ current status, if she leaves the hospital today, she will return within a week. But if we keep her one more day and give some specific medications, we then predict her readmission time is 3 years. Here, it’s probably better to keep her that one more day and give one more medication. It will help the patient, and will also reduce costs.
Q: What do you see are the challenges ahead for the healthcare space in adopting machine learning and AI?
There are two questions: what machine learning can do effectively, and what it should do.
The second involves a wide range of topics, including social, political, and legal issues. Can any diagnostician — human or machine — be perfect? If not, what are the tradeoffs? How do we verify the quality of a computer’s predictions? If it makes a mistake, who is accountable? The learning system? Its designer? The data on which it was trained? Under what conditions should a learned system be accepted… and eventually incorporated into the standard of care? Does the program need to be “convincing”, in the sense of being able to explain its reasoning — that is, explain why it asked for some specific bit of information, or why it reached a particular conclusion? While I do think about these topics, I am not an expert here.
My interest is more in figuring out what these systems can do — how accurate and comprehensive can they be? This requires getting bigger data sets — which is happening as we speak. And defining the tasks precisely — is the goal to produce a treatment policy that works in Alberta, or one that works for any patient, anywhere in the world? This helps determine the diversity of training data that is required, as well as the number of instances. (Hint: building an Alberta-only model is much easier than a universal one.) A related issue is defining exactly what the learned tool should do: In general, the learned performance system will return a “label” for each patient — which might be a diagnosis (eg, does the patient have ADHD), or a specific treatment (eg, give an SSRI [that is, a selective serotonin reuptake inhibitor]). Many clinicians assume the goal is a tool that does what they do. That would be great if there were an objective answer, and the doctor were perfect, but this is rarely the case. First, in many situations, there is significant disagreement between clinicians (eg, some doctors may think that a specific patient has ADHD, while others may disagree) — if so, which clinician should the tool attempt to emulate? It would be better if the label instead were some objective outcome — such as “3-year disease-free survival”, or “progression within 1 year” (where there is an objective measure for “progression”), etc.
This can get more complicated when the label is the best treatment — for example, given a description of the patient, determine whether that patient should get drug-A or drug-B. (That is, the task is prognostic, not diagnostic.) While it is relatively easy to ask the clinician what she would do, for each patient, recall that clinicians may have different treatment preferences… and those preferences might not lead to the best outcome. This is why we advocate, instead, first defining what “best” means, by having a well-defined objective score for evaluating a patient’s status, post treatment. We then define the goal of the learned performance system as finding the treatment, for each patient, that optimizes that score.
One issue here is articulating this difference, between “doing what I do” versus optimizing an objective function. A follow-up challenge is determining this objective scoring function, as it may involve trading off, say, treatment efficacy with side-effects, etc. Fortunately, clinicians are very smart, and typically get it! We are making in-roads.
Of course, after understanding and defining this objective scoring function, there are other challenges — including collecting data from a sufficient number of patients and possibly controls, from the appropriate distributions, then building a model from that data, and validating it, perhaps on another dataset. Fortunately, there is an increasing number of available datasets, covering a wide variety of diseases, with subjects (cases and controls) described by many different types of features (clinical, omics, imaging, etc.). Finally comes the standard machine learning challenge of producing a model from that labeled data. Here, too, the future is bright: there are faster machines, and more importantly, I have many brilliant colleagues developing ingenious new algorithms to deal with many different types of information.
All told, this is a great time to be in this important field! I’m excited to be a part of it.
Thank you Dr. Greiner!
Learn more at the Alberta Machine Intelligence Institute.