By Benjamin Ross, Senior Editorial Assistant, AI Trends
“What I’ve been looking at for the past few years is how things are evolving within the clinical trial space, and what impact that’s going to have on clinical data management,” Francis Kendall, Senior Director of Biostatistics and Programming at Cytel, told attendees in Orlando during the Summit for Clinical Ops Executives (SCOPE).
We’re going to see a shift in how clinical evidence is produced and where it’s produced from, said Kendall.
“It’s a new paradigm about data usage,” he said. “We have traditional clinical trials, and they will always remain, but we’re starting to see things like pragmatic trials or synthetically controlled models. How do we deal with that data?”
Data standards have begun to embrace this paradigm shift, Kendall says. Efforts such as FHIR, which addresses data interoperability, and the FAIR data principles are examples of the industry’s move toward standardized data, but as a life sciences industry we’re still working out how best to structure that data.
“In a way, we’re probably one of the only industries that doesn’t come to grips with standardizing our data,” Kendall said.
We need to look at this in a different way, says Kendall. “Rather than try to shoehorn data to the standard, we need to look at the data and see how we can gain value.”
The data itself is changing as well. “We have more types of patient data as we go forward. It’s not just the traditional data; there’s a lot we can get, from omics data to tracking data.”
Researchers want to be able to pull insights from that data, but how? Machine learning is one solution that has been gaining traction in other industries outside of healthcare, says Kendall.
“There are some good examples in finance for instance where the traditional data manager or data administrator—who would create a program and look at the data [themselves]—now just manage how the algorithms are running as they manage the data,” he said. “Why are we not at that stage yet?”
Machine learning becomes an increasingly viable option the more data we collect, Kendall says. And it’s not just the amount of data that’s expanding: the number of data sources has increased as well. Omics data and tracking data in a real-world setting mean there’s value in increasing the number of patients involved in a given trial. Our clinicians and researchers want to pull that data together, says Kendall.
“A lot of data isn’t purpose-built, which leads people to say, ‘You shouldn’t look at EMR [electronic medical record] data because it’s payer-based data’, for instance,” Kendall said. “But it’s data that decisions are made on. We have to accept it for what it is and get value from it.”
To Kendall, the value in data such as EMRs is that, while it may be payer-focused, it’s longitudinal. “You can actually start to gain insights about the patient journey, how a particular drug is affecting them.”
Statisticians within life science companies will want to get ahold of this data as well, Kendall argues. There are new techniques that have been enabled by machine learning and the data. “It’s data they want, and a data manager is going to have to think about how to provide that data.”
And that data is coming from everywhere, Kendall says. “Not one database really holds all the data… We have to look over our shoulders as a life science industry because there are the Googles, the Apples, and the Facebooks out there looking at healthcare as a commercial space, and they’re processing the data accordingly. We as a data management group should be looking at how we process that data to match these big tech companies.”
To do this, Kendall says the industry will need to embrace a change in mindset, something it’s not keen to do. “The life sciences industry is really conservative,” he said. “It’s slow in adopting, even when regulators encourage it. I find that this hesitancy comes from a fear of job loss and automation. But I don’t think that fear should be there. We have to automate because the data won’t stop coming in.”