IBM wants to bring machine learning to the mainframe


Despite the emphasis on X86 clusters, large public clouds, accelerators for commodity systems, and the rise of open source analytics tools, there is a very large base of transactional processing and analysis that happens far from this landscape. This is the mainframe, and these fully integrated, optimized systems account for a large majority of the enterprise world’s most critical data processing for the largest companies in banking, insurance, retail, transportation, healthcare, and beyond.

With great memory bandwidth, I/O, powerful cores, and robust security, mainframes are still the supreme choice for business-critical operations at many Global 1000 companies, even if the world does not tend to hear much about them. Of course, as with everything in computing, there are tradeoffs. Chief among them are cost and flexibility, but pressure from the open source world outside is driving new thinking into an established area.

Companies that have invested in mainframes have sound cause to continue doing so. The machines are highly optimized for transaction processing, are as secure as a system can be, and have been the subject of many millions of dollars in code investment over the years. They are certainly not cheap, but neither is moving the bulk of business-critical applications to a new architecture. The one thing that might push a large company to do so is a perceived lack of capability and choice, a tradeoff mainframe users have been willing to tolerate in favor of relative safety.

While the case for mainframes is still strong, they lack the flexibility that users of commodity X86 clusters enjoy. Those users can freely scale up and out, integrate the latest open source frameworks for analysis, and continue to grow those operations in a more seamless, agile way. Mainframe users are slower to adopt newer open source frameworks that might give X86 shops a competitive edge.

To counter this gap in flexibility, IBM described an effort to bring the machine learning components from its Watson AI framework to the mainframe. The company has already announced Spark for mainframes; this effort builds on Spark as the engine that delivers the machine learning capabilities, so users can run machine learning on the system itself rather than moving data off those boxes for separate analysis. As Rob Thomas, VP of Analytics at IBM, tells The Next Platform, this opens new doors for mainframe sites. He points to the example of Argus Health Systems, which manages a number of healthcare providers. Unlike the more static approach its teams took previously, with analysis run at set intervals, they can now get continuously evolving updates about patients and providers that can be fed quickly into the models and rerun for new cost assessments using the most recent combined data.
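To make the retraining pattern described above concrete, here is a minimal sketch of what a Spark ML job of that kind can look like. This is not IBM's or Argus Health Systems' actual implementation; the dataset path, feature columns, label column, and model output location are all hypothetical, and a simple linear regression stands in for whatever model a real cost-assessment pipeline would use.

```scala
// Hypothetical sketch: periodically refit a claims-cost model on the freshest
// data available, instead of rescoring only at long, fixed intervals.
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression

object RetrainCostModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ClaimsCostModel")
      .getOrCreate()

    // Assumed source: claims records landed from the transactional system.
    // On a mainframe this might be a DB2 table or VSAM extract; a Parquet
    // path is used here purely for illustration.
    val claims = spark.read.parquet("/data/claims/latest")

    // Assemble the assumed feature columns into the single vector MLlib expects.
    val assembler = new VectorAssembler()
      .setInputCols(Array("patient_age", "num_prescriptions", "provider_score"))
      .setOutputCol("features")

    // Placeholder model; the label column "claim_cost" is an assumption.
    val lr = new LinearRegression()
      .setLabelCol("claim_cost")
      .setFeaturesCol("features")

    val pipeline = new Pipeline().setStages(Array(assembler, lr))

    // Refit on the most recent data; in practice this would be triggered as
    // new batches arrive rather than on a slow, fixed schedule.
    val model = pipeline.fit(claims)

    // Overwrite the previously saved model so downstream scoring always
    // reflects the latest combined data.
    model.write.overwrite().save("/models/claims_cost_model")

    spark.stop()
  }
}
```

The point of the sketch is the workflow, not the algorithm: because the data and the Spark engine sit on the same system, the fit-and-republish step can run as often as new records land, which is the "continuously evolving" behavior described above.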

Read the source article at TheNextPlatform.com.