AI applications often benefit from fundamentally different architectures than those used by traditional enterprise apps. And vendors are turning somersaults to provide these new components.
“The computing field is experiencing a near-Cambrian-like event as the surging interest in enterprise AI fuels innovations that make it easier to adopt and scale AI,” said Keith Strier, global and Americas AI leader, advisory services at EY.
“Investors are pouring capital into ventures that reduce the complexity of AI, while more established infrastructure providers are upgrading their offerings from chips and storage to networking and cloud services to accelerate deployment.”
The challenge for CIOs, he said, will be matching AI use cases to the type of artificial intelligence architecture best suited for the job.
Because AI is math at an enormous scale, it calls for a different set of technical and security requirements than traditional enterprise workloads, Strier said. Maximizing the value of AI use cases hinges, in part, on vendors being able to provide economical access to the technical infrastructure, cloud and related AI services that make these advanced computations possible.
But that is already happening, he said, and more advances in artificial intelligence architectures are on the horizon. Increased flexibility, power and speed in compute architectures will be catalyzed not only by the small band of high-performance computing firms at the forefront of the field, he said, but also by the broader HPC ecosystem that includes the chip and cloud-service startups battling to set the new gold standard for AI computations.
As the bar lowers for entry-level AI projects, adoption will go up and the network effect will kick in, creating yet more innovation and business benefit for everyone — enterprises and vendors alike, he said.
In the meantime, CIOs can give their enterprises a leg up by becoming familiar with the challenges associated with building an artificial intelligence architecture for enterprise use.
One key element of the transition from traditional compute architectures to AI architectures has been the rise of GPUs, field-programmable gate arrays (FPGAs) and special-purpose AI chips. The adoption of GPU- and FPGA-based architectures enables new levels of performance and flexibility in compute and storage systems, which allows solution providers to offer a variety of advanced services for AI and machine learning applications.
“These are chip architectures that offload many of the more advanced functions [such as AI training] and can then deliver a streamlined compute and storage stack that delivers unmatched performance and efficiency,” said Surya Varanasi, co-founder and CTO of Vexata Inc., a data management solutions provider.
But new chips only get enterprises so far in being able to capitalize on artificial intelligence. Finding the best architecture for AI workloads involves a complicated calculus of data bandwidth and latency. Faster networks are key, but many AI algorithms must also wait a full cycle to queue up the next set of data, so latency becomes a factor.
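One common way architects hide that data-queuing latency is to prefetch the next batch while the current one is still being processed. The following is a minimal, illustrative Python sketch of that overlap pattern; the function names, batch data and simulated delay are hypothetical stand-ins, not part of any vendor's stack described above.

```python
import queue
import threading
import time

def producer(batches, q):
    """Simulate fetching batches from storage or the network."""
    for batch in batches:
        time.sleep(0.01)  # stand-in for I/O latency
        q.put(batch)
    q.put(None)  # sentinel: no more data

def consume_with_prefetch(batches, depth=2):
    """Overlap data loading with compute via a bounded queue.

    While the consumer works on one batch, the producer thread
    is already queuing the next, so the accelerator is not left
    idle waiting a full cycle for data.
    """
    q = queue.Queue(maxsize=depth)  # bounded: caps memory used by prefetched batches
    loader = threading.Thread(target=producer, args=(batches, q))
    loader.start()

    results = []
    while True:
        batch = q.get()
        if batch is None:
            break
        results.append(sum(batch))  # stand-in for a training step
    loader.join()
    return results

print(consume_with_prefetch([[1, 2], [3, 4], [5, 6]]))  # → [3, 7, 11]
```

The bounded queue depth is the knob that trades memory for latency hiding: a deeper queue tolerates more jitter in data delivery, at the cost of buffering more batches in memory.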
Read the source article at TechTarget’s SearchCIO.