Determined AI Envisions AI-Native Infrastructure


By AI Trends Staff

Determined AI officially launched this week, announcing the company in a blog post written by Neil Conway (CTO), Evan Sparks (CEO), and Ameet Talwalkar (Chief Scientist). The company aims to build what it describes as the first AI-native infrastructure platform.

While this week marks the official launch, the software has been running on production GPUs for more than a year, the founders say. The company website lists a team of 16 software engineers and executives. The advisors include Michael Franklin, Chairman of the Computer Science Department at the University of Chicago, and Ben Lorica, Chief Data Scientist at O’Reilly Media. The company is backed by GV (formerly Google Ventures), Amplify Partners, CRV, Haystack, SV Angel, The House, and Specialized Types.

“We are entering the golden age of artificial intelligence. Model-driven, statistical AI has already been responsible for breakthroughs on applications such as computer vision, speech recognition and machine translation, with countless more use-cases on the horizon,” the trio write in their blog post. “But if AI is the lynchpin to a new era of innovation, why does the infrastructure it’s built upon feel trapped in the 20th century? Worse, why is advanced AI tooling locked within the walls of a handful of multi-billion-dollar tech companies, inaccessible to anyone else?”

Determined AI plans to close that gap with specialized software that directly addresses the challenges deep learning developers struggle with every day. Deep learning differs from traditional computational workloads, the company argues, and needs purpose-built tools.

The Determined AI platform will be holistic. “Traditional DL tools typically focus on solving a single narrow problem (e.g., distributed training of a single neural network),” the team writes. “This forces DL researchers to integrate a handful of tools together to get their work done, often building out extensive ad-hoc plumbing along the way. In contrast, we’ve thought carefully about the key high-level tasks that DL researchers need to perform – such as training a single model, hyperparameter tuning, batch inference, experimentation, collaboration, reproducibility, and deployment in constrained environments – and built first-class support for those workflows directly into our software.”
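
The "ad-hoc plumbing" the founders describe will be familiar to many practitioners. As a purely illustrative sketch in plain Python (this is not Determined AI's interface, which the launch post does not detail), a hand-rolled hyperparameter search often looks something like this:

```python
# Illustrative only: the kind of manual hyperparameter sweep researchers
# often wire together themselves, absent platform support.
import random

def train_model(learning_rate, batch_size):
    """Stand-in for a real training job; returns a validation score."""
    # In practice this would launch a (possibly distributed) training run
    # and manage GPUs, checkpoints, and logging by hand.
    return random.random()  # placeholder metric

best_score, best_config = float("-inf"), None
for trial in range(20):
    config = {
        "learning_rate": 10 ** random.uniform(-5, -1),
        "batch_size": random.choice([32, 64, 128, 256]),
    }
    score = train_model(**config)
    if score > best_score:
        best_score, best_config = score, config

print("best config:", best_config, "score:", round(best_score, 3))
```

Everything beyond the basic loop, such as checkpointing, scheduling trials across GPUs, and tracking experiments for reproducibility, is typically bolted on by hand; Determined AI's pitch is that these workflows should be built into the platform itself.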

The company also prioritizes specialization. “Although systems like Spark and Hadoop work well for more traditional analytic tasks, they cannot keep up with the unique challenges posed by deep learning,” the team writes. “AI-native infrastructure must support efficient GPU and TPU access, high performance networking, and seamless integration with deep learning toolkits like TensorFlow and PyTorch.”