Organizations seeking to extract as much insight as possible from their data reserves want to uncover connections between different types of data, and across as much of that data as possible. Deep learning offers an answer, leapfrogging simpler analytic techniques.

Innovators of deep learning algorithms and applications need access to high-performance computing (HPC), leading to an urgent question for any cutting-edge startup: What kind of HPC resources are most cost-effective for the development and delivery of AI-based analytics products?

While some entrepreneurs look to cloud options alone, deep learning analytics firm Vyasa Analytics has found that the best solution includes top-of-the-line, on-site hardware.

Understanding the HPC Needs of Deep Learning

Massachusetts-based Vyasa Analytics provides a deep learning analytics platform to customers in a variety of markets, including life sciences. A dramatic increase in data scale is central to their value proposition. Rather than run simplistic analytics algorithms on small sections of siloed data, Vyasa deploys deep learning to tease insight from the reams of data across the entire breadth of a life sciences organization.

The company’s software is called Cortex.

For an enterprise like Vyasa, building cutting-edge software serves as a primary differentiator, but for its flagship offerings to operate efficiently, blazing-fast hardware infrastructure is mandatory. The demands of using deep learning, as compared to other analytics methodologies, intensify this requirement.

Traditional data analytics centers on heuristics-based approaches. In other words, the data scientist needs to give the computer "rule sets" for what to find or do in the data being analyzed. For example, a researcher developing a vaccine for the H1N1 virus might input the text term "H1N1," and an algorithm would search across different data pools and silos of an organization to find every instance in which H1N1 was used. The results would be entirely driven by what the researcher had originally told the computer to look for.

Deep learning algorithms have changed that. With deep learning, a researcher might train the algorithm on many different kinds of mentions of H1N1 and related content in text. The deep learning algorithm can learn what mentions of H1N1-specific content “look like” in text and find many more mentions based on its trained understanding of H1N1-related content. In other words, deep learning algorithms are revolutionary because they enable computers to learn for themselves how to recognize complex patterns. Vyasa’s software harnesses this new capability and enables its highly scalable application so that clients can better utilize their organizational data.
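The contrast between the two approaches can be sketched in a toy example. The documents and the "learned" term associations below are invented for illustration only; they stand in for the far richer representations a trained model like Vyasa's Cortex would build from many labeled examples.

```python
# Toy contrast between rule-based keyword search and a learned,
# association-based search. All text and associations are invented
# for illustration; this is not Vyasa's actual method.

docs = [
    "The H1N1 strain spread rapidly in 2009.",
    "Swine flu vaccination campaigns began that autumn.",
    "Quarterly earnings rose on strong device sales.",
]

def keyword_search(query, docs):
    """Heuristic approach: return only documents containing the literal term."""
    return [d for d in docs if query.lower() in d.lower()]

# Stand-in for a trained model: terms the model has learned to
# associate with the concept "H1N1" from labeled training data.
learned_associations = {"h1n1": {"h1n1", "swine", "flu", "influenza"}}

def learned_search(query, docs):
    """Learned approach: match any document mentioning an associated term."""
    related = learned_associations[query.lower()]
    return [d for d in docs
            if related & {w.strip(".,").lower() for w in d.split()}]

print(keyword_search("H1N1", docs))  # finds only the literal mention
print(learned_search("H1N1", docs))  # also finds the "swine flu" document
```

The keyword search misses the swine flu document entirely, while the learned search recovers it, which is exactly the kind of mention a rule set would never surface unless the researcher thought to ask for it.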

These relationships and patterns may occur across not just text data but data in all forms. This organically created “understanding” of a broad, data-type-spanning concept is an extremely powerful tool.

Deep learning allows a company like Vyasa to analyze beyond a simple matching search: It provides results in new shapes and varieties, across diverse data types, and it doesn't even require an operator to know the relationship or structure they are looking for in order to tease relationships out. These capabilities augment researchers and users, helping them perform their jobs better and stimulating new discoveries.

Cloud vs. On-Premises

Importantly, data-driven deep learning algorithms are even more dependent on computational infrastructure (actual hardware) than previous machine learning approaches. A critical element of this infrastructure for deep learning is the graphics processing unit (GPU), and large numbers of GPUs must run for long periods.

“We need to train algorithms on very large amounts of data, thousands to millions of images and/or documents, and those training steps can take many hours,” said Christopher Bouton, founder and CEO of Vyasa.

Early on, Vyasa Analytics relied solely on the cloud for GPU capabilities. However, this strategy proved a drag on operations.

“If you do everything on cloud instances, then that can get very costly very quickly because those GPU instances on cloud are very expensive,” Bouton said. “Whereas, if you own some of your own GPU infrastructure, you actually save money.”

Bouton, a successful entrepreneur with a Ph.D. in Neuroscience, did the math.

“The modeling shows that, within six to eight months, you save the full cost of your own hardware — even for machines like the NumberSmasher® or DGX — just by savings in cloud costs,” he said. “It’s a valuable part of the strategy for a deep learning company to own their own hardware.”
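Bouton's six-to-eight-month figure can be reproduced with back-of-the-envelope arithmetic. The prices below are illustrative assumptions (they are not figures from Vyasa, Microway, or any cloud vendor), but they show the shape of the calculation: divide the purchase price by the monthly cloud bill it replaces.

```python
# Back-of-the-envelope break-even estimate for owning GPU hardware
# versus renting cloud GPU instances. All prices are illustrative
# assumptions, not figures from Vyasa or any vendor.

hardware_cost = 120_000.0       # assumed purchase price of an 8-GPU server, USD
cloud_rate_per_gpu_hour = 3.0   # assumed on-demand price per GPU-hour, USD
gpus = 8                        # GPUs kept busy by training workloads
hours_per_month = 24 * 30       # near-continuous utilization

monthly_cloud_cost = gpus * cloud_rate_per_gpu_hour * hours_per_month
breakeven_months = hardware_cost / monthly_cloud_cost

print(f"Cloud cost per month: ${monthly_cloud_cost:,.0f}")
print(f"Break-even after {breakeven_months:.1f} months")
```

Under these assumptions the cloud bill runs about $17,000 per month and the hardware pays for itself in roughly seven months, consistent with the six-to-eight-month window Bouton describes. The estimate is sensitive to utilization: hardware that sits idle takes proportionally longer to break even.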

Building the Ideal Hardware Solution

Vyasa’s decision to deploy two machines — an NVIDIA® DGX-1™ and the Microway NumberSmasher Tesla GPU Server — came down to the company’s requirements for advanced GPU architectures. “NVIDIA is really one of the leaders in the space, and Microway has a strong relationship with NVIDIA,” Bouton said.

Each machine offers the power of NVIDIA Tesla® V100 GPUs.

“We call the DGX ‘beast mode,’” Bouton said. “It’s the machine that we use for really heavy number-crunching, and it’s a really wonderful machine.”

The DGX also has NVIDIA NVLink™ Technology, which can effectively connect the GPUs together for large-scale and parallel jobs.

Along with the horsepower of the DGX-1, a smaller NumberSmasher Tesla GPU server deployment contributes valuable flexibility to the operation.

“The NumberSmasher is another wonderful machine because it’s capable of being used in beast mode, or we can segment it out into multiple instances that we use for research and development at Vyasa,” Bouton said.

New Projects, New Products, New Industries

Backed by these computing powerhouses, Vyasa Analytics has been free to bring increasingly expansive products to market. “People are interested in identifying what’s novel, what’s new,” Bouton said.

That interest guides Vyasa’s projects. The company, for example, analyzes the entirety of the PubMed database and offers a product that surfaces new research trends before they are named.

“About 4,000 to 5,000 papers are published every day,” Bouton said, “and so these models can be continually updated on a daily basis to identify the most recent thing that people are talking about in the context of scientific findings, allowing our clients to stay on the cutting edge of the next trends in scientific informatics and scientific research.”

Those thousands of papers are added to roughly 70 million in the database already. Vyasa analyzes the entire data set daily.

Vyasa’s capabilities provide value for other markets as well, including the legal and competitive intelligence spaces, by analyzing anything from patent filings to newsfeeds to troves of images. The new Microway systems also add to Vyasa’s own competitive edge as a critical asset for research and development on these products.

“These systems have enabled us to branch out into a number of R&D [research and development] areas that were really critical for us to be able to innovate and build out new types of deep learning approaches,” Bouton said.

Growth, in Collaboration with an HPC Provider

Of course, these systems represent significant investments, even if their use proves cost-neutral within a matter of months. Microway worked with Vyasa to find the best solution for their application and their business, enabling a stepwise process of infrastructure deployments and effective investment. The Microway NumberSmasher Server was delivered first, acting as a development platform to prove out the potential of many new projects. Months later, and after receiving a $1.8 million loan from MassDevelopment’s Emerging Technology Fund, Vyasa added the more powerful DGX-1 system.

Additionally, beyond simply building the two machines, Microway supported Vyasa from specification through rigorous device testing.

“They helped us spec the systems and architect the systems, and that’s really important for a machine of this size and complexity,” Bouton said.

The collaborative process ensured Vyasa matched the right architectures to the right workload. Microway’s emphasis on ease of use was an additional benefit for the early-stage startup.

“We were able to architect them so that, literally, it’s almost plug-and-play,” Bouton said.

Each new system was up and running within a day of delivery.

Vyasa’s strategy of owned infrastructure complemented by cloud instances figures largely in the organization’s overall plan for growth.

“It’s very clear to me that a hybrid architecture, which is a combination of hardware that we own alongside the use of cloud infrastructure, is an essential part of our cost basis,” Bouton said. “I could see us purchasing multiple systems per year for as long as we’re in operation.”

“As a company working in the deep learning space, we see Microway and NVIDIA as key partners in our ability to build innovative novel deep learning algorithms for a wide range of content types,” Bouton said.