A Framework for Deep Learning Performance

PLASTER, the seven major AI challenges

Published June 2018


Machine learning (ML) is a key category in artificial intelligence (AI). Hardware and software advances in deep learning (DL), a type of ML, appear to be catalysts for the early stages of a phenomenal AI growth trend. The challenge at this phase of adoption is twofold: deploying deep learning solutions is a complex proposition, and the technology itself is a rapidly moving target. The industry needs a framework to address the opportunities and challenges associated with deep learning.

At the NVIDIA GPU Technology Conference (GTC) 2018, Jensen Huang, NVIDIA President and CEO, put forward the PLASTER framework to contextualize the key challenges of delivering AI-based services.

“PLASTER” encompasses seven major challenges for delivering AI-based services:
• Programmability
• Latency
• Accuracy
• Size of Model
• Throughput
• Energy Efficiency
• Rate of Learning

This paper explores each of these AI challenges in the context of NVIDIA’s DL solutions. PLASTER as a whole is greater than the sum of its parts: anyone developing and deploying AI-based services should weigh all of its elements to arrive at a complete view of deep learning performance. Addressing the challenges described in PLASTER matters for any DL solution, and it is especially useful for developing and delivering the inference engines that underpin AI-based services.