What is an AI Accelerator?
An Artificial Intelligence (AI) accelerator is a system or processor that provides hardware acceleration for AI applications. AI accelerators are typically used for compute-intensive workloads such as machine learning, artificial neural networks, machine vision, and natural language processing.
Data-intensive applications like robotics and sensor-heavy ones like autonomous vehicles typically require massive parallel processing, which the manycore architecture of AI accelerators provides.
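To make the idea of data parallelism concrete, here is a minimal, illustrative sketch (not from the original article) using JAX, assuming it is installed. The same batched operation runs unchanged on a CPU, GPU, or TPU; on an accelerator, its many cores process rows of the batch in parallel.

```python
# Illustrative sketch only: a data-parallel layer of the kind AI accelerators
# are built to execute. Toy shapes; assumes JAX is installed.
import jax
import jax.numpy as jnp

inputs = jnp.ones((1024, 512))   # a batch of 1024 input vectors
weights = jnp.ones((512, 256))   # one weight matrix shared across the batch

@jax.jit  # compiled via XLA for whichever backend is available (CPU, GPU, or TPU)
def layer(x, w):
    # Each row of the batch is transformed independently by the same weights,
    # so the work maps naturally onto an accelerator's many parallel cores.
    return jax.nn.relu(x @ w)

print(layer(inputs, weights).shape)  # (1024, 256)
```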
History and Early Examples of AI Accelerator Usage
Hardware acceleration is not a new field. Since as early as the 1990s, coprocessors have worked alongside the CPU to offload specific computational tasks. Examples of these task-specific accelerators include video graphics cards, GPUs, sound cards, and digital signal processors (DSPs).
Such hardware was often used for compute-intensive tasks such as optical character recognition (OCR), audio signal processing, and video processing. With the rise of deep neural networks and other compute-heavy workloads, AI accelerators emerged as a new class of acceleration hardware.
Recent examples of AI accelerators include Google’s Tensor Processing Unit (TPU), designed to accelerate workloads built with its TensorFlow software library, and Mobileye’s EyeQ, whose EyeQ3 chip was previously used in Tesla electric cars to power the Autopilot function. The technology was considered valuable enough that Intel acquired Mobileye in 2017 for a total of $15 billion, roughly 30 times the company’s projected annual earnings for that year.
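As a hedged illustration (not part of the original article), the snippet below shows how a framework such as JAX exposes whatever accelerator is present: on a Cloud TPU host, jax.devices() reports TPU devices; otherwise it falls back to GPU or CPU, and the same compiled computation runs on whichever backend is found.

```python
# Illustrative sketch; assumes JAX is installed.
import jax
import jax.numpy as jnp

# On a Cloud TPU VM this lists TPU devices, on a GPU machine it lists
# GPU devices, and on a plain laptop it lists only the CPU.
print(jax.devices())

# The jitted function is compiled through XLA for the first available backend.
x = jnp.linspace(0.0, 1.0, 8)
print(jax.jit(jnp.sin)(x))
```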
New Kids on the AI Accelerator Block
There is currently no dominant technology in this space the way Intel’s x86 CPUs dominated the world of personal computing. Various hardware architectures co-exist, including ASICs (application-specific integrated circuits), FPGAs (field-programmable gate arrays), and NNPUs (neural network processing units), largely because each is tailored to highly specific workloads, such as edge computing applications like computer vision.
New AI accelerator hardware architectures are emerging on a regular basis. AI visionaries such as NVIDIA CEO Jensen Huang liken this to the Cambrian explosion roughly 500 million years ago, when multicellular organisms rapidly evolved into a myriad of forms that filled every ecological niche.
AI accelerators are expected to diversify in a similar way because they can cater to every tier of the computing landscape: cloud storage, high-performance computing, and distributed cloud-to-edge infrastructure.