Latest

Blogs

LeapMind Bringing Deep Learning to Edge Devices

Although it may feel like the world has ground to a halt, progress in AI and deep learning continues. Deep learning has driven major gains in classification accuracy in fields like image and audio processing, among many others. The computations these models require demand significant memory and processing power, making them difficult to execute on edge devices. With that in mind, Tokyo-based LeapMind Incorporated recently unveiled Efficiera, which the company describes as an ultra-low-power AI inference accelerator IP for building cost-effective, low-power edge devices. Efficiera is designed specifically for inference processing in the convolutional neural networks (CNNs) typically used for image and video recognition, and functions as a circuit in field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC) devices.

Inference processing has traditionally maximized the power and space efficiency of convolution operations through advanced semiconductor manufacturing processes or specialized cell libraries. Efficiera instead relies on LeapMind’s core proprietary technology, “extremely low-bit quantization,” to increase energy efficiency, enhance performance, and reduce the chip’s footprint. Numerical formats with wide bit widths (16 or 32 bits) typically improve inference accuracy – unfortunately, they also require more power, more processing time, and more area (i.e., a larger circuit). Reducing the bit width to 1 or 2 bits significantly reduces the amount of data transferred, and therefore the power required for convolutional processing. The number of calculation cycles is also reduced, improving calculation performance on a per-cycle basis. Minimizing the calculation logic likewise shrinks the silicon area while maintaining high-level performance, reducing the chip’s area per computing unit.
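The idea behind extreme quantization can be sketched in a few lines. Below is a generic 1-bit (binary) weight quantization in Python with NumPy, in the style of the published BinaryConnect/XNOR-Net literature; it is an illustrative sketch, not LeapMind’s proprietary scheme, whose details are not public:

```python
import numpy as np

def binarize(w):
    """1-bit quantization: replace each float weight with its sign,
    plus one shared scale alpha = mean(|w|), which minimizes the
    L2 reconstruction error for a per-tensor scale.

    Returns (q, alpha), where q holds only the values -1.0 and +1.0
    and alpha * q approximates the original tensor w.
    """
    alpha = float(np.abs(w).mean())
    q = np.where(w >= 0, 1.0, -1.0)
    return q, alpha

# A float32 weight tensor shrinks 32x when stored as 1-bit signs
# (plus a single float32 scale per tensor).
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q, alpha = binarize(w)
bits_fp32 = w.size * 32
bits_1bit = w.size * 1
print(bits_fp32 // bits_1bit)  # -> 32
```

With both weights and activations quantized to 1 bit, each multiply-accumulate in a convolution reduces to XNOR and popcount operations, which is what lets the calculation logic, and hence the silicon area, shrink so dramatically.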


Efficiera is designed to let developers add deep learning capabilities to a variety of edge devices with power consumption and cost limitations – everything from industrial machinery and broadcasting equipment to household appliances and surveillance cameras. LeapMind envisions the chip being used for hazard proximity detection of the kind found in vehicles with automated driving capabilities or in large construction equipment, as well as for improving video streaming quality in low-light conditions and suppressing noise from image codecs. Efficiera can also convert low-resolution video into higher-resolution images or video for display on edge devices like tablets.

As AI and deep learning technology gets smaller and more efficient, the set of devices that can support deep learning capabilities keeps growing. Imagine smartphones or ordinary security cameras with facial recognition – soon reality might resemble Minority Report, where people’s movements are tracked so closely that stationary digital signage targets particular customers as they walk by. Hazard and collision detection are becoming standard features in newer automobiles – what if bicyclists could enjoy similar safety features with a small camera on their handlebars or helmets? As devices get smaller, lighter, and more energy efficient, AI and deep learning IP needs to follow suit. We’ll follow LeapMind’s trajectory in this arena to see if their AI-miniaturization technology makes a big splash.

Efficiera is expected to become available in Fall 2020.