
Dedicated Neural Network Acceleration Processor for Edge Intelligence

Technology Overview

Deep learning has triggered an architectural redesign of traditional microprocessors: a new class of dedicated processors enables hardware optimisation and acceleration of neural network computations. There is also a growing trend towards designing such accelerators specifically for edge inference. This subclass of devices consumes far less power, making it suitable for applications where long battery life is desirable, such as mobile devices, wearables, and IoT nodes.

This technology offer consists of a dedicated chip-level neural network acceleration processor for edge intelligence, together with accompanying visual recognition software. Real-time inference and analysis can be performed on the device itself, without the need to connect to the cloud. As a result, neural network solutions for edge applications become faster and more power-efficient, while reducing cost and the processing burden on the cloud.
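For illustration only, the sketch below shows the general shape of such on-device inference. The processor's own SDK is not described in this offer, so TensorFlow Lite is used purely as a stand-in, and the model file name is a placeholder; the point is simply that a camera frame is processed locally, with no cloud round trip.

    # Illustrative sketch only: TensorFlow Lite stands in for the vendor's SDK.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    # Hypothetical quantised visual-recognition model bundled with the device.
    interpreter = Interpreter(model_path="visual_recognition_int8.tflite")
    interpreter.allocate_tensors()

    input_info = interpreter.get_input_details()[0]
    output_info = interpreter.get_output_details()[0]

    # A single camera frame, shaped to match the model's expected input.
    frame = np.zeros(input_info["shape"], dtype=input_info["dtype"])

    # Inference runs entirely on the edge device: no cloud round trip.
    interpreter.set_tensor(input_info["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_info["index"])
    print("Top class:", int(np.argmax(scores)))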

Technology Features & Specifications

• Integrated hardware and software solution, comprising an ultra-low-power dedicated edge AI processor and visual recognition software.

• Edge AI: real-time inference on edge devices, running on the dedicated neural network acceleration processor without the need to connect to the cloud.

• Reconfigurable architecture: the hardware can be reconfigured for different neural networks to optimise their computations.

Potential Applications

• Smart homes
• Smart phones
• Smart surveillance
• Drones and IoT nodes

Market Trends and Opportunities

According to a ResearchandMarkets report, the mobile AI market is expected to grow from USD 5.11 billion in 2018 to USD 17.83 billion by 2023, at a CAGR of 28.41% over the forecast period. Increasing demand for AI-capable processors in mobile devices, the rise of cognitive computing, and a growing number of AI applications are among the major factors driving this growth.
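As a consistency check, compounding the 2018 base at the stated CAGR over the five-year forecast period reproduces the 2023 estimate:

    USD 5.11 billion × (1 + 0.2841)^5 ≈ USD 5.11 billion × 3.49 ≈ USD 17.8 billion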

Customer Benefits

• Accelerated neural network computations in optimised, reconfigurable silicon, enhancing application performance.

• Dedicated, power-efficient edge intelligence chip that enables innovation in edge device applications.

• Reduced dependency on cloud processing, resulting in cost savings.
