Embedded AI empowers predictive maintenance and anomaly detection

With its flexible and scalable e-AI concept, Renesas offers a future-proof, real-time, low-power artificial intelligence processing solution that is unique in the industry and addresses the specific needs of artificial intelligence in embedded devices at the endpoint.


By Knut Dettmer, Renesas Electronics


The evolution of artificial intelligence (AI) technologies such as machine learning and deep learning has been remarkable in recent years. The range of applications is rapidly expanding from cloud-centric deployments, mainly in the IT field, to the embedded systems market. AI enables embedded devices to react dynamically to changes in the operating environment and to adapt to the constantly changing state and condition of a device or machine.

Figure 1. Motivation for embedded AI in the endpoint

 

The trend to move AI processing from centralized cloud platforms to the endpoint is motivated by several factors. First, bandwidth constraints limit the ability to deliver the required data from the observation point to a cloud processing unit for analysis. For many devices and machines, a cloud connection may not be available at all, whether because of infrastructure restrictions or data privacy concerns; yet the benefits of AI for improving product performance and overall equipment efficiency remain just as desirable. Second, even where cloud connectivity with sufficient bandwidth is available, many AI applications must infer from data in real time, within milliseconds or microseconds. Today's connectivity technology cannot achieve this: a cloud connection is unreliable, non-deterministic in terms of latency, and adds delays of several tens of milliseconds or more. Third, data privacy itself is a motivation to process analytics at the embedded endpoint rather than in the cloud. Many industry segments regard the processed data as proprietary and are reluctant to share it outside their own network. Healthcare devices, for example, collect personal health data whose distribution must be highly restricted. In industrial automation, the analyzed data reveals how processes are controlled in a factory and is considered core know-how of the production company. Finally, data protection laws place significant restrictions on how end-user data can be stored and processed.

Figure 2. Learning vs Inference

 

Other advantages of working at the endpoint include the possibility to create hierarchical, networked AI systems that are robust and scalable while being optimized for an individual use case in terms of performance and power consumption. Looking at these issues, the need for efficient AI inference at the embedded endpoint becomes obvious: it demands efficient endpoints that can infer, pre-process and filter data in real time, optimizing device performance and analyzing the application-specific data points directly at the endpoint while avoiding the constraints described above.

A common goal for AI is to improve overall equipment efficiency, that is, to maximize device availability, performance and output. Using e-AI methods from Renesas, one can implement predictive maintenance measures that continuously analyze the state and condition of a device and indicate necessary maintenance before the device's performance degrades, avoiding unplanned downtime at the same time. Through these measurement analytics, the maintenance schedule can be optimized individually for a specific device or machine, as illustrated in the sketch below. The result is a dynamic, individual maintenance plan that is much more cost-efficient than operating on a static maintenance plan.
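To make the idea concrete, the following minimal Python sketch shows one way such a condition monitor could work in principle: a healthy baseline is learned from reference vibration windows, and each new window is flagged when its RMS level drifts too far from that baseline. The data shapes, the rms feature and the needs_maintenance threshold rule are purely illustrative assumptions and are not part of the Renesas e-AI tooling.

import numpy as np

def rms(window):
    """Root-mean-square level of one window of vibration samples."""
    return np.sqrt(np.mean(np.square(window)))

# Learn a healthy baseline from reference recordings (placeholder data shape:
# 200 windows of 256 samples each).
healthy = np.random.default_rng(0).normal(0.0, 1.0, size=(200, 256))
baseline = np.array([rms(w) for w in healthy])
mean, std = baseline.mean(), baseline.std()

def needs_maintenance(window, k=4.0):
    """Flag the window if its RMS drifts more than k standard deviations
    away from the healthy baseline."""
    return abs(rms(window) - mean) > k * std

# At the endpoint, each new sensor window is scored as it arrives.
new_window = np.random.default_rng(1).normal(0.0, 1.5, size=256)
print(needs_maintenance(new_window))

In a real deployment, the feature extraction and thresholding, or a small neural network replacing them, would run continuously on the endpoint so that a maintenance request is raised before performance degrades.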

Figure 3. Embedding neural network processing onto Renesas devices

 

It is important to understand that embedded AI processing typically means inference processing. In AI terminology, inference is the process in which captured data is analyzed by a pre-trained AI model. In embedded applications, applying a pre-trained model to a specific task is the typical use case. In contrast, creating a model, called training (or learning), requires a completely different scale of processing performance. Training is therefore typically done on high-performance processing platforms, often provided by cloud services, and depending on model complexity can take minutes, hours, weeks or even months. e-AI processing does not normally attempt to tackle these model-creation tasks; instead, it helps to improve the performance of a device using pre-trained models. Taking advantage of the rapidly increasing volume of data received from sensors, e-AI can ensure that a device operates in its ideal state, whether it is an industrial drive, a washing machine or a footstep detector. This is where Renesas focuses: endpoint intelligence. The contrast between the two phases is sketched below.
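The following TensorFlow/Keras sketch illustrates the split in principle: the compute-heavy training step produces a model file, and the endpoint side only loads that pre-trained model and scores new samples. The tiny dense network, the random placeholder data and the file name anomaly_model.keras are assumptions made purely for illustration, not part of any Renesas tool flow.

import numpy as np
import tensorflow as tf

# --- Training (cloud / workstation): compute-heavy, done once or periodically ---
x_train = np.random.rand(1000, 8).astype("float32")             # placeholder sensor features
y_train = (x_train.sum(axis=1) > 4.0).astype("float32")         # placeholder labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)
model.save("anomaly_model.keras")        # hand the trained model to the deployment flow

# --- Inference (embedded endpoint): lightweight, applied to each new sample ---
trained = tf.keras.models.load_model("anomaly_model.keras")
new_sample = np.random.rand(1, 8).astype("float32")
score = trained.predict(new_sample, verbose=0)
print("anomaly score:", float(score))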

As a leading semiconductor manufacturing company, Renesas has implemented such mechanisms in its own factories. The technologies developed from these activities put Renesas in a position to share them with customers and enable them to enjoy the same benefits. More concretely, the company has implemented anomaly detection and predictive maintenance algorithms based on neural network architectures to optimize the performance of plasma etching machines in its Naka factory. The results were so convincing that further proofs of concept were implemented with a variety of customers and partners. For example, the GE Healthcare Japan Hino factory is using an AI unit based on Renesas e-AI technology to improve its productivity. Renesas partner Advantech provides this AI unit as an easy retrofit option for adding e-AI technology to existing machines or devices.

Figure 4. Renesas e-AI capability index

 

When moving AI processing to the endpoint, power efficiency becomes of prime importance. An off-the-shelf graphics card or smartphone accelerator exceeds the power and size budgets of industrial applications many times over. In addition, solutions today need a platform approach and must scale with the application requirements: some basic algorithms can be processed purely in software on a microcontroller, others need basic hardware accelerators, and still others require significant hardware acceleration to meet the algorithm's performance targets.

To address this variety of performance targets, Renesas has developed a dynamically reconfigurable processor (DRP) module that can flexibly assign resources to accelerate e-AI tasks. The DRP uses highly parallel processing to meet the demands of modern AI algorithms. Because its design is optimized for low power consumption, it can support multiple use cases with customized algorithms and rapid inference, which fits most embedded endpoint requirements. The DRP delivers high performance at low power, making it ideally suited to embedded applications, and since it is dynamically reconfigurable, use cases and algorithms can be adapted on the same hardware.

The good news is that whatever level of acceleration is utilized, the software tools and interfaces stay the same. Renesas does not provide its own learning framework but provides tools that translate neural network models into a format that can be executed on its MCUs and MPUs. The translator takes as input neural network models from common training frameworks such as Google TensorFlow, which are typically used to train the model. The inference is then executed on the Renesas MCU/MPU simply by embedding the output of the translator tools into the respective program; it is as easy as that. A simplified sketch of this handoff is shown below. The initial e-AI/DRP roadmap shows four performance classes, each class offering ten times the neural network performance of the previous class. This unique positioning for endpoint inference, in combination with Renesas MCU/MPU technology, gives users an unmatched power/performance ratio for AI processing.
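As a rough illustration of this handoff, the following sketch converts the Keras model from the previous example into a compact TensorFlow Lite artifact, which is one common deployment-oriented format for embedded targets. The actual Renesas e-AI translator, the input formats it accepts and the output it generates are not shown here; the file names and the choice of TensorFlow Lite are assumptions for illustration only.

import tensorflow as tf

# Load the model trained earlier (see the previous sketch).
model = tf.keras.models.load_model("anomaly_model.keras")

# Convert to TensorFlow Lite as one example of a compact, deployment-oriented format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional size/latency optimization
tflite_model = converter.convert()

# The resulting artifact would be embedded into the endpoint application by the
# vendor-specific tool chain.
with open("anomaly_model.tflite", "wb") as f:
    f.write(tflite_model)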

