The world’s machine vision experts are coming together in Stuttgart this month for Vision, which is arguably the industry’s most important trade fair. Visitors can expect to see the latest machine vision products, technologies and trends including hot topics like embedded vision, hyperspectral imaging and deep learning.

For Advantech, this event provides a unique opportunity to promote its latest technologies (Hall 8, Booth 8D12). It is also a chance to demonstrate how artificial intelligence (AI) can move beyond static rule-based programming, replacing it with inference systems that learn dynamically and make smarter decisions. After all, today’s advanced AI, combined with IoT technology, is redefining entire industries and introducing countless smart applications in its wake.

An important trend is the shift of AI inference systems toward the edge, closer to sensors and control elements, thereby reducing latency and improving response. Demand for edge AI hardware of all types, from wearables to embedded systems, is growing fast. One forecast published by Data Bridge Market Research in April 2019, “Global Edge AI Hardware Market – Industry Trends and Forecast to 2026”, sees unit growth at a 20.3% CAGR through 2026, reaching more than 2.2 billion units.

The big challenge for edge AI inference platforms is ingesting high-bandwidth data and making decisions in real time, within tight space and power budgets shared by AI and control algorithms. Three powerful AI application development pillars from NVIDIA are helping Advantech make edge AI inference solutions a reality, with a positive impact on the bottom line of forward-looking companies.

What is AI inference?

There are two types of AI-enabled systems: those for training, and those for inference. Training systems examine data sets and outcomes, looking to create a decision-making algorithm. For large data sets, training systems have the luxury of scaling, using servers, cloud computing resources, or in extreme cases supercomputers. They also can afford days or weeks to analyse data.

The algorithm discovered in training is handed off to an AI inference system for use with real-world, real-time data. While less compute-intensive than training, inference requires efficient AI acceleration to handle decisions quickly, keeping pace with incoming data. One popular option for acceleration is to use GPU cores, thanks to familiar programming tools, high performance, and a strong ecosystem.
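To make that hand-off concrete, here is a minimal sketch, assuming PyTorch and a CUDA-capable device, of loading a model produced elsewhere by a training system and running it as a GPU-accelerated inference step. The model file and input shape are placeholders, not part of any Advantech or NVIDIA product.

```python
# Minimal sketch: load a trained model and run GPU-accelerated inference
# on incoming data. "model.pt" and the input shape are illustrative only.
import torch

# A model produced by a separate training system, exported to TorchScript.
model = torch.jit.load("model.pt")
model.eval()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# One batch of real-time data, e.g. a camera frame resized to 224x224 RGB.
frame = torch.rand(1, 3, 224, 224, device=device)

with torch.no_grad():          # inference only: no gradients, less memory
    scores = model(frame)      # forward pass on the GPU cores
    decision = scores.argmax(dim=1)

print(decision.item())
```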

Traditionally, AI inference systems have been created on server-class platforms by adding a GPU card in a PCIe expansion slot. Most AI inference still happens on AI-enabled servers or cloud computers, and some applications demand server-class platforms for AI acceleration performance. Where latency and response are concerns, lower power embedded systems can scale AI inference to the edge.

Advantages of Edge AI Inference Architecture

Edge computing offers a big advantage in distributed architectures handling volumes of real-time data. Moving all that data into the cloud or a server for analysis creates networking and storage challenges, impacting both bandwidth and latency. Localized processing closer to data sources, such as pre-processing with AI, can reduce these bottlenecks, lowering networking and storage costs.

There are other edge computing benefits. Personally identifiable information can be anonymized, improving privacy. Security zones reduce chances of a system-wide breach. Local algorithms enforce real-time determinism, keeping systems under control, and many false alarms or triggers can be eliminated early in the workflow.

Extending edge computing with AI inference adds more benefits. Edge AI inference applications scale efficiently by adding smaller platforms. Any improvements gained by inference on one edge node can be uploaded and deployed across an entire system of nodes. If an edge AI inference platform can accelerate the full application stack, with data ingestion, inference, localized control, connectivity, and more, it creates compelling possibilities for system architects.

Flexibility of CPU+GPU Engines for the Edge

NVIDIA developed the system-on-chip (SoC) architecture used in the NVIDIA Jetson system-on-module (SoM). As applications for them grew, these small, low-power SoCs evolved with faster Arm® CPU cores, advanced NVIDIA GPU cores, and more dedicated processor cores for computer vision, multimedia processing, and deep learning inference. These cores provide enough added processing power for end-to-end applications running on a compact SoM.

AI inference can be implemented in many ways. Single-chip AI inference engines are available, most using 8-bit fixed-point math and optimized for a particular machine learning framework and AI model. If that framework and fixed-point math fit the application, these engines may do the job.

Many applications call for flexible CPU+GPU engines like those on Jetson modules. With AI models constantly changing, accuracy, a choice of frameworks, and processing headroom are important. Inference might need 32-bit floating-point rather than 8-bit fixed-point math, and precision experiments are easy on a CPU+GPU engine, as the sketch below illustrates. If research suggests an alternative inference algorithm, the GPU cores can be reprogrammed easily for a new framework or model. As control algorithms become more demanding, a scalable multicore CPU handles the increased workload.
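As a hedged illustration of such a precision experiment, the sketch below (assuming PyTorch, torchvision and a CUDA GPU; the choice of network is arbitrary) runs the same model in 32-bit and 16-bit floating point and measures how far the outputs diverge.

```python
# Illustrative precision experiment: compare FP32 and FP16 inference
# results for the same network on a CUDA GPU.
import copy
import torch
import torchvision.models as models

device = torch.device("cuda")
x = torch.rand(1, 3, 224, 224, device=device)   # placeholder input batch

net_fp32 = models.resnet18(weights=None).to(device).eval()
net_fp16 = copy.deepcopy(net_fp32).half()        # same weights, 16-bit precision

with torch.no_grad():
    out_fp32 = net_fp32(x)
    out_fp16 = net_fp16(x.half()).float()

# Maximum deviation introduced by the reduced-precision run.
print("max abs difference:", (out_fp32 - out_fp16).abs().max().item())
```

If the deviation is acceptable for the application, the lower-precision path frees GPU cycles and memory for other work; if not, the same hardware can simply keep running in full precision.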

Pillar 1: Scalable System-on-Modules for Edge AI

From entry-level to server-class performance, NVIDIA Jetson modules are the first of three pillars for edge AI inference. Sharing the same code base, Jetson modules vary slightly in size and pinout, with features like memory, eMMC storage, video encode/decode, Ethernet, display interfaces, and more.

A complete comparison of NVIDIA Jetson module features can be found at: nvidia.com/en-us/autonomous-machines/embedded-systems/

Pillar 2: SDK for Edge AI Inference Applications

The second pillar converts a large base of NVIDIA CUDA® developers into AI inference developers, with a software stack running on any NVIDIA Jetson module for “develop once, deploy anywhere”.

The NVIDIA JetPack SDK runs on top of L4T (Linux for Tegra) with an LTS Linux kernel. It includes the cuDNN and TensorRT acceleration libraries, as well as scientific libraries, multimedia APIs, and the VPI and OpenCV computer vision libraries.
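As one illustration of how these pieces fit together, the sketch below builds a TensorRT engine from an ONNX model using the TensorRT Python API bundled with JetPack. Exact calls vary between TensorRT releases, and "model.onnx" is a placeholder for a network exported from your training framework.

```python
# Sketch only: build a serialized TensorRT engine from an ONNX model.
# API details differ slightly across TensorRT versions.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:             # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)           # use the GPU's FP16 path

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)                       # deployable inference engine
```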

JetPack also includes an NVIDIA container runtime with Docker integration, allowing edge devices to be deployed in cloud-native workflows. Containers are available for TensorFlow, PyTorch, JupyterLab and other machine learning frameworks, as well as data science frameworks such as scikit-learn, SciPy and pandas, all pre-installed in a Python environment.
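A quick, purely illustrative sanity check, assuming one of these pre-built containers is running on a Jetson device, is to confirm that the bundled frameworks import and that PyTorch can see the GPU:

```python
# Sanity check inside a JetPack ML container (package versions depend
# on the container tag that was pulled).
import torch
import sklearn
import scipy
import pandas

print("PyTorch", torch.__version__, "- CUDA available:", torch.cuda.is_available())
print("scikit-learn", sklearn.__version__)
print("SciPy", scipy.__version__, "/ pandas", pandas.__version__)
```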

Developer tools include a range of debugging and system profiling utilities, including CPU and GPU tracing and optimization. Developers can quickly move applications from existing rule-based programming into the Jetson environment, adding AI inference alongside control.

For a complete description of NVIDIA Jetson software features, visit: developer.nvidia.com/embedded/develop/software

Pillar 3: Ecosystem Add-ons for Complete Solutions

The third pillar is an ecosystem of machine vision cameras, sensors, software, tools, and systems ready for AI-enabled applications. Over 100 partners work within the NVIDIA Jetson environment, with qualified compatibility for easy integration. For example, several third parties work on advanced sensors such as lidar and stereo cameras, helping robotics platforms perceive their surroundings.

Systems for Mission-Critical Edge AI Inference

Many edge AI inference applications are deemed mission-critical, calling for small form factor computers with extended operating specifications. Advantech created the compact MIC-700AI Series systems, targeting two different scenarios with a full range of performance options.

The first scenario is the classic industrial computer, with a rugged form factor installed anywhere near equipment requiring real-time data capture and control processing. These scenarios often have little or no forced air cooling, only DC power available, and DIN rail mounting for protection against vibration.

For this, the MIC-700AI Series brings AI inference to the edge. Designed around the low-power NVIDIA Jetson Nano with advanced thermal engineering, the fanless MIC-710AI operates on 24 VDC power at temperatures from -10°C to +60°C. With an M.2 SSD, it handles a 3G, 5 to 500 Hz vibration profile.

The MIC-710AI features two GigE ports, one HDMI port, two external USB ports, and serial and digital I/O. For expansion, Advantech iDoor modules are mPCIe cards with cabled I/O panels that fit a cutout in the chassis. iDoor modules handle Fieldbus, wireless, and other I/O; one example is the PCM-24S2WF iDoor module, which adds Wi-Fi and Bluetooth.

Longevity and revision control

With extended availability of all components, Advantech offers a 5-year lifecycle on all MIC-700AI Series platforms. Additionally, system revision notification is standard, with full revision control services available.

The second scenario involves machine vision and image classification, where cameras look for objects or conditions. Systems often use Power over Ethernet (PoE) to simplify wiring.

At the high end, built around the NVIDIA Jetson AGX Xavier, the MIC-730IVA provides eight PoE channels for connecting industrial video cameras. It also provides two bays for 3.5” hard drives, enabling direct-to-disk video recording. The system runs from 0°C to 50°C on AC power.
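For illustration only, the sketch below uses OpenCV to ingest a single network camera stream and record it directly to disk; the RTSP URL, resolution, frame rate and storage path are placeholders, and a real MIC-730IVA deployment would handle up to eight such streams with inference in the loop.

```python
# Illustrative only: capture one network camera stream and record it to a
# local drive while leaving room for per-frame AI inference.
import cv2

stream = cv2.VideoCapture("rtsp://192.168.1.10/stream1")   # PoE camera (placeholder URL)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("/data/cam1.mp4", fourcc, 30.0, (1920, 1080))  # match camera format

while True:
    ok, frame = stream.read()
    if not ok:
        break
    writer.write(frame)          # direct-to-disk recording
    # ...run AI inference on `frame` here (classification, detection, etc.)...

stream.release()
writer.release()
```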

All MIC-700AI Series systems run the same software, enabling developers to move up or down the performance range and get applications to market faster.

The latest MIC-710AIL features a Jetson Nano or Jetson Xavier NX in an ultra-compact enclosure, also with iDoor module expansion. These systems bring AI inference to the edge in reliable, durable platforms ready for a wide range of applications including manufacturing, material handling, robotics, smart agriculture, smart cities, smart healthcare, smart monitoring, transportation, and more.

Visitors to the show can learn more about Advantech’s Edge AI systems, or simply access this webpage for more details: https://www.advantech.com/products/edge-ai-system/sub_9140b94e-bcfa-4aa4-8df2-1145026ad613