Vision systems are a key part of industrial automation, critical to maintaining quality control throughout the process. Simple automated processes expect objects to be in specific places, for example on a pallet or a conveyor belt. These processes use basic sensors, are relatively unintelligent and can fail when objects are missing or misaligned.
As robotic automation has evolved, these process lines now employ vision systems integrated at a system level to provide input and feedback. The vision system tells the robotic element whether an item is present and where it is on the process line.
The introduction of artificial intelligence (AI) into these integrated solutions is advancing their capability rapidly. Even simple vision systems can detect the orientation of objects, while more advanced systems can identify and classify an object. Object classification and orientation are an important part of automation, and system-level integration makes the vision system part of the closed control loop.
Inspection is a standalone process to identify defective products and is normally carried out by trained personnel tending the automated part of the process. Employing these personnel can be expensive, especially in continuous processes, where it may be necessary to have at least three trained operatives per process to cover every shift.
Narrow AI
Automated visual inspection is a complementary function to machine vision. The latest developments, using vision systems powered by AI, provide at-speed object inspection. With this AI integration, high-speed defect detection within an automated production and assembly environment is now possible.
Image processing takes place after each pixel is captured and fed into the main processing element. Like all AI systems, it must first be trained. Training AI for image recognition is often compute-intensive and, in the case of supervised learning, requires large datasets of images that have already been classified.
Training is a limiting factor for machine vision used as part of the control loop in production environments. The system must be able to identify different objects, their orientation and their position. For automated production, training these systems is time-consuming and expensive, but it does increase productivity.
Automated defect detection is a different proposition. OEMs require these systems to be easy to use and fast to train. They should also be cost-effective and easily repurposed for different tasks. Being able to train AI on unlabelled data, also known as unsupervised learning, overcomes part of the challenge; using a system that has been designed specifically for defect detection makes it affordable and easily deployed.
Dedicated defect detection using unsupervised learning is as simple as putting an object in the field of view of the image sensor. The AI system will study the object in question and learn to recognise its features. Once trained, the vision system can make relatively simple decisions about that object. For defect detection, for example, this would include any features that fall outside of “the norm,” as defined by the samples used to train the system.
It is important to note that, unlike other machine vision systems, it is not necessary for an AI-based defect detection system to identify what the inspected object is. It is not intended to provide object classification. For example, while it may not know the difference between a green apple and a red tomato, if it has been trained to detect defects in tomatoes, it will differentiate between one that is ripe and one that is bruised.
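The train-on-good-samples-only approach described above can be illustrated with a minimal sketch. The feature vectors below stand in for the embeddings a real AI vision system would extract from training images; the numbers and the simple distance-from-the-mean scoring are illustrative assumptions, not DVI's actual algorithm.

```python
import numpy as np

# Synthetic stand-in for features extracted from images of good samples.
# A real system would derive these embeddings from the training images.
rng = np.random.default_rng(0)
good_features = rng.normal(loc=1.0, scale=0.05, size=(50, 8))

# "Training": learn what the norm looks like from good samples only.
mean = good_features.mean(axis=0)

def anomaly_score(features):
    # Distance from the learned norm: the larger, the more anomalous.
    return float(np.linalg.norm(features - mean))

# A normal item sits near the norm; a defective one deviates from it.
normal_item = rng.normal(loc=1.0, scale=0.05, size=8)
defective_item = rng.normal(loc=1.5, scale=0.05, size=8)

assert anomaly_score(defective_item) > anomaly_score(normal_item)
```

Note that nothing in this scheme identifies what the object is; it only measures how far an inspected item departs from the good samples it was trained on.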
Unsupervised learning at the edge
The relatively simple nature of AI-enabled defect detection means training can take place at the edge, on the system's own highly capable hardware and software, at lower power and cost than cloud-based training.
As an example, a defect detection system using AI, called Defect Visual Inspection (DVI), has been developed by Avnet Silica and Deep Vision Consulting. It uses a system-on-module based on the NXP i.MX 8M Plus application processor, with the NXP i.MX 9 family as an option for lower power consumption.
An image sensor operating in the visible light part of the spectrum and with a standard optical lens can be used, but equally sensors operating in the infrared, ultraviolet or even X-ray part of the spectrum, or working at a microscopic level, can be deployed.
A system using a conventional image sensor requires a few relatively simple elements in the production environment: good, even lighting and a defined field of view in which the object is observed.
AI at the edge
Once trained on good examples of objects presented in different orientations, DVI can detect all the features of an object in its field of view. When inspecting a similar item on a production line, it compares the object's features against the samples used to train the system. An operator is not required to identify these features, as they are automatically extracted by the AI algorithm from the training images.
In operation, the system returns an anomaly score that measures the degree of defectiveness found in each inspected image. The only requirement is to define the threshold between good and bad, pass and fail, which sets the sensitivity of the inspection. The range of the anomaly score changes depending on the training images. For example, in the case of fast-moving consumer goods such as baked products, the anomaly score may span a different range from that used for medical supplies, such as hypodermic syringes. When training, the system returns a suggested threshold, which the operator can freely adjust to target a specific system sensitivity.
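The pass/fail decision around the anomaly score can be sketched as follows. The threshold value and score range here are illustrative assumptions, not figures from DVI; the point is that one operator-adjustable number controls the inspection's sensitivity.

```python
# Illustrative suggested threshold, as might be returned by training.
SUGGESTED_THRESHOLD = 0.35

def inspect(score, threshold=SUGGESTED_THRESHOLD):
    """Classify one inspected image from its anomaly score."""
    return "fail" if score > threshold else "pass"

assert inspect(0.20) == "pass"
assert inspect(0.50) == "fail"

# Lowering the threshold makes the inspection more sensitive:
# the same item that passed above now fails.
assert inspect(0.20, threshold=0.10) == "fail"
```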
The system can tag images of defective objects, showing where the defect has been detected. This can be useful for reworking high-value items or building operational data for improving processes.
Processing requirements
DVI is built on a Yocto distribution using a proprietary software library with a C++ interface and a Python API. The library, including training, is provided with an unlimited and perpetual licence.
A graphical user interface written in Python is included for evaluation purposes. The core library can be wrapped in an OEM's own application, designed to interface with the vision system, a custom HMI, and operational equipment such as PLCs, over SCADA or other protocols.
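The pattern of wrapping the core library in an OEM application typically looks like the sketch below: image in, anomaly score out, and a signal to operational equipment on failure. The inspector class, its `score` method and the reject callback are hypothetical names invented for illustration; DVI's actual Python API will differ.

```python
class FakeInspector:
    """Stand-in for the vision library's inference object (hypothetical)."""
    def score(self, image):
        # A real implementation would run AI inference on the image.
        return 0.8 if image.get("defect") else 0.1

def run_inspection(inspector, image, threshold, reject_item):
    """Score one image; signal the reject path (e.g. a PLC output) on failure."""
    score = inspector.score(image)
    if score > threshold:
        reject_item(image["id"])  # e.g. trigger a reject actuator via a PLC
        return False
    return True

rejected = []
ok = run_inspection(FakeInspector(), {"id": 7, "defect": True}, 0.5, rejected.append)
assert not ok and rejected == [7]
```

Keeping the decision logic in a small wrapper like this lets the OEM swap the HMI or the downstream protocol without touching the inspection code.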
The software application is partitioned across the resources of the i.MX 8M Plus. These include the device's quad Arm Cortex-A53 application processor cores and NXP's neural processing unit (NPU), which has been developed specifically to accelerate machine learning and AI applications. It can deliver 2.3 TOPS to accelerate the AI algorithm in the vision system.
The application code uses the heterogeneous resources of the processor, employing both the CPUs and the NPU when training the system and inspecting objects. Currently, the software framework does not use the graphics processor available on the i.MX 8M Plus, but it is possible to exploit this resource to accelerate the application.
Although the loading on the processor resources varies in operation, the NPU is loaded close to 100% to accelerate the system's AI features. The CPU cores run at around 50% utilisation, leaving capacity for other customer features to be added according to the production application. Even without using the graphics processor, training the system takes seconds to minutes; inspection requires only tens to hundreds of milliseconds.
In this way, defect detection using AI-enabled vision systems is complementary to an automated production environment, without requiring full integration into existing operational equipment. It can be added to existing manual and automated workflows.
Low-power and cost-optimised hardware platforms, coupled with ready-to-use application-specific software libraries, provide OEMs with easy access to the means to boost productivity.
Turnkey solutions provide all the gains without any of the development pains associated with using AI in an industrial environment.