The machine vision community agrees that artificial intelligence (AI) is a key enabler of the digital age. Self-learning algorithms have the potential to improve existing machine vision processes and products and to open up new possibilities, although they also demand new ways of thinking. Yet many companies in the machine vision sector still have reservations about the technology, often lacking the expertise and time to develop a detailed understanding of it. Because options for control and certification are limited, the technology and its conclusions are difficult for many users to comprehend, so acceptance and explainability are frequently lacking. Can manufacturers increase transparency and lower the barriers to entry? Or are AI methods not yet ready for industrial use and currently only a playground for young start-ups?

The answer, as so often, lies in the eye of the beholder. Every user has different expectations of what a technology must achieve or bring to the table in order to be accepted and ultimately used. In any case, the necessary hardware for productive and efficient use is available, and many manufacturers of machine vision hardware have recognised this: the range of AI platforms in different performance classes is growing steadily. But providing hardware alone is not enough.

It does not help that AI and machine learning (ML) work quite differently from rule-based image processing, so the approach to and handling of vision tasks also differ. The quality of the results is no longer the product of manually developed program code, but is determined by the learning process with suitable image data. What sounds so simple only leads to the desired goal with sufficient expertise and experience. Without a trained eye for the right data, errors creep in, which in turn leads to the incorrect application of ML methods. Tests have shown that different users achieve very different training quality for artificial neural networks (ANNs) on the same task, because in some cases images with too much irrelevant content, poor exposure, blurriness or even wrong labels were used for training.
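Problems of this kind can often be caught before training with simple automated pre-checks. The sketch below is purely illustrative and not part of any IDS product: it treats a greyscale image as a 2D list of pixel values, flags under- or over-exposure via mean brightness, and flags blur via the variance of a simple Laplacian response. All function names and thresholds are assumptions.

```python
# Minimal pre-checks for training images (illustrative thresholds).
# An image is modelled as a 2D list of greyscale pixel values in 0..255.

def mean_brightness(img):
    """Average pixel value; extreme values suggest poor exposure."""
    pixels = [p for row in img for p in row]
    return sum(pixels) / len(pixels)

def sharpness(img):
    """Variance of a 4-neighbour Laplacian response; low values suggest blur."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def usable(img, min_bright=30, max_bright=225, min_sharp=10.0):
    """Accept an image only if exposure and sharpness look reasonable."""
    b = mean_brightness(img)
    return min_bright <= b <= max_bright and sharpness(img) >= min_sharp
```

Checks like these cannot detect wrong labels, but they filter out the most common data-quality defects cheaply before any network is trained.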

Despite the well-known advantages of AI-based vision and the high accuracy of the ANNs used, diagnosis in the event of a fault is often difficult. A lack of insight into how the networks work, and occasionally inexplicable results, are the other side of the coin and inhibit the spread of these algorithms. ANNs are often wrongly perceived as black boxes whose decisions cannot be comprehended. “Although DL models are undoubtedly complex, they are not black boxes. In fact, it would be more accurate to call them glass boxes, because we can literally look inside and see what each component is doing.” [Quote from “The black box metaphor in machine learning”]. The inference decisions of neural networks are not based on classical logical rules, and the complex interactions of their artificial neurons may not be easily understandable to humans, but they are nevertheless the results of a mathematical system and thus reproducible and analysable. We simply lack the right tools to support us. It is precisely in this area of AI that there is still a lot of room for improvement, and it is precisely here that it becomes apparent how well the various AI systems on the market support users in this endeavour.
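One simple, model-agnostic example of such a tool is an occlusion sensitivity map: cover one image region at a time, re-run the classifier, and record how much its confidence drops. Regions whose occlusion causes a large drop are the ones the decision depends on. The sketch below is a hypothetical illustration, not IDS code; `score_fn` stands in for any function that maps an image to a confidence value.

```python
# Occlusion sensitivity: which image regions does a classifier's
# confidence depend on? Works with any model wrapped as a function
# from a 2D image to a score (this wrapping is an assumption).

def occlusion_map(img, score_fn, patch=2, fill=0):
    h, w = len(img), len(img[0])
    base = score_fn(img)                     # confidence on the full image
    heat = [[0.0] * w for _ in range(h)]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = [row[:] for row in img]
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = fill  # grey out one patch
            drop = base - score_fn(occluded)
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    heat[yy][xx] = drop      # large drop = important region
    return heat
```

Because the method only needs repeated forward passes, it requires no access to the network's internals, which makes it a practical first step towards explainability even for closed systems.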

IDS Imaging Development Systems researches and works in this field together with institutes and universities to develop precisely these tools, and the IDS NXT ocean software already incorporates the results of this collaboration. Visualisation in the form of so-called attention maps (heat maps) makes critical decisions of the AI easier to understand and thereby increases the acceptance of neural networks in the industrial environment. It can also be used to recognise and avoid biases learned from the training data (see figure “Attention Maps”). Statistical analyses using a so-called confusion matrix will also soon be possible, both in the cloud-based training software IDS NXT lighthouse and in the IDS NXT camera itself, to make it easier to determine and understand the quality of a trained ANN. With the help of these software tools, users can trace the behaviour and results of their IDS NXT AI back to weaknesses in the training data set and correct them specifically. This makes AI explainable and comprehensible for everyone, even without in-depth knowledge of machine learning, image processing or application programming.
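To illustrate the kind of analysis a confusion matrix enables, the hedged sketch below builds one from true and predicted labels and derives per-class recall. The function names and label format are assumptions for illustration, not the IDS NXT lighthouse API.

```python
# A confusion matrix cross-tabulates true vs. predicted classes:
# rows are the true class, columns the predicted class, so every
# off-diagonal cell is a specific kind of misclassification.

def confusion_matrix(y_true, y_pred, labels):
    index = {label: i for i, label in enumerate(labels)}
    n = len(labels)
    matrix = [[0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

def per_class_recall(matrix):
    """Fraction of each true class that was classified correctly."""
    recalls = []
    for i, row in enumerate(matrix):
        total = sum(row)
        recalls.append(row[i] / total if total else 0.0)
    return recalls
```

A low recall for one class immediately points to where the training set needs more or better examples, which is exactly the feedback loop the article describes.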

_______________________

IDS Imaging Development Systems Limited

Landmark House, Station Road

RG27 9HA Hook

United Kingdom

Phone:  +44 1256 962910

uksales@ids-imaging.com

www.ids-imaging.com