We have developed a vision system based on standard commercial components.
This system is the heart of our computer vision applications.
Below is some information about the algorithms and the hardware used.
ArtificialInspection is a versatile and complete vision system.
Thanks to this system, it is possible to solve a wide variety of customer problems: it implements sophisticated algorithms that are continuously updated with the latest developments in the machine vision field.
ArtificialInspection is a package that is fully programmable by the end user; it supports 2D and 3D cameras
with either monochrome or color sensors.
It is a high-level system that harnesses the full potential of the HALCON® library, providing access to functions that are not
present in the better-known commercial systems.
It is also possible to enable only the categories of functions necessary for the application in order to reduce the cost of the final system.
Numerous filters can be applied to enhance the image and simplify subsequent processing.
Among the changes that the available filters can make are:
The system contains powerful functions for processing blobs and extracting numerous features from them.
Blobs can also be filtered and subjected to morphological operations to better support the subsequent processing steps.
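As a rough illustration of the underlying idea (not the system's actual HALCON-based implementation), blob extraction and area-based filtering can be sketched in pure Python on a tiny binary image:

```python
from collections import deque

def find_blobs(binary, min_area=1):
    """Label 4-connected foreground blobs in a binary image (list of 0/1
    rows); return area and centroid for blobs passing the area filter."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood-fill one connected component.
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                area = len(pixels)
                if area >= min_area:  # feature-based blob filtering
                    mean_row = sum(p[0] for p in pixels) / area
                    mean_col = sum(p[1] for p in pixels) / area
                    blobs.append({"area": area, "centroid": (mean_row, mean_col)})
    return blobs

img = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
]
print(find_blobs(img, min_area=2))  # the lone 1-pixel blob is filtered out
```

In a real system the binary image would come from thresholding, and many more features (perimeter, circularity, orientation, etc.) would be extracted per blob.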
Deep learning is a subfield of machine learning, the branch of artificial intelligence
based on algorithms, known as artificial neural networks, that are inspired by the structure and functioning of the brain.
The main algorithms available are:
It recognizes letters and numbers using a deep-learning approach and ships with many pre-trained fonts
covering a wide range of domains (dot-matrix fonts, SEMI fonts, industrial fonts, handwritten fonts, etc.).
This improves performance and reduces the risk of confusing similar characters, ensuring
excellent recognition rates.
The unique ability to set arbitrary regions of interest, combined with leading blob-analysis tools
and comprehensive image-filtering techniques, makes it possible to effectively isolate and extract characters from
complex backgrounds, resulting in more accurate character classification and faster reading.
Common barcode types can be read in any orientation and at any size,
even with module sizes smaller than 2x2 pixels.
Robust recognition enables data reading on distorted images and under varying lighting conditions.
In addition to printed codes, the software reliably reads "Direct Part Mark" (DPM) codes
engraved on a variety of surfaces and under varying lighting conditions.
Codes that can be recognized include ECC 200, QR, QR Micro, DotCode, Aztec and PDF417.
Objects can be identified without any encoding, using an algorithm that learns identification patterns.
With minimal training, the algorithm can distinguish various types of objects based on characteristics such as color or texture, eliminating the need for special encodings such as barcodes or data codes.
It also works with deformed objects and different perspective views of an object, as well as in low-contrast and high-noise scenarios.
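The principle can be sketched as a toy nearest-class matcher on a crude color feature; the class names and pixel values below are purely illustrative, not part of the actual product:

```python
def mean_color(pixels):
    """Average (R, G, B) of a list of pixels -- a crude color feature."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def train(samples_by_class):
    """samples_by_class: {class_name: [pixel_list, ...]} -> one feature per class."""
    return {cls: mean_color([px for sample in samples for px in sample])
            for cls, samples in samples_by_class.items()}

def identify(model, pixels):
    """Return the class whose trained color feature is closest (squared Euclidean)."""
    feat = mean_color(pixels)
    return min(model, key=lambda cls: sum((a - b) ** 2 for a, b in zip(model[cls], feat)))

# Hypothetical training data: two bottle-cap colors, one sample image each.
model = train({
    "red_cap":  [[(200, 30, 40), (210, 25, 35)]],
    "blue_cap": [[(30, 40, 200), (25, 35, 210)]],
})
print(identify(model, [(190, 50, 60)]))  # -> red_cap
```

A production system would use far richer features (texture statistics, learned embeddings) and many training samples per class, but the train-then-match structure is the same.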
Sophisticated and robust pattern-matching algorithms detect the position of learned objects, enabling
manipulation with anthropomorphic robots.
It is also possible to localize irregularly deformed objects, including under perspective distortion.
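Conceptually, locating a learned object resembles template matching by normalized cross-correlation. The following is a minimal sketch on tiny grayscale grids, not the system's actual (far more robust) matching algorithm:

```python
import math

def ncc_match(image, template):
    """Slide the template over the image and return the (row, col) of the
    window with the highest normalized cross-correlation score."""
    th, tw = len(template), len(template[0])
    t_flat = [v for row in template for v in row]
    t_mean = sum(t_flat) / len(t_flat)
    t_norm = math.sqrt(sum((v - t_mean) ** 2 for v in t_flat)) or 1.0
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            w_flat = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            w_mean = sum(w_flat) / len(w_flat)
            w_norm = math.sqrt(sum((v - w_mean) ** 2 for v in w_flat)) or 1.0
            score = sum((a - t_mean) * (b - w_mean)
                        for a, b in zip(t_flat, w_flat)) / (t_norm * w_norm)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

image = [
    [10, 10, 10, 10],
    [10, 90, 20, 10],
    [10, 20, 90, 10],
    [10, 10, 10, 10],
]
template = [[90, 20],
            [20, 90]]
print(ncc_match(image, template))  # -> (1, 1)
```

Because the score is normalized, the match is tolerant to uniform brightness and contrast changes; real shape-based matching additionally handles rotation, scale, and occlusion.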
The position and orientation of objects within 2D images can be identified with different technologies such as:
This function determines the position and orientation of objects, represented by their CAD model, within 3D images.
Setting the viewpoint that defines the sensor position is optional, increasing the usability of this particular function.
Using small details such as holes or notches to determine object orientation increases the accuracy and robustness of the result, even with particularly noisy point clouds.
Among the processing modes we find:
It is possible to perform measurements of objects to verify the tolerances of pieces in production, reaching precisions of up to a few microns per meter.
Using 3D reconstruction, you can measure objects in three dimensions.
You can measure and extract various features from 3D point clouds and segmented point clouds.
Background points can be easily removed by thresholding, and point clouds can be intersected by a plane to create a 2D cross-section profile.
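A minimal sketch of these two point-cloud operations, assuming the cloud is a simple list of (x, y, z) tuples and the sectioning plane is parallel to the z axis (all coordinates below are made up for illustration):

```python
def remove_background(points, z_min):
    """Keep only points above a height threshold (simple background removal)."""
    return [p for p in points if p[2] >= z_min]

def cross_section(points, plane_z, tol=0.05):
    """Project points lying within `tol` of the plane z = plane_z
    onto that plane, yielding a 2D profile of (x, y) pairs."""
    return [(x, y) for (x, y, z) in points if abs(z - plane_z) <= tol]

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.02), (0.5, 0.5, 1.0),
         (0.6, 0.4, 1.01), (0.7, 0.3, 1.5)]
fg = remove_background(cloud, z_min=0.5)            # drops the table plane
profile = cross_section(fg, plane_z=1.0, tol=0.05)  # 2D slice of the part
print(fg)       # three foreground points remain
print(profile)  # [(0.5, 0.5), (0.6, 0.4)]
```

For an arbitrarily oriented plane the same idea applies after projecting each point onto the plane's normal; the measurement functions then operate on the resulting 2D profile.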
The system can perform automatic surface inspection on different materials, recognizing and segmenting defects such as holes, wrinkles, edge cracks, incisions, contaminants, missing coating, scratches, stains, and dents.
The necessary parameters are adjusted automatically, and a small number of sample images per defect is sufficient to recognize each defect type reliably and independently on any subsequent acquisition.
Furthermore, some objects with reflective surfaces can also be inspected, using the principle of deflectometry.
This functionality enables fast, precise quality and integrity checks, allowing 100% in-line inspection of a wide variety of part characteristics during production.
It is possible to analyze silkscreens and prints on any scanned surface, flat or curved.
Among the detectable defects are:
Classification is the assignment of an object to one of several categories of interest based on selected features.
In images, the classified objects are typically pixels or regions. To assign an object to a specific class,
the classes must first be defined through a training procedure.
When classifying an unknown object, the system returns the class whose
training features most closely match the features of the unknown object.
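The closest-match rule described above can be sketched as a 1-nearest-neighbour classifier; the feature names used here (area, circularity) and the training values are illustrative assumptions, not the product's actual feature set:

```python
import math

def classify(training, features):
    """1-nearest-neighbour classification: return the class of the
    training sample whose feature vector is closest to `features`."""
    best_cls, best_d = None, math.inf
    for cls, samples in training.items():
        for vec in samples:
            d = math.dist(vec, features)  # Euclidean distance (Python 3.8+)
            if d < best_d:
                best_cls, best_d = cls, d
    return best_cls

# Hypothetical training set: (area, circularity) per labelled region.
training = {
    "screw":  [(120.0, 0.30), (130.0, 0.28)],
    "washer": [(80.0, 0.95), (85.0, 0.93)],
}
print(classify(training, (82.0, 0.90)))  # -> washer
```

Real classifiers (MLPs, SVMs, deep networks) replace the raw distance with a learned decision boundary, but the train-then-assign workflow is identical.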
Some typical applications of classification are:
From images acquired with a 3D camera,
two objects can be compared by superimposing their respective point clouds.
Alternatively, the acquired object can be directly compared with its CAD model to identify the differences.
The system then reports the out-of-tolerance areas that deviate from the model.
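A toy version of this deviation check, assuming the reference model is sampled as a point set and using brute-force nearest-neighbour distances (a real system would use an aligned CAD mesh and a spatial index):

```python
import math

def out_of_tolerance(scan, reference, tol):
    """For each scanned point, compute the distance to the nearest
    reference point; report points whose deviation exceeds `tol`."""
    report = []
    for p in scan:
        dev = min(math.dist(p, q) for q in reference)
        if dev > tol:
            report.append((p, round(dev, 3)))
    return report

# Hypothetical reference model: a flat plate sampled as a grid at z = 0.
reference = [(x / 10, y / 10, 0.0) for x in range(11) for y in range(11)]
scan = [(0.5, 0.5, 0.01),   # within tolerance
        (0.3, 0.7, 0.25)]   # a dent deviating 0.25 from the model
print(out_of_tolerance(scan, reference, tol=0.1))  # -> [((0.3, 0.7, 0.25), 0.25)]
```

The report pairs each defective point with its deviation, which is the information an operator needs to localize and quantify the out-of-tolerance area.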
Deep learning contour extraction can be set up with just a few template images, resulting in reliable recognition
of the desired edges even on images with multiple contour lines, low contrast and high noise.
ArtificialInspection is mainly available on three different controllers, depending on the application requirements, such as the number of cameras and the image processing time.
With these configurations the system covers all possible applications.
Basic Applications
Intermediate Applications
Advanced Applications
ArtificialInspection can interface with both area-scan and line-scan cameras
of any resolution, in color or monochrome, and is therefore compatible with cameras from all major manufacturers.
The system supports the most common interface standards, such as USB, USB3 Vision, FireWire, and GigE Vision (GenICam).
ArtificialInspection can also interface with and manage 3D cameras from leading brands.