ArtificialInspection is a versatile and complete vision system.
It can solve a wide range of customer requirements, implementing sophisticated algorithms that are continuously updated with the latest developments in machine vision.

ArtificialInspection is fully programmable by the end user and supports 2D or 3D cameras with monochrome or color sensors. It is a high-level system built on the HALCON© library, giving access to functions that are not available in the best-known commercial systems.
It is also possible to enable only the categories of functions necessary for the application in order to reduce the cost of the final system.

CUSTOMIZABLE

  • Programming and customization of the system front-end by writing VBScript macros, based on the type of inspection and the number of cameras in the system.
  • Customizable communication with the various devices connected to the system, such as PLC and ROBOT.
  • Creation of inspection programs for each camera by inserting variables, tools and vision algorithms as desired.
  • Dedicated pages or functions can be implemented in Visual Studio by writing libraries in VB.NET or C#.NET and calling the resulting functions from script.

POWERFUL

  • Based on the HALCON library, one of the most advanced machine vision libraries in the world.
  • The system can exploit the full potential of the library by calling functions written in HALCON HDevelop©.
  • Connection to various models of area-scan (matrix), line-scan (linear) and 3D cameras.
  • Connection via USB, USB3 Vision, Gigabit Ethernet (GigE Vision) and the other main standards on the market.

USEFUL

  • Interaction with the software through an intuitive graphical interface that is easy for the end user to operate.
  • Functions for sampling and saving the acquired images, as well as backup/restore of the program.

SAFE

  • Presence of different levels of access authorization.
  • Management of users and customizable passwords, granting access to the system functions with different privileges depending on the user's role.

REMOTE SUPPORT

  • Remote assistance via the Internet: the controller can be connected to the network to receive remote support.

Deep Learning is a sub-category of Machine Learning and indicates the branch of Artificial Intelligence based on algorithms inspired by the structure and functioning of the brain, known as artificial neural networks.

The main available algorithms are:

  • classification;
  • segmentation;
  • object recognition;
  • detection of anomalies;
  • edge extraction.

The OCR function recognizes letters and numbers using a deep-learning approach that includes many pre-trained fonts from a wide range of industries (dot-matrix fonts, SEMI fonts, industrial fonts, handwritten fonts, etc.). This increases performance and reduces the risk of confusing similar characters, guaranteeing excellent recognition rates.

The ability to set arbitrary regions of interest, combined with leading blob-analysis tools and comprehensive image-filtering techniques, makes it possible to isolate and extract characters from complex backgrounds, resulting in more accurate character classification and faster reading.
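
A minimal sketch of this kind of pipeline is shown below, using OpenCV and the generic Tesseract engine (pytesseract) as stand-ins for the HALCON-based deep-learning OCR; the file name, region coordinates and blob-area limits are illustrative assumptions, not the product's actual API.

    # Illustrative OCR pipeline: ROI, thresholding, blob filtering, reading.
    import cv2
    import numpy as np
    import pytesseract

    image = cv2.imread("label.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

    # 1) Restrict processing to an arbitrary region of interest (x, y, w, h assumed).
    x, y, w, h = 100, 50, 400, 120
    roi = image[y:y + h, x:x + w]

    # 2) Separate characters from a complex background (Otsu threshold).
    _, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # 3) Blob filtering: keep only components with a plausible character area.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    clean = np.zeros_like(binary)
    for i in range(1, num):
        if 30 < stats[i, cv2.CC_STAT_AREA] < 5000:
            clean[labels == i] = 255

    # 4) Read the isolated characters (single text line assumed).
    text = pytesseract.image_to_string(255 - clean, config="--psm 7")
    print(text.strip())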


Common types of barcode can be read in any orientation and size, even with modules smaller than 2x2 pixels.
Robust recognition allows data to be read from distorted images and under varying lighting conditions.

In addition to printed codes, the software reliably reads Direct Part Mark (DPM) codes engraved on different surfaces and under varying lighting conditions.
Recognized codes include ECC 200 (Data Matrix), QR, Micro QR, DotCode, Aztec and PDF417.
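
As an illustration of the reading step only, the sketch below decodes a QR code with OpenCV's built-in detector; it does not cover DPM codes or the other symbologies listed above, which the product handles through its HALCON-based readers, and the file name is an assumption.

    # Minimal QR-code reading example (OpenCV stand-in, QR only).
    import cv2

    image = cv2.imread("part_with_code.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image

    detector = cv2.QRCodeDetector()
    data, corners, _ = detector.detectAndDecode(image)

    if data:
        print("Decoded:", data)
        print("Corner points:", corners.reshape(-1, 2))  # code position, e.g. for robot guidance
    else:
        print("No QR code found")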


Sophisticated and robust pattern-matching algorithms detect the position of learned objects, enabling handling with anthropomorphic robots.
Irregularly deformed objects and objects seen in perspective can also be localized.

2D Matching

The position and orientation of objects within 2D images can be identified with different technologies, such as those listed below (a minimal correlation-based sketch follows the list):

  • Correlation-based matching;
  • Shape-based matching;
  • Descriptor-based matching.
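
The sketch below illustrates only the first technique, correlation-based matching, using OpenCV's normalized cross-correlation as a generic stand-in for the HALCON matching tools; the file names and acceptance threshold are assumptions, and rotation handling is omitted.

    # Correlation-based matching sketch: locate a trained template in a search image.
    import cv2

    search = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # hypothetical scene image
    template = cv2.imread("model.png", cv2.IMREAD_GRAYSCALE)    # hypothetical trained model

    # Normalized cross-correlation is robust to linear brightness changes.
    result = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)

    if score > 0.8:  # acceptance threshold (assumption)
        h, w = template.shape
        center = (top_left[0] + w // 2, top_left[1] + h // 2)
        print(f"Object found at {center}, score {score:.2f}")
    else:
        print("Object not found")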

3D Matching

This function determines the position and orientation of objects, represented by their CAD model, within 3D images.
Setting the point of view that defines the position of the sensor is optional, increasing the usability of this particular function.
Using small details of the objects, such as holes or notches, to determine their orientation increases the accuracy and robustness of the result, even with particularly noisy point clouds.

The available processing methods include the following (a minimal surface-based example appears after the list):

  • Shape-based 3D matching, starting from a simple 2D image;
  • Surface-based 3D matching, which searches for the shape of the object in a 3D point cloud acquired with 3D laser-scanning, stereo-vision or ToF cameras.
Application examples: shape-based 3D matching, bin picking, surface-based 3D matching.
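
As a rough illustration of the surface-based idea, the sketch below refines the pose of a model point cloud in an acquired scene cloud with ICP, using the Open3D library as a stand-in for the HALCON surface-based 3D matching; the file names, voxel size and distance threshold are assumptions, and a coarse initial pose is taken for granted.

    # Surface-based 3D matching sketch: align a known model cloud to the acquired scene.
    import open3d as o3d

    model = o3d.io.read_point_cloud("model.ply")   # hypothetical CAD-derived point cloud
    scene = o3d.io.read_point_cloud("scene.ply")   # hypothetical 3D sensor acquisition

    # Downsample to make the registration faster and more stable.
    model_ds = model.voxel_down_sample(voxel_size=2.0)
    scene_ds = scene.voxel_down_sample(voxel_size=2.0)

    # Refine the model pose in the scene with point-to-point ICP.
    result = o3d.pipelines.registration.registration_icp(
        model_ds, scene_ds, max_correspondence_distance=5.0,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print("Estimated pose (4x4 homogeneous matrix):")
    print(result.transformation)
    print("Fitness (fraction of matched model points):", result.fitness)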

The system can perform automatic surface inspection of different materials, recognizing and segmenting surface defects such as holes, wrinkles, edge cracks, engravings, contaminants, missing coating, scratches, stains and dents.
The necessary parameters are adjusted automatically, and a limited number of model images per defect is sufficient to recognize defects reliably and independently of one another in any subsequent acquisition.
In addition, some objects with reflective surfaces can be inspected using the principle of deflectometry.
This functionality enables fast and precise quality and integrity checks on production, allowing 100% inspection of a wide variety of features on the most varied parts during the production phase.

This includes the ability to analyze screen prints and other printing on any scanned surface, flat or round.
Among the detectable defects are:

  • lack or excess of printing;
  • surface defects;
  • print out of position;
  • tonal defects;
  • dirt spots or imperfections.

Classification is the assignment of an object to one of several categories of interest, based on selected characteristics.
In images, the classified objects are typically pixels or regions. To assign an object to a specific class, the classes must therefore first be defined through a training procedure.
When an unknown object is classified, the system returns the class whose training characteristics best match the characteristics of the unknown object.
Typical applications of classification include the following (a minimal example follows the list):

  • segmentation of images to divide them into regions of similar color or texture;
  • object recognition for identifying a specific type within a set of different types of objects;
  • quality control to decide if an object is compliant or defective;
  • detection of differences between objects in order to find any defects.
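
The sketch below shows the underlying idea, training a nearest-neighbour classifier on simple colour features with scikit-learn; the features, class names and training values are illustrative assumptions, not the product's actual training procedure.

    # Classification sketch: assign a region to a class by comparing its features
    # with features learned during training (k-nearest-neighbour example).
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Training: mean RGB colour per sample region (illustrative values).
    features = np.array([
        [200, 30, 30],   # "red_cap"
        [190, 40, 35],   # "red_cap"
        [30, 30, 200],   # "blue_cap"
        [40, 25, 190],   # "blue_cap"
    ])
    labels = ["red_cap", "red_cap", "blue_cap", "blue_cap"]

    classifier = KNeighborsClassifier(n_neighbors=1)
    classifier.fit(features, labels)

    # Classification of an unknown region: the best-matching class is returned.
    unknown = np.array([[185, 45, 40]])
    print(classifier.predict(unknown)[0])   # -> "red_cap"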

Objects can be identified without any coding, through an algorithm that learns from identification samples.
With minimal training, the algorithm is able to distinguish various types of objects based on characteristics such as color or texture, eliminating the need for special codes such as barcodes or data codes to identify them.
It also works with warped objects or different perspective views of the object, as well as in low-contrast, high-noise scenarios.


2D Measurements

Objects can be measured to check the tolerances of parts in production, reaching accuracies of a few micrometers.
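
A minimal sketch of a 2D measurement is shown below, estimating the size of a part from its contour with OpenCV; the file name, threshold and calibration factor are assumptions (a real system uses calibrated optics and lighting).

    # 2D measurement sketch: width/height of a part from its contour, in millimetres.
    import cv2

    MM_PER_PIXEL = 0.02                     # calibration factor (assumption)

    gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # hypothetical backlit image
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    part = max(contours, key=cv2.contourArea)              # largest blob = the part

    (_, _), (w_px, h_px), angle = cv2.minAreaRect(part)    # oriented bounding box
    print(f"Width:  {w_px * MM_PER_PIXEL:.3f} mm")
    print(f"Height: {h_px * MM_PER_PIXEL:.3f} mm")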

3D Measurements

Through 3D reconstruction it is possible to measure objects in three dimensions.
Various features can be measured and extracted from 3D point clouds and segmented point clouds.
Background points can easily be removed by thresholding, and point clouds can be intersected with a plane to create a 2D cross-section profile.
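
A minimal NumPy sketch of these two operations (background removal by thresholding and a cross-section through a plane) follows; the point-cloud file, threshold and plane position are illustrative assumptions.

    # 3D point-cloud sketch: remove background by height threshold and extract
    # a 2D cross-section profile by intersecting the cloud with a plane.
    import numpy as np

    cloud = np.load("cloud.npy")            # hypothetical N x 3 array of (x, y, z) points

    # 1) Background removal: keep only points above the conveyor plane (z > 2 mm).
    objects = cloud[cloud[:, 2] > 2.0]

    # 2) Cross-section: points lying within +/-0.1 mm of the plane y = 50 mm.
    section = objects[np.abs(objects[:, 1] - 50.0) < 0.1]

    # The resulting (x, z) pairs form the 2D profile of the cut.
    profile = section[:, [0, 2]]
    print("Profile points:", profile.shape[0])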


The 3D camera captures an object and extracts its point cloud.
Two objects acquired by the same camera can be compared by superimposing their respective point clouds.
Alternatively, the captured object can be compared directly with its CAD model to identify differences.
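
A minimal sketch of the comparison step is shown below, computing the point-to-point deviation between an acquired cloud and a reference cloud (e.g. sampled from the CAD model) with SciPy; the file names and tolerance are assumptions, and the two clouds are assumed already aligned.

    # Point-cloud comparison sketch: deviation of each acquired point from a
    # reference cloud, assuming both clouds share the same coordinate system.
    import numpy as np
    from scipy.spatial import cKDTree

    acquired = np.load("acquired.npy")      # hypothetical N x 3 point cloud
    reference = np.load("reference.npy")    # hypothetical M x 3 reference cloud

    tree = cKDTree(reference)
    distances, _ = tree.query(acquired)     # nearest reference point for each point

    TOLERANCE = 0.5                         # mm, assumption
    defective = distances > TOLERANCE
    print(f"Max deviation: {distances.max():.3f} mm")
    print(f"Points out of tolerance: {defective.sum()} / {len(acquired)}")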


The system contains powerful functions for processing blobs and extracting numerous features from them.
Blobs can also be filtered and subjected to morphological operations so that subsequent processing performs better.
Below is an example of the steps of a process for separating adjacent objects.
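
The original example image is not reproduced here; the sketch below shows the same kind of pipeline (threshold, distance transform, watershed) with OpenCV as a generic stand-in for the product's blob tools, with an assumed file name and peak threshold.

    # Blob separation sketch: split touching objects with a distance transform
    # followed by watershed segmentation.
    import cv2
    import numpy as np

    gray = cv2.imread("touching_parts.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Distance transform: bright peaks correspond to object centres.
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, peaks = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
    peaks = peaks.astype(np.uint8)

    # One marker per object centre, plus the background.
    n_markers, markers = cv2.connectedComponents(peaks)
    markers = markers + 1                    # reserve 0 for the "unknown" region
    markers[(binary > 0) & (peaks == 0)] = 0

    # Watershed expands each marker until the touching objects are separated.
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.watershed(color, markers)
    print("Objects found:", n_markers - 1)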


Deep-learning contour extraction can be set up with just a few model images, giving reliable recognition of the desired edges even in images with different contour lines, low contrast and high noise.


Numerous filters are available to enhance the image and simplify subsequent processing.

The operations that the available filters can perform include the following (a minimal sketch follows the list):

  • contrast enhancement;
  • lighting correction;
  • resizing;
  • histogram equalization.
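
The OpenCV sketch below illustrates these four operations on a single image; the file name and parameter values are illustrative assumptions, not the product's own filter set.

    # Image pre-processing sketch: typical filters applied before inspection.
    import cv2

    gray = cv2.imread("raw.png", cv2.IMREAD_GRAYSCALE)   # hypothetical acquired image

    # Contrast enhancement with CLAHE (locally adaptive contrast).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply(gray)

    # Lighting correction: divide by a heavily blurred estimate of the illumination.
    background = cv2.GaussianBlur(gray, (0, 0), sigmaX=51)
    corrected = cv2.divide(gray, background, scale=128)

    # Resizing (e.g. to halve processing time on large sensors).
    resized = cv2.resize(corrected, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

    # Global histogram equalization.
    equalized = cv2.equalizeHist(resized)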

ArtificialInspection is available on three different controllers, chosen according to application requirements such as the number of cameras and the required image-processing time.
With these configurations the system covers the full range of applications.

  • Basic applications: HX330
  • Intermediate applications: HX500
  • Advanced applications: KARBON803

ArtificialInspection interfaces with both area-scan (matrix) and line-scan (linear) cameras of any resolution, in color or black and white, and is therefore compatible with cameras from the main manufacturers.
ArtificialInspection supports the most common interface standards such as USB, USB3 Vision, FireWire and GigE Vision/GenICam.

Our vision system can interface with and manage 3D cameras from the main brands.

Example of a system with 2 cameras connected to a PLC/ROBOT.



Visione Artificiale srl - Via Mezzana n.20, 25038 Rovato (Bs) Italy - Tel.: +39 333 8630339 - Mail: saveriofazio@visioneartificiale.net
