We use machine vision, machine learning, and augmented reality (AR) to give real-time feedback on whether a process or component has been completed correctly. This is achieved by modelling, through our algorithms, a process performed by an expert, and then monitoring that process while it is carried out by a trainee or by another user on the production line. Alternatively, the verification can be done directly from a CAD model. Thus, using either a process demonstrated by an expert or design specifications taken from Computer-Aided Design (CAD), it is possible to guide a user through a procedure, verify their work, and provide real-time feedback on whether they are doing it correctly as they build, without having to wait until the process is finished before inspecting it. At the end of the process, an automated report can be generated covering the results of the build, who worked on it, the conditions under which it was built, and the inspection results. We build our own cameras and software, so the system can be customised to the required specifications.
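The step-by-step verification idea above can be sketched in a few lines. This is an illustrative sketch only, not the product's actual API: the names (`StepResult`, `verify_build`) and the string-based step representation are assumptions made for the example. The reference sequence could come from an expert demonstration or from CAD specifications.

```python
# Hypothetical sketch of per-step real-time verification.
# The reference steps stand in for a model learned from an expert
# (or derived from CAD); the observed steps stand in for what the
# vision system sees the trainee doing.

from dataclasses import dataclass

@dataclass
class StepResult:
    step: str
    passed: bool
    detail: str

def verify_build(reference_steps, observed_steps):
    """Compare each observed step against the reference in order,
    returning per-step pass/fail results so feedback can be given
    as the user works, not only after the build is finished."""
    results = []
    for ref, obs in zip(reference_steps, observed_steps):
        passed = (ref == obs)
        detail = "OK" if passed else f"expected {ref!r}, got {obs!r}"
        results.append(StepResult(step=ref, passed=passed, detail=detail))
    return results

# Example: the deviation is flagged at the step where it happens.
reference = ["place bracket", "insert bolt", "torque bolt"]
observed = ["place bracket", "insert screw", "torque bolt"]
for r in verify_build(reference, observed):
    print(("PASS" if r.passed else "FAIL"), r.step, "-", r.detail)
```

The per-step results could also feed directly into the automated end-of-build report mentioned above.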
We have also developed our own AR indoor and outdoor navigation system. By combining this system with the platform described above, it is possible to guide a user to a specific place (e.g. where a particular part is located); track, monitor, inspect, and verify a process; follow material and people flow; and provide real-time production updates and live inventory tracking.
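The guide-then-verify sequencing this enables can be illustrated with a minimal sketch. The locations, task names, and `guided_procedure` function here are made up for the example; the real system delivers the guidance through AR rather than as text.

```python
# Illustrative only: pairing navigation waypoints with inspection
# tasks, so guidance and verification happen in one continuous flow.

def guided_procedure(steps):
    """Each step pairs a destination with a check to run once the
    user arrives there."""
    log = []
    for location, check in steps:
        log.append(f"guide user to {location}")
        log.append(f"verify: {check}")
    return log

steps = [("shelf B-3", "correct part picked"),
         ("assembly station 2", "bracket fastened")]
```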
The technology consists of our own hardware and software. The hardware comprises our proprietary cameras, which are customised specifically for the tasks required. These can be a combination of standard cameras, infrared cameras for depth sensing, or even thermographic cameras, depending on the task at hand and the accuracy needed. We also provide our own servers if the application requires an on-site server for extra processing power and robustness.
The software is composed of machine learning algorithms and the interfaces required to allow communication between all the different software and hardware: cameras, sensors, phones, tablets, laptops, headsets, servers, CAD software, and so on. This is what enables real-time verification with AR, as well as AR indoor and outdoor navigation and tracking.
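One common way to build such an interface layer is to wrap every device's readings in a shared message envelope. The sketch below is an assumption about what such a layer might look like, not the product's actual protocol; the field names and device types are invented for the example.

```python
# A minimal sketch of a common message format that heterogeneous
# devices (cameras, sensors, headsets, servers) could all publish,
# so the ML and AR layers consume one consistent shape.

import json
from datetime import datetime, timezone

def make_message(device_id, device_type, payload):
    """Wrap a device reading in a common JSON envelope with an
    identifier, a type tag, and a UTC timestamp."""
    return json.dumps({
        "device_id": device_id,
        "device_type": device_type,  # e.g. "rgb_camera", "thermal_camera"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    })

msg = make_message("cam-07", "thermal_camera", {"max_temp_c": 41.2})
decoded = json.loads(msg)
```

Keeping the envelope device-agnostic is what lets new hardware (a new sensor, a new headset) be added without changing the consumers.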
Yes, there is currently guidance on how to carry out procedures through AR. However, there is rarely a platform that allows a user to verify, in real time, whether an activity has been performed correctly while they are executing it. The current practice is that the worker builds first and a supervisor inspects afterwards. By then it may be too late: extra time and money will be required to rectify the problem and carry out the necessary re-work and modifications. Furthermore, current procedures rely heavily on the user confirming whether they have done something and how well they performed. With our platform, monitoring and inspection can be automated, so it can immediately be ascertained whether the tasks have been completed and whether they have been done correctly.
This platform also gives users skills, tools, and capabilities they previously did not have. They can switch between different camera views to see the environment in different ways, such as infrared, thermographic, or a normal camera feed, all augmented with valuable information that helps the user make decisions: is something wrong, is it a go or no-go, is anything missing, or is there anything we should be aware of? All of this is possible without the user needing to be an expert in the relevant areas or procedures.