Analysis Features
Depending on your objective, you can select either ‘Skeletal-based analysis’ or ‘Image-based analysis’ in VP-Motion to analyze behaviors and actions.
The ‘Skeletal-based’ method analyzes actions from skeletal movements alone, while the ‘Image-based’ method takes into account not only skeletal movements but also surrounding objects.
Merits of skeletal-based analysis
Skeletal-based analysis takes information from a person’s skeleton as its starting point. It is most effective when movements and actions have many distinguishing characteristics, and it can detect these even when only a person’s upper body is visible or when multiple people overlap. Furthermore, the software can accurately learn someone’s movements in a short amount of time.
Our recent update has made camera positioning more flexible, making VP-Motion even easier to integrate into your existing workspace. As a result, routine procedures and human movements can be checked with little effort. VP-Motion helps eliminate mistakes and accidents during work by making previously unnoticed details visible, and thus contributes greatly to improving work efficiency.
Merits of image-based analysis
Image-based analysis takes far more factors into consideration, such as the background, objects, and colors. It is most effective when there is little pronounced movement, such as the detailed movements of a hand holding a tool or the correct handling of different tools, because this mode can also detect factors outside the human body, such as the tool held in the hand or the device being worked on.
Furthermore, the newly added ‘body part analysis’ mode allows VP-Motion to single out selected body parts, learn what they are, and then analyze them. With this mode, the surrounding environment no longer interferes, which lets you create training data and analyses focused on your preferred area and greatly increases the accuracy of VP-Motion. You can select from several focus areas, including ‘head,’ ‘upper body,’ ‘full body,’ ‘right hand,’ or ‘custom*.’
*The ‘custom’ option uses a unique settings file, allowing you to choose any combination of keypoints for analysis.
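The exact format of the ‘custom’ settings file is not documented here, but the underlying idea of restricting analysis to a chosen set of keypoints can be sketched as follows. All names and the JSON layout below are illustrative assumptions, not the real VP-Motion file format:

```python
import json

# Hypothetical example of a 'custom' settings file. The real VP-Motion
# settings format is not documented here; names are illustrative only.
CUSTOM_SETTINGS = json.loads("""
{
    "mode": "custom",
    "keypoints": ["right_shoulder", "right_elbow", "right_wrist"]
}
""")

def select_keypoints(skeleton: dict, settings: dict) -> dict:
    """Keep only the keypoints named in the custom settings."""
    wanted = set(settings["keypoints"])
    return {name: xy for name, xy in skeleton.items() if name in wanted}

skeleton = {
    "head": (120, 40),
    "right_shoulder": (100, 90),
    "right_elbow": (95, 140),
    "right_wrist": (90, 185),
    "left_wrist": (150, 185),
}
focused = select_keypoints(skeleton, CUSTOM_SETTINGS)
# Only the right-arm keypoints remain for training and analysis.
```

In this sketch, everything outside the selected combination of keypoints is simply dropped before learning, which mirrors how the mode keeps the surrounding environment from interfering.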
- CASE 1
- Classification changes depending on what is being held, such as a tool or machine component
- CASE 2
- Classification changes depending on the details of the work being performed, such as specific equipment being used.
- CASE 3
- Behavior analysis of images where only the arms or upper half of the body is being shown, such as work performed on a workbench.
Ver.1.3.1 brings a new capability to learn and analyze selected segments of the body
The image-based analysis mode adds a new ‘Body Part Analysis’ mode, allowing VP-Motion to identify and learn about specific body parts. You can choose which area to focus on, including options like ‘Head,’ ‘Upper Body,’ ‘Full Body,’ ‘Both Hands,’ ‘(Classic) Hand-Crop,’ or ‘Custom’ for any keypoints you want.
This mode excels at identifying a range of danger-related behaviors, like ‘walking with hands in pockets,’ and can also assess tool-related tasks or determine if personal protective equipment is being used.
Detection of people walking with hands in pockets (both hands)
Detection of missing safety equipment on individuals (head)
Video Analysis App
‘VP-Motion Analyzer’
The ‘VP-Motion Analyzer’ is an application for analyzing prerecorded footage. It calculates and exports the time spent performing each behavior that you have individually labeled, and it can analyze multiple video files at once. Because videos are processed at high speed, the efficiency of analysis improves dramatically.
Analyzer’s merits
- Analyze multiple videos in batches by folder, greatly reducing the steps needed for analysis
- No more waiting: real-time background processing speeds up your analysis
- Operated from the command line, so it can be combined with any automated system
- Calculates the time spent performing each video’s individually labeled behaviors and exports the results to a .csv file, ready for use in large-scale data analysis.
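The exported .csv lends itself to further aggregation. The column names below are assumptions for illustration (check the actual Analyzer output for the real header); the sketch totals the detected time per label across a batch of videos:

```python
import csv
import io
from collections import defaultdict

# Hypothetical layout for the exported .csv. The column names
# (video, label, seconds) are assumptions, not the documented format.
EXPORT = """video,label,seconds
line_a.mp4,pick_part,12.4
line_a.mp4,tighten_bolt,30.1
line_b.mp4,pick_part,9.8
"""

totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(EXPORT)):
    totals[row["label"]] += float(row["seconds"])

# 'totals' now maps each label to its cumulative time across all
# videos in the batch, e.g. for comparing work steps between lines.
```

Because the Analyzer runs from the command line, a short script like this can sit at the end of an automated pipeline that batches folders of videos through it.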
Ver.1.3.1 adds support for exporting analytic graphs and a correlation chart in Analyzer
Analyzer has been enhanced to export detailed analysis results in HTML, providing valuable insights for accuracy improvement, threshold adjustments, and the analysis of overall data patterns. These exports also provide clear visualizations to assess the degree of interference between labels and to determine whether the data is suitable for analysis. The exported graphs are provided in three formats, all of which can be zoomed in on for a detailed frame-by-frame review of the data. By using the exported graphs, users can significantly improve the accuracy of their analyses.
Analysis Graph 1: Label Detection Timeline
This graph shows the amount of time each label was detected (vertical axis) and its occurrence throughout the total length of the video (horizontal axis). The result is considered more reliable when there is significant overlap of the red PRED (inference) segments on the blue GT (ground truth) segments.
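The reliability check the graph supports can be expressed numerically as the fraction of ground-truth time covered by inference. This is a minimal sketch, assuming segments are simple (start, end) pairs in seconds rather than whatever internal representation Analyzer uses:

```python
# Fraction of ground-truth (GT) time covered by inferred (PRED) segments
# for one label. Segment format (start, end) in seconds is an assumption.
def overlap_ratio(gt, pred):
    total = sum(end - start for start, end in gt)
    covered = 0.0
    for gs, ge in gt:
        for ps, pe in pred:
            # Length of the intersection of the two segments, if any.
            covered += max(0.0, min(ge, pe) - max(gs, ps))
    return covered / total if total else 0.0

gt = [(0.0, 5.0), (10.0, 15.0)]    # blue GT segments
pred = [(1.0, 5.0), (9.0, 14.0)]   # red PRED segments
print(overlap_ratio(gt, pred))     # 0.8 -> 80% of GT time was detected
```

A ratio near 1.0 corresponds to the visual case described above, where the red PRED segments sit largely on top of the blue GT segments.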
Analysis Graph 2: Confidence Level Timeline for Each Label
This graph shows the changes in confidence level (vertical axis) for each label throughout the video (horizontal axis). When there is a label that displays a distinct peak compared to other labels, it reflects a high level of confidence due to minimal interference from those other labels.
In contrast, when multiple labels fluctuate and overlap in the chart, it suggests that the AI struggles to infer actions due to the interference among them. Any segments of ‘Total Time for Model-Detected Label’ that are not visualized in Analysis Graph 1 can be found in this graph.
Correlation Chart: Analysis of Positive and Negative Correlation Between Labels
This chart shows the correlation between the movements of each label, ranging from -1.0 to 1.0. A value closer to 1.0 indicates a strong positive correlation, meaning the movements are similar. Conversely, a value closer to -1.0 indicates a strong negative correlation, and values near 0 indicate no correlation. When the movements of labels are close together, it implies they are interfering with each other, which can be used as an indicator for generating training data and trained models.