AI Behavior Analysis System VP-Motion

  • Employee Tracking
  • Patient Monitoring
  • Posture Correction
Ver 1.3.1 Update Information

Real-time Behavior Analysis through Machine Learning
The AI Solution to Improve Efficiency in the Medical,
Manufacturing and Sports Industries

About

What is VP-Motion?

Imagine the following. You own a factory and want to monitor your employees to see whether they are performing their duties correctly and safely. Or perhaps you are in charge of a medical facility and you want to keep an eye on your patients or clients. What if you are involved in sports, and you want to track one player in particular? These examples require a watchful eye and unwavering focus. But what if your eye strays, or you lose concentration? A potential disaster.

Fear no longer. VP-Motion is Japan’s leading AI-powered human behavior analysis software that learns and automatically detects human movements. Through high-speed, video-based machine learning, the integrated AI automatically detects a wide range of movement and behavioral patterns. VP-Motion allows real-time detection of a diverse range of abnormal behaviors, from mistakes or accidents at the workplace—such as falls or people collapsing—to suspicious activities. VP-Motion is essentially an autonomous surveillance system with many functions.

Think of an emergency medical alert system. Or use it in sports, for example as a substitute AI fitness coach that helps you with posture correction or during sports rehabilitation. Whenever the app detects something unusual, you are notified instantly. This reduces the need for continuous monitoring, optimizes workflow, prevents injuries, eliminates work errors, and elevates worker safety. All in one small but powerful package.

New features added!
Ver 1.3.1 update information

  • New analysis mode ‘Body Part Analysis Mode’ for improved accuracy
    The new mode allows learning and analysis through customizable body part cropping.
  • New feature for splitting excessively long annotations
    Reduces the spread of uneven annotation lengths, leading to improved training accuracy.
  • ‘Analyzer’ update adds graph and correlation chart exports
    ・Assessment of how labels interact with each other
    ・Visualization of analytic results for comprehensive data review
Feature

System Components of VP-Motion

VP-Motion consists of three applications that work together to let you generate trained AI models through machine learning, simply by feeding the program your own video data. The trained model is then used to monitor video, including, but not limited to, prerecorded footage and live feeds.


How the System Works

Step 1. Annotator: Label Behaviors to Create Training Data

The first step is to create training data. You do this by importing the video and labeling the desired behavior.

Annotator makes this process child’s play. All you have to do is draw a rectangle around the action and give it a name.
A tutorial on how to use Annotator is available on our website.
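Conceptually, each rectangle-and-label pair produces one training record. The sketch below illustrates what such a record might contain; the field names, units, and format are illustrative assumptions, not VP-Motion’s actual file format.

```python
# Illustrative structure of one annotation produced in Annotator.
# Field names and units are assumptions, not VP-Motion's actual format.
annotation = {
    "video": "line3_camera1.mp4",   # source video file
    "label": "tightening_bolt",     # behavior name given by the annotator
    "start_sec": 12.4,              # when the behavior begins
    "end_sec": 15.1,                # when the behavior ends
    "bbox": (220, 140, 480, 400),   # rectangle around the action (x1, y1, x2, y2)
}

# A quick summary of the labeled segment:
print(f"{annotation['label']}: {annotation['end_sec'] - annotation['start_sec']:.1f} s")
```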

Step 2. Trainer: Train the AI With Training Data

The second step involves teaching our AI engine. You do this by importing the training data from Step 1. Training takes only a few minutes and requires only a small amount of data.

Once importing is complete and the engine has successfully learned the target behavior, you will receive a trained AI model.

Ver 1.3.1 introduces a function to split excessively long annotations

VP-Motion Trainer now includes a feature that splits annotations exceeding a set threshold (default: 3.0 seconds) into segments of your preferred length (default: 2.0 seconds).

In previous versions, training was hindered when datasets contained excessively long annotations, which could leave some frames missing from the video data. This lack of continuity made the data unsuitable for training, resulting in fluctuations in accuracy.
To address this, Version 1.3.1 implements a feature that splits annotations while preserving chronological order and adjusting the number of input frames, making actions easier to classify and reducing instability during training.
Enabling this feature yields a notable improvement in accuracy, particularly for long annotations, as shown in the pictures below.

Training/Validation accuracy before annotation split
Training/Validation accuracy after annotation split
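The splitting behavior described above can be sketched as follows. The function name and the tuple representation of an annotation are illustrative, not VP-Motion’s actual API; only the default threshold (3.0 s) and target length (2.0 s) come from the description above.

```python
# Illustrative sketch of fixed-length annotation splitting.
# An annotation is modeled as a (label, start, end) tuple in seconds.

def split_annotation(label, start, end, threshold=3.0, target=2.0):
    """Split one annotation into consecutive segments of at most `target`
    seconds whenever its total duration exceeds `threshold`."""
    if end - start <= threshold:
        return [(label, start, end)]  # short enough: keep as-is
    segments = []
    t = start
    while t < end:
        segments.append((label, t, min(t + target, end)))
        t += target
    return segments

# A 7-second annotation becomes 2 s + 2 s + 2 s + 1 s segments:
print(split_annotation("lifting", 10.0, 17.0))
```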

Step 3. Monitor: See Everything With Up to Eight Cameras

Finally, in the Monitor software you will use the trained model you created in Step 2. You can use either real-time footage or prerecorded video. Monitor lets you analyze the data through two different methods, ‘skeletal-based’ and ‘image-based’; both allow you to detect and analyze behavior.


VP-Motion can also be used as surveillance camera recording software with the capability to log detected behavior. Users can also freely develop unique capabilities, such as sending out an alert when specific behavior is detected.

Listed below are some features that are already available:
・Monitor and record up to eight cameras
・Export logged behavior analysis
・TCP/UDP communication through specified addresses and ports
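As a sketch of how a custom alert could consume those TCP messages, the receiver below accepts one connection and treats each line as a detection event. The port and the newline-delimited text payload are assumptions for illustration; consult the VP-Motion manual for the actual message format.

```python
# Minimal sketch of a TCP receiver for detection events.
# Port and newline-delimited payload are assumptions, not VP-Motion's
# documented protocol.
import socket

def receive_events(host="127.0.0.1", port=5050):
    """Accept one connection and return the event lines it sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(4096).decode("utf-8")
    events = data.splitlines()
    for event in events:
        print("detected:", event)  # e.g. trigger an alert or log entry here
    return events
```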

Model

Utilizes a unique pose estimation AI engine to generate highly accurate trained models and accelerate inference

VP-Motion utilizes skeletal information generated by our in-house pose estimation AI engine, ‘VisionPose’. By converting people’s movements into skeletal movements for learning and analysis, the system significantly reduces inference time and data volume, setting it apart from conventional methods in which video and training data are analyzed directly in their original format. In addition, an image-based analysis method was added in Version 1.1.0 to capture movements that involve little skeletal motion.

In conventional AI solutions, models must learn behaviors from scratch using large volumes of training data and video footage. This process requires significant time and money, which has stopped many companies from implementing such systems. Typically, creating training data requires annotating a huge amount of photo and video data, at least hundreds of thousands of data points, using an annotation tool.

To address these issues, VP-Motion comes with an integrated large-scale pre-trained model, enabling highly accurate analysis and a significant reduction in learning time, since the model only needs additional training with a small amount of data.

System Benefits

Freely define behaviors

Easily create your data through the training data creation tool ‘Annotator’

A few videos and several minutes are all you need

High-speed machine-learning through state-of-the-art AI technology.

No more worrying about camera angles: VP-Motion detects everything

Thanks to our proprietary 3D-coordinate-detection technology, VP-Motion recognizes behaviors, even if the camera angle is different from the one used to train the model.

Annotate smarter and faster

Annotator can be freely copied to other PCs, allowing multiple people to annotate different videos
at the same time, significantly cutting costs and boosting productivity.

Don’t be tied to one spot

‘Standalone Monitor’ lets you keep an eye on everything from any computer you choose
outside the installed environment, simply by loading the trained model into the software.

No special equipment needed

VP-Motion supports all cameras, from webcams to surveillance cameras,
and provides fast skeleton detection with any of them.

Issues in Other Systems

Pre-trained models often need to be built from scratch

The annotation process (creating training data) is time-consuming and costly

Collected data is often heavily restricted in terms of use and distribution

Pre-trained models are often quite large in size, taking up too much memory

The systems are too intricate to handle, even for skilled engineers and experts

Implement the system and start saving right away

As an example of what can be achieved by implementing VP-Motion, specifically in manufacturing, we compared the man-hours and costs required to classify the content of video data for 100 employees: manually versus using VP-Motion. Below, you will find an estimation of the potential savings.

Reduction of man-hours

Save roughly 90% in terms of man-hours

Reduction of costs

Save roughly 86% in terms of costs

Use case

“Tailor the System to Your Needs
Create Your Own Training Data”

VP-Motion can be used in a wide variety of settings, and the user can freely customize the software to target specific behaviors. Here are just some examples of the multitude of possible fields and scenarios in which VP-Motion could be an asset. You can also see how VP-Motion detects behaviors by visiting our demo page.

Detect tasks involving the use of tools or components Assembly Error Reduction

Detect routine tasks Workflow Optimization in Manufacturing

Detect someone collapsing Incident Detection

Detect safety practices Hazard Prevention and Mitigation

Customer Reviews

A selection of reviews we have received from our many customers since first releasing VP-Motion in 2022.

Major construction company
We can perform additional training ourselves because VP-Motion comes with an annotator and a training function. With other vendors, we would have to pay extra every time we requested additional training.
Major car manufacturer
Because VP-Motion is a standalone program, we had no worries that our sensitive data would leak. Other companies use cloud computing, so we would have to upload our data to the cloud, creating unnecessary risks.
Major machineries manufacturer
We can perform real-time analysis. Other companies only allow us to analyze prerecorded footage in batches.
Major parts manufacturer
Compared to other companies, VP-Motion has a much higher accuracy.
Multiple companies
The price is fantastic. At other companies, we would have to add at least one or two more zeroes to the final price.
Major car manufacturer
The fact that VP-Motion is a one-time purchase is great! Other companies only have subscription models.
Major car manufacturer
The fact that VP-Motion supports up to 8 cameras is perfect. Other companies limit us to just 1 camera.
Download PDF
Products

Products

VP-Motion All-in-One Package

VP-Motion for Windows
All-in-One Package

  • VP-Motion (Annotator/Trainer/Monitor) x 1
    Integrated environment for Annotator/Trainer/Monitor.
  • Standalone Monitor x 1
    Monitor app for a PC separate from the integrated environment.
  • Product Key for VP-Motion x 1
  • Product Key for Standalone Monitor x 1

PRICE: $5,510 (regular price: $7,900)

Options
VP-Motion Standalone Monitor

VP-Motion for Windows
Standalone Monitor

  • Standalone Monitor x 1
    Monitoring app for a PC separate from the integrated environment.
  • Product Key for Standalone Monitor x 1

PRICE: $710 (regular price: $990)

Order Information

  • This product is provided as a download, and will not be physically shipped.
  • After your order is completed, we will email you the product key as well as a link to download the software in 1-3 business days. For any questions about our products, please contact us via this form.
  • Payments are handled by MyCommerce, an ecommerce platform by Digital River. For all matters related to purchase and payment, please see the MyCommerce FAQ, or contact MyCommerce directly.

Specification

Supported OS: Windows 10 (64-bit) / Windows 11
PC Specifications:
・CPU: Intel® Core i7-6700 or equivalent, or higher
・Memory: 32GB or more
・Minimum required GPU: NVIDIA® GPU with a minimum of 6GB of VRAM
・Recommended GPU: NVIDIA® GeForce RTX 3060 or equivalent*, or higher
*VP-Motion specifically requires a GPU with an NVIDIA® chipset; NEXT-SYSTEM cannot guarantee the product’s functionality when an AMD GPU is used.
Number of Monitoring Cameras: Max. 8 cameras per PC (depending on PC performance)
Multiple monitoring PCs can be used through the Standalone Monitor.
Download PDF
Information

News / Topics

Release Notes

  • ver 1.3.1_EN
    10/15/2024 Update
    Major updates have been made to enhance learning and analysis accuracy. This version includes a new ‘Body Part Analysis Mode’ for analyzing individual body parts, improved accuracy through annotation segmentation, and HTML analysis graphs for better data visualization. Additionally, various bugs have been fixed.
    For more details, please refer to the included manual.
  • ver 1.2.3_EN
    06/28/2024 Update
    Added Analyzer: Analyzes multiple videos simultaneously and calculates time spent on individually labeled behaviors.
    Enhanced Image-based Analysis: Includes a new hand-crop function allowing VP-Motion to isolate and analyze hands and the tools they hold.
    Updated Skeleton-based Analysis: Upgraded to the latest pose estimation AI for improved accuracy with reduced background interference.

    Also, we’ve added video playback features and fixed various bugs. Check out the manual for more info.
  • ver 1.1.6_EN
    10/02/2023 First Version