Yes, it supports remote control. Through the remote control API in the SDK panel, you can remotely operate the software in real-time, including connecting devices, starting/stopping recording, switching post-processing modes, obtaining total data frames, etc. Alternatively, a sync box can be used to control capture via synchronization signals.
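As a rough illustration, a remote-control session typically follows the sequence connect → record → query. The class and method names below are hypothetical stand-ins, not the actual SDK API; consult the SDK documentation for the real interface.

```python
# Hypothetical sketch only: the real remote-control client, its method
# names, and the host address come from the SDK documentation.
class RemoteControlClient:
    """Stand-in for an SDK remote-control client."""

    def __init__(self, host):
        self.host = host          # IP of the machine running the software
        self.recording = False
        self.frames = 0

    def connect_devices(self):    # connect the cameras/devices remotely
        return True

    def start_recording(self):
        self.recording = True

    def stop_recording(self):
        self.recording = False
        self.frames += 120        # pretend 120 frames were captured

    def total_frames(self):       # query total data frames
        return self.frames


client = RemoteControlClient("192.168.1.10")  # address is illustrative
client.connect_devices()
client.start_recording()
client.stop_recording()
print(client.total_frames())  # 120
```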
The number of cameras required depends on the specific experimental scene and the subjects being captured. Generally, larger capture volumes or a greater number of simultaneous subjects require a corresponding increase in the number of cameras.
You can contact our engineers via the after-sales technical support chat group to request them. Each SDK comes with corresponding documentation. You can also download the SDKs from the Downloads page on our official website.
The accuracy of the markerless technology can reach the centimeter level. Currently, it is mainly applied to human motion capture.
Yes, outdoor use is supported. For related application cases in outdoor environments, please refer to: https://www.nokov.com/support/case_studies_detail/outdoor-sunlight-balance-infantry-trajectory-tracking.html
Please refer to the specifications for details. The "FPS" in the product parameters refers to the maximum frequency at the camera's full resolution. The frame rate can be further increased by reducing the capture resolution. For example, the 26H model camera has been tested to support over 10,000 FPS, while the 2H model camera can reach 380 FPS at full resolution.
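As a rule of thumb only (an assumption, not a specification guarantee): if the sensor's pixel throughput is the limiting factor, the achievable frame rate scales roughly inversely with the pixel count of the capture window. The resolution figures below are made up for illustration.

```python
# Illustrative estimate: assumes pixels-per-second throughput is constant,
# so max FPS rises as the capture resolution shrinks. Resolutions here are
# assumed values, not actual camera specifications.
def approx_max_fps(full_res_fps, full_w, full_h, new_w, new_h):
    throughput = full_res_fps * full_w * full_h  # pixels per second
    return throughput / (new_w * new_h)

# A camera doing 380 FPS at an assumed 2048x1088 full resolution could
# roughly quadruple its frame rate at half resolution in each dimension:
print(approx_max_fps(380, 2048, 1088, 1024, 544))  # 1520.0
```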
Yes, it supports simultaneous capture of multiple rigid bodies or humans. The currently released software version supports a maximum of 100 rigid bodies. Custom versions can be provided for special needs.
Yes, export is supported. This 29-point model data is primarily used for gait analysis. In practical applications, the data is typically exported in the standard C3D file format, which can then be imported into professional gait analysis software like Visual3D or OpenSim for direct calculation and analysis of metrics like joint angles.
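For intuition, the joint-angle computation such gait-analysis packages perform reduces to the angle between two segment vectors meeting at a joint (e.g., thigh and shank at the knee). A minimal sketch with made-up marker coordinates:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by markers a-b-c,
    e.g. hip-knee-ankle for knee angle."""
    ba = [a[i] - b[i] for i in range(3)]
    bc = [c[i] - b[i] for i in range(3)]
    dot = sum(ba[i] * bc[i] for i in range(3))
    na = math.sqrt(sum(x * x for x in ba))
    nc = math.sqrt(sum(x * x for x in bc))
    return math.degrees(math.acos(dot / (na * nc)))

# Hypothetical marker positions in metres: hip, knee, ankle
print(joint_angle((0.0, 0.0, 1.0), (0.0, 0.1, 0.55), (0.0, 0.05, 0.1)))
```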
The specific testing range needs to be determined based on the actual application scenario. We recommend you contact our business or technical support colleagues for a detailed assessment of your specific site. For underwater scenarios requiring large area coverage, a solution utilizing underwater active optical markers can be employed.
Yes, simultaneous capture is supported.
Yes, custom creation is supported. You can learn the detailed steps by referring to the chapter on creating custom templates in the official manual: https://xingying-docs.nokov.com/xingying/XINGYING4.4-CN/jiu-chuang-jian-markerset/san-zi-ding-yi-mu-ban/. If you need assistance during the operation, you can also contact our technical support engineers via the after-sales group.
Using tripods for calibration outdoors is feasible. The total time required is directly related to the number of cameras and mainly involves two stages: First, the direction and aperture of each camera need to be adjusted individually; the time for this stage varies with the number of cameras. After adjustment, the formal calibration operation itself takes approximately 3-5 minutes. For outdoor environments with strong light, it is recommended to use AS-Type cameras.
The software's IP address should be set according to the actual IP address of the computer running the software on the current network.
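If you are unsure which address the computer is actually using, a common standard-library trick asks the operating system which interface it would route through (connecting a UDP socket sends no packets). The probe address you pass should be a host on your capture network; the one shown is only a loopback demonstration.

```python
import socket

def local_ip(probe_host):
    """Return the local interface IP the OS would route through to reach
    probe_host. Connecting a UDP socket sends no packets; it only selects
    a route, so this works without generating network traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe_host, 9))  # the port number is irrelevant here
        return s.getsockname()[0]
    finally:
        s.close()

# In practice, pass a host on the capture network (a camera or the switch);
# the returned address is what you enter in the software's IP setting.
print(local_ip("127.0.0.1"))  # loopback demo; prints 127.0.0.1
```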
You usually need to adjust the camera height to ensure the full body is captured. Capturing a human typically requires 8-12 cameras. If the number is insufficient, data from the edges of limbs may be lost during movement.
To ensure system stability, it is recommended to connect the motion capture system to the switch via Ethernet cables. If you only need to receive the already captured data, you can use a Gigabit router to connect via Wi-Fi.
Yes. Click "Create Human Body" in the software, select a hand marker template (e.g., the "Both Hands" template includes 24 markers per hand). After placing markers according to the template and positioning the hands in the center of the camera view, you can generate the hand skeleton models with one click.
Optical motion capture can achieve sub-millimeter accuracy. High-quality data is typically characterized by smooth overall data curves with minimal jitter. Such data often requires no additional processing and can be directly applied in fields like robot training.
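One simple, generic way to quantify "minimal jitter" (a common heuristic, not an official metric of the software) is the RMS of frame-to-frame differences in a marker coordinate: smoother trajectories score lower.

```python
import math

def rms_jitter(samples):
    """RMS of successive differences; lower means a smoother trajectory."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Made-up single-coordinate trajectories (metres) for two captures:
smooth = [0.0, 0.1, 0.2, 0.3, 0.4]
noisy = [0.0, 0.3, 0.1, 0.4, 0.2]
print(rms_jitter(smooth) < rms_jitter(noisy))  # True
```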
The software comes pre-loaded with multiple model skins for use, including 1 UAV appearance, 1 UGV appearance, and 4 human skins, suitable for different demonstration and application scenarios. For human models, the software also supports importing and binding standard skins. For specific operation methods, please refer to the software user manual or contact our technical support for guidance.
The handling method depends on the severity of the loss. For sporadic loss of a small number of points, you can use post-processing functions like Cubic-Join to fill the gaps. If there is substantial, persistent loss, you should check whether the camera layout is reasonable, ensuring it fully covers the object's entire motion path, and consider recapturing the data.
Yes. Within the post-processing module, functions like "Rectify" and "Marker Swaps" can be used to repair erroneous marker data in a selected segment. Please refer to the help manual for specific operations, or contact technical support via the after-sales group.
Yes, the post-processing module can be used to locate and fix abnormal data. For example, Cubic-Join can fill in a small number of missing points, or smoothing algorithms can be applied to the data.
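For intuition, a cubic gap fill conceptually fits a cubic curve through the samples surrounding the gap. The sketch below uses a Catmull-Rom spline as a generic stand-in; it is not necessarily the software's exact algorithm.

```python
def cubic_fill(p0, p1, p2, p3, n):
    """Fill n missing samples between p1 and p2 with a Catmull-Rom cubic,
    using neighbours p0 and p3 to estimate velocity at the gap edges."""
    out = []
    for k in range(1, n + 1):
        t = k / (n + 1)
        out.append(
            0.5 * ((2 * p1)
                   + (-p0 + p2) * t
                   + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                   + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t))
    return out

# One coordinate of a marker trajectory with a 3-frame gap between 2.0 and 8.0:
print(cubic_fill(0.0, 2.0, 8.0, 18.0, 3))  # [3.125, 4.5, 6.125]
```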
Timestamps are already included in both exported data and data streamed via the SDK. Currently, the software does not support adding additional timestamps to the data. If you have a specific need for this, please contact our technical support engineers for further assistance.
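If you need an extra timestamp anyway, one common client-side workaround is to tag each streamed frame with the local receive time as it arrives. The frame layout below is illustrative only, not the SDK's actual data structure.

```python
import time

def tag_frame(frame):
    """Attach the local receive time to a streamed frame. The frame's own
    timestamp still comes from the stream itself; this only adds a local one."""
    return {"recv_time": time.time(), **frame}

# Made-up frame for illustration; a real frame comes from the SDK callback:
frame = {"timestamp_ms": 16123, "markers": [(0.1, 0.2, 1.5)]}
tagged = tag_frame(frame)
print(sorted(tagged))  # ['markers', 'recv_time', 'timestamp_ms']
```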
This is usually caused by missing marker data. First, fill in the missing marker data, and then recalculate the skeletal data.
Find the "Frame Rate" setting in the software's Device panel. You can adjust it while playback is paused. Click play to apply the new setting.
We recommend troubleshooting in the following steps: 1. Check the aperture and focus of each camera individually to ensure all markers are captured clearly and stably. 2. Perform a TL calibration again. If the problem persists, you may need to increase the number of cameras to improve capture coverage.
No. The software provides an "Origin Calibration" function specifically for resetting the origin point.
Body movement during capture may cause marker points to be lost. The software includes stabilization algorithms that can maintain skeletal stability even with a few missing points. Typically, the corners and edges are more prone to point loss. Adding cameras specifically to cover these areas can help.
First, confirm whether the configuration file has been loaded, and then verify if the T-bar length setting for calibration is correct. If these two steps have been correctly completed, further confirm whether any noise points in the scene have been removed during calibration.
Yes. We provide specialized underwater cameras that can be used for robot motion capture in underwater scenarios.
It involves integrating the Python SDK of the motion capture system with the robot platform's SDK to establish a mapping relationship between human skeletal data and robot skeletal data, thereby realizing the retargeting of motion capture data to the robot.
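A minimal sketch of the mapping step is shown below. Every joint name is purely illustrative, belonging neither to the motion-capture SDK nor to any specific robot vendor's API; the real names come from your marker template and the robot platform's SDK.

```python
import math

# Illustrative human-joint to robot-joint name mapping (an assumption):
HUMAN_TO_ROBOT = {
    "LeftShoulder": "l_shoulder_pitch",
    "LeftElbow": "l_elbow",
    "RightShoulder": "r_shoulder_pitch",
    "RightElbow": "r_elbow",
}

def retarget(human_frame):
    """Map {human_joint: angle_deg} to {robot_joint: angle_rad},
    dropping joints the robot has no counterpart for."""
    return {HUMAN_TO_ROBOT[j]: math.radians(a)
            for j, a in human_frame.items() if j in HUMAN_TO_ROBOT}

# One made-up frame of human skeletal data streamed from the SDK:
frame = {"LeftElbow": 90.0, "RightElbow": 45.0, "Head": 5.0}
print(retarget(frame))
```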
Yes. You can use a custom template. Then use the connection function in the software to link the markers into bones, enabling capture of data for each joint of the quadruped.
GMR software fully supports processing the data output by our motion capture system. Please note that when creating a project in GMR, you must select a compatible model (e.g., the 53-point V2 model). We recommend using the resources provided in the official GMR repository and selecting the "Baseline+Toe,Headband(53) V2" template created for motion capture data for retargeting operations. Our future updated human data templates will also support such applications.
The data captured by the motion capture system is typically of very high precision. In most cases, such data can be directly applied to humanoid robot training tasks without requiring additional cleaning steps.
Due to the limited performance of the robot's main unit, real-time data solving and inference are typically performed on a local computer via Ethernet, and then the data is transmitted to the robot in real-time. To facilitate this data forwarding, equipping the motion capture computer with dual network cards or a docking station with a Gigabit Ethernet port is a widely adopted connection method.
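The forwarding step itself can be as simple as relaying UDP datagrams from the capture-network interface to the robot-network interface. The sketch below demonstrates the relay on loopback; in a real dual-NIC setup, the receive socket would be bound on the NIC facing the capture switch and the destination would be the robot's address on the second NIC. Every address and payload here is illustrative.

```python
import socket

def forward_one(rx, tx, robot_addr):
    """Receive one datagram of solved pose data on rx and relay it, unchanged,
    to robot_addr via tx."""
    data, _ = rx.recvfrom(65535)
    tx.sendto(data, robot_addr)
    return data

# Loopback demonstration (addresses are illustrative):
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5)
robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # stands in for the robot
robot.bind(("127.0.0.1", 0))
robot.settimeout(5)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

src = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # stands in for the software
src.sendto(b"pose-frame-001", rx.getsockname())
forward_one(rx, tx, robot.getsockname())
print(robot.recvfrom(65535)[0])  # b'pose-frame-001'
```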
The motion capture system's accuracy can reach the sub-millimeter level. The specific accuracy varies between models, and we can recommend products based on your specific requirements.
For such customized solutions, please contact our sales or technical support colleagues. We will provide you with a dedicated configuration plan.
The number of cameras needs to be determined comprehensively based on factors like area size and number of subjects. For high-speed motion, it is recommended to appropriately increase the "Exposure" parameter within the software.
Motion capture data is typically transmitted to the robot via SDK or VRPN protocols and used as truth data, which generally requires no adjustment.
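Used as ground truth, the streamed pose is typically just compared against the robot's own estimate (e.g., odometry or SLAM output). A minimal sketch with illustrative numbers:

```python
import math

def position_error(truth, estimate):
    """Euclidean error (metres) between the mocap ground-truth position
    and the robot's onboard estimate."""
    return math.dist(truth, estimate)

truth = (1.00, 2.00, 0.50)     # pose streamed from the mocap system (made up)
estimate = (1.03, 1.96, 0.50)  # robot's own estimate (made up)
print(round(position_error(truth, estimate), 3))  # 0.05
```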
This depends on factors like specific scene size and number of robots. Typically, for a single humanoid robot, we recommend using 8 to 12 cameras.
The current Mapping Algorithm for humanoid robots is developed based on a human model. In the future, we will release versions supporting other human marker models.
You can contact our technical support engineers via the after-sales group or reach out to our sales manager to apply for a trial version of the software.
A large number of academic papers have already used our equipment. You can search for the relevant papers on our official website.