Our system uses the UDP protocol.
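For reference, a minimal Python sketch of receiving that UDP stream; the listening port below is a placeholder, not the system's actual default — use the address and port configured in the software's broadcast settings.

```python
import socket

# Placeholder address/port -- substitute the values configured in the
# software's data broadcast settings; these are NOT the system defaults.
LISTEN_ADDR = ("0.0.0.0", 9000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)

while True:
    # UDP is datagram-based: each recvfrom() returns one complete packet.
    packet, sender = sock.recvfrom(65535)
    print(f"received {len(packet)} bytes from {sender}")
```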
We recommend wearing a motion capture suit. The suit has a fuzzy (loop) surface, and hook-type reflective markers can be attached firmly to it.
This depends on the size of the objects. For a standard PX4 drone (wheelbase approx. 250–300 mm), it is sufficient to ensure that the spatial marker layout is different for each drone. For smaller objects, you can design a mount that extends the markers outward with varying lengths to enable stable identification.
Yes. If you only need to track a single point, make sure there are no noise points in the capture volume. The single‑point data will be output in an "unnamed" data file.
You can transmit the data via VRPN to ROS, then use mavros to forward it. For detailed instructions, please refer to our online manual.
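As an orientation-only sketch (not an official configuration): assuming vrpn_client_ros is already publishing your rigid body as /vrpn_client_node/drone1/pose, and mavros is set up to accept external pose data on /mavros/vision_pose/pose, a relay node could look like this ("drone1" is an example tracker name — substitute your rigid body's name):

```python
#!/usr/bin/env python
# Relay a VRPN rigid-body pose to mavros for external position estimation.
import rospy
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node("vrpn_to_mavros")
    pub = rospy.Publisher("/mavros/vision_pose/pose", PoseStamped, queue_size=10)
    # vrpn_client_ros publishes each tracker under /vrpn_client_node/<name>/pose.
    rospy.Subscriber("/vrpn_client_node/drone1/pose", PoseStamped, pub.publish)
    rospy.spin()

if __name__ == "__main__":
    main()
```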
Yes. The latest software version supports this feature. We recommend upgrading your software to the latest version. You can download the software installer from our official website at: https://en.nokov.com/support/downloads.html.
Our software has a built-in VRPN function. You can find and enable it in the Data Broadcast panel to start receiving data. For the client side, please refer to the "Communication between Xingying and the robot" section in our manual for instructions on installing vrpn_client_ros according to your ROS version. If you need the specific link, please contact our engineers.
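Once VRPN broadcasting is enabled and vrpn_client_ros is running, a minimal rospy sketch for verifying that data is arriving might look like this ("body1" is an example tracker name; vrpn_client_ros publishes each tracker on /vrpn_client_node/<tracker_name>/pose as geometry_msgs/PoseStamped):

```python
#!/usr/bin/env python
# Print a rigid body's position as it streams in via vrpn_client_ros.
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    p = msg.pose.position
    rospy.loginfo("x=%.3f y=%.3f z=%.3f", p.x, p.y, p.z)

rospy.init_node("vrpn_echo")
rospy.Subscriber("/vrpn_client_node/body1/pose", PoseStamped, on_pose)
rospy.spin()
```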
Our motion capture devices allow you to adjust the data output frame rate. In the VRPN parameters, you can also adjust the "update frequency" parameter; setting it to match the capture frame rate allows data to be received at the desired rate.
Yes, VRPN data can be received wirelessly; you only need to switch the IP address in the software to that of your wireless adapter.
Yes. The exported data (.xrs file) contains rigid body acceleration data, quaternion rotation data, as well as angular velocity and angular acceleration data.
This issue is usually caused by missing marker points. You can resolve it as follows:
1. Locate the first frame where the recognition error occurs.
2. Click "Select Forward Frames."
3. Use "Quick ID" to reselect the markers until they are correctly identified.
4. Click "Rectify."
5. Advance frame by frame to check whether any recognition errors persist. If so, repeat the steps above.
Please verify whether you are using the latest version of the software. If not, upgrade to the latest version and try again. You can obtain the latest installer from our official website at: https://en.nokov.com/support/downloads.html.
Yes. Our system currently supports Ubuntu 22.04. You can download the software installer from our official website at: https://en.nokov.com/support/downloads.html
The reflective material is typically 3M 8850 or 3M 7610.
Check the "Reference Video" option next to the "Connect Cameras" button. Enabling this option will allow synchronous recording of the reference video.
Yes. You can download the SDKs and plugins from our official website at: https://en.nokov.com/support/downloads.html
Please check whether the "Play" button has been clicked in the software interface to start capture.
After installation, it is recommended to check each camera's image in the software's 2D view to confirm that the cameras cover the target capture volume, especially the center of the field. Also check for occlusions or reflective interference within the field of view. If you find insufficient coverage or occlusion, adjust the camera angles appropriately, or move the obstacles affecting the view, to achieve more stable capture results.
Yes, it supports remote control. Through the remote control API in the SDK panel, you can remotely operate the software in real-time, including connecting devices, starting/stopping recording, switching post-processing modes, obtaining total data frames, etc. Alternatively, a sync box can be used to control capture via synchronization signals.
The number of cameras required depends on the specific experimental scene and the subjects being captured. Generally, larger capture volumes or a greater number of simultaneous subjects require a corresponding increase in the number of cameras.
You can contact our engineers via the after-sales technical support chat group to request them. Each SDK comes with corresponding documentation. You can also download the SDKs from the Downloads page on our official website.
The accuracy of the markerless technology can reach the centimeter level. Currently, it is mainly applied to human motion capture.
Yes, outdoor use is supported. For related application cases in outdoor environments, please refer to: https://www.nokov.com/support/case_studies_detail/outdoor-sunlight-balance-infantry-trajectory-tracking.html
Please refer to the specifications for details. The "FPS" in the product parameters refers to the maximum frequency at the camera's full resolution. The frame rate can be further increased by reducing the capture resolution. For example, the 26H model camera has been tested to support over 10,000 FPS, while the 2H model camera can reach 380 FPS at full resolution.
Yes, it supports simultaneous capture of multiple rigid bodies or humans. The currently released software version supports a maximum of 100 rigid bodies. Custom versions can be provided for special needs.
Yes, export is supported. This 29-point model data is primarily used for gait analysis. In practical applications, the data is typically exported in the standard C3D file format, which can then be imported into professional gait analysis software such as Visual3D or OpenSim for direct calculation and analysis of metrics like joint angles.
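If you want to inspect an exported file programmatically before importing it, the third-party ezc3d Python library (independent of our software; pip install ezc3d) is one option. A minimal sketch, with "gait.c3d" as a placeholder file name:

```python
import ezc3d

c3d = ezc3d.c3d("gait.c3d")
points = c3d["data"]["points"]          # shape: (4, n_markers, n_frames)
labels = c3d["parameters"]["POINT"]["LABELS"]["value"]
rate = c3d["parameters"]["POINT"]["RATE"]["value"][0]
print(f"{points.shape[1]} markers, {points.shape[2]} frames at {rate} Hz")
print("markers:", labels)
```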
The specific testing range needs to be determined based on the actual application scenario. We recommend you contact our business or technical support colleagues for a detailed assessment of your specific site. For underwater scenarios requiring large area coverage, a solution utilizing underwater active optical markers can be employed.
Yes, simultaneous capture is supported.
Yes, custom creation is supported. You can learn the detailed steps by referring to the chapter on creating custom templates in the official manual: https://xingying-docs.nokov.com/xingying/XINGYING4.4-CN/jiu-chuang-jian-markerset/san-zi-ding-yi-mu-ban/. If you need assistance during the operation, you can also contact our technical support engineers via the after-sales group.
Using tripods for calibration outdoors is feasible. The total time required is directly related to the number of cameras and involves two stages: first, the direction and aperture of each camera must be adjusted individually, and the time for this stage varies with the number of cameras; after adjustment, the formal calibration itself takes approximately 3-5 minutes. For outdoor environments with strong light, AS-Type cameras are recommended.
The software's IP address should be set according to the actual IP address of the computer running the software on the current network.
You usually need to adjust the camera height to ensure the full body is captured. Capturing a human typically requires 8-12 cameras. If the number is insufficient, data from the edges of limbs may be lost during movement.
To ensure system stability, it is recommended to connect the motion capture system to the switch via Ethernet cables. If you only need to receive the already captured data, you can use a Gigabit router to connect via Wi-Fi.
Yes. Click "Create Human Body" in the software, select a hand marker template (e.g., the "Both Hands" template includes 24 markers per hand). After placing markers according to the template and positioning the hands in the center of the camera view, you can generate the hand skeleton models with one click.
Optical motion capture can achieve sub-millimeter accuracy. High-quality data is typically characterized by smooth overall data curves with minimal jitter. Such data often requires no additional processing and can be directly applied in fields like robot training.
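If you want a quick, informal numeric check of smoothness (this is not the software's built-in quality metric), one common approach is the RMS of the trajectory's second differences — smooth data yields a small value, while jitter and spikes inflate it:

```python
import numpy as np

def rms_jitter(positions, rate_hz):
    """Rough jitter estimate for a (n_frames, 3) marker trajectory.

    Computes the RMS magnitude of the finite-difference acceleration;
    an informal indicator only, not a formal accuracy measure.
    """
    accel = np.diff(positions, n=2, axis=0) * rate_hz**2
    return float(np.sqrt((accel ** 2).sum(axis=1).mean()))

# Example with synthetic data: a smooth circular path sampled at 100 Hz.
t = np.linspace(0, 2 * np.pi, 200)
traj = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
print(rms_jitter(traj, rate_hz=100.0))
```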
The software comes pre-loaded with multiple model skins for use, including 1 UAV appearance, 1 UGV appearance, and 4 human skins, suitable for different demonstration and application scenarios. For human models, the software also supports importing and binding standard skins. For specific operation methods, please refer to the software user manual or contact our technical support for guidance.
The handling method depends on the severity of the loss. For sporadic loss of a small number of points, you can use post-processing functions like Cubic Join to fill the gaps. If there is substantial, persistent loss, you should check whether the camera layout is reasonable, ensure it fully covers the object's entire motion path, and consider recapturing the data.
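For reference, if you prefer to fill short gaps outside the software, here is a minimal offline sketch using SciPy's CubicSpline — a stand-in illustration of the idea, not the software's own Cubic Join implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps(trajectory):
    """Fill NaN gaps in a (n_frames, 3) marker trajectory with a cubic spline.

    Suitable only for short, sporadic interior gaps; long dropouts should
    be recaptured rather than interpolated.
    """
    frames = np.arange(len(trajectory))
    filled = trajectory.copy()
    valid = ~np.isnan(trajectory).any(axis=1)
    spline = CubicSpline(frames[valid], trajectory[valid])
    filled[~valid] = spline(frames[~valid])
    return filled

# Example: a two-frame gap in a straight-line trajectory.
traj = np.linspace(0, 1, 10)[:, None] * np.array([1.0, 2.0, 0.0])
traj[4:6] = np.nan
print(fill_gaps(traj))
```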
Yes. Within the post-processing module, functions like "Rectify" and "Marker Swaps" can be used to repair erroneous marker data in a selected segment. Please refer to the help manual for specific operations, or contact technical support via the after-sales group.
Yes, the post-processing module can be used to locate and fix abnormal data. For example, Cubic Join can fill in a small number of missing points, or smoothing algorithms can be applied to the data.
Timestamps are already included in both exported data and data streamed via the SDK. Currently, the software does not support adding additional timestamps to the data. If you have a specific need for this, please contact our technical support engineers for further assistance.
This is usually caused by missing marker data. First, fill in the missing marker data, and then recalculate the skeletal data.
Find the "Frame Rate" setting in the software's Device panel. You can adjust it while playback is paused. Click play to apply the new setting.
We recommend troubleshooting in the following steps: 1. Check the aperture and focus of each camera individually to ensure all markers are captured clearly and stably. 2. Perform a TL calibration again. If the problem persists, you may need to increase the number of cameras to improve capture coverage.
No. The software provides an "Origin Calibration" function specifically for resetting the origin point.
Body movement during capture may cause marker points to be lost. The software includes stabilization algorithms that can maintain skeletal stability even with a few missing points. Typically, the corners and edges of the capture volume are more prone to point loss; adding cameras specifically to cover these areas can help.
First, confirm whether the configuration file has been loaded, and then verify if the T-bar length setting for calibration is correct. If these two steps have been correctly completed, further confirm whether any noise points in the scene have been removed during calibration.
Our software includes built-in templates such as the Helen Hayes (CGM2) full-body and lower-body models. If you use this marker placement, the software will automatically create the lower-body model. You do not need to manually create joints or rigid bodies.
Our software offers several human templates, for example, a 53-point full-body pose template, a 43-point CGM2 template, as well as lower-body or upper-body only templates. You can choose according to your actual needs.
Yes. The system accurately extracts the position information of each marker. If you need to transfer the data to third-party software, our plugin can accomplish that. Our motion capture system is a high-precision device with sub-millimeter accuracy, so the results are very reliable.
Currently, our supported robots are Unitree G1, Tiangong Pro, and Booster T1. For other robots, you first need to provide us with the robot’s URDF file. We will then modify the retargeting algorithm to adapt it for teleoperation. If needed, you can contact our engineers.
Yes. Our software supports a human template that includes fingers. You can also use an IMU glove plus full-body markers, and then map the data to the robot via our retargeting algorithm.
First, you need to obtain the robot's URDF file. Our algorithm then maps the joint data output from the motion capture system to the robot's joints, minimizing the difference between the robot model's joint orientations and the motion capture data's joint orientations within each joint's limit range. Currently, we support robots such as Unitree G1, Booster T1, and Tiangong Pro.
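As a simplified illustration of the "minimize orientation difference within joint limits" idea (the actual algorithm optimizes across the whole kinematic chain, e.g., with an IK solver), for a single 1-DoF revolute joint this reduces to a clamp:

```python
import numpy as np

def retarget_joint(target_angle, lower, upper):
    """Pick the joint angle inside [lower, upper] closest to the mocap target.

    For one revolute joint, minimizing orientation error subject to joint
    limits is just clamping; real retargeting solves this jointly over all
    joints of the chain.
    """
    return float(np.clip(target_angle, lower, upper))

# Example: mocap elbow at 150 deg, robot elbow limited to [0, 120] deg.
print(retarget_joint(np.radians(150), 0.0, np.radians(120)))  # ~2.094 rad
```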
Yes. We provide specialized underwater cameras that can be used for robot motion capture in underwater scenarios.
It involves integrating the Python SDK of the motion capture system with the robot platform's SDK to establish a mapping relationship between human skeletal data and robot skeletal data, thereby realizing the retargeting of motion capture data to the robot.
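The overall dataflow can be sketched as below. Both SDK calls are hypothetical stubs standing in for the real motion capture Python SDK and the robot platform's SDK, so only the structure, not the API names, should be taken literally:

```python
import time

# Hypothetical stubs: get_human_joints() stands in for the motion capture
# Python SDK, send_robot_command() for the robot platform's SDK. They are
# dummies here so the overall loop runs as-is.
def get_human_joints():
    return {"left_elbow": 1.2, "right_elbow": 0.9}  # dummy angles (rad)

def send_robot_command(joint_targets):
    print("robot targets:", joint_targets)

# Illustrative mapping from human skeleton joints to robot joint names.
JOINT_MAP = {"left_elbow": "l_arm_elbow", "right_elbow": "r_arm_elbow"}

def step():
    human = get_human_joints()
    targets = {robot: human[h] for h, robot in JOINT_MAP.items()}
    send_robot_command(targets)

for _ in range(3):  # in practice, loop at the capture frame rate
    step()
    time.sleep(0.01)
```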
Yes. You can use a custom template. Then use the connection function in the software to link the markers into bones, enabling capture of data for each joint of the quadruped.
GMR software fully supports processing the data output by our motion capture system. Please note that when creating a project in GMR, you must select a compatible model (e.g., the 53-point V2 model). We recommend using the resources provided in the official GMR repository and selecting the "Baseline+Toe,Headband(53) V2" template created for motion capture data for retargeting operations. Our future updated human data templates will also support such applications.
The data captured by the motion capture system is typically of very high precision. In most cases, such data can be directly applied to humanoid robot training tasks without requiring additional cleaning steps.
Due to the limited performance of the robot's onboard computer, real-time data solving and inference are typically performed on a local computer, and the results are then transmitted to the robot in real time over Ethernet. To facilitate this data forwarding, equipping the motion capture computer with dual network cards, or a docking station with a Gigabit Ethernet port, is a widely adopted setup.
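A minimal sketch of such forwarding on a dual-NIC computer; all addresses and ports below are placeholders for the values configured on your own network:

```python
import socket

# Placeholder addresses: bind to the NIC facing the capture switch and
# forward to the robot's address on the second NIC.
CAPTURE_SIDE = ("192.168.1.50", 9000)   # NIC on the motion capture network
ROBOT_ADDR = ("192.168.2.20", 9001)     # robot on the second network

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(CAPTURE_SIDE)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    packet, _ = rx.recvfrom(65535)
    tx.sendto(packet, ROBOT_ADDR)   # relay each datagram unchanged
```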
The motion capture system's accuracy can reach the sub-millimeter level. The specific accuracy depends on the product model; we can recommend products based on your specific requirements.
For such customized solutions, please contact our sales or technical support colleagues. We will provide you with a dedicated configuration plan.
The number of cameras needs to be determined comprehensively based on factors like area size and number of subjects. For high-speed motion, it is recommended to appropriately increase the "Exposure" parameter within the software.
Motion capture data is typically transmitted to the robot via SDK or VRPN protocols and used as truth data, which generally requires no adjustment.
This depends on factors like specific scene size and number of robots. Typically, for a single humanoid robot, we recommend using 8 to 12 cameras.
The current mapping algorithm for humanoid robots is developed based on a human model. In the future, we will release versions supporting other human marker models.
You can contact our technical support engineers via the after-sales group or reach out to our sales manager to apply for a trial version of the software.
A large number of academic papers have already used our equipment. You can search for the relevant papers on our official website.