
FAQs

Welcome to the NOKOV FAQ Center! Find quick answers to all your motion capture queries. We're here to help you capture every movement seamlessly.
If you need more assistance, feel free to reach out to us at support@nokov.cn and we'll be happy to assist you.
Motion Capture
Does the software support remote control, for example, to start or stop capture remotely? How is it implemented?

Yes, it supports remote control. Through the remote control API in the SDK panel, you can remotely operate the software in real-time, including connecting devices, starting/stopping recording, switching post-processing modes, obtaining total data frames, etc. Alternatively, a sync box can be used to control capture via synchronization signals.
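As a rough illustration of the remote-control pattern, the skeleton below shows where the SDK calls would sit in a Python script. Every name here (the wrapper class, its methods) is a placeholder, not the actual NOKOV SDK API; substitute the real calls from the documentation that ships with the SDK. The IP address follows the 10.1.1.x addressing mentioned elsewhere in this FAQ.

```python
# Skeleton of a remote-control script; all names are placeholders,
# not the real NOKOV SDK API -- fill the bodies in from the SDK docs.
class RemoteCaptureSession:
    def __init__(self, server_ip: str):
        self.server_ip = server_ip  # machine running the capture software

    def connect(self):
        print(f"connecting to capture software at {self.server_ip}")  # real SDK call here

    def start_recording(self):
        print("recording started")  # real SDK call here

    def stop_recording(self):
        print("recording stopped")  # real SDK call here


session = RemoteCaptureSession("10.1.1.198")
session.connect()
session.start_recording()
# ... capture runs ...
session.stop_recording()
```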

Are there fixed requirements for the number of cameras in a motion capture system?

The number of cameras required depends on the specific experimental scene and the subjects being captured. Generally, larger capture volumes or a greater number of simultaneous subjects require a corresponding increase in the number of cameras.

How to obtain the SDKs for Matlab and Python?

You can contact our engineers via the after-sales technical support chat group to request them. Each SDK comes with corresponding documentation. You can also download the SDKs from the Downloads page on our official website.

What is the accuracy of the markerless motion capture technology mentioned on the website, and what are its current main applications?

The accuracy of the markerless technology can reach the centimeter level. Currently, it is mainly applied to human motion capture.

Can the motion capture system be used outdoors?

Yes, outdoor use is supported. For related application cases in outdoor environments, please refer to: https://www.nokov.com/support/case_studies_detail/outdoor-sunlight-balance-infantry-trajectory-tracking.html

What is the relationship between the system's motion capture frame rate and the camera frame rate? For example, what is the highest achievable system frame rate when using eight 2H model cameras?

Please refer to the specifications for details. The "FPS" in the product parameters refers to the maximum frequency at the camera's full resolution. The frame rate can be further increased by reducing the capture resolution. For example, the 26H model camera has been tested to support over 10,000 FPS, while the 2H model camera can reach 380 FPS at full resolution.

Does the system support capturing multiple rigid bodies simultaneously? Is there a limit?

Yes, it supports simultaneous capture of multiple rigid bodies or humans. The currently released software version supports a maximum of 100 rigid bodies. Custom versions can be provided for special needs.

Can the joint angle data be directly exported from the XINGYING 29-point human model?

Yes, export is supported. This 29-point model data is primarily used for gait analysis. In practical applications, the data is typically exported in the standard C3D file format, which can then be imported into professional gait analysis software like Visual3D or OpenSim for direct calculation and analysis of metrics like joint angles.
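For example, a quick way to sanity-check an exported C3D file before importing it into Visual3D or OpenSim is the open-source ezc3d Python library (a third-party tool, not part of the NOKOV software); the file name below is hypothetical:

```python
# Inspect an exported C3D file with the open-source ezc3d library
# (pip install ezc3d); "gait_trial.c3d" is a hypothetical file name.
import ezc3d

c3d = ezc3d.c3d("gait_trial.c3d")
labels = c3d["parameters"]["POINT"]["LABELS"]["value"]  # marker names
rate = c3d["parameters"]["POINT"]["RATE"]["value"][0]   # capture frame rate
points = c3d["data"]["points"]                          # shape: (4, n_markers, n_frames)

print(f"{len(labels)} markers at {rate} Hz, {points.shape[2]} frames")
print(f"marker {labels[0]} at frame 0: {points[:3, 0, 0]}")  # x, y, z
```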

What is the effective testing range of the underwater motion capture system?

The specific testing range needs to be determined based on the actual application scenario. We recommend you contact our business or technical support colleagues for a detailed assessment of your specific site. For underwater scenarios requiring large area coverage, a solution utilizing underwater active optical markers can be employed.

Does the optical motion capture system support simultaneous capture of active and passive optical markers?

Yes, simultaneous capture is supported.

When creating a human model in the software, can the layout of the marker points be customized? How is it done?

Yes, custom creation is supported. You can learn the detailed steps by referring to the chapter on creating custom templates in the official manual: https://xingying-docs.nokov.com/xingying/XINGYING4.4-CN/jiu-chuang-jian-markerset/san-zi-ding-yi-mu-ban/. If you need assistance during the operation, you can also contact our technical support engineers via the after-sales group.

When setting up a motion capture system for humanoid robot training using tripods in an outdoor scenario, is the calibration process convenient? How long does the entire process typically take?

Using tripods for calibration outdoors is feasible. The total time required is directly related to the number of cameras and mainly involves two stages: first, the direction and aperture of each camera need to be adjusted individually, and the time for this stage varies with the number of cameras; after adjustment, the formal calibration operation itself takes approximately 3-5 minutes. For outdoor environments with strong light, it is recommended to use AS-Type cameras.

How is the software's IP address set?

The software's IP address should be set according to the actual IP address of the computer running the software on the current network.

What might be the reason for poor recognition when creating a human body model?

You usually need to adjust the camera height to ensure the full body is captured. Capturing a human typically requires 8-12 cameras. If the number is insufficient, data from the edges of limbs may be lost during movement.

Must the computer use an Ethernet cable? Can Wi-Fi be used?

To ensure system stability, it is recommended to connect the motion capture system to the switch via Ethernet cables. If you only need to receive the already captured data, you can use a Gigabit router to connect via Wi-Fi.

Does the software support calibration and modeling of hands?

Yes. Click "Create Human Body" in the software, select a hand marker template (e.g., the "Both Hands" template includes 24 markers per hand). After placing markers according to the template and positioning the hands in the center of the camera view, you can generate the hand skeleton models with one click.

How is high-quality motion capture data defined?

Optical motion capture can achieve sub-millimeter accuracy. High-quality data is typically characterized by smooth overall data curves with minimal jitter. Such data often requires no additional processing and can be directly applied in fields like robot training.

After motion capture is complete, can skins or appearances be added to the model within the software?

The software comes pre-loaded with multiple model skins for use, including 1 UAV appearance, 1 UGV appearance, and 4 human skins, suitable for different demonstration and application scenarios. For human models, the software also supports importing and binding standard skins. For specific operation methods, please refer to the software user manual or contact our technical support for guidance.

How should marker point loss during data capture be handled?

The handling method depends on the severity of the loss. For sporadic loss of a small number of points, you can use post-processing functions like Cubic Join to fill the gaps, as illustrated below. If there is substantial, persistent loss, you should check whether the camera layout is reasonable, ensuring it fully covers the object's entire motion path, and consider recapturing the data.
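To make the idea behind cubic gap filling concrete, here is a minimal, independent sketch of the same technique in Python with NumPy/SciPy; it illustrates the method, not the software's implementation:

```python
# Fill short gaps (NaNs) in one marker coordinate track with a cubic
# spline -- the same idea as the Cubic Join post-processing function,
# illustrated independently with SciPy.
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps(t, x):
    """Replace NaN samples in x with cubic-spline estimates."""
    valid = ~np.isnan(x)
    spline = CubicSpline(t[valid], x[valid])
    filled = x.copy()
    filled[~valid] = spline(t[~valid])
    return filled

t = np.arange(10) / 100.0         # timestamps at 100 Hz
x = np.sin(2 * np.pi * 2.0 * t)   # one coordinate of one marker
x[[4, 5]] = np.nan                # simulate a short dropout
print(fill_gaps(t, x))
```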

Does the software support manual editing or repair of data points?

Yes. Within the post-processing module, functions like "Rectify" and "Marker Swaps" can be used to repair erroneous marker data in a selected segment. Please refer to the help manual for specific operations, or contact technical support via the after-sales group.

Can data processing be done in post-processing?

Yes, the post-processing module can be used to locate and fix abnormal data. For example, Cubic Join can fill in a small number of missing points, or smoothing algorithms can be applied to the data.

How do I add timestamps? Is it possible to add a timestamp at a specific point?

Timestamps are already included in both exported data and data streamed via the SDK. Currently, the software does not support adding additional timestamps to the data. If you have a specific need for this, please contact our technical support engineers for further assistance.

After importing and solving a human model, why are the connections between markers and the human skeleton not displaying completely?

This is usually caused by missing marker data. First, fill in the missing marker point data, and then recalculate the skeletal data.

How to modify the recording frame rate?

Find the "Frame Rate" setting in the software's Device panel. You can adjust it while playback is paused. Click play to apply the new setting.

When creating a human model, markers frequently disappear or flicker, making it hard to freeze a complete set, or data remains severely incomplete even after a successful freeze. What could be the issue?

We recommend troubleshooting in the following steps: 1. Check the aperture and focus of each camera individually to ensure all markers are captured clearly and stably. 2. Perform a TL calibration again. If the problem persists, you may need to increase the number of cameras to improve capture coverage.

If resetting the origin is needed after calibration, is a TL recalibration required?

No. The software provides an "Origin Calibration" function specifically for resetting the origin point.

How to resolve marker point loss during human motion capture?

Body movement during capture may cause marker points to be lost. The software includes stabilization algorithms that can maintain skeletal stability even with a few missing points. Typically, the corners and edges are more prone to point loss. Adding cameras specifically to cover these areas can help.

Calibration failed multiple times?

First, confirm whether the configuration file has been loaded, and then verify if the T-bar length setting for calibration is correct. If these two steps have been correctly completed, further confirm whether any noise points in the scene have been removed during calibration.

How to create a coordinate system?
The coordinate system is created during calibration, and the geodetic coordinate system cannot be arbitrarily changed later.
After calibration, why do many noise points appear in the field?
One possibility is that noise was not completely removed during the T-bar calibration and the calibration only barely passed; if noise points appeared during the calibration process, it is natural for them to appear during subsequent data collection as well. Another possibility is reflective equipment in the field: the equipment might have been removed from the field during calibration and brought back afterward, causing reflections.
Why is the ground in the software oriented vertically?
The previous settings file was saved with the Z-axis or Y-axis pointing upward; changing the coordinate axis now would alter the geodetic coordinate system. You only need to recalibrate.
During calibration, why does one camera's 2D view have little or no gray coverage, preventing calibration?
Check the numbers in parentheses at the bottom left corner of the 2D view to see if there are any noise points other than the T-bar in the camera's field of view. If there are, they need to be removed or masked in the software.
Why are the window and button layouts abnormal in the software, with incomplete text display?
This is related to the Windows operating system's scaling settings. There are two solutions: 1. Right-click on the Windows desktop — Display Settings — Scaling, and set it to 100%. 2. Right-click on the XINGYING software icon — Properties — Compatibility — Change high DPI settings — check "Override high DPI scaling behavior" and set "Scaling performed by" to "System".
Calibration failed with the message "Wand calibration failed, please recalibrate"?
After confirming that the camera position, angle, aperture, and focus have been adjusted, investigate the following causes: 1. Verify that the physical size of the L-shaped calibration frame matches the settings in the software: in the software settings menu — Calibration — Calibration Frame — Calibration Rod Type, select the calibration rod type corresponding to the distance between Marker1 and Marker3 on the physical L-shaped calibration frame. 2. Check whether there are any extraneous noise points in the 2D view, especially reflections on the person swinging the T-bar calibration rod, to avoid noise points other than the T-bar appearing during the swinging process.
Error message "Failed to connect camera" when connecting the camera?
You can analyze the following situations step by step: 1. Hardware: check whether the switch is powered properly and whether the network cables at both the computer and switch ends are loose. 2. Software: check whether the IP of the network card connected to the switch is 10.1.1.198, and whether the firewall and antivirus software are disabled.
After connecting the cameras, what to do if the number of cameras detected does not match the actual number used?
After connecting the cameras, in the 2D view, select "Settings" - "2D View" - "Display IP" to show the 2D view of all cameras, and check the continuity of IP addresses to see which camera is not connected; or visually inspect whether the camera's digital display is off, which may indicate a loose network cable or PoE connection. Reconnect the network cable and PoE splitter.
Is there a more convenient way to mask noise points in the field instead of masking them one by one?
You can remove the markers and calibration tools from the field and use the "Calibration" window's "Start Occlusion" function to automatically occlude any noise points present in the current field with one click. After completion, manually check each camera to ensure complete occlusion.
When viewing the camera grayscale image, why do some cameras show horizontal or vertical lines, obscuring the image?
Check whether the camera network cables are securely connected, restart the switch, or change the switch's network cable interface.
When viewing the camera grayscale image, why is the image still not bright enough even with the aperture set to maximum?
Calibration is not affected as long as the bright reflective markers are visible in the camera's 2D view.
Calibration results are often not satisfactory, frequently showing as Normal or Poor?
1. Increase the swing time, and ensure the swing motion is not too fast, so that the gray area in the camera view is as deep as possible, thereby increasing the effective data volume. 2. Contact sales or engineers in the group to confirm whether the software or camera firmware is the latest version.
In the 3D view, why do white marker points appear fragmented?
The camera in the system may have been touched/moved in position or angle, or the system has not been calibrated for a long time; recalibration will suffice.
In the 3D view, why is there incorrect recognition of marker points in the rigid body, or incorrect recognition of the rigid body's axis?
This is mostly due to a high degree of symmetry in the arrangement of multiple markers on the object being measured. It is recommended to arrange them in an irregular, asymmetrical structure. You can refer to the video at https://www.bilibili.com/video/BV1Rr4y1c7Yp.
During L-shaped calibration, what to do if some cameras in the software's 2D view show a red square in the top right corner instead of all turning green when clicking next?
Some cameras have not turned green because they have not completely recognized the four points of the L-shaped calibration frame. You can proceed with T-bar calibration, and subsequent calculations will be performed using that calibration data.
When collecting data in the 3D view, rigid bodies and points can be seen, but the screen remains static when objects move. What is the reason?
Check whether the "Unfreeze" button has been clicked.
How to display the trajectory information of markers?
Displaying marker trajectories is supported in post-processing mode. Right-click in the 3D view, check "Trajectory" in the display settings, and then select the marker whose trajectory you want to display.
Does the created rigid body support axial modification?
In post-processing mode, load the motion capture data and manually create a markerset to adjust the rigid body's position and angle. Rigid bodies created with one click cannot be modified in real time; modifications must be made in post-processing on collected data. There, select the rigid body and modify its axis in the "Bone Axis" field on the right-side properties panel.
What to do if there is an error message about marker points when establishing a human skeleton?
Check if the number of markers is 53 points. If fewer than 53, verify if any markers are missing or obstructed. If more than 53, check for extra markers or noise points in the environment.
Why can't the human skeleton template be recognized after a period of time, even though it was initially set up correctly?
Verify if the number and position of human markers have changed. Click "Reset Recognition."
Robotics & Engineering
Can the system be used for underwater robot capture?

Yes. We provide specialized underwater cameras that can be used for robot motion capture in underwater scenarios.

Can you provide more details about your self-developed retargeting algorithm?

It involves integrating the Python SDK of the motion capture system with the robot platform's SDK to establish a mapping relationship between human skeletal data and robot skeletal data, thereby realizing the retargeting of motion capture data to the robot.
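The mapping itself can be pictured with a toy sketch like the one below: each robot joint reads one human joint value and applies a per-joint scale and offset. The joint names, indices, and coefficients are invented for illustration; the actual NOKOV retargeting algorithm is proprietary and not reproduced here.

```python
# Toy retargeting sketch: map one frame of human joint angles (radians)
# onto robot joint commands via a fixed mapping with per-joint scale and
# offset. All names and numbers are illustrative, not the actual algorithm.
import numpy as np

HUMAN_TO_ROBOT = {  # robot joint -> (human joint index, scale, offset)
    "left_shoulder_pitch": (0, 1.0, 0.0),
    "left_elbow": (1, 0.9, 0.1),  # e.g., robot elbow has a smaller range
}

def retarget(human_angles: np.ndarray) -> dict:
    return {joint: scale * human_angles[idx] + offset
            for joint, (idx, scale, offset) in HUMAN_TO_ROBOT.items()}

frame = np.array([0.5, 1.2])  # one frame, e.g., streamed via the Python SDK
print(retarget(frame))
```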

Does the system support creating rigid body models for quadruped robots?

Yes. You can use a custom template, then use the connection function in the software to link the markers into bones, enabling capture of data for each joint of the quadruped.

Does the NOKOV motion capture system support outputting data into GMR software?

GMR fully supports processing the data output by our motion capture system. Note that when creating a project in GMR, you must select a compatible model (e.g., the 53-point V2 model). We recommend using the resources provided in the official GMR repository and selecting the "Baseline+Toe,Headband(53) V2" template created for motion capture data when performing retargeting operations. Our future updated human data templates will also support such applications.

Does the captured motion capture data require cleaning? Can it be used directly for humanoid robot training?

The data captured by the motion capture system is typically of very high precision. In most cases, such data can be directly applied to humanoid robot training tasks without requiring additional cleaning steps.

In robot teleoperation, how does the computer connected to the motion capture device connect to the robot?

Due to the limited performance of the robot's main unit, real-time data solving and inference are typically performed on a local computer via Ethernet, and then the data is transmitted to the robot in real-time. To facilitate this data forwarding, equipping the motion capture computer with dual network cards or a docking station with a Gigabit Ethernet port is a widely adopted connection method.
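A minimal sketch of that forwarding step, assuming the solved poses arrive as UDP packets on the capture-side network card (the addresses, ports, and UDP framing below are example assumptions, not a documented NOKOV interface):

```python
# Relay pose packets from the motion-capture NIC to the robot NIC on a
# dual-network-card PC. All addresses and ports are example assumptions.
import socket

MOCAP_BIND = ("10.1.1.198", 9000)      # NIC facing the motion capture switch
ROBOT_ADDR = ("192.168.123.15", 9001)  # NIC/port facing the robot

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(MOCAP_BIND)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    packet, _ = rx.recvfrom(65535)   # one solved-pose packet from the local PC
    tx.sendto(packet, ROBOT_ADDR)    # forward to the robot in real time
```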

What is the accuracy for capturing robotic arm end pose, and what does it depend on?

The motion capture system's accuracy can reach the sub-millimeter level. The specific accuracy depends on the product model selected; accuracy varies between models, and we can recommend products based on your specific requirements.

How many cameras are needed for a quadruped robot application in a 6m*6m area?

For such customized solutions, please contact our sales or technical support colleagues. We will provide you with a dedicated configuration plan.

How many cameras are needed to capture high-speed (approx. 10m/s) rigid body motion in a 7m*7m area?

The number of cameras needs to be determined comprehensively based on factors like area size and number of subjects. For high-speed motion, it is recommended to appropriately increase the "Exposure" parameter within the software.

After importing human motion capture data files into a robot, how can fine adjustments be made?

Motion capture data is typically transmitted to the robot via SDK or VRPN protocols and used as truth data, which generally requires no adjustment.

How many cameras are recommended for capturing a single humanoid robot?

This depends on factors like specific scene size and number of robots. Typically, for a single humanoid robot, we recommend using 8 to 12 cameras.

For humanoid robot teleoperation, is the 53-point V2 human model mandatory?

Currently, yes. The mapping algorithm for humanoid robots is developed based on this human model. In the future, we will release versions supporting other human marker models.

How can data from motion capture software be imported into Matlab?
1. Ensure the software has completed calibration and that the markers or rigid bodies being tested are visible in the 3D view. 2. In the software's "Data Broadcast" view, select the server-side IP address and enable the SDK. 3. Use our Matlab SDK to receive the data in Matlab in real time.
Why is rigid body data not being received in ROS?
The following points need to be checked: 1. Whether a markerset has been established for the tested object in XINGYING, and whether the rigid body on the tested object is visible in the 3D view; 2. Confirm that VRPN is selected in the settings interface and that the correct server-side address is chosen; 3. On the ROS side, ping the server-side address selected in XINGYING to see if it can be reached.
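If the widely used vrpn_client_ros package is running on the ROS side, each markerset is published as geometry_msgs/PoseStamped under /vrpn_client_node/<tracker>/pose. A minimal end-to-end check might look like the sketch below; the tracker name "rigid1" is an example and should match the markerset name in XINGYING:

```python
#!/usr/bin/env python
# Verify rigid-body poses are arriving over VRPN, assuming vrpn_client_ros;
# "rigid1" is an example tracker name matching the markerset in XINGYING.
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    p = msg.pose.position
    rospy.loginfo("pose: x=%.4f y=%.4f z=%.4f", p.x, p.y, p.z)

rospy.init_node("vrpn_check")
rospy.Subscriber("/vrpn_client_node/rigid1/pose", PoseStamped, on_pose)
rospy.spin()
```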
Virtual Reality
How can data from motion capture software be imported into software like CATIA and DELMIA?
Data is transmitted via the SDK.
In VR-related applications, why is there noticeable shaking of the controllers/glasses in the virtual scene?
In the system settings, enabling "3D Smoothing," "Jitter Reduction," "Rigid Body Smoothing," and "IK Compensation" will reduce the shaking.
Life Sciences
How to measure specific joint angles without using the Helen Hayes model?
First, define the joints to be measured and determine the measurement points, which are generally bony landmarks; then place reflective markers at these points. In post-processing, name and connect the reflective markers, then use the Analysis tool to measure the angles formed by the lines between the markers and export the data.
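The angle computation itself is simple vector geometry. A minimal sketch for one frame, using made-up hip/knee/ankle coordinates to compute the knee angle:

```python
# Joint angle at vertex b (degrees) formed by three markers a-b-c, e.g.
# hip-knee-ankle for the knee angle; the coordinates below are made up.
import numpy as np

def joint_angle(a, b, c):
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

hip, knee, ankle = [0, 0, 900], [20, 10, 500], [30, 0, 80]  # positions in mm
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
```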
How to integrate and synchronize the use of a force plate with motion capture cameras?
Different force plate models have different operating methods, and even the same model may have different connection methods and operations. Please consult a NOKOV measurement engineer.
Entertainment
How is data from motion capture software transmitted to animation software like Maya and Motion Builder?
The data is transmitted through plugins: the motion capture software streams data via the SDK at a certain frequency, and the animation software receives it at the same frequency. For specific inquiries, please consult sales or an engineer.
Business
How can I obtain the software for a trial before purchase?

You can contact our technical support engineers via the after-sales group or reach out to our sales manager to apply for a trial version of the software.

Have any academic papers used the NOKOV motion capture system equipment?

A large number of academic papers have already used our equipment. You can search for the relevant papers on our official website.

How to choose a motion capture system based on specific needs?
You can contact us by phone (010-64922321), email (info@nokov.cn), or by leaving a message. We will reach out to you at the earliest opportunity upon receiving your message and create a corresponding solution based on your needs.
How to become a distributor or agent?
You can contact us by phone (010-64922321), email (info@nokov.cn), or by leaving a message. We will reach out to you at the earliest opportunity upon receiving your message and create a corresponding solution based on your needs.


Contact us
We are committed to responding promptly and will connect with you through our local distributors for further assistance.
Beijing NOKOV Science & Technology Co., Ltd (Headquarters)
Location: Room 820, China Minmetals Tower, Chaoyang Dist., Beijing
Email: info@nokov.cn
Phone: +86-10-64922321