Does the software support remote control, for example, to start or stop capture remotely? How is it implemented?
Yes, it supports remote control. Through the remote control API in the SDK panel, you can remotely operate the software in real-time, including connecting devices, starting/stopping recording, switching post-processing modes, obtaining total data frames, etc. Alternatively, a sync box can be used to control capture via synchronization signals.
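The typical call order (connect, start capture, stop capture) can be sketched as below. Note that `MocapClient` and every method on it are invented placeholders, not the SDK's real API; consult the documentation shipped with the SDK for the actual class and function names.

```python
# Hypothetical sketch of a remote-control session. MocapClient and all of
# its methods are invented stand-ins for the real SDK interface, shown only
# to illustrate the usual call order.
class MocapClient:
    def __init__(self, host: str):
        self.host = host
        self.connected = False
        self.recording = False

    def connect(self) -> bool:
        # The real SDK would open a network connection to the software here.
        self.connected = True
        return self.connected

    def start_recording(self) -> None:
        assert self.connected, "connect() must be called first"
        self.recording = True

    def stop_recording(self) -> None:
        self.recording = False

client = MocapClient("10.1.1.198")  # IP of the machine running the software
client.connect()
client.start_recording()            # begin capture remotely
client.stop_recording()             # stop capture remotely
```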
Are there fixed requirements for the number of cameras in a motion capture system?
The number of cameras required depends on the specific experimental scene and the subjects being captured. Generally, larger capture volumes or a greater number of simultaneous subjects require a corresponding increase in the number of cameras.
How to obtain the SDKs for Matlab and Python?
You can contact our engineers via the after-sales technical support chat group to request them. Each SDK comes with corresponding documentation. You can also download the SDKs from the Downloads page on our official website.
What is the accuracy of the markerless motion capture technology mentioned on the website, and what are its current main applications?
The accuracy of the markerless technology can reach the centimeter level. Currently, it is mainly applied to human motion capture.
Can the motion capture system be used outdoors?
Yes, the system can be used outdoors. For outdoor environments with strong light, AS-Type cameras are recommended.
What is the relationship between the system's motion capture frame rate and the camera frame rate? For example, what is the highest achievable system frame rate when using eight 2H model cameras?
Please refer to the specifications for details. The "FPS" in the product parameters refers to the maximum frequency at the camera's full resolution. The frame rate can be further increased by reducing the capture resolution. For example, the 26H model camera has been tested to support over 10,000 FPS, while the 2H model camera can reach 380 FPS at full resolution.
Does the system support capturing multiple rigid bodies simultaneously? Is there a limit?
Yes, it supports simultaneous capture of multiple rigid bodies or humans. The currently released software version supports a maximum of 100 rigid bodies. Custom versions can be provided for special needs.
Can the joint angle data be directly exported from the XINGYING 29-point human model?
Yes, export is supported. This 29-point model data is primarily used for gait analysis. In practical applications, the data is typically exported in the standard C3D file format, which can then be imported into professional gait analysis software like Visual3D or OpenSim for direct calculation and analysis of metrics like joint angles.
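As a sketch of the kind of computation such tools perform, a joint angle can be derived from three marker positions taken from one C3D frame. The coordinates below are hypothetical, and this is plain vector math, not the actual Visual3D or OpenSim pipeline:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) between segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical hip/knee/ankle marker positions (mm) from one frame:
hip, knee, ankle = (0.0, 0.0, 900.0), (0.0, 50.0, 500.0), (0.0, 0.0, 100.0)
knee_angle = joint_angle(hip, knee, ankle)  # near 180 deg for a nearly straight leg
```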
What is the effective testing range of the underwater motion capture system?
The specific testing range needs to be determined based on the actual application scenario. We recommend you contact our business or technical support colleagues for a detailed assessment of your specific site. For underwater scenarios requiring large area coverage, a solution utilizing underwater active optical markers can be employed.
Does the optical motion capture system support simultaneous capture of active and passive optical markers?
Yes, simultaneous capture is supported.
When creating a human model in the software, can the layout of the marker points be customized? How is it done?
When setting up a motion capture system for humanoid robot training using tripods in an outdoor scenario, is the calibration process convenient? How long does the entire process typically take?
Using tripods for calibration outdoors is feasible. The total time depends mainly on the number of cameras and involves two stages: first, the direction and aperture of each camera must be adjusted individually, which takes longer with more cameras; after adjustment, the formal calibration itself takes approximately 3-5 minutes. For outdoor environments with strong light, AS-Type cameras are recommended.
How is the software's IP address set?
The software's IP address should be set according to the actual IP address of the computer running the software on the current network.
What might be the reason for poor recognition when creating a human body model?
You usually need to adjust the camera height to ensure the full body is captured. Capturing a human typically requires 8-12 cameras. If the number is insufficient, data from the edges of limbs may be lost during movement.
Must the computer use an Ethernet cable? Can Wi-Fi be used?
To ensure system stability, it is recommended to connect the motion capture system to the switch via Ethernet cables. If you only need to receive the already captured data, you can use a Gigabit router to connect via Wi-Fi.
Does the software support calibration and modeling of hands?
Yes. Click "Create Human Body" in the software, select a hand marker template (e.g., the "Both Hands" template includes 24 markers per hand). After placing markers according to the template and positioning the hands in the center of the camera view, you can generate the hand skeleton models with one click.
How is high-quality motion capture data defined?
Optical motion capture can achieve sub-millimeter accuracy. High-quality data is typically characterized by smooth overall data curves with minimal jitter. Such data often requires no additional processing and can be directly applied in fields like robot training.
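One simple way to quantify the "smooth curves with minimal jitter" criterion is the RMS of frame-to-frame marker displacement. Real QA pipelines may use more sophisticated measures, so treat this as an illustrative sketch:

```python
import math

def rms_jitter(trace):
    """RMS frame-to-frame change of a single marker coordinate (mm)."""
    diffs = [b - a for a, b in zip(trace, trace[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

smooth = [100.0, 100.1, 100.2, 100.3, 100.4]  # steady motion, low jitter
noisy  = [100.0, 100.5, 99.8, 100.6, 99.9]    # same range, high jitter
```

A consistently low score over a session is one heuristic for data that needs no additional processing before use.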
Can data processing be done in post-processing?
Yes, the post-processing module can be used to locate and fix abnormal data. For example, Cubic-Join can fill in a small number of missing points, or smoothing algorithms can be applied to the data.
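For illustration, a short gap can be filled with a Catmull-Rom cubic through the surrounding frames. This is a generic sketch of cubic gap filling, not the software's actual Cubic-Join algorithm, and it assumes single-frame gaps with two known frames on each side:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between p1 and p2 at parameter t in [0, 1]."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def fill_single_frame_gaps(trace):
    """Replace a lone None using the two known frames on each side of it."""
    out = list(trace)
    for i, v in enumerate(out):
        if v is None:
            out[i] = catmull_rom(out[i - 2], out[i - 1],
                                 trace[i + 1], trace[i + 2], 0.5)
    return out

# The gap at index 3 is reconstructed on the curve through its neighbours:
filled = fill_single_frame_gaps([0.0, 1.0, 2.0, None, 4.0, 5.0, 6.0])
```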
How do I add timestamps? Is it possible to add a timestamp at a specific point?
Timestamps are already included in both exported data and data streamed via the SDK. Currently, the software does not support adding additional timestamps to the data. If you have a specific need for this, please contact our technical support engineers for further assistance.
After importing and solving a human model, why are the connections between markers and the human skeleton not displaying completely?
This is usually caused by missing marker data. First, fill in the missing marker data, and then recalculate the skeletal data.
How to modify the recording frame rate?
Find the "Frame Rate" setting in the software's Device panel. You can adjust it while playback is paused. Click play to apply the new setting.
When creating a human model, markers frequently disappear or flicker, making it hard to freeze a complete set, or the data remains severely incomplete even after a successful freeze. What could be the issue?
We recommend troubleshooting in the following steps: 1. Check the aperture and focus of each camera individually to ensure all markers are captured clearly and stably. 2. Perform a TL calibration again. If the problem persists, you may need to increase the number of cameras to improve capture coverage.
If resetting the origin is needed after calibration, is a TL recalibration required?
No. The software provides an "Origin Calibration" function specifically for resetting the origin point.
How to resolve marker point loss during human motion capture?
Body movement during capture may cause markers to be lost. The software includes stabilization algorithms that maintain skeletal stability even with a few missing points. The corners and edges of the capture volume are typically more prone to marker loss; adding cameras specifically to cover these areas can help.
Calibration failed multiple times?
First, confirm whether the configuration file has been loaded, and then verify if the T-bar length setting for calibration is correct. If these two steps have been correctly completed, further confirm whether any noise points in the scene have been removed during calibration.
How to create a coordinate system?
The coordinate system is created during calibration, and the geodetic coordinate system cannot be arbitrarily changed later.
After calibration, why do many noise points appear in the field?
One possibility is that noise points were not fully removed during T-bar calibration and the calibration only passed marginally; noise present during calibration will naturally reappear during subsequent data collection. Another possibility is reflective equipment in the field: if the equipment was removed during calibration and brought back afterward, its reflections will show up as noise points.
Why is the ground in the software oriented vertically?
This happens when the previous settings file used a different up axis (Z or Y). Changing the axis setting now would alter the geodetic coordinate system, so simply recalibrate instead.
During calibration, why does one camera's 2D view have little or no gray coverage, preventing calibration?
Check the numbers in parentheses at the bottom left corner of the 2D view to see if there are any noise points other than the T-bar in the camera's field of view. If there are, they need to be removed or masked in the software.
Why are the window and button layouts abnormal in the software, with incomplete text display?
This is related to the Windows operating system's scaling settings. There are two solutions: 1. Right-click on the Windows desktop — Display Settings — Scaling, set to 100%. 2. Right-click on the XINGYING software icon — Properties — Compatibility — Change high DPI settings — Check "Override high DPI scaling behavior" and set the scaling performed by "System."
Calibration failed with the message "Wand calibration failed, please recalibrate"?
After confirming that the camera position, angle, aperture, and focus have been adjusted, investigate the following causes: 1. Confirm that the physical size of the L-shaped calibration frame matches the settings in the software: in the settings menu — Calibration — Calibration Frame — Calibration Rod Type, select the type corresponding to the Marker1-Marker3 distance on the physical L-shaped frame. 2. Check the 2D view for extraneous noise points, especially reflections on the person swinging the T-bar calibration rod, so that no points other than the T-bar appear during the swing.
Error message "Failed to connect camera" when connecting the camera?
Troubleshoot step by step: 1. Hardware: check that the switch is powered properly and that the network cables at both the computer and switch ends are secure. 2. Software: check that the IP of the network card connected to the switch is 10.1.1.198, and that the firewall and antivirus software are disabled.
After connecting the cameras, what to do if the number of cameras detected does not match the actual number used?
After connecting the cameras, in the 2D view, select "Settings" - "2D View" - "Display IP" to show the 2D views of all cameras, and check the continuity of IP addresses to identify which camera is not connected. Alternatively, visually inspect whether a camera's digital display is off, which may indicate a loose network cable or POE splitter; reconnect the network cable and POE splitter.
Is there a more convenient way to mask noise points in the field instead of masking them one by one?
You can remove the markers and calibration tools from the field and use the "Calibration" window's "Start Occlusion" function to automatically occlude any noise points present in the current field with one click. After completion, manually check each camera to ensure complete occlusion.
When viewing the camera grayscale image, why do some cameras show horizontal or vertical lines, obscuring the image?
Check whether the camera network cables are securely connected, restart the switch, or change the switch's network cable interface.
When viewing the camera grayscale image, why is the image still not bright enough even with the aperture set to maximum?
Calibration is not affected as long as you can see bright reflective markers in the camera's 2D view.
Calibration results are often not satisfactory, frequently showing as Normal or Poor?
1. Increase the swing time, and ensure the swing motion is not too fast, so that the gray area in the camera view is as deep as possible, thereby increasing the effective data volume.
2. Contact sales or engineers in the group to confirm whether the software or camera firmware is the latest version.
In the 3D view, why do white marker points appear fragmented?
The position or angle of a camera in the system may have been accidentally moved, or the system has not been calibrated for a long time; recalibrating will resolve it.
In the 3D view, why is there incorrect recognition of marker points in the rigid body, or incorrect recognition of the rigid body's axis?
This is mostly due to a high degree of symmetry in the arrangement of multiple markers on the object being measured. It is recommended to arrange them in an irregular, asymmetrical structure. You can refer to the video at https://www.bilibili.com/video/BV1Rr4y1c7Yp.
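The symmetry problem can be checked numerically before mounting the markers: if two marker-pair distances are nearly equal, the layout is ambiguous for the solver. This is an illustrative heuristic only, not the software's internal check:

```python
import itertools
import math

def has_ambiguous_layout(markers, tol=1.0):
    """True if any two pairwise marker distances (mm) differ by less than tol."""
    dists = sorted(math.dist(a, b)
                   for a, b in itertools.combinations(markers, 2))
    return any(b - a < tol for a, b in zip(dists, dists[1:]))

square  = [(0, 0, 0), (100, 0, 0), (100, 100, 0), (0, 100, 0)]    # symmetric: bad
scatter = [(0, 0, 0), (97, 13, 0), (151, 88, 21), (20, 145, 60)]  # irregular: good
```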
During L-shaped calibration, what to do if some cameras in the software's 2D view show a red square in the top right corner instead of all turning green when clicking next?
The red squares indicate cameras that have not fully recognized all four points of the L-shaped calibration frame. You can still proceed with T-shaped calibration, and subsequent calculations will be performed using the calibration data.
When collecting data in the 3D view, rigid bodies and points can be seen, but the screen remains static when objects move. What is the reason?
The view is likely still frozen; check whether the "Unfreeze" button has been clicked.
How to display the trajectory information of markers?
Trajectory display is supported in post-processing mode. Right-click in the 3D view, check "Trajectory" in the display settings, and then select the marker whose trajectory you want to display.
Does the created rigid body support axial modification?
Rigid bodies created with one click cannot be modified in real time; modifications must be made in post-processing on collected data. In post-processing mode, load the motion capture data, select the rigid body, and modify its axis in the "Bone Axis" field on the right-side properties panel. You can also manually create a markerset there to adjust the rigid body's position and angle.
What to do if there is an error message about marker points when establishing a human skeleton?
Check if the number of markers is 53 points. If fewer than 53, verify if any markers are missing or obstructed. If more than 53, check for extra markers or noise points in the environment.
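The same check can be expressed as a tiny helper. The expected count of 53 comes from the answer above, while the message strings are illustrative rather than the software's actual error text:

```python
def diagnose_marker_count(detected, expected=53):
    """Map a detected marker count to the likely cause described above."""
    if detected < expected:
        return "markers missing or obstructed"
    if detected > expected:
        return "extra markers or noise points in the environment"
    return "marker count OK"
```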
Why can't the human skeleton template be recognized after a period of time, even though it was initially set up correctly?
Verify if the number and position of human markers have changed. Click "Reset Recognition."