Our software includes built-in templates such as the Helen Hayes (CGM2) full-body and lower-body models. If you use this marker placement, the software will automatically create the lower-body model. You do not need to manually create joints or rigid bodies.
Our software offers several human templates, including a 53-point full-body pose template, a 43-point CGM2 template, and lower-body-only or upper-body-only templates. You can choose the one that fits your needs.
Yes. The system accurately extracts each marker's position. If you need to transfer the data to third-party software, our plugin can do that. The motion capture system is a high-precision device with sub-millimeter accuracy, so the results are highly reliable.
Currently, our supported robots are Unitree G1, Tiangong Pro, and Booster T1. For other robots, you first need to provide us with the robot’s URDF file. We will then modify the retargeting algorithm to adapt it for teleoperation. If needed, you can contact our engineers.
Yes. Our software supports a human template that includes fingers. You can also use an IMU glove plus full-body markers, and then map the data to the robot via our retargeting algorithm.
First, you need to obtain the robot's URDF file. Our algorithm then maps the joint data output by the motion capture system to the robot's joints: within each joint's limit range, it minimizes the difference between the robot model's joint orientations and the joint orientations in the motion capture data. Currently, we support robots such as the Unitree G1, Booster T1, and Tiangong Pro.
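As an illustration of the idea (not our production implementation), the sketch below frames per-frame retargeting as a bounded optimization: it searches for robot joint angles, within the URDF joint limits, that minimize the orientation difference between the robot model's joints and the corresponding joints from the motion capture data. The forward-kinematics function `fk`, the target rotations, and the initial guess `q_init` are assumed to be supplied by your own pipeline.

```python
# Minimal sketch of orientation-based retargeting (illustrative only).
# fk(q) is assumed to return one 3x3 rotation matrix per robot joint for a
# joint-angle vector q; joint limits come from the URDF.
import numpy as np
from scipy.optimize import minimize

def orientation_error(R_a, R_b):
    """Angle (rad) between two 3x3 rotation matrices."""
    cos_theta = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def retarget_frame(fk, target_rotations, joint_limits, q_init):
    """Find joint angles q that best reproduce the mocap joint orientations.

    target_rotations -- list of 3x3 rotation matrices from the mocap data
    joint_limits     -- list of (lower, upper) bounds from the URDF
    """
    def cost(q):
        return sum(orientation_error(Ra, Rb) ** 2
                   for Ra, Rb in zip(fk(q), target_rotations))

    result = minimize(cost, q_init, bounds=joint_limits, method="L-BFGS-B")
    return result.x
```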
Yes. We provide specialized underwater cameras that can be used for robot motion capture in underwater scenarios.
It involves integrating the motion capture system's Python SDK with the robot platform's SDK to establish a mapping between the human skeletal data and the robot's skeleton, thereby retargeting the motion capture data to the robot.
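As a rough sketch of this bridging pattern, the loop below reads a skeleton frame from the motion capture SDK, retargets it, and commands the robot at a fixed rate. The names mocap_sdk, robot_sdk, get_skeleton, send_joint_targets, and retarget are placeholders for whatever your SDKs actually provide, not real API calls.

```python
# Hedged sketch of the SDK-to-SDK bridge; all object and method names are
# placeholders to be replaced with your mocap and robot SDK equivalents.
import time

def run_bridge(mocap_sdk, robot_sdk, retarget, rate_hz=100):
    """Stream mocap skeleton frames, retarget them, and command the robot."""
    period = 1.0 / rate_hz
    while True:
        skeleton = mocap_sdk.get_skeleton()    # human joint poses (placeholder call)
        q_robot = retarget(skeleton)           # map to robot joint angles
        robot_sdk.send_joint_targets(q_robot)  # command the robot (placeholder call)
        time.sleep(period)
```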
Yes. You can use a custom template. Then use the connection function in the software to link the markers into bones, which enables capturing data for each joint of the quadruped.
GMR fully supports processing the data output by our motion capture system. Note that when creating a project in GMR, you must select a compatible model (e.g., the 53-point V2 model). We recommend using the resources provided in the official GMR repository and selecting the "Baseline+Toe,Headband(53) V2" template created for motion capture data when performing retargeting. Our future updated human data templates will also support such applications.
The data captured by the motion capture system is typically of very high precision. In most cases, such data can be directly applied to humanoid robot training tasks without requiring additional cleaning steps.
Because the robot's onboard computer has limited performance, real-time solving and inference are typically performed on a local computer, and the results are then transmitted to the robot over Ethernet in real time. To support this data forwarding, a widely adopted setup is to equip the motion capture computer with dual network cards, or with a docking station that provides a Gigabit Ethernet port.
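For illustration only, the snippet below shows one way such forwarding could look: a simple UDP relay running on the motion capture computer, where one network interface receives solved frames from the local solver and the other forwards them to the robot. The addresses and ports are made-up examples, not defaults of our software.

```python
# Illustrative UDP relay for the dual-NIC setup described above.
import socket

LISTEN_ADDR = ("0.0.0.0", 9000)         # example port where the local solver publishes frames
ROBOT_ADDR  = ("192.168.123.15", 9001)  # example robot address on the second NIC's subnet

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(LISTEN_ADDR)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    frame, _ = rx.recvfrom(65535)  # one solved/inferred frame per datagram
    tx.sendto(frame, ROBOT_ADDR)   # forward to the robot in real time
```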
The motion capture system can reach sub-millimeter accuracy. The exact accuracy varies between product models, and we can recommend products based on your specific requirements.
For such customized solutions, please contact our sales or technical support colleagues. We will provide you with a dedicated configuration plan.
The number of cameras needs to be determined based on several factors, such as the size of the capture area and the number of subjects. For high-speed motion, it is recommended to increase the "Exposure" parameter in the software appropriately.
Motion capture data is typically transmitted to the robot via the SDK or the VRPN protocol and used as ground-truth data, which generally requires no adjustment.
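As an example of consuming this stream on the receiving side via VRPN, the sketch below uses the VRPN Python bindings (assumed to be installed). The tracker name and IP address are placeholders for whatever your mocap software publishes, and the exact callback fields may vary by binding version.

```python
# Minimal VRPN tracker client sketch; "Robot@192.168.1.10" is an example
# tracker name, not a default of our software.
import vrpn

def on_pose(userdata, data):
    # data typically contains 'sensor', 'position' (x, y, z) and
    # 'quaternion' (qx, qy, qz, qw) in the mocap world frame.
    print(data["sensor"], data["position"], data["quaternion"])

tracker = vrpn.receiver.Tracker("Robot@192.168.1.10")
tracker.register_change_handler(None, on_pose, "position")

while True:
    tracker.mainloop()  # poll for new pose updates
```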
This depends on factors like specific scene size and number of robots. Typically, for a single humanoid robot, we recommend using 8 to 12 cameras.
The current mapping algorithm for humanoid robots is developed based on a specific human model. In the future, we will release versions that support other human marker models.
You can contact our technical support engineers via the after-sales group or reach out to our sales manager to apply for a trial version of the software.