
FAQs

Welcome to the NOKOV FAQ Center! Find quick answers to all your motion capture queries. We're here to help you capture every movement seamlessly.
If you need more assistance, feel free to reach out to us at support@nokov.cn and we'll be happy to assist you.
Robotics & Engineering
Can the system be used for underwater robot capture?

Yes. We provide specialized underwater cameras that can be used for robot motion capture in underwater scenarios.

Can you provide more details about your self-developed retargeting algorithm?

It involves integrating the Python SDK of the motion capture system with the robot platform's SDK to establish a mapping relationship between human skeletal data and robot skeletal data, thereby realizing the retargeting of motion capture data to the robot.
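To make the idea concrete, below is a minimal Python sketch of that mapping step. All names in it (HUMAN_TO_ROBOT_JOINTS, RobotStub, retarget_frame, the example joint angles) are illustrative placeholders, not part of the official motion capture SDK or any specific robot SDK; consult the respective SDK documentation for the actual entry points.

```python
# Illustrative sketch only: map per-joint human rotations from the mocap SDK
# onto robot joint targets. All identifiers here are hypothetical.

HUMAN_TO_ROBOT_JOINTS = {
    "LeftShoulder": "l_shoulder_pitch",
    "LeftElbow": "l_elbow",
    "RightShoulder": "r_shoulder_pitch",
    "RightElbow": "r_elbow",
}

class RobotStub:
    """Stand-in for the robot platform's SDK client."""
    def set_joint_targets(self, targets: dict) -> None:
        print("joint targets:", targets)

def retarget_frame(human_frame: dict, robot: RobotStub) -> None:
    """Map one frame of human joint angles onto robot joint targets."""
    targets = {}
    for human_joint, robot_joint in HUMAN_TO_ROBOT_JOINTS.items():
        angle = human_frame.get(human_joint)
        if angle is not None:
            # A real retargeting pipeline would also apply scaling, offsets,
            # and joint limits here.
            targets[robot_joint] = angle
    robot.set_joint_targets(targets)

if __name__ == "__main__":
    example_frame = {"LeftShoulder": 0.42, "LeftElbow": 1.05}  # radians, illustrative
    retarget_frame(example_frame, RobotStub())
```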

Does the system support creating rigid body models for quadruped robots?

Yes. You can use a custom template, then use the connection function in the software to link the markers into bones, enabling the capture of data for each joint of the quadruped.

Does the NOKOV motion capture system support outputting data into GMR software?

GMR software fully supports processing the data output by our motion capture system. Please note that when creating a project in GMR, you must select a compatible model (e.g., the 53-point V2 model). We recommend using the resources provided in the official GMR repository and selecting the "Baseline+Toe,Headband(53) V2" template created for motion capture data for retargeting operations. Our future updates to the human data templates will also support such applications.

Does the captured motion capture data require cleaning? Can it be used directly for humanoid robot training?

The data captured by the motion capture system is typically of very high precision. In most cases, such data can be directly applied to humanoid robot training tasks without requiring additional cleaning steps.

In robot teleoperation, how does the computer connected to the motion capture device connect to the robot?

Due to the limited performance of the robot's main unit, real-time data solving and inference are typically performed on a local computer via Ethernet, and then the data is transmitted to the robot in real-time. To facilitate this data forwarding, equipping the motion capture computer with dual network cards or a docking station with a Gigabit Ethernet port is a widely adopted connection method.
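As a rough illustration of the forwarding step in this setup, the sketch below relays solved pose data from the motion capture computer to the robot over UDP via the second network interface. The IP address, port, and JSON payload format are assumptions for this example, not the system's actual protocol.

```python
# Illustrative forwarding sketch: the mocap computer relays solved poses to the
# robot over its second (Gigabit) NIC. Address, port, and payload are assumed.
import json
import socket
import time

ROBOT_ADDR = ("192.168.2.50", 9000)  # robot endpoint on the second NIC, assumed

def send_pose_to_robot(sock: socket.socket, pose: dict) -> None:
    """Serialize one solved pose and push it to the robot with low latency."""
    sock.sendto(json.dumps(pose).encode("utf-8"), ROBOT_ADDR)

if __name__ == "__main__":
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # In a real setup, each pose would come from the mocap SDK callback in real time.
    example_pose = {"name": "end_effector", "xyz": [0.1, 0.2, 0.9], "quat": [0, 0, 0, 1]}
    for _ in range(10):
        send_pose_to_robot(udp, example_pose)
        time.sleep(0.01)  # ~100 Hz forwarding rate, illustrative
```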

What is the accuracy for capturing robotic arm end pose, and what does it depend on?

The motion capture system can reach sub-millimeter accuracy. The specific accuracy depends on the product model selected; accuracy varies between models, and we can recommend products based on your specific requirements.

How many cameras are needed for a quadruped robot application in a 6m*6m area?

For such customized solutions, please contact our sales or technical support colleagues. We will provide you with a dedicated configuration plan.

How many cameras are needed to capture high-speed (approx. 10m/s) rigid body motion in a 7m*7m area?

The number of cameras needs to be determined comprehensively based on factors like area size and number of subjects. For high-speed motion, it is recommended to appropriately increase the "Exposure" parameter within the software.

After importing human motion capture data files into a robot, how can fine adjustments be made?

Motion capture data is typically transmitted to the robot via SDK or VRPN protocols and used as truth data, which generally requires no adjustment.

How many cameras are recommended for capturing a single humanoid robot?

This depends on factors like specific scene size and number of robots. Typically, for a single humanoid robot, we recommend using 8 to 12 cameras.

For humanoid robot teleoperation, is the 53-point V2 human model mandatory?

Currently, yes: the mapping algorithm for humanoid robots is developed based on this human model. In the future, we will release versions supporting other human marker models.

How can data from motion capture software be imported into Matlab?

1. Ensure the software has completed calibration and that the tested markers or rigid bodies can be observed normally in the 3D view.
2. In the "Data Broadcast" view of the software, select the server-side IP address and enable the SDK.
3. Use our Matlab SDK to obtain the data in real time in Matlab.
Why is rigid body data not being received in ROS?

The following points need to be checked:
1. Whether a markerset has been established for the tested object in XINGYING, and whether the rigid body on the tested object is visible in the 3D view;
2. Confirm that VRPN is selected in the settings interface and that the correct server-side address is chosen;
3. On the ROS side, ping the server-side address selected in XINGYING to confirm it can be reached.
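Once those checks pass, a quick way to confirm on the ROS side that rigid body poses are actually arriving is a minimal rospy subscriber like the sketch below. It assumes the common vrpn_client_ros topic layout (/vrpn_client_node/&lt;TrackerName&gt;/pose) and a rigid body named "RigidBody1"; substitute the markerset name defined in XINGYING.

```python
# Quick ROS 1 check that rigid body poses are being received via VRPN.
# Topic name and tracker name ("RigidBody1") are assumptions for this example.
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg: PoseStamped) -> None:
    p = msg.pose.position
    rospy.loginfo("pose received: x=%.3f y=%.3f z=%.3f", p.x, p.y, p.z)

if __name__ == "__main__":
    rospy.init_node("vrpn_pose_check")
    rospy.Subscriber("/vrpn_client_node/RigidBody1/pose", PoseStamped, on_pose)
    rospy.spin()  # if nothing is logged, re-check the three points above
```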
Business
How can I obtain the software for a trial before purchase?

You can contact our technical support engineers via the after-sales group or reach out to our sales manager to apply for a trial version of the software.

