Yaohang Xu (许耀航)
I am a highly motivated person interested in robotics. My research goal is to build a robot system with high generalization, robustness, and intelligence that can independently complete complex, dexterous, long-horizon manipulation tasks. My research therefore lies at the intersection of robotics, dexterous manipulation, large models, and reinforcement learning. Since 2023, I have been pursuing an M.Sc. in the School of Artificial Intelligence and Automation at Huazhong University of Science and Technology (HUST) under the supervision of Prof. Lijun Zhu. I received my B.Eng. from the School of Artificial Intelligence and Automation at HUST in 2023.
Awards & Honors
• National College Robot Competition ROBOCON (2024, Second Place of National First Prize)
• National College Robot Competition ROBOCON (2023, National First Prize)
• Huazhong University of Science and Technology Outstanding Graduate (2023, University Level)
• Mathematical Contest in Modeling (2022, Honorable Mention)
• Outstanding Student Leader of the School (2020, University Level)
Skills
• Software: Gazebo, MuJoCo, Isaac Sim, MATLAB, Altium Designer, Keil, SolidWorks
• Programming: PyTorch, JAX, C/C++, Python, Linux (ROS)
• Engineering: Circuit/PCB Design and Debugging, Mechanical Assembly
• Soft Skills: Planning, Responsibility, Organization, Self-Motivation, Teamwork, Adaptability, Analytical Thinking
Projects
Skill Reinforcement Learning with Residual Hypernetworks for Long-Horizon Manipulation Tasks
We propose a reinforcement learning system based on a residual hypernetwork pool and real-time human demonstrations to achieve robust learning of long-horizon, complex manipulation tasks. The system lets the robotic arm continuously try and explore through reinforcement learning: it first trains a base grasping model, then learns different branch skills through the hypernetwork pool. With real-time guidance from a human operator, it acquires more refined manipulation skills, achieving high robustness and a 100% success rate on the evaluated tasks.
Data Acquisition and Training for Humanoid Dual-Arm Teleoperation Based on VisionPro
This project builds a humanoid dual-arm teleoperation data-collection and training platform based on VisionPro. The experimental platform comprises dual depth cameras, a 7-DOF robotic arm, and a 6-DOF dexterous hand, with VisionPro used for multi-sensor fusion and tracking. The pipeline covers sensor data acquisition and processing, retargeting of the operator's hand pose and motion to the dexterous hand, arm motion control, and real-time visual feedback for the operator. The collected dataset is used for imitation learning, where the robot learns to reproduce human actions; at inference time, the learned policy performs the tasks autonomously. Leveraging VisionPro's sensor fusion and tracking, the system enables effective data collection and training for complex dual-arm teleoperation and manipulation.
ACT Imitation Learning for Box Manipulation in ROS/Gazebo Simulation
This experiment uses imitation learning (ACT) in a ROS/Gazebo simulation to train a robot for box-manipulation tasks. The robot learns to grasp, move, and place colored cubes by mimicking demonstrated actions. The learned behavior is demonstrated in the simulation environment, with a window in the top left showing the robot's first-person camera view, which supports environmental perception and decision-making. This approach allows the robot to acquire efficient manipulation skills through observation and imitation, leveraging the high-fidelity simulation capabilities of ROS/Gazebo to develop complex robotic behaviors in a controlled, safe setting.
Research
Prescribed-Time Robust Synchronization of Networked Heterogeneous Euler-Lagrange Systems
In this paper, we propose a prescribed-time synchronization (PTS) algorithm for networked Euler-Lagrange systems subject to external disturbances. Notably, the system matrix and the state of the leader agent are not accessible to all agents. The algorithm consists of distributed prescribed-time observers and local prescribed-time tracking controllers, dividing the PTS problem into prescribed-time convergence of distributed estimation errors and of local tracking errors. Unlike most existing prescribed-time control methods, which achieve prescribed-time convergence by introducing specific time-varying gains and adjusting feedback values, we establish a class of KT functions and incorporate them into comparison functions to represent the time-varying gains. By analyzing the properties of class KT and comparison functions, we ensure the prescribed-time convergence of distributed estimation errors and local tracking errors, as well as the uniform boundedness of internal signals in the closed-loop systems. External disturbances are dominated by the time-varying gains, which tend to infinity as time approaches the prescribed time, while the control signal is still guaranteed to remain bounded. Finally, a numerical example and a practical experiment demonstrate the effectiveness of the algorithm.
Competition
Robocon2023 & Robocon2024
Robocon is a popular robotics competition in China, held annually and attracting many universities and young talents. Teams build robots to complete challenging tasks such as racing, combat, or cooperation. The competition promotes innovation, engineering skills, and teamwork in a fun and exciting atmosphere, showcasing the great potential of China's robotics community and young makers.

