Project
Embodied AI for a Biotech Experiment Robot
Internship project at Centrillion Technology
Keywords: Embodied AI, Imitation Learning, LLM, Biotech Experimental Robot
• Used the Mobile-ALOHA platform to collect expert trajectories (more than 500 episodes per subtask), building a dataset covering 15+ subtasks.
• Used the ACT algorithm to train policies for 15+ fine-manipulation subtasks in biological experiments; each subtask lasts 10-20 s and achieves a success rate above 80%, with basic generalization and adaptation ability.
• Designed an embodied AI framework combining an LLM with imitation learning: the LLM performs high-level task planning, estimates whether the previous subtask succeeded, and executes the next subtask in the sequence only when safety checks pass, ensuring seamless integration of the robot's skills into the complex experimental workflow (a minimal sketch of this loop follows).
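A minimal sketch of the planning-and-execution loop described above, assuming hypothetical callbacks plan_with_llm, execute_policy, verify_success, and safety_ok that wrap the LLM planner, the ACT policies, the success estimator, and the safety monitor (these names are illustrative, not the project's actual API):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Subtask:
    name: str                 # e.g. "pick_pipette", "move_to_bench" (illustrative names)
    max_duration_s: float     # expected duration of the learned skill (10-20 s per subtask)

def run_experiment(goal: str,
                   plan_with_llm: Callable[[str], List[Subtask]],
                   execute_policy: Callable[[Subtask], None],
                   verify_success: Callable[[Subtask], bool],
                   safety_ok: Callable[[], bool],
                   max_retries: int = 2) -> bool:
    """The LLM sequences the subtasks; each subtask is executed by its imitation-learned
    (ACT) policy, and execution continues only if the previous subtask is verified
    successful and the safety check passes."""
    for subtask in plan_with_llm(goal):          # high-level task planning
        for _ in range(max_retries + 1):
            if not safety_ok():                  # never act if the workspace is unsafe
                return False
            execute_policy(subtask)              # roll out the learned low-level policy
            if verify_success(subtask):          # estimate whether the subtask succeeded
                break
        else:
            return False                         # subtask failed after all retries
    return True
```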
Example video of LLM-driven embodied AI (manipulation + base motion) can be found here
Example video of generalized pick-and-place task can be found here
Example video of using a syringe to conduct a biochemical experiment task can be found here
Example video of precise manipulation when moving a wafer can be found here
Example video of generalized manipulation when moving a wafer can be found here
Auto-tuning a Bipedal Robot MPC Controller on Challenging Terrain with DiffTune
Research Assistant, Advanced Controls and Research Laboratory, UIUC
Supervisors: Dr. Naira Hovakimyan, Professor, Department of Mechanical Science and Engineering, UIUC, and Dr. Quan Nguyen, Assistant Professor of Aerospace and Mechanical Engineering, USC
• Developed an auto-tuning framework for the legged robot's MPC controller that performs sensitivity analysis of the bipedal robot's stance forces with respect to the MPC parameters; the auto-tuned MPC reduced the control-smoothness loss and tracking loss by up to 40% compared with a hand-tuned MPC (a minimal sketch of the tuning loop follows this list).
• Trained a ground reaction force and moment network on real sensor data that maps the MPC solution to the actual ground reactions, reducing the sim-to-real error.
• The work has been submitted to ICRA 2025 and is currently under review.
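A minimal sketch of the auto-tuning loop, assuming a hypothetical closed_loop_loss(theta) that rolls out the MPC-controlled robot and returns a scalar combining tracking error and control smoothness; central finite differences stand in here for the analytical closed-loop sensitivities that DiffTune propagates:

```python
import numpy as np

def autotune_mpc(theta0, closed_loop_loss, lr=1e-2, iters=100, bounds=None, eps=1e-4):
    """Gradient-descent auto-tuning of the MPC parameter vector theta.

    closed_loop_loss(theta) is assumed to simulate the closed loop under the
    MPC with parameters theta and return the combined tracking/smoothness loss."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(theta)
        for i in range(theta.size):            # finite-difference gradient over parameters
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (closed_loop_loss(theta + e) - closed_loop_loss(theta - e)) / (2 * eps)
        theta = theta - lr * grad
        if bounds is not None:                 # keep cost weights in a valid range (e.g. positive)
            theta = np.clip(theta, bounds[0], bounds[1])
    return theta
```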
Deep Reinforcement Learning Based Quadrupedal Robot Control and Locomotion Development
Internship project at Unitree Robotics
This project develops quadrupedal robot locomotion and control through deep reinforcement learning (DRL), in which the robot learns a control policy from its interactions with the environment. The aim is to improve the robustness, efficiency, and adaptability of the robot, paving the way for more autonomous systems capable of navigating complex environments with precision.
What I have done in this project:
• Developed a quadrupedal robot locomotion and control framework based on deep reinforcement learning that increases the robot's payload capacity by 15% compared with the traditional model-based control framework
• Trained the quadrupedal locomotion and control policy with deep reinforcement learning in Isaac Gym
• Developed the C++ deployment program for the learning-based locomotion and control policy
• Conducted multi-gait walking tests with the trained DRL control policy and analyzed the test data for sim-to-real evaluation
• Developed the quadrupedal robot state estimator using sensor fusion based on an Extended Kalman Filter, improving estimation accuracy by 23% (a generic EKF skeleton is sketched after this list)
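A generic extended Kalman filter skeleton of the kind used for such a state estimator, with the prediction driven by a process model (e.g. IMU-based kinematics) and the correction driven by a measurement model (e.g. leg odometry); the models f, F, h, and H are user-supplied placeholders, not the project's actual implementation:

```python
import numpy as np

class EKF:
    """Minimal extended Kalman filter: predict with the process model,
    correct with the measurement model."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)   # state estimate
        self.P = np.asarray(P0, dtype=float)   # state covariance

    def predict(self, f, F, Q, u, dt):
        # f(x, u, dt): nonlinear process model; F: its Jacobian at (x, u); Q: process noise
        self.x = f(self.x, u, dt)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        # h(x): measurement model; H: its Jacobian at x; R: measurement noise
        y = z - h(self.x)                               # innovation
        S = H @ self.P @ H.T + R                        # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(self.P.shape[0]) - K @ H) @ self.P
```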
Example video can be found here
Development of Autonomous Unmanned Aerial Vehicles (UAVs)
Research Assistant, Advanced Controls and Research Laboratory, UIUC
Supervisor: Dr. Naira Hovakimyan, Professor, Department of Mechanical Science and Engineering, UIUC
We propose a framework for fast trajectory planning for unmanned aerial vehicles (UAVs). Our framework is reformulated from an existing bilevel optimization, in which the lower-level problem solves for the optimal trajectory with a fixed time allocation, whereas the upper-level problem updates the time allocation using analytical gradients. The lower-level problem incorporates the safety-set constraints (in the form of inequality constraints) and is cast as a convex quadratic program (QP). Our formulation modifies the lower-level QP by excluding the inequality constraints for the safety sets, which significantly reduces the computation time. The safety-set constraints are moved to the upper-level problem, where the feasible waypoints are updated together with the time allocation using analytical gradients enabled by OptNet. We validate our approach in simulations, where our method's computation time scales linearly with the number of safety sets, in contrast to state-of-the-art methods whose computation time scales exponentially.
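A minimal numerical sketch of this bilevel structure, assuming a hypothetical build_qp(times) that assembles the lower-level QP data (cost Hessian and waypoint equality constraints) from a given time allocation; the upper level refines the segment times by gradient descent on a time-penalized cost, with finite differences standing in for the analytical gradients that OptNet provides:

```python
import numpy as np

def solve_eq_qp(H, A, b):
    """Lower level: min 0.5 x^T H x  s.t.  A x = b, solved via its KKT linear system
    (safety-set inequality constraints are excluded, as in the reformulation above)."""
    n, m = H.shape[0], A.shape[0]
    kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    x = np.linalg.solve(kkt, rhs)[:n]
    return x, 0.5 * x @ H @ x

def refine_times(times, build_qp, rho=1.0, lr=0.05, iters=100, t_min=0.05, eps=1e-4):
    """Upper level: gradient descent on the time allocation with a penalty on total time."""
    def total_cost(t):
        H, A, b = build_qp(t)                  # QP data depends smoothly on the time allocation
        _, smooth_cost = solve_eq_qp(H, A, b)
        return smooth_cost + rho * t.sum()     # trade smoothness against total flight time
    times = np.asarray(times, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(times)
        for i in range(times.size):            # finite-difference gradient over segment times
            e = np.zeros_like(times)
            e[i] = eps
            grad[i] = (total_cost(times + e) - total_cost(times - e)) / (2 * eps)
        times = np.maximum(times - lr * grad, t_min)   # keep segment durations positive
    return times
```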
What I have done in this project:
• Developed a collision-free bilevel trajectory optimization system that jointly optimizes the waypoints' temporal and spatial assignment for autonomous quadrotor motion planning based on convex optimization, improving computational efficiency by 150%. The work has been published in IEEE RA-L and presented at IROS 2023; the paper can be found here
• Deployed the trajectory generation program together with the path planning system on an NVIDIA TX2 onboard computer
• Co-designed and manufactured the prototype of the omnidrone, a new type of fully actuated UAV with six motors
• Designed and conducted experiments with NI-DAQ and LabVIEW to characterize different motors' thrust-throttle and torque-throttle curves under different battery conditions (a simple curve-fitting illustration follows this list)
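As an illustration of the motor characterization step, a least-squares fit of a thrust-throttle curve with NumPy; the quadratic model and the sample data below are placeholders, not the measured values:

```python
import numpy as np

# Placeholder throttle commands (normalized) and measured thrusts [N]; the real data
# came from the NI-DAQ/LabVIEW test rig under different battery conditions.
throttle = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
thrust   = np.array([0.8, 1.7, 3.0, 4.6, 6.5, 8.7, 11.2])

# Fit a quadratic thrust-throttle model T(u) = a*u^2 + b*u + c by least squares.
a, b, c = np.polyfit(throttle, thrust, deg=2)
print(f"thrust = {a:.2f}*u^2 + {b:.2f}*u + {c:.2f}")
```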
Intelligent Sign Language Robot Development
Team Leader, Capstone Project of ZJU-UIUC Institute
This project proposes a novel intelligent assistant to help people with speech or hearing impairments communicate and seek help. The assistant consists of a bionic hand with 17 degrees of freedom (DOFs) and a neural network that recognizes American Sign Language (ASL). The user can pose a question in ASL; the assistant recognizes the question, searches for the answer online, and responds in ASL generated by the microcontroller unit and the bionic hand. The answer is also shown on a digital screen for inspection.
What I have done in this project:
• Designed and developed the sign language robot hardware system, including a 17-degree-of-freedom dexterous bionic hand, an STM32 microcontroller, and an NVIDIA Jetson Nano onboard computer
• Developed the robot's control algorithm and software, and designed the motion of each joint motor based on standard American Sign Language
• Co-developed the image detection program that recognizes the user's signs based on YOLOv5 (a minimal inference sketch follows below)
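A minimal inference sketch using YOLOv5 via torch.hub, assuming custom-trained ASL weights at a placeholder path asl_best.pt (the actual weights and class names belong to the project):

```python
import cv2
import torch

# Load a YOLOv5 model with custom-trained ASL weights (the path is a placeholder).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='asl_best.pt')
model.conf = 0.5                                 # confidence threshold

cap = cv2.VideoCapture(0)                        # onboard camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])            # BGR -> RGB before inference
    for *box, conf, cls in results.xyxy[0].tolist():   # [x1, y1, x2, y2, conf, class]
        label = results.names[int(cls)]          # recognized ASL sign
        print(f"{label}: {conf:.2f}")
cap.release()
```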
The project's final paper can be found here; a demonstration video can be found here