To train and deploy an RL policy for the humanoid, clone the following repository:
generalroboticslab/DukeHumanoidv1 (RL simulation and hardware control for Duke Humanoid V1): https://github.com/generalroboticslab/DukeHumanoidv1
The repository contains two submodules:
The submodule for training the humanoid in simulation is:
generalroboticslab/legged_env (Isaac Gym environment for training legged robots): https://github.com/generalroboticslab/legged_env
The submodule for deploying the trained policy on the hardware is:
generalroboticslab/dukeHumanoidHardwareControl (biped hardware control code): https://github.com/generalroboticslab/dukeHumanoidHardwareControl
Please refer to the documentation within each submodule for detailed setup and execution instructions.
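Because the training and deployment code are tracked as git submodules, it is easiest to clone everything in one step (standard git usage; the URL follows from the repository name above):

```bash
# Clone the main repository together with both submodules
git clone --recurse-submodules https://github.com/generalroboticslab/DukeHumanoidv1.git

# If the repository was already cloned without submodules, fetch them afterwards
git submodule update --init --recursive
```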
The following videos (embedded on the original page) demonstrate the robot walking:
- Simulation
- Baseline walking demo
- Passive walking demo
To ensure accurate robot movements, the motors must be initialized to a known home position every time the robot is powered on. This is necessary because the motor-side encoder repeats its reading once per motor revolution, so over a single joint revolution the same reading recurs as many times as the actuator's reduction ratio. A raw encoder value therefore maps to several possible joint positions, and joint initialization finds the offset that matches the readings to the actual joint positions.
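As a concrete illustration, here is a minimal Python sketch of that mapping (the reduction ratio, home angle, and function names are assumptions for illustration, not taken from the repository):

```python
import math

REDUCTION_RATIO = 10.0   # hypothetical gear reduction: the encoder repeats this many times per joint revolution
HOME_JOINT_ANGLE = 0.0   # joint angle (rad) when the alignment piece holds the joint at home

def wrap_to_pi(angle: float) -> float:
    """Wrap an angle into (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def joint_angle(encoder_reading: float, encoder_offset: float) -> float:
    """Recover the joint angle from a single-turn, motor-side encoder reading.

    On its own, the reading is ambiguous among REDUCTION_RATIO joint positions.
    Referencing it to the offset recorded at the known home position resolves
    the ambiguity for motion within one encoder section of home.
    """
    return HOME_JOINT_ANGLE + wrap_to_pi(encoder_reading - encoder_offset) / REDUCTION_RATIO
```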
Initialization is achieved by using alignment pieces to physically guide the motors to a predefined home position, as demonstrated in the accompanying video. The motor_controller.py script then records the encoder values at this home position and stores them as offsets in individual files, one per motor. These offsets are incorporated into all subsequent motor control calculations, compensating for the arbitrary encoder reading at power-on and ensuring the robot consistently reaches its intended targets.
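A minimal sketch of how this offset bookkeeping might look (the directory layout, file format, and function names are illustrative assumptions; see motor_controller.py in the hardware repository for the actual implementation):

```python
from pathlib import Path

OFFSET_DIR = Path("motor_offsets")  # hypothetical directory holding one offset file per motor

def save_offset(motor_id: int, encoder_reading: float) -> None:
    """Record the encoder value measured at the physical home position."""
    OFFSET_DIR.mkdir(exist_ok=True)
    (OFFSET_DIR / f"motor_{motor_id}.txt").write_text(f"{encoder_reading:.6f}")

def load_offset(motor_id: int) -> float:
    """Load the stored offset for use in subsequent control calculations."""
    return float((OFFSET_DIR / f"motor_{motor_id}.txt").read_text())

def motor_command(joint_target: float, motor_id: int, reduction_ratio: float) -> float:
    """Convert a desired joint angle (rad) into a raw motor-side position command."""
    return joint_target * reduction_ratio + load_offset(motor_id)
```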