Deployment code

To train and deploy an RL policy for the humanoid, clone the following repository:

https://github.com/generalroboticslab/DukeHumanoidv1 (RL simulation and hardware control for Duke Humanoid V1)

The repository contains two submodules.

Please refer to the documentation within each submodule for detailed setup and execution instructions.

Here are some videos demonstrating the robot's walking:

Simulation:

https://youtu.be/aqq8W3iJsgs

https://youtu.be/qkSN3-9EpRs

Baseline walking demo:

https://youtu.be/3RHkjrCOSM8

https://youtu.be/nD0XgPQs5vU

Passive walking demo:

https://youtu.be/XlapxvUC0Jw

https://youtu.be/b1S4yCmfq3M

Joint initialization

To ensure accurate robot movements, the motor positions must be initialized to a known position every time the robot is powered on. This is necessary because the encoder reading repeats over several sections per joint revolution (the number of sections equals the reduction ratio of the actuator), so a raw encoder value alone is ambiguous. Joint initialization finds the offset that matches the encoder readings to the actual joint positions.
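To make the ambiguity concrete, here is a minimal sketch in Python. The encoder resolution and reduction ratio below are illustrative placeholders, not the robot's actual specifications:

```python
import math

# Hypothetical values for illustration only (not the real actuator specs):
ENCODER_CPR = 4096      # encoder counts per motor revolution
REDUCTION_RATIO = 10    # motor revolutions per joint revolution

def joint_angle_rad(encoder_counts: int, offset_counts: int) -> float:
    """Convert an encoder reading to a joint angle using a stored offset.

    The encoder measures the motor shaft, which turns REDUCTION_RATIO
    times per joint revolution, so a wrapped encoder reading repeats
    REDUCTION_RATIO times over the joint's travel. The offset recorded
    at a known home position resolves which section the joint is in.
    """
    motor_angle = 2 * math.pi * (encoder_counts - offset_counts) / ENCODER_CPR
    return motor_angle / REDUCTION_RATIO

# Readings one full motor turn apart differ by exactly one "section"
# (2*pi / REDUCTION_RATIO radians) of joint angle, even though a wrapped
# encoder would report the same raw value for both:
a = joint_angle_rad(1000, 0)
b = joint_angle_rad(1000 + ENCODER_CPR, 0)
```

Because the wrapped reading cannot distinguish `a` from `b`, the robot must be physically placed in a known pose once per power cycle to pin down the offset.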

Initialization is performed with alignment pieces that physically guide the motors to a predefined home position, as demonstrated in the accompanying video. The motor_controller.py script then records the encoder values at this home position and stores them as offsets in individual files, one per motor. These offset values are incorporated into all subsequent motor control calculations, compensating for the per-motor encoder offset and ensuring the robot consistently reaches its intended targets.
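The record-and-apply flow described above can be sketched as follows. This is a simplified illustration, not the repository's actual implementation: the directory name, file format, and function names are all assumptions.

```python
import json
from pathlib import Path

# Hypothetical storage location; the real script's paths may differ.
OFFSET_DIR = Path("motor_offsets")

def record_offsets(home_readings: dict) -> None:
    """Store each motor's encoder value at the physical home position,
    one file per motor, as described in the initialization procedure."""
    OFFSET_DIR.mkdir(exist_ok=True)
    for motor_id, counts in home_readings.items():
        path = OFFSET_DIR / f"{motor_id}.json"
        path.write_text(json.dumps({"offset": counts}))

def load_offset(motor_id: str) -> int:
    """Read a previously recorded home offset back from its file."""
    path = OFFSET_DIR / f"{motor_id}.json"
    return json.loads(path.read_text())["offset"]

def corrected_position(motor_id: str, raw_counts: int) -> int:
    """Subtract the stored offset so zero corresponds to the home pose;
    this correction would be applied in every control calculation."""
    return raw_counts - load_offset(motor_id)

# Usage: record once at the home pose, then correct later readings.
record_offsets({"left_knee": 1530})
```

With the offset recorded, a subsequent raw reading of 1530 on that motor maps to 0 (the home pose), and 1600 maps to 70 counts past home.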