Zero-shot reinforcement learning (RL) algorithms aim to learn a family of policies from a reward-free dataset, and recover optimal policies for any reward function directly at test time. Naturally, the quality of the pretraining dataset determines the performance of the recovered policies across tasks. However, pre-collecting a relevant, diverse dataset without prior knowledge of the downstream tasks of interest remains a challenge.
In this work, we study online zero-shot RL for quadrupedal control on real robotic systems, building upon the Forward-Backward (FB) algorithm. We observe that undirected exploration yields low-diversity data, leading to poor downstream performance and rendering policies impractical for direct hardware deployment.
Therefore, we introduce FB-MEBE, an online zero-shot RL algorithm that combines an unsupervised behavior exploration strategy with a regularization critic. FB-MEBE promotes exploration by maximizing the entropy of the achieved behavior distribution. Additionally, a regularization critic shapes the recovered policies toward more natural and physically plausible behaviors. We empirically demonstrate that FB-MEBE achieves improved performance compared to other exploration strategies on a range of simulated downstream tasks, and that it yields natural policies that can be seamlessly deployed to hardware without further finetuning.
FB-MEBE is a behavioral foundation model that controls a quadrupedal robot using a single policy. At test time, the same policy can be conditioned on different reward functions or goals to solve unseen tasks, without any additional finetuning. Below are some zero-shot tasks on reward optimization:
Reward: Walk Forward
walk forward with velocity \(v_x = +1 ~ \text{m/s} \).
\( r = \exp\left(-\frac{\|\mathbf{v} - \mathbf{v}^*\|^2}{0.3^2}\right) \times \exp\left(-\frac{|\omega_z - \omega_z^*|^2}{0.2^2}\right) \times \exp\left(-\frac{\|\mathbf{g} - \mathbf{g}^*\|^2}{0.1^2}\right) \)
Reward: Walk Backward
walk backward with velocity \(v_x = -1 ~ \text{m/s} \).
\( r = \exp\left(-\frac{\|\mathbf{v} - \mathbf{v}^*\|^2}{0.3^2}\right) \times \exp\left(-\frac{|\omega_z - \omega_z^*|^2}{0.2^2}\right) \times \exp\left(-\frac{\|\mathbf{g} - \mathbf{g}^*\|^2}{0.1^2}\right) \)
Reward: Walk Sideways
walk sideways with velocity \(v_y = +0.5 ~ \text{m/s} \).
\( r = \exp\left(-\frac{\|\mathbf{v} - \mathbf{v}^*\|^2}{0.3^2}\right) \times \exp\left(-\frac{|\omega_z - \omega_z^*|^2}{0.2^2}\right) \times \exp\left(-\frac{\|\mathbf{g} - \mathbf{g}^*\|^2}{0.1^2}\right) \)
Reward: Pitch Control
pitch to angle \( +20^\circ \) and \( +60^\circ \).
\( r = \exp\left(-\frac{\|\mathbf{g} - \mathbf{g}^*\|^2}{0.1^2}\right) \times \exp\left(-\frac{|h - h^*|^2}{0.05^2}\right) \)
Reward: Turn CCW
turn counterclockwise with \(\omega_z = 1 ~ \text{rad/s} \).
\( r = \exp\left(-\frac{\|\mathbf{v} - \mathbf{v}^*\|^2}{0.3^2}\right) \times \exp\left(-\frac{|\omega_z - \omega_z^*|^2}{0.2^2}\right) \times \exp\left(-\frac{\|\mathbf{g} - \mathbf{g}^*\|^2}{0.1^2}\right) \)
Reward: Height Change
change height to \( h = 0.24 ~ \text{m} \) and \( h = 0.32 ~ \text{m} \).
\( r = \exp\left(-\frac{|h - h^*|^2}{0.05^2}\right) \)
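All of the rewards above share the same structure: a product of Gaussian-shaped tracking terms, each penalizing the deviation of a measured quantity (base velocity \(\mathbf{v}\), yaw rate \(\omega_z\), projected gravity \(\mathbf{g}\), height \(h\)) from its target. A minimal sketch of this computation (function name and argument layout are illustrative, not from the paper's code):

```python
import numpy as np

def tracking_reward(v, v_star, omega_z, omega_z_star, g, g_star,
                    sigma_v=0.3, sigma_w=0.2, sigma_g=0.1):
    """Product of Gaussian tracking terms, mirroring the formulas above.

    v, v_star: base linear velocity and its target (3-vectors)
    omega_z, omega_z_star: yaw rate and its target (scalars)
    g, g_star: projected gravity vector and its target (3-vectors)
    The sigmas are the temperature constants from the reward definitions.
    """
    r_v = np.exp(-np.sum((v - v_star) ** 2) / sigma_v ** 2)
    r_w = np.exp(-(omega_z - omega_z_star) ** 2 / sigma_w ** 2)
    r_g = np.exp(-np.sum((g - g_star) ** 2) / sigma_g ** 2)
    return r_v * r_w * r_g
```

When every quantity matches its target, each factor equals 1 and the reward attains its maximum of 1; any deviation multiplies the reward down, so all objectives must be satisfied simultaneously.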
We demonstrate that the reward function can be modified online using a joystick, enabling real-time inference of the latent variable \( z \). Remarkably, FB-MEBE adapts instantly to these changes, allowing interactive and responsive control of behaviors.
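In Forward-Backward methods, the task latent \( z \) is typically inferred by averaging reward-weighted backward embeddings over a batch of states, \( z \propto \mathbb{E}_s[r(s)\,B(s)] \), which is cheap enough to rerun every time the joystick changes the target. A sketch of this inference step (the `backward_net` callable and the sphere normalization to radius \( \sqrt{d} \) are assumptions about the standard FB setup, not details confirmed by this page):

```python
import numpy as np

def infer_z(states, rewards, backward_net, latent_dim):
    """FB-style latent inference: z ~ E_s[ r(s) B(s) ].

    states: (N, state_dim) batch sampled from the replay buffer
    rewards: (N,) rewards of those states under the current target
    backward_net: maps states to (N, latent_dim) backward embeddings
    """
    B = backward_net(states)                       # (N, latent_dim)
    z = (rewards[:, None] * B).mean(axis=0)        # reward-weighted average
    # Project onto the sphere of radius sqrt(latent_dim), a common FB convention.
    return np.sqrt(latent_dim) * z / (np.linalg.norm(z) + 1e-8)
```

Because only this averaging step depends on the reward, changing the joystick target amounts to recomputing \( z \) and conditioning the same frozen policy on it.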
FB exhibits a much higher action rate and far more foot slippage than FB-MEBE.