On Wednesday, researchers from DeepMind released a paper ostensibly about using deep reinforcement learning to train miniature humanoid robots in complex movement skills and strategic understanding, resulting in agile performance in a simplified one-on-one soccer game played on real hardware.
But few paid attention to those details, because alongside the paper the researchers also released a 27-second video showing an experimenter repeatedly pushing a tiny humanoid robot to the ground as it attempts to score. Despite the interference (which no doubt violates the rules of soccer), the tiny robot manages to punt the ball into the goal anyway, marking a small but notable victory for underdogs everywhere.
On the demo website for “Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning,” the researchers frame the merciless toppling of the robots as a key part of a “robustness to pushes” evaluation. “Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training lead to safe and effective movements while still being able to perform in a dynamic and agile way,” they write.