Google's AI ping-pong robot sweeps amateur players, netizens say: it might represent the US in the Olympics

A 45% win rate demonstrates a solid level of amateur-caliber skill.

DeepMind researchers have unveiled the first AI robot capable of competing against human amateur table tennis players. The system combines an ABB IRB 1100 industrial robotic arm with DeepMind's custom AI software. While human professionals still outperform it, the system demonstrates machines' ability to make split-second decisions and adapt in complex physical tasks.

Table tennis has long served as a benchmark for robotic arms because it demands speed, quick reflexes, and strategy.

The researchers wrote in their preprint paper on arXiv: "This is the first robotic agent capable of competing at a human level in a physical sport, representing another milestone in robotic learning and control."


The unnamed table tennis robot agent (suggested name "AlphaPong") was developed by a research team including David B. D'Ambrosio, Saminda Abeyruwan, and Laura Graesser. It performed well against players of varying skill levels. In a study involving 29 participants, the AI robot achieved a 45% win rate, demonstrating solid amateur-level skills.

Notably, it won 100% of matches against beginners and 55% against intermediate players. However, it lost every match against advanced players.

The robot's hardware pairs the IRB 1100, a six-degree-of-freedom industrial arm, with two linear tracks that let it move in a 2D plane. High-speed cameras track the ball's position, while a motion-capture system observes the human opponent's racket movements.
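The article does not detail the perception stack, but conceptually the cameras' job is to turn raw ball fixes into position and velocity estimates the controller can act on. A minimal sketch of that idea follows; the finite-difference velocity estimate, class name, and 250 Hz camera timing are all assumptions for illustration, not the system's actual pipeline.

```python
import numpy as np

class BallStateEstimator:
    """Illustrative sketch: estimate ball position/velocity from camera fixes.

    The real system uses high-speed cameras and more robust filtering;
    this finite-difference version is an assumption made for clarity.
    """

    def __init__(self):
        self.prev_pos = None
        self.prev_time = None

    def update(self, pos_xyz: np.ndarray, t: float) -> dict:
        vel = np.zeros(3)
        if self.prev_pos is not None:
            dt = t - self.prev_time
            if dt > 0:
                # Velocity from two consecutive fixes (no smoothing).
                vel = (pos_xyz - self.prev_pos) / dt
        self.prev_pos, self.prev_time = pos_xyz, t
        return {"position": pos_xyz, "velocity": vel}

# Example: two camera fixes 4 ms apart (an assumed 250 Hz camera rate).
est = BallStateEstimator()
est.update(np.array([0.0, 1.2, 0.3]), t=0.000)
state = est.update(np.array([0.02, 1.15, 0.31]), t=0.004)
print(state["velocity"])  # rough ball velocity in m/s
```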

DeepMind researchers developed a two-tier approach to drive the robotic arm, enabling it to execute specific table tennis tactics while adjusting its strategy in real time to each opponent's play style. This adaptability lets it compete with any amateur-level player without opponent-specific training.

The system's architecture pairs low-level skill controllers, each trained to execute a specific table tennis technique, with a high-level strategic decision-maker: a more complex AI system that analyzes the game state, adapts to the opponent's style, and selects the appropriate low-level skill for each incoming ball.
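The paper does not publish this controller code, but the shape of the hierarchy can be sketched in a few lines. In the toy version below, the skill functions, the ball features, and the preference-update rule are all invented for illustration, not DeepMind's implementation; the point is only that a high-level policy picks one low-level skill per incoming ball and shifts its preferences based on point outcomes.

```python
# Low-level "skills": each maps a ball state to an arm command.
# These stand in for the trained skill controllers described above.
def forehand_topspin(ball):
    return {"skill": "forehand_topspin", "target": ball["predicted_landing"]}

def backhand_block(ball):
    return {"skill": "backhand_block", "target": ball["predicted_landing"]}

SKILLS = {"forehand_topspin": forehand_topspin, "backhand_block": backhand_block}

class HighLevelController:
    """Selects a skill per incoming ball and adapts to the opponent.

    The real high-level controller uses learned skill descriptors and
    opponent statistics; this preference table is a toy stand-in.
    """

    def __init__(self):
        # Running estimate of how often each skill wins the point.
        self.value = {name: 0.5 for name in SKILLS}

    def choose(self, ball):
        # Illustrative heuristic: backhand for balls on the left,
        # otherwise whichever skill has the best running estimate.
        if ball["side"] == "left":
            name = "backhand_block"
        else:
            name = max(self.value, key=self.value.get)
        return SKILLS[name](ball)

    def feedback(self, skill_name, won_point, lr=0.1):
        # Update after each point so strategy drifts toward whatever
        # works against this particular opponent.
        self.value[skill_name] += lr * (float(won_point) - self.value[skill_name])

hlc = HighLevelController()
ball = {"side": "right", "predicted_landing": (0.4, 1.1)}
cmd = hlc.choose(ball)
hlc.feedback(cmd["skill"], won_point=True)
print(cmd)
```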

A key innovation is the training method: reinforcement learning in a simulated physics environment, grounded with real-world examples as training data. This technique allowed the robot to learn from roughly 17,500 recorded table tennis ball trajectories.
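What "incorporating real-world examples" might look like is easiest to see with a toy version of simulator grounding: fit a physical parameter of the simulated ball so that simulated flights match recorded ones. Here that parameter is a single drag coefficient, which is purely an assumption for illustration; the paper's ball model is more elaborate.

```python
import numpy as np

def simulate_trajectory(v0, drag, dt=0.004, steps=50):
    """Ball flight under gravity and quadratic drag (simplified, no spin)."""
    g = np.array([0.0, 0.0, -9.81])
    p, v = np.zeros(3), np.array(v0, dtype=float)
    traj = [p.copy()]
    for _ in range(steps):
        a = g - drag * np.linalg.norm(v) * v  # quadratic air drag
        v = v + a * dt
        p = p + v * dt
        traj.append(p.copy())
    return np.array(traj)

def fit_drag(real_trajs, v0s, candidates=np.linspace(0.0, 0.5, 51)):
    """Pick the drag value whose simulated flights best match the data."""
    def err(drag):
        return sum(np.mean((simulate_trajectory(v0, drag) - real) ** 2)
                   for real, v0 in zip(real_trajs, v0s))
    return min(candidates, key=err)

# Toy check: generate "real" trajectories with drag 0.12 and recover it.
v0s = [(3.0, 0.5, 2.0), (2.5, -0.3, 1.8)]
real = [simulate_trajectory(v0, 0.12) for v0 in v0s]
print(fit_drag(real, v0s))  # ~0.12
```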

The researchers used an iterative process to refine the robot's skills, starting with a small dataset of human-robot matches and then having the AI compete against real opponents. Each match generated new data on ball trajectories and human strategies, which was fed back into simulations for further training.
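As a skeleton, the cycle the researchers describe looks roughly like the following; every function here is a placeholder standing in for a large subsystem, not DeepMind's API.

```python
def train_in_simulation(policy, ball_data, opponent_data):
    """Stand-in for reinforcement learning against simulated rallies."""
    return policy  # placeholder: returns the policy unchanged

def play_real_matches(policy):
    """Stand-in for deploying on the robot and logging new data."""
    return [], []  # placeholder: (new ball trajectories, new match logs)

policy = "initial_policy"            # placeholder policy object
ball_data, opponent_data = [], []    # seeded with a small human-robot dataset

for cycle in range(7):               # the paper reports seven such cycles
    policy = train_in_simulation(policy, ball_data, opponent_data)
    new_balls, new_logs = play_real_matches(policy)
    ball_data += new_balls           # real trajectories feed the simulator
    opponent_data += new_logs        # human play styles shape the training
```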

The process was repeated for seven cycles, enabling the robot to continuously adapt to increasingly skilled opponents and diverse playing styles. By the final round, the AI had learned from over 14,000 rallies and 3,000 serves, accumulating extensive table tennis knowledge and bridging the gap between simulation and reality.

Interestingly, Nvidia has been experimenting with similar simulated physics systems. Their Eureka system allows AI models to quickly learn to control robotic arms in simulated spaces rather than the real world.

Beyond technical achievements, the Google study explored the experience of human players competing against AI opponents. Surprisingly, even when losing to the table tennis robot agent, human players reported enjoying the experience.

The researchers noted, "Human players reported that playing against the robot was 'fun and engaging' across all skill groups and win rates." This positive response suggests potential applications for AI in sports training and entertainment.

However, the system has limitations: it performs poorly against extremely fast or very high balls, struggles to read heavy spin, and is weaker in backhand play.

The Google DeepMind research team is working to address these shortcomings. They propose researching advanced control algorithms and hardware optimizations, possibly including predictive models for ball trajectory and faster communication protocols between the robot's sensors and actuators.
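To picture what a predictive model for ball trajectory does, here is a deliberately simple ballistic predictor that estimates where the ball will cross the robot's hitting plane. It ignores drag and spin, and the coordinates and plane position are assumptions for illustration; a learned model would replace exactly this function.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def predict_interception(pos, vel, hit_plane_y=2.0):
    """Predict where the ball crosses the robot's hitting plane.

    Ballistic model only (no drag or spin), which is an assumption;
    pos and vel are the ball's current position and velocity, with
    the y axis pointing toward the robot.
    """
    vy = vel[1]
    if vy <= 0:
        return None  # ball is not moving toward the robot
    t = (hit_plane_y - pos[1]) / vy           # time until plane crossing
    x = pos[0] + vel[0] * t                   # lateral position at impact
    z = pos[2] + vel[2] * t - 0.5 * G * t**2  # height at impact
    return t, np.array([x, hit_plane_y, z])

t, point = predict_interception(pos=(0.1, 0.5, 0.3), vel=(0.5, 6.0, 2.0))
print(f"intercept in {t:.3f}s at {point}")
```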

The researchers believe that, with further refinement, the system could eventually compete with high-level table tennis players. DeepMind has deep experience building AI that defeats human players at games, including AlphaGo, which beat top professionals at Go, and its more general successor AlphaZero.

The researchers also state that the impact of this robotic table tennis "prodigy" extends beyond table tennis. The technologies developed for this project could be applied to various robotic tasks requiring quick reactions and adaptation to unpredictable human behavior, including manufacturing and healthcare.