5 remarkable achievements of DeepMind's new self-evolving AI coding agent

      In recent years, Google DeepMind’s AI systems have achieved significant advancements in science, including predicting the three-dimensional structures of nearly all known proteins and enhancing weather forecasting accuracy.

      Today, the UK-based lab introduced its newest creation: AlphaEvolve, an AI coding agent designed to help large language models (LLMs) such as Gemini solve complex computing and mathematical problems.

      AlphaEvolve is powered by the same models it aims to improve. Using Gemini, the agent writes candidate programs to solve a specified problem, then evaluates each one with automated tests that score accuracy, efficiency, and novelty. The best-performing candidates are retained as the foundation for the next generation. Over many cycles, this process “evolves” increasingly effective solutions — which is what makes AlphaEvolve a self-evolving AI.
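The loop described above can be sketched in a few lines of Python. This is a schematic of evolutionary program search in general, not DeepMind's actual system: `mutate` stands in for an LLM proposing code edits, and `score` stands in for the automated evaluator.

```python
import random

def evolve(seed, mutate, score, generations=100, population_size=20, keep=5):
    """Schematic evolutionary search.

    `mutate` proposes a variant of a candidate (an LLM in AlphaEvolve's
    case); `score` rates a candidate. Both are placeholders here.
    """
    population = [seed]
    for _ in range(generations):
        # Propose new candidates by mutating randomly chosen survivors.
        candidates = population + [mutate(random.choice(population))
                                   for _ in range(population_size)]
        # Keep only the best-scoring candidates as parents for the next round.
        population = sorted(candidates, key=score, reverse=True)[:keep]
    return population[0]
```

As a toy illustration, the “programs” can be plain numbers: mutating by random nudges and scoring by closeness to a target drives the population toward that target, just as scoring real programs on accuracy and speed drives them toward better code.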

      DeepMind has already used AlphaEvolve to cut energy consumption in its data centers, improve chip designs, and accelerate AI training. Here are five of its most notable achievements so far:

      1. It found new solutions to some of the world’s toughest mathematical problems.

      AlphaEvolve was tested on more than 50 open problems in mathematics, from combinatorics to number theory, and improved on the best-known solutions in roughly 20% of cases. Its most notable result came on the 300-year-old kissing number problem, which asks how many non-overlapping unit spheres can simultaneously touch a central unit sphere. In 11-dimensional space, AlphaEvolve found a configuration of 593 spheres, establishing a new lower bound that expert mathematicians had not reached.
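What makes problems like this amenable to evolutionary search is that a candidate solution can be checked mechanically: every sphere center must sit at exactly twice the radius from the central sphere, and no two spheres may overlap. A minimal sketch of such a check (illustrative only, not AlphaEvolve's evaluator), verified on the classic 2D configuration of six circles:

```python
import math

def is_kissing_configuration(centers, radius=1.0, tol=1e-9):
    """Check that unit-radius spheres at `centers` all touch a central
    sphere at the origin and do not overlap one another."""
    for c in centers:
        # Each touching sphere's center sits at exactly 2 * radius away.
        if abs(math.dist(c, [0.0] * len(c)) - 2 * radius) > tol:
            return False
    for i, a in enumerate(centers):
        for b in centers[i + 1:]:
            # Distinct spheres may touch but must not overlap.
            if math.dist(a, b) < 2 * radius - tol:
                return False
    return True

# The optimal 2D configuration: 6 circles at 60-degree intervals.
hexagon = [(2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
           for k in range(6)]
```

A search procedure only has to propose configurations; the checker decides which survive, which is exactly the generate-and-evaluate pattern AlphaEvolve exploits.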

      2. It enhanced Google’s data center efficiency.

      The AI agent devised a more efficient method for scheduling power in Google’s data centers, improving energy efficiency by 0.7% over the past year, a considerable saving given the scale of Google’s operations.

      3. It expedited the training of Gemini.

      AlphaEvolve optimized how large matrix multiplications are divided into subproblems, a core operation in training AI models like Gemini. The improved kernel runs 23% faster, which cut Gemini’s total training time by 1%. At the scale of generative AI, even a single percentage point translates into substantial cost and energy savings.
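“Dividing a matrix multiplication into subproblems” means tiling the computation into blocks, and the choice of how to tile is what gets tuned. A minimal pure-Python sketch of blocked multiplication (illustrative only; the kernel AlphaEvolve improved is hardware-specific and not shown here):

```python
def block_matmul(A, B, block=2):
    """Multiply square matrices by splitting the work into block-sized
    subproblems. Real kernels pick tile sizes to match the hardware;
    this version just demonstrates the decomposition."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                # Multiply one pair of sub-blocks and accumulate into C.
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, n)):
                        s = 0.0
                        for k in range(k0, min(k0 + block, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C
```

The result is identical for any block size; what changes is memory access locality, which is why tuning the decomposition can speed up training without changing the math.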

      4. It collaborated on the design of Google’s next AI chip.

      Drawing on its coding capabilities, the agent rewrote part of an arithmetic circuit in Verilog, a hardware description language, making the circuit more efficient. That optimization is being incorporated into the design of Google’s upcoming TPU (Tensor Processing Unit), a specialized chip for machine learning.

      5. It surpassed a legendary algorithm from 1969.

      For decades, Strassen’s algorithm was the benchmark for multiplying 4×4 complex-valued matrices. AlphaEvolve discovered a more efficient procedure that needs 48 scalar multiplications instead of Strassen’s 49, a result that could feed back into LLMs, which depend heavily on matrix multiplication.
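Strassen’s 1969 trick works at the 2×2 level, using 7 multiplications where the naive method needs 8; applied recursively to 4×4 blocks it gives 7 × 7 = 49 multiplications, the count AlphaEvolve beat. A sketch of the 2×2 step (AlphaEvolve’s 48-multiplication scheme itself is far more intricate and not reproduced here):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)
    instead of the naive 8. Entries may be real or complex."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```

Saving one multiplication per recursion level compounds: for large matrices split recursively into blocks, a lower multiplication count at the base case reduces the asymptotic cost of the whole computation.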

      According to DeepMind, these accomplishments are only the beginning for AlphaEvolve. The lab anticipates the agent will tackle a wide range of problems, from discovering new materials and pharmaceuticals to optimizing business processes.

      The evolution of AI will be a prominent topic at the TNW Conference, taking place June 19-20 in Amsterdam. Tickets are on sale now; use code TNWXMEDIA2025 at checkout for a 30% discount.
