
Google DeepMind uses a large language model to crack a famous unsolved math problem

Google DeepMind has used a large language model to crack a famous unsolved problem in pure mathematics. In a paper published in Nature, the researchers say it is the first time a large language model has been used to discover a solution to a long-standing scientific puzzle.

The point is to produce new, verifiable information that did not exist before.

“It’s not present in the training data—no one was even aware of it,” explained Pushmeet Kohli, co-author and vice president of research at Google DeepMind.

Large language models

Large language models are notorious for making things up rather than offering new, factual insights. But a recent Google DeepMind tool called FunSearch challenges that perception. It shows that such models can make genuine discoveries if they are prodded in the right way, and if you throw away most of what they come up with.

FunSearch continues a streak of breakthroughs in fundamental math and computer science that DeepMind has achieved with the help of AI. First, AlphaTensor found a way to speed up a calculation at the heart of many different kinds of code, beating a 50-year record. Then AlphaDev found ways to make key algorithms, run trillions of times a day, more efficient.

Yet these tools did not use large language models.

Built on top of DeepMind’s AlphaZero AI, both solved math problems by treating them as if they were puzzles in Go or chess. The trouble, says Bernardino Romera-Paredes, a researcher at the company who worked on both AlphaTensor and FunSearch, is that such tools are stuck in their lanes: “AlphaTensor is great at matrix multiplication, but basically nothing else.”

FunSearch takes a different approach

It combines a large language model called Codey, a version of Google’s PaLM 2 fine-tuned for computer code, with other systems that reject incorrect or nonsensical answers and plug the good ones back in.

“To be completely honest with you, we have hypotheses, but we don’t know exactly why this works. In the early stages of the project, we weren’t sure whether this would work at all,” admits Alhussein Fawzi, a researcher at Google DeepMind.

The researchers started by sketching out the problem they wanted to solve in Python, a popular programming language. But they left out the lines in the program that would specify how to solve it. That is where FunSearch comes in. It gets Codey to fill in the blanks, in effect suggesting code that will solve the problem.

A second algorithm then checks and scores what Codey comes up with

The most promising suggestions, even if not yet correct, are saved and fed back to Codey, which tries to complete the program again.
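The loop described above can be sketched in a few lines of Python. This is a toy illustration under loose assumptions, not DeepMind’s implementation: the language model is stood in for by a random mutator, the “program” is reduced to a single integer coefficient, and every name here is made up.

```python
import random

# Toy sketch of a FunSearch-style loop. The "program" is one integer
# coefficient a in f(x) = a * x, and propose() stands in for the language
# model that would normally suggest code completions.

def evaluate(program):
    """Score a candidate: negative squared error against the target f(x) = 2x."""
    a = program
    return -sum((a * x - 2 * x) ** 2 for x in range(1, 6))

def propose(parent):
    """Stand-in for the model: perturb the best program seen so far."""
    return parent + random.choice([-1, 1])

def funsearch_sketch(iterations=200):
    best = 0  # deliberately wrong starting program
    for _ in range(iterations):
        candidate = propose(best)
        if evaluate(candidate) > evaluate(best):
            best = candidate  # promising suggestions are fed back in
    return best
```

The real system evolves whole Python function bodies and keeps a diverse pool of candidates rather than a single best one, but the shape is the same: propose, score, keep the winners, repeat.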

“Many of the suggestions are nonsensical, some are sensible, and a few are truly inspired,” says Kohli. “You take those truly inspired ones and say, ‘Okay, take these and replicate.’”

After several million suggestions and many iterations of the overall process, FunSearch produced code that yielded a correct and previously unknown solution to the cap set problem, which involves finding the largest size of a certain type of set.

To illustrate, imagine plotting dots on graph paper. The problem is like trying to work out how many dots you can put down without any three of them ever forming a straight line.
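That dots-on-graph-paper version of the puzzle can be played with directly. The sketch below is purely illustrative (the function names are invented, and the greedy strategy is a weak baseline, nothing like FunSearch’s output): it places dots on an n-by-n grid, skipping any dot that would complete a line of three.

```python
from itertools import combinations

def collinear(p, q, r):
    # Cross-product test: zero means the three points lie on one line.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def greedy_no_three_in_line(n):
    """Greedily place dots on an n x n grid so that no three are collinear.
    A simple baseline, far from optimal; it only illustrates the problem."""
    chosen = []
    for point in ((x, y) for x in range(n) for y in range(n)):
        if all(not collinear(a, b, point) for a, b in combinations(chosen, 2)):
            chosen.append(point)
    return chosen
```

Checking every pair before adding a dot keeps the no-three-in-line invariant true by construction, but finding the largest possible set, rather than just a valid one, is what makes the problem hard.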

This is highly specialized but crucial

Mathematicians don’t even agree on how to approach it, let alone what the solution entails.

Terence Tao of the University of California, Los Angeles, who has won many of the top awards in mathematics, finds the capabilities of FunSearch intriguing.

“It’s a promising paradigm. It’s an interesting way to harness the power of large language models,” notes Tao.

A big advantage that FunSearch has over AlphaTensor is that it can, in theory, be used to tackle a wide range of problems. That is because it produces code, a recipe for generating the solution rather than the solution itself, and different code can solve different problems. FunSearch’s results are also easier to understand: a recipe is often clearer than the strange mathematical solution it produces.

To assess its versatility, researchers employed FunSearch to tackle another challenging mathematical problem.

The bin packing problem involves trying to fit items into as few bins as possible.

This optimization is important to a range of applications in computer science, from data center management to e-commerce. Impressively, FunSearch came up with a way to solve it that was faster than solutions devised by humans.
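For a feel of what a bin-packing rule looks like, here is the classic first-fit heuristic, the kind of human-devised baseline that FunSearch’s discovered rules compete against. This is a standard textbook algorithm, not the code FunSearch produced.

```python
def first_fit(items, capacity):
    """First-fit heuristic: put each item into the first bin with room,
    opening a new bin only when none fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no existing bin had room
            bins.append([item])
    return bins
```

For example, `first_fit([4, 8, 1, 4, 2, 1], 10)` packs the six items into two bins of capacity 10. Better heuristics differ only in the rule for choosing which bin gets the next item, which is exactly the kind of short scoring function an evolved program can express.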

“Mathematicians are still exploring the optimal ways to integrate large language models into our research workflow, leveraging their capabilities while mitigating their shortcomings. This certainly suggests a potential pathway forward,” Tao concludes.
