DeepMind, an artificial intelligence company owned by Alphabet, Google’s parent company, has announced in a study published in the journal ‘Nature’ the development of a new automated system called AlphaTensor, capable of discovering new, faster and more efficient algorithms for carrying out fundamental tasks such as matrix multiplication.
Operations of this kind underpin all sorts of processes, from generating graphics for video games to processing images on ‘smartphones’ or compressing data and video for sharing on the Internet, among many other things.
Matrix multiplication is a fundamental operation in science. It is an area of longstanding mathematical interest, and finding fast algorithms for matrix multiplication is one of the biggest open questions in computer science, despite 50 years of research.
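To make the operation concrete, here is a minimal sketch of the standard "schoolbook" algorithm the article contrasts against: multiplying an n×m matrix by an m×p matrix with three nested loops, using n·m·p scalar multiplications. This is purely illustrative and is not DeepMind's code.

```python
def matmul(A, B):
    """Schoolbook matrix multiplication of an n x m matrix A by an m x p matrix B.

    Each entry C[i][j] is the dot product of row i of A and column j of B,
    so the algorithm performs n * m * p scalar multiplications in total.
    """
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

It is exactly this scalar-multiplication count that faster algorithms try to reduce.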
With this system, the company hopes to aid future research into determining the most efficient ways to solve computational problems.
“In addition, this project allows us to demonstrate that AlphaZero (DeepMind’s AI agent on which AlphaTensor is based) can be applied beyond the realm of games to solve scientific and mathematical problems, and we hope that this new research can fuel a new era of algorithmic discovery using AI,” the DeepMind scientists note.
Companies worldwide spend large amounts of time and money developing dedicated hardware for efficient matrix multiplication. DeepMind therefore points out that even minor improvements in the efficiency of matrix multiplication can have a widespread impact.
According to team members, the traditional algorithm taught in school multiplies a 4×5 matrix by a 5×5 matrix using 100 multiplications. This number was later reduced to 80, and AlphaTensor has found algorithms that perform the same operation using only 76 multiplications.
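The figure of 100 multiplications cited above follows directly from the schoolbook method's cost formula. A short check, for illustration only:

```python
def schoolbook_mults(n, m, p):
    """Scalar multiplications used by the schoolbook algorithm
    to multiply an n x m matrix by an m x p matrix."""
    return n * m * p

# The article's example: a 4x5 matrix times a 5x5 matrix.
baseline = schoolbook_mults(4, 5, 5)  # 100 multiplications
```

AlphaTensor's 76-multiplication algorithm for this shape thus shaves almost a quarter off the schoolbook cost.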
To develop the system, the members of the DeepMind team approached it as if they were playing a game in which the objective was to find the most efficient way to multiply two matrices. “This game is incredibly challenging: the number of possible algorithms to consider is much larger than the number of atoms in the universe,” say the researchers.
To play this game, the DeepMind team trained AlphaTensor with reinforcement learning (a technique that allows an artificial intelligence to develop effective strategies through experimentation), and did so without giving the machine any prior knowledge of existing matrix multiplication algorithms.
Through this training, AlphaTensor rediscovered pre-existing matrix multiplication algorithms from scratch, such as the one developed in 1969 by the German mathematician Volker Strassen. “(During tests) AlphaTensor discovered thousands of new correct and efficient algorithms for multiplying matrices of different sizes,” the research team points out.
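Strassen's 1969 result, which AlphaTensor rediscovered, multiplies two 2×2 matrices with 7 scalar multiplications instead of the schoolbook 8, by combining cleverly chosen sums and differences of the entries. A self-contained sketch of the classic scheme:

```python
def strassen_2x2(A, B):
    """Strassen's algorithm for 2x2 matrices: 7 multiplications instead of 8.

    Applied recursively to blocks, this yields an overall complexity of
    about O(n^2.81) rather than the schoolbook O(n^3).
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # The seven products (Strassen's M1..M7, in one common labelling):
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine the products into the four entries of the result:
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```

Savings of this kind are exactly what AlphaTensor searches for automatically, for many more matrix shapes than the 2×2 case.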
DeepMind also tested algorithms developed by AlphaTensor on hardware such as the Nvidia V100 GPU and Google’s TPU v2, where they multiplied matrices 10 to 20% faster than the algorithms commonly used on those systems.