Author: Salena Fitzgerald

Machine learning has revolutionized data processing, making possible innovations such as self-driving vehicles and artificial intelligence systems that can spot disease in vast collections of medical images. However, most current machine learning models can only process data of the same dimension as the data they were trained on, restricting their real-world usefulness.

Mateo Díaz, an assistant professor in the Whiting School of Engineering’s Department of Applied Mathematics and Statistics, and his collaborator Eitan Levin, a graduate student at the California Institute of Technology, offer a new approach: an innovative machine learning method that allows neural networks trained on data of one size or dimension to process data of another, unlocking potential applications in domains ranging from physics to social networks. (In the world of machine learning, “dimension” refers to the number of features or attributes in a dataset. For example, a dataset used to analyze housing prices could include dimensions like square footage, property age, number of bedrooms, and distance from good public schools.)
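To make that limitation concrete, here is a minimal, purely illustrative Python sketch (not taken from the paper): an ordinary linear model fit to four hypothetical housing features has exactly four weights, so it cannot even accept a listing described by five features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "housing" data: 100 homes, 4 features
# (square footage, property age, bedrooms, distance to schools).
X_train = rng.normal(size=(100, 4))
y_train = X_train @ np.array([300.0, -50.0, 10000.0, -2000.0]) + rng.normal(size=100)

# Fit an ordinary least-squares model; its weight vector has exactly 4 entries.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
print(w.shape)  # (4,)

# A new listing described by 5 features cannot be fed to this model:
x_new = rng.normal(size=5)
try:
    x_new @ w
except ValueError as err:
    print("Dimension mismatch:", err)  # shapes (5,) and (4,) are incompatible
```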

The team’s results will be presented at the 27th International Conference on Artificial Intelligence and Statistics (AISTATS) in Valencia, Spain, in May.

“We figured out a way to teach models to tackle problems in dimensions much higher than the one used for learning. Imagine we trained a computer to tell us the most efficient way to visit every bar in Baltimore and then asked the computer the same question for New York City, which is more than 10 times bigger. Current machine learning methods would only be able to handle cities of the same size as Baltimore; ours, on the other hand, can scale,” said Díaz. 

The key to the team’s approach was the discovery of what Díaz called “an unexpected link” between traditional machine learning techniques and an abstract algebraic concept called “representation stability,” which says that certain mathematical objects—numbers, vectors, matrices, and tensors—behave in the same way even when their underlying coordinate systems change.

“Representation stability lets us design ‘equivariant’ neural networks that keep their capabilities even when the dimension or size of the data inputs changes,” he said. “This general stability phenomenon is rather flexible and allows us to do any-dimensional learning in graphs or networks, in particle systems, and more.” 
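The paper’s construction is algebraic, but a familiar special case conveys the flavor of an equivariant, size-agnostic layer. The sketch below is an assumption on my part, in the spirit of standard permutation-equivariant (DeepSets-style) layers rather than the authors’ model: two scalar parameters define a layer that acts on a particle system of any size, and relabeling the particles simply relabels the outputs.

```python
import numpy as np

def equivariant_layer(x, a, b):
    """Permutation-equivariant map on a system of n particles.

    The same two parameters (a, b) define a layer for every n:
    each output mixes that particle's value with the mean over all
    particles, so the number of particles never enters the parameters.
    """
    return a * x + b * x.mean()

a, b = 0.7, -0.3           # one finite set of parameters...
small = np.arange(5.0)     # ...applied to a 5-particle system
large = np.arange(500.0)   # ...and, unchanged, to a 500-particle system

print(equivariant_layer(small, a, b).shape)  # (5,)
print(equivariant_layer(large, a, b).shape)  # (500,)

# Equivariance check: permuting the particles permutes the outputs.
perm = np.random.default_rng(1).permutation(5)
lhs = equivariant_layer(small[perm], a, b)
rhs = equivariant_layer(small, a, b)[perm]
print(np.allclose(lhs, rhs))  # True
```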

The team first identified the properties that make neural networks of different dimensions compatible with one another, allowing the entire family of networks to be parameterized with a finite amount of information. The researchers then used this connection to develop and test an algorithm. Díaz said this not only made the implementation feasible but also paved the way for any-dimensional learning.
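As a loose illustration of that idea (again a hedged sketch under my own assumptions, not the authors’ algorithm), the layer below is specified by just three numbers, yet those same three numbers define a valid layer for a graph with any number of nodes.

```python
import numpy as np

def graph_layer(adj, h, theta):
    """Node-permutation-equivariant graph layer defined by 3 scalars.

    theta = (t0, t1, t2) weights, respectively: each node's own feature,
    the sum of its neighbors' features, and the global mean feature.
    The parameter count is fixed, while adj may be n-by-n for any n.
    """
    t0, t1, t2 = theta
    return t0 * h + t1 * (adj @ h) + t2 * h.mean()

theta = (0.5, 0.1, 0.2)

# A triangle (3 nodes) and a cycle on 8 nodes share the same parameters.
tri = np.ones((3, 3)) - np.eye(3)
cyc = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)

print(graph_layer(tri, np.ones(3), theta).shape)  # (3,)
print(graph_layer(cyc, np.ones(8), theta).shape)  # (8,)
```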

“Our approach is one of the first to point out that representation stability can be leveraged this way, and our preliminary experimental results are encouraging,” he said.  

The team hopes to improve the method’s efficiency by exploring ad hoc solutions for specific scenarios and delving deeper into statistical questions regarding the limits of any-dimensional learning.   

“This study not only forges a novel connection between representation stability and machine learning but potentially unlocks new avenues for exploration and introduces the concept of any-dimensional learning, laying the groundwork for addressing previously unexplored questions in the field,” he said.