
Adapting large AI models to different tasks often requires adjusting millions, or even billions, of internal settings called parameters, using enormous amounts of computing power and energy. But Johns Hopkins computer scientists have developed a method that dramatically cuts the environmental cost of fine-tuning AI. They call it EigenLoRAx, and, like the Dr. Seuss character, the name is a nod to a greener future.
Their work appeared at the 2025 IEEE CVPR Workshop on Fair, Data-Efficient, and Trusted Computer Vision, held on June 11 in Nashville.
Instead of creating new adapters from scratch like most computer scientists do, the JHU team recycles existing open-source low-rank adapters (LoRAs)—small neural network modules that help foundation models adapt to new tasks.
“There are thousands of publicly available LoRAs within the growing open-source community that are already tailored to diverse domains, offering a lightweight solution for fine-tuning large models,” explains first author Prakhar Kaushik, a final-year PhD student in computer science. “This efficient, reusable knowledge exists across seemingly unrelated tasks—and we can tap into it to reduce training time and resource consumption to build more sustainable AI systems.”
The team discovered that many different AI tasks share a common pattern in how they adapt a base model. This pattern—called a “universal subspace”—can be extracted from existing LoRAs to learn new tasks with minimal training.
“Imagine you’re a music producer with a library of songs,” Kaushik says. “Instead of composing a brand-new track for every project, you start noticing common rhythms and riffs in your existing music, so you build a library of these reusable sounds. Whenever a new song is needed, you simply remix the elements you already have.”
That’s what EigenLoRAx does for AI. Each LoRA is like a song: EigenLoRAx finds the patterns they share and creates new “songs,” meaning adapters for new AI tasks, by learning how to combine those patterns. That saves computer scientists time, memory, and computing power, and it also makes fine-tuning less expensive, allowing researchers with limited budgets to take advantage of larger models, too.
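For readers who want to see the remixing idea in code, the sketch below illustrates the general recipe with NumPy. The adapter shapes, the number of existing LoRAs, and the choice to keep 12 shared components are all invented for illustration, and the least-squares fit stands in for real training; this is not the team's actual EigenLoRAx implementation, just the underlying intuition that a new task can be handled by learning a handful of mixing coefficients over a shared basis.

```python
# Hypothetical sketch of the "shared subspace" idea using NumPy.
# All shapes and counts are illustrative assumptions, not the published method.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we have 50 existing LoRA updates for one weight matrix,
# each a low-rank change of shape (d_out, d_in).
d_out, d_in, n_adapters = 256, 128, 50
loras = [rng.standard_normal((d_out, d_in)) * 0.01 for _ in range(n_adapters)]

# Stack the flattened adapters and pull out their shared principal directions,
# which play the role of the "universal subspace" described above.
stacked = np.stack([w.ravel() for w in loras])          # shape (50, d_out * d_in)
_, _, vt = np.linalg.svd(stacked, full_matrices=False)  # rows of vt are directions
k = 12                                                  # keep a handful of components
basis = vt[:k]                                          # shape (k, d_out * d_in)

# Adapting to a new task now means learning only k mixing coefficients
# (fit here by least squares against a stand-in target, purely for illustration)
# instead of a full d_out x d_in adapter.
target = rng.standard_normal((d_out, d_in)) * 0.01
coeffs, *_ = np.linalg.lstsq(basis.T, target.ravel(), rcond=None)
new_adapter = (coeffs @ basis).reshape(d_out, d_in)

print(f"parameters in a full adapter: {d_out * d_in:,}")
print(f"coefficients learned for the new task: {k}")
```

The same projection works in reverse: an existing adapter can be approximated by its coefficients over the basis, which is where the storage savings described below come from.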
The researchers tested their method on various tasks, such as identifying different types of dishes and flowers, and found that EigenLoRAx achieved nearly the same accuracy as full fine-tuning while learning only 1% of the parameters. Their experiments show that the method can be applied to a wide range of problems and model architectures, they say.
The team also experimented with Stable Diffusion, a popular model used to generate images from text prompts.
“Typically, people fine-tune separate LoRAs for different styles—think ‘anime’ or ‘Studio Ghibli’—which takes up a lot of storage. EigenLoRAx allowed us to compress all those style adapters into one, reducing storage needs from 4.6 gigabytes to just 261 megabytes—that’s a 94% reduction,” says Kaushik.
Even better, this unified adapter can reproduce each style on demand without a noticeable drop in quality. And because EigenLoRAx drastically reduces the memory and compute requirements of fine-tuning, it opens the door to on-device personalization: everyday users could customize their image-generation pipelines on their own devices for a fraction of the cost.
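The arithmetic behind that kind of saving fits in a few lines. All of the counts below are assumptions made up for illustration; the real 4.6-gigabyte-to-261-megabyte figure comes from the team's actual Stable Diffusion adapters. The point is simply that one shared basis plus a few coefficients per style is far smaller than a full adapter for every style.

```python
# Back-of-the-envelope sketch of the storage trade-off when many style
# adapters share one basis. Every number here is an invented assumption;
# the 4.6 GB -> 261 MB figure above comes from the team's actual
# Stable Diffusion experiments.
n_styles = 100                  # assumed number of style LoRAs to support
params_per_adapter = 3_000_000  # assumed parameter count of one full adapter
k = 6                           # assumed number of shared basis components

separate = n_styles * params_per_adapter        # keep every style in full
shared = k * params_per_adapter + n_styles * k  # one basis + k numbers per style

print(f"separate adapters:           {separate:>12,} parameters")
print(f"shared basis + coefficients: {shared:>12,} parameters")
print(f"reduction: {100 * (1 - shared / separate):.0f}%")
```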
“You could, one day, fine-tune a model on your phone—for your voice, writing style, or artistic taste—without sending data to the cloud or burning through energy,” Kaushik says.
Currently, EigenLoRAx requires training a separate model for each base architecture—“one for GPT-style models, another for LLaMA-style, and so on,” Kaushik says. “Ideally, we’d like a single, universal adapter that works across all models—or even different modalities, like text, vision, and audio.”
The method also depends on having access to good existing LoRAs. If none exist, it doesn’t work—like trying to remix a song when you don’t have the original track. But as the open-source AI community grows, new adapters will become available to recycle. That’s why the researchers are working on a continuous version of EigenLoRAx, so that new LoRAs can be added without recomputing everything from scratch. This will let the system grow its knowledge base incrementally, much like how humans build expertise.
The team was also able to find a universal subspace for foundation and generative models—the large models that power today’s popular AI applications. By expanding EigenLoRAx to these larger models directly, the researchers say they may be able to substantially reduce the computational demands of their training, creating a more sustainable and environmentally friendly deep learning paradigm.
“Our work makes AI more accessible, sustainable, and customizable,” Kaushik says. “It also reduces AI’s carbon footprint, lowering the environmental cost of training massive models.
“In short, this is a step toward democratized, green AI: smarter tools that adapt to you, not the other way around.”
Additional authors of this work include Bloomberg Distinguished Professor of Computational Cognitive Science Alan Yuille, CS PhD student Shravan Chaudhari, and Ankit Vaidya, a visiting undergraduate researcher from the Pune Institute of Computer Technology.