MIT researchers have introduced an efficient reinforcement learning algorithm that enhances AI's decision-making in complex scenarios, such as city traffic control.
By strategically selecting the best tasks for training, the algorithm achieves significantly improved performance with far less data, offering up to a 50x boost in efficiency. This method not only saves time and resources but also paves the way for more effective AI applications in real-world settings.
AI Decision-Making
Across fields like robotics, medicine, and political science, researchers are working to train AI systems to make meaningful and impactful decisions. For instance, an AI system designed to manage traffic in a congested city could help drivers reach their destinations more quickly while enhancing safety and sustainability.
However, teaching AI to make effective decisions is a complex challenge.
Challenges in Reinforcement Learning
Reinforcement learning models, the foundation of many AI decision-making systems, often struggle when confronted with even slight changes in the tasks they are trained for. For example, in traffic management, a model might falter when handling intersections with varying speed limits, lane configurations, or traffic patterns.
To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them.
Strategic Task Selection in AI Training
The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city.
By focusing on a smaller number of intersections that contribute the most to the algorithm's overall effectiveness, this method maximizes performance while keeping the training cost low.
Enhancing AI Efficiency With a Simple Algorithm
The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This efficiency gain helps the algorithm learn a better solution faster, ultimately improving the AI agent's performance.
"We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand," says senior author Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).
She is joined on the paper by lead author Jung-Hoon Cho, a CEE graduate student; Vindula Jayawardana, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); and Sirui Li, an IDSS graduate student. The research will be presented at the Conference on Neural Information Processing Systems.
Balancing Training Approaches
To train an algorithm to control traffic lights at many intersections in a city, an engineer would typically choose between two main approaches. She can train one algorithm for each intersection independently, using only that intersection's data, or train a larger algorithm using data from all intersections and then apply it to each one.
But each approach comes with its share of downsides. Training a separate algorithm for each task (such as a given intersection) is a time-consuming process that requires an enormous amount of data and computation, while training one algorithm for all tasks often leads to subpar performance.
Wu and her collaborators sought a sweet spot between these two approaches.
Advantages of Model-Based Transfer Learning
For their method, they choose a subset of tasks and train one algorithm for each task independently. Importantly, they strategically select individual tasks that are most likely to improve the algorithm's overall performance on all tasks.
They leverage a common technique from the reinforcement learning field called zero-shot transfer learning, in which an already trained model is applied to a new task without further training. With transfer learning, the model often performs remarkably well on a new, neighboring task.
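As a toy illustration (a hypothetical example, not the researchers' code), the Python sketch below trains a one-parameter policy on a single task and then evaluates it, unchanged, on nearby tasks; both the "environment" and the training loop are invented stand-ins.

```python
import random

def reward(action: float, context: float) -> float:
    """Toy reward: highest when the action matches the task's context
    (think of the context as an intersection's speed limit)."""
    return -(action - context) ** 2

def train_policy(context: float, steps: int = 2000) -> float:
    """Hill-climb a single scalar action toward one task's optimum."""
    action = 0.0
    for _ in range(steps):
        candidate = action + random.uniform(-0.1, 0.1)
        if reward(candidate, context) > reward(action, context):
            action = candidate
    return action

# Train on a source task, then apply the result with no further training
# to neighboring tasks -- the reuse step called zero-shot transfer.
policy = train_policy(context=1.0)
for target in (1.0, 1.2, 2.0):
    print(f"target context {target:.1f} -> reward {reward(policy, target):.3f}")
```

Performance typically degrades gradually as the target task drifts away from the source task, which is the gap the researchers' method explicitly accounts for.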
"We know it would be ideal to train on all the tasks, but we wondered if we could get away with training on a subset of those tasks, apply the result to all the tasks, and still see a performance increase," Wu says.
MBTL Algorithm: Optimizing Task Selection
To identify which tasks they should select to maximize expected performance, the researchers developed an algorithm called Model-Based Transfer Learning (MBTL).
The MBTL algorithm has two pieces. First, it models how well each algorithm would perform if it were trained independently on one task. Second, it models how much that performance would degrade if the algorithm were transferred to each other task, a quantity known as generalization performance.
Explicitly modeling generalization performance allows MBTL to estimate the value of training on a new task.
MBTL does this sequentially, first choosing the task that leads to the highest performance gain, then selecting additional tasks that provide the biggest marginal improvements to overall performance.
Since MBTL only focuses on the most promising tasks, it can dramatically improve the efficiency of the training process.
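To make the selection procedure concrete, here is a minimal sketch of MBTL-style greedy task selection. It assumes, for illustration only, that per-task training performance and a distance-based generalization penalty are already known; in the full method these quantities are estimated rather than given, and all names below are invented.

```python
# Ten tasks, e.g., intersections ordered along a corridor.
tasks = list(range(10))
perf = {t: 1.0 for t in tasks}  # assumed training performance per task
DECAY = 0.15                    # assumed loss per unit of task distance

def transfer_perf(source: int, target: int) -> float:
    """Estimated performance when a model trained on `source`
    is applied zero-shot to `target` (generalization performance)."""
    return perf[source] - DECAY * abs(source - target)

def overall(selected: list[int]) -> float:
    """Overall performance: each task is served by the best-transferring
    model among those already trained."""
    return sum(max(transfer_perf(s, t) for s in selected) for t in tasks)

# Sequential greedy loop: repeatedly train on whichever task adds
# the largest marginal improvement to overall performance.
selected: list[int] = []
for _ in range(3):  # training budget: three tasks
    best = max((t for t in tasks if t not in selected),
               key=lambda t: overall(selected + [t]))
    selected.append(best)
    print(f"train on {selected} -> estimated overall {overall(selected):.2f}")
```

On a symmetric task space like this one, the sketch first picks a central task, which transfers well in both directions, and then fills in the remaining coverage gaps, mirroring the "highest gain first, then biggest marginal improvements" behavior described above.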
Implications for Future AI Development
When the researchers tested this technique on simulated tasks, including controlling traffic signals, managing real-time speed advisories, and executing several classic control tasks, it was five to 50 times more efficient than other methods.
This means they could arrive at the same solution by training on far less data. For instance, with a 50x efficiency boost, the MBTL algorithm could train on just two tasks and achieve the same performance as a standard method that uses data from 100 tasks.
"From the perspective of the two main approaches, that means data from the other 98 tasks was not necessary or that training on all 100 tasks is confusing to the algorithm, so the performance ends up worse than ours," Wu says.
With MBTL, adding even a small amount of additional training time could lead to much better performance.
In the future, the researchers plan to design MBTL algorithms that can extend to more complex problems, such as high-dimensional task spaces. They are also interested in applying their approach to real-world problems, especially in next-generation mobility systems.
Reference: "Model-Based Transfer Learning for Contextual Reinforcement Learning" by Jung-Hoon Cho, Vindula Jayawardana, Sirui Li and Cathy Wu, 21 November 2024, arXiv:2408.04498 [cs.LG].
The research is funded, in part, by a National Science Foundation CAREER Award, the Kwanjeong Educational Foundation PhD Scholarship Program, and an Amazon Robotics PhD Fellowship.