MIT researchers have introduced an efficient reinforcement learning algorithm that enhances AI's decision-making in complex scenarios, such as city traffic control.
By strategically selecting optimal tasks for training, the algorithm achieves significantly improved performance with far less data, offering up to a 50x boost in efficiency. This method not only saves time and resources but also paves the way for more effective AI applications in real-world settings.
AI Decision-Making
Across fields like robotics, medicine, and political science, researchers are working to train AI systems to make meaningful and impactful decisions. For instance, an AI system designed to manage traffic in a congested city could help drivers reach their destinations more quickly while enhancing safety and sustainability.
However, teaching AI to make effective decisions is a complex challenge.
Challenges in Reinforcement Learning
Reinforcement learning models, the foundation of many AI decision-making systems, often struggle when confronted with even slight changes in the tasks they are trained for. For example, in traffic management, a model might falter when handling intersections with varying speed limits, lane configurations, or traffic patterns.
To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them.
Strategic Task Selection in AI Training
The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city.
By focusing on a smaller number of intersections that contribute the most to the algorithm's overall effectiveness, this method maximizes performance while keeping the training cost low.
Enhancing AI Efficiency With a Simple Algorithm
The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This efficiency gain helps the algorithm learn a better solution faster, ultimately improving the performance of the AI agent.
"We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand," says senior author Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).
She is joined on the paper by lead author Jung-Hoon Cho, a CEE graduate student; Vindula Jayawardana, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); and Sirui Li, an IDSS graduate student. The research will be presented at the Conference on Neural Information Processing Systems.
Balancing Training Approaches
To train an algorithm to control traffic lights at many intersections in a city, an engineer would typically choose between two main approaches. She can train one algorithm for each intersection independently, using only that intersection's data, or train a larger algorithm using data from all intersections and then apply it to each one.
But each approach comes with its share of downsides. Training a separate algorithm for each task (such as a given intersection) is a time-consuming process that requires an enormous amount of data and computation, while training one algorithm for all tasks often leads to subpar performance.
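To make the contrast concrete, here is a minimal sketch of the two standard options; the `train_policy` function and the intersection names are hypothetical stand-ins, not the researchers' actual code:

```python
# Sketch of the two conventional training strategies (all names hypothetical).

def train_policy(data):
    """Stand-in for an expensive RL training run on the given data."""
    return {"trained_on": tuple(data)}

intersections = ["A", "B", "C"]

# Option 1: one policy per intersection, trained on its own data only.
# Strong per-task fit, but training cost scales with the number of tasks.
independent = {x: train_policy(data=[x]) for x in intersections}

# Option 2: one shared policy trained on pooled data from all intersections.
# A single training run, but performance on any one intersection may suffer.
shared = train_policy(data=intersections)
pooled = {x: shared for x in intersections}
```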
Wu and her collaborators sought a sweet spot between these two approaches.
Advantages of Model-Based Transfer Learning
For their method, they choose a subset of tasks and train one algorithm for each task independently. Importantly, they strategically select individual tasks that are most likely to improve the algorithm's overall performance on all tasks.
They leverage a common trick from the reinforcement learning field called zero-shot transfer learning, in which an already trained model is applied to a new task without being further trained. With transfer learning, the model often performs remarkably well on a new, closely related task.
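As a rough illustration of zero-shot transfer, the toy sketch below tunes a fixed-threshold traffic controller on one simulated arrival rate and then evaluates it, unchanged, on a different rate. The entire setup is a hypothetical stand-in, not the paper's environment:

```python
import random

def simulate(switch_threshold, arrival_prob, steps=500, seed=0):
    """Return the average queue length under a fixed-threshold signal policy."""
    rng = random.Random(seed)
    queue, total = 0, 0
    for _ in range(steps):
        queue += rng.random() < arrival_prob  # a car arrives with this probability
        if queue >= switch_threshold:         # green phase clears a few cars
            queue = max(queue - 3, 0)
        total += queue
    return total / steps

def train(arrival_prob):
    """Pick the threshold that minimizes average queue on the source task."""
    return min(range(1, 10), key=lambda th: simulate(th, arrival_prob))

policy = train(arrival_prob=0.4)               # train on the source intersection
transfer = simulate(policy, arrival_prob=0.5)  # evaluate zero-shot on the target
print(policy, transfer)  # no retraining happens between the two calls
```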
"We know it would be ideal to train on all the tasks, but we wondered if we could get away with training on a subset of those tasks, apply the result to all the tasks, and still see a performance increase," Wu says.
MBTL Algorithm: Optimizing Task Selection
To identify which tasks they should select to maximize expected performance, the researchers developed an algorithm called Model-Based Transfer Learning (MBTL).
The MBTL algorithm has two pieces. First, it models how well each algorithm would perform if it were trained independently on one task. Second, it models how much that performance would degrade if the trained model were transferred to each other task, a concept known as generalization performance.
Explicitly modeling generalization performance allows MBTL to estimate the value of training on a new task.
MBTL does this sequentially, choosing the task which leads to the highest performance gain first, then selecting additional tasks that provide the biggest subsequent marginal improvements to overall performance.
Since MBTL only focuses on the most promising tasks, it can dramatically improve the efficiency of the training process.
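A minimal sketch of that greedy selection loop appears below. The estimates `train_perf` and `gen_loss` stand in for MBTL's models of training performance and generalization degradation; the actual method estimates these quantities rather than assuming them, so everything here is illustrative:

```python
def mbtl_select(tasks, budget, train_perf, gen_loss):
    """Greedily pick source tasks to train on (sketch of the MBTL idea).

    train_perf[t]  : estimated performance of a model trained on task t
    gen_loss[s][t] : estimated drop when that model is transferred
                     zero-shot from source task s to target task t
    """
    def coverage(selected):
        # Each target task is served by whichever selected source
        # transfers to it best; average that over all tasks.
        return sum(
            max(train_perf[s] - gen_loss[s][t] for s in selected)
            for t in tasks
        ) / len(tasks)

    chosen = []
    for _ in range(budget):
        # Add the candidate task with the largest marginal improvement.
        best = max(
            (t for t in tasks if t not in chosen),
            key=lambda t: coverage(chosen + [t]),
        )
        chosen.append(best)
    return chosen


# Toy example with three hypothetical intersections.
tasks = ["A", "B", "C"]
train_perf = {"A": 1.0, "B": 0.9, "C": 0.8}
gen_loss = {
    "A": {"A": 0.0, "B": 0.1, "C": 0.5},
    "B": {"A": 0.1, "B": 0.0, "C": 0.4},
    "C": {"A": 0.5, "B": 0.4, "C": 0.0},
}
print(mbtl_select(tasks, budget=2, train_perf=train_perf, gen_loss=gen_loss))
# -> ['A', 'C']: A is the strongest single source, and C covers the
#    task that A transfers to worst.
```

In this toy run the loop picks tasks A and C rather than the two strongest individual tasks, because A transfers poorly to C; that kind of coverage reasoning is what lets MBTL skip redundant training runs.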
Implications for Future AI Development
When the researchers tested this technique on simulated tasks, including controlling traffic signals, managing real-time speed advisories, and executing several classic control tasks, it was five to 50 times more efficient than other methods.
This means they could arrive at the same solution by training on far less data. For instance, with a 50x efficiency boost, the MBTL algorithm could train on just two tasks and achieve the same performance as a standard method which uses data from 100 tasks.
"From the perspective of the two main approaches, that means data from the other 98 tasks was not necessary or that training on all 100 tasks is confusing to the algorithm, so the performance ends up worse than ours," Wu says.
With MBTL, adding even a small amount of additional training time could lead to much better performance.
In the future, the researchers plan to design MBTL algorithms that can extend to more complex problems, such as high-dimensional task spaces. They are also interested in applying their approach to real-world problems, especially in next-generation mobility systems.
Reference: "Model-Based Transfer Learning for Contextual Reinforcement Learning" by Jung-Hoon Cho, Vindula Jayawardana, Sirui Li and Cathy Wu, 21 November 2024, arXiv:2408.04498 [cs.LG].
The research is funded, in part, by a National Science Foundation CAREER Award, the Kwanjeong Educational Foundation PhD Scholarship Program, and an Amazon Robotics PhD Fellowship.