Researchers Say Autonomous Robots Can Make Safer Decisions With ‘Rulebooks’ System

Insider Brief

  • Researchers at Iowa State University and ETH Zürich developed a new “rulebooks” framework designed to help autonomous robots make safer and more transparent decisions when rules conflict in real-world situations.
  • Published in IEEE Transactions on Robotics, the study proposes ranking robotic priorities such as human safety, legality and efficiency instead of blending them into a single weighted mathematical score that can produce difficult-to-explain behavior.
  • The researchers said the framework could help autonomous systems justify decisions in complex scenarios involving self-driving vehicles, drones and other AI-powered machines while also supporting regulation, auditing and post-incident analysis.

Researchers at Iowa State University, working with a collaborator at ETH Zürich, have developed a new framework aimed at helping autonomous robots make safer and more transparent decisions when rules conflict in real-world situations.

The study, published in IEEE Transactions on Robotics, proposes a system called “rulebooks” that ranks priorities rather than combining them into a single mathematical score. The researchers note that current autonomous systems often rely on weighted trade-offs between goals such as safety, efficiency and legality, which can produce unpredictable or difficult-to-explain behavior.
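To make the contrast concrete, here is a minimal sketch of the weighted-trade-off approach the researchers criticize. It is not code from the study; the rule names, weights and violation measures are illustrative. The point is that once every objective is folded into a single score, a large enough gain on a low-stakes goal can outweigh a safety violation.

```python
# Hypothetical weighted-sum planner: every objective is folded into one
# number, so a big efficiency gain can "buy off" a safety violation.
WEIGHTS = {"safety": 10.0, "legality": 3.0, "efficiency": 1.0}

def weighted_cost(violations: dict) -> float:
    """Sum each rule's violation measure scaled by its weight."""
    return sum(WEIGHTS[rule] * v for rule, v in violations.items())

# Plan A: shaves a pedestrian's safety margin but is very efficient.
plan_a = {"safety": 0.5, "legality": 0.0, "efficiency": 0.0}
# Plan B: keeps people fully safe but is slow.
plan_b = {"safety": 0.0, "legality": 0.0, "efficiency": 6.0}

# The single score prefers Plan A (cost 5.0 < 6.0), trading safety for
# efficiency -- the hard-to-justify behavior the study describes.
print(min([plan_a, plan_b], key=weighted_cost))
```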

The work addresses a growing challenge in robotics as autonomous systems move into more complex environments where they may face situations without a clearly correct answer. Examples cited in the study include self-driving cars deciding whether to briefly cross a center line to avoid a pedestrian or drones choosing between a risky shortcut and a longer route.

“Robots are increasingly expected to operate without human intervention in situations where some rules may have to be bent,” Iowa State University associate professor of computer science Tichakorn Wongpiromsarn said. “What’s been missing is a principled way to justify these decisions.”

According to Wongpiromsarn and fellow researchers Konstantin Slutsky, assistant professor of mathematics at Iowa State, and Emilio Frazzoli, professor of dynamic systems and control at ETH Zürich, many current robotic control systems treat all objectives as variables that can be balanced mathematically against one another. The researchers argue that this approach becomes problematic when critical priorities such as human safety are reduced to one factor among many rather than treated as overriding principles.

“The problem is that this approach treats all goals as if they can be balanced against each other, even when they shouldn’t be,” Wongpiromsarn noted.

The proposed framework instead organizes robotic priorities into ranked rules. Higher-priority rules, such as avoiding harm to people, take precedence over lower-priority objectives like efficiency or lane positioning. The researchers said this structure is intended to make robotic decisions more understandable, easier to audit and more consistent with how humans reason through difficult trade-offs.
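One simple way to picture this ranking, assuming a strict total order over rules (the rulebooks formalism allows more general orderings), is a lexicographic comparison: plans are judged on the highest-priority rule first, and lower-priority rules only break ties. The sketch below is illustrative, not the authors' implementation; the rule names and violation measures are invented for the example.

```python
# Rules listed from highest to lowest priority; a plan's "badness" is a
# tuple of per-rule violation measures in that order.
RULES = ["avoid_harm_to_people", "obey_traffic_law", "stay_in_lane", "efficiency"]

def violation_vector(violations: dict) -> tuple:
    """Order a plan's violations by rule priority for lexicographic comparison."""
    return tuple(violations.get(rule, 0.0) for rule in RULES)

# Plan A: briefly crosses the center line but keeps the pedestrian safe.
plan_a = {"avoid_harm_to_people": 0.0, "obey_traffic_law": 0.2, "efficiency": 1.0}
# Plan B: stays strictly legal but risks contact with the pedestrian.
plan_b = {"avoid_harm_to_people": 0.5, "obey_traffic_law": 0.0, "efficiency": 0.0}

# Python compares tuples lexicographically, so the safety rule dominates:
# Plan A wins regardless of how large its lower-priority violations are.
print(min([plan_a, plan_b], key=violation_vector))  # -> plan_a
```

Because higher-priority entries are compared first, no amount of efficiency can compensate for harming a person, which is the accountability property the researchers say a single weighted score cannot guarantee.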

The study also suggests the framework could provide flexibility for regulators and manufacturers. Core safety principles could be established through regulation while allowing companies to define lower-level operational priorities as long as they remain consistent with broader safety requirements.

Researchers tested the framework across multiple robotic planning scenarios and reported that the system could generate workable plans in situations where traditional optimization methods struggled. The authors said the approach could also support post-incident analysis by making it easier to evaluate whether a robot’s actions aligned with predefined priorities.

Beyond robotics, the researchers said that the framework may have broader implications for AI systems making decisions in transportation, healthcare, public safety and other high-stakes environments where systems increasingly need to justify their actions in understandable terms.

“The rulebooks concept offers a way to encode societal values, legal norms and organizational policies directly into machine decision-making,” Wongpiromsarn said. “It won’t solve every ethical dilemma facing autonomous systems, but it may help ensure that when machines make hard choices, they do so according to priorities humans can understand and even hold them accountable for.”


Image credit: Iowa State University/Tichakorn Wongpiromsarn
