Interdisciplinary Research Funded by $3M DARPA Grant

Image by Pixabay
A team of UMass Lowell researchers is looking into using artificial intelligence to make difficult decisions.

03/02/2023
By Brooke Coupal

Imagine that you are a doctor managing the emergency room of a large hospital. You suddenly get a call that there has been a mass shooting at a concert a few miles away. In 20 minutes, you will be responsible for triaging over 200 patients with a range of injuries. You barely have enough staff or resources, and the hospital policies are not designed for a situation this dire.

"When people respond to emergencies, many decisions they face are quite predictable. They're trained on them, and there's policy," says Neil Shortland, associate professor in the School of Criminology and Justice Studies. "But every now and then, they get stuck with a really tough decision that they've never trained for and never experienced, and they don't have any guidance as to what the right thing to do is. Although these decisions are rare, they occur in the most extreme situations with the highest stakes."

Shortland and an interdisciplinary team of UMass Lowell researchers are looking into using artificial intelligence (AI) to make those difficult decisions. The team consists of Computer Science Asst. Prof. Ruizhe Ma, Electrical and Computer Engineering Asst. Prof. Paul Robinette, Philosophy Chair and Assoc. Prof. Nicholas Evans and Holly Yanco, professor and chair of the Miner School of Computer & Information Sciences.

The researchers are working in partnership with Soar Technology, a Michigan-based business that builds intelligent systems for defense, government and commercial applications. DARPA is funding the project with a $3 million grant, with $1.2 million going to UML and $1.8 million to Soar Technology.

Modeling Human Behavior

The goal of the research is to find the best human attributes that AI can mirror when making difficult decisions in extreme environments, like a battlefield.

"We're harnessing the essence of a person by modeling them as their best self," says Shortland, the project's principal investigator.

Human judgment is fallible. Even if someone is highly qualified to make a decision, their judgment can be skewed by biases, hunger, tiredness, stress and other factors, Shortland says.

Image by K. Webster
Assoc. Prof. Neil Shortland is the project's principal investigator.

"AI eliminates those issues," he says. "It can be the best version of a person each time."

AI can also multiply the number of decision-makers in situations like mass shootings: instead of a single doctor assessing victims, dozens of robots programmed with AI that models the doctor's decision-making process could be deployed to evaluate them.

To study the best human attributes for different decision-making scenarios, the researchers will expose people to emergency situations using a computer research tool developed by Shortland called the Least-worst Uncertain Choice Inventory For Emergency Responses (LUCIFER). They will then measure how a person鈥檚 psychological traits and values impact their decisions.

"When we identify the key decision-maker attributes, we will be able to, to some extent, quantify a decision process and develop AI decision systems tailored to specific needs and environments," Ma says.

One scenario the research team is focusing on is patient triage. Using LUCIFER, test subjects will be presented with visuals of patients with various injuries and pulses before determining whether each patient is OK, will eventually need medical assistance, needs help right away or is deceased.

"We will examine how different traits impact people's willingness to give certain tags," Shortland says.

The researchers are also developing a 3D simulation that immerses test subjects in triage scenarios.

"The triage micro-world will allow us to evaluate the progress of the overall project," says Robinette, who is designing the 3D simulation with his students.

"It will help us see if what we're finding in our LUCIFER studies transitions into a more real-world environment," Shortland adds.

For the research project, the team will be utilizing on-campus resources, like the Misinformation Influence Neuroscience and Decision-making (MIND) Lab and the New England Robotics Validation and Experimentation (NERVE) Center, while tapping the researchers' range of skills and expertise.

"Interdisciplinary teams are required to push research out of the lab and toward the real world, where it can save lives," Robinette says. "I'm looking forward to the great things we can all do together."