$1 Million Prize For AI That Benefits Society
Written by Sue Gee   
Friday, 22 October 2021

Cynthia Rudin, a professor of computer science at Duke University, is the winner of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity for her work applying machine learning techniques to important problems faced by society. 

C Rudin

Professor Rudin is the second recipient of this new annual prize, worth $1 million, which comes from the Association for the Advancement of Artificial Intelligence (AAAI), the international scientific society serving AI researchers, practitioners and educators, and is funded by the online education company Squirrel AI. The prize will be presented at the 2022 AAAI conference. 

The AAAI inaugurated the prize to honor individuals in the field of artificial intelligence whose work has had a transformative impact on society. The first awardee was Professor Regina Barzilay of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for her work developing machine learning models to discover antibiotics and other drugs, and to detect and diagnose breast cancer at early stages.

The citation for the 2022 award to Cynthia Rudin is:

For pioneering scientific work in the area of interpretable and transparent AI systems in real-world deployments, the advocacy for these features in highly sensitive areas such as social justice and medical diagnosis, and serving as a role model for researchers and practitioners.

According to AAAI awards committee chair and past president Yolanda Gil: 

“Only world-renowned recognitions, such as the Nobel Prize and the A.M. Turing Award from the Association of Computing Machinery, carry monetary rewards at the million-dollar level. Professor Rudin's work highlights the importance of transparency for AI systems in high-risk domains.  Her courage in tackling controversial issues calls out the importance of research to address critical challenges in responsible and ethical use of AI."

Rudin's main concern over the past 15 years has been to develop “interpretable” machine learning algorithms that allow humans to see inside AI. Her first applied project was a collaboration with Con Edison, the energy company responsible for powering New York City. Her assignment was to use machine learning to predict which manholes were at risk of exploding due to degrading and overloaded electrical circuitry. But she soon discovered that no matter how many newly published academic bells and whistles she added to her code, it struggled to meaningfully improve performance. 

As Rudin explained:

“We were getting more accuracy from simple classical statistics techniques and a better understanding of the data as we continued to work with it. If we could understand what information the predictive models were using, we could ask the Con Edison engineers for useful feedback that improved our whole process. It was the interpretability in the process that helped improve accuracy in our predictions, not any bigger or fancier machine learning model. That’s what I decided to work on, and it is the foundation upon which my lab is built.”

Rudin subsequently worked with Massachusetts General Hospital where, with her former student Berk Ustun, she designed a simple point-based system that predicts which patients are most at risk of having destructive seizures after a stroke or other brain injury. Later, with her former MIT student Tong Wang and the Cambridge Police Department, she developed a model that helps discover commonalities between crimes to determine whether they might be part of a series committed by the same criminals. That open-source program eventually became the basis of the New York Police Department’s Patternizr algorithm, a powerful piece of code that determines whether a new crime committed in the city is related to past crimes.
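To give a flavor of what a point-based system looks like, here is a minimal sketch in Python. The risk factors, point values and threshold below are invented for illustration and are not the published clinical model; the point is that a clinician can read the whole model at a glance and check each prediction by hand.

```python
# Hypothetical point-based risk score, in the spirit of interpretable
# scoring systems. Every factor and weight here is made up for the example.
RISK_POINTS = {
    "prior_seizure": 3,      # hypothetical: history of seizures
    "brain_lesion": 2,       # hypothetical: lesion visible on imaging
    "subdural_hematoma": 2,  # hypothetical
    "age_over_65": 1,        # hypothetical
}

def risk_score(patient):
    """Sum the points for every risk factor present in the patient record."""
    return sum(points for factor, points in RISK_POINTS.items()
               if patient.get(factor))

def high_risk(patient, threshold=4):
    """Flag patients whose total score reaches the (hypothetical) threshold."""
    return risk_score(patient) >= threshold

patient = {"prior_seizure": True, "age_over_65": True}
print(risk_score(patient))  # 3 + 1 = 4
print(high_risk(patient))   # True
```

Unlike a black-box classifier, a model of this form makes it obvious which factors drove a given prediction, which is exactly the property that let the Con Edison engineers and hospital clinicians give useful feedback.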

Commenting on the award, Jun Yang, chair of the computer science department at Duke said: 

“Cynthia is changing the landscape of how AI is used in societal applications by redirecting efforts away from black box models and toward interpretable models by showing that the conventional wisdom—that black boxes are typically more accurate—is very often false. This makes it harder to justify subjecting individuals (such as defendants) to black-box models in high-stakes situations. The interpretability of Cynthia's models has been crucial in getting them adopted in practice, since they enable human decision-makers, rather than replace them.”

In the video Rudin talks about her approach and states:

I think the advice I would give to upcoming generations [of researchers] is to work on real problems. Because when you work on real problems you get a completely different perspective on what's important than if you just work on mathematical theory.


Last Updated ( Friday, 22 October 2021 )