Somesh Jha, professor of computer science at the University of Wisconsin-Madison, spoke to Notre Dame students Friday morning on the pervasive dangers of adversarial machine learning and artificial intelligence (AI).
His talk was the first of eight installments in the Mendoza College of Business “Ten Years Hence” speaker series.
According to James S. O’Rourke, teaching professor of management and organization at Notre Dame, the series seeks to “engage in some structured speculation about the future, specifically the decade ahead and the world we’re likely to inhabit ten years from now.” This year’s theme is “Artificial Intelligence: Promise and Peril.”
Jha began the lecture by introducing the concept of an adversarial mindset.
“Machine learning is used in settings where there might be adversaries trying to manipulate it,” he said.
Such failures can cause serious harm, whether they stem from honest mistakes or from deliberate manipulation by a malicious adversary.
“There were 17 fatalities [and] 736 crashes due to the Tesla Autopilot … Amazon had an AI tool for sorting out resumes, and it showed severe bias against women,” Jha said.
While countries have been introducing AI regulations that mandate practices such as training models on high-quality, diverse datasets, Jha said these rules are largely insufficient.
“There is a gap between the laws and what can be done [with AI],” he said.
Jha explained that policymakers’ age, generational differences and lack of technical expertise inhibit the responsive legislation needed to keep pace with rapid developments in AI.
“I don’t know what the solution is other than education and involving more technical actors,” he said.
Jha thus underscored the importance of a security and privacy mindset in the field of machine learning.
“The basic premise of the talk is that you need the security mindset to think about these machine learning systems,” he said. “It’s not enough to just look at accuracy.”
The absence of a security mindset has also been evident among leading experts in the field, according to Jha.
“If these [experts] can miss some of the security issues, imagine what other people are missing, right?” he said.
However, when models are trained to account for privacy, Jha said, there “[is] a big accuracy drop.” With differentially private models, which seek to protect the identities of individuals in the training data, privacy comes at the cost of utility.
Moreover, recent developments in generative AI have complicated the landscape even further, handing attackers another tool. Jha said spear-phishing, deepfakes and the propagation of fake news have all worsened as a result.
“Large language models have completely changed the game,” he said. “It has basically given the advantage to attackers rather than the defenders … it has put a wrinkle in what we think attackers can do.”
Jha concluded his talk by acknowledging that little can currently be done, while expressing hope that technology will eventually reach the point of insulating society against trained attackers.