Artificial intelligence (AI) systems are used in ever more facets of our society, yet we often don't understand how these algorithms work or whether the decisions they make are actually fair. To address this issue, VUB AI expert Dr. Carmen Mazijn conducted research centered on the question, "How can we better understand AI systems to ensure they don't negatively impact our society by making unjust and discriminatory decisions?"

As part of her PhD, Dr. Mazijn carried out interdisciplinary research affiliated with the Data Analytics Laboratory at the Faculty of Social Sciences & Solvay Business School and with the Applied Physics research group at the Faculty of Sciences and Bio-engineering. She focused on decision-making algorithms: AI systems that support human decision-making or even replace humans in the decision-making process entirely. In a selection process, for instance, an AI model might appear to make gender-neutral decisions, while on closer examination the reasoning behind those decisions differs entirely for men and women.
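To make that concrete, consider a deliberately contrived toy model: it accepts men and women at identical rates, so its outcomes look gender-neutral, yet its decision for women hinges on a feature that has nothing to do with suitability. Everything in this sketch (the rule, the features, the numbers) is invented for illustration and is not taken from the research.

```python
import random

random.seed(1)

def model(applicant):
    # Invented, biased logic: men are judged on relevant experience,
    # women on an irrelevant commute-distance feature.
    if applicant["gender"] == "M":
        return applicant["experience"] >= 5
    return applicant["commute_km"] <= 9

# Synthetic applicant pool with features drawn uniformly at random.
pool = [{"gender": random.choice("MF"),
         "experience": random.randint(0, 9),
         "commute_km": random.randint(0, 19)}
        for _ in range(10_000)]

# Both groups are accepted at (almost exactly) the same 50% rate, so the
# model looks fair at the outcome level despite its skewed inner logic.
for g in "MF":
    group = [a for a in pool if a["gender"] == g]
    rate = sum(model(a) for a in group) / len(group)
    print(f"{g}: acceptance rate {rate:.2f}")
```

Outcome statistics alone would pass this model; only inspecting why it decides as it does reveals the problem.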

"An algorithm might appear to make fair decisions on the surface, but not always for the right reasons," Dr. Mazijn observes. "To truly determine if an AI system has a specific bias or if it's making socially acceptable decisions, one must 'crack' the system and the algorithms."

During her PhD, Dr. Mazijn developed a detection technique named LUCID to dissect these AI algorithms and determine whether a system's decision logic is acceptable. The technique also tests whether the system can be deployed effectively in the real world.
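To sketch what such dissection can look like in practice, the example below uses inverse design: instead of training the model, it optimises inputs against a frozen classifier so the model reveals what it treats as a canonical positive case. The toy architecture, feature count, and optimisation settings are all assumptions for illustration; this shows the general idea of input-space probing, not Mazijn's actual implementation of LUCID.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained hiring model: five tabular features in, one
# "hire" logit out. In reality this would be the real, already-trained model.
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the model; we optimise inputs instead

# Start from many random candidates and push them, by gradient ascent on the
# input, toward whatever the model scores highest for the positive outcome.
x = torch.randn(256, 5, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = -model(x).mean()  # maximise the mean positive logit
    loss.backward()
    opt.step()

# The optimised rows are "canonical" positives: what the model implicitly
# treats as an ideal candidate. If one feature encoded gender, a skewed
# distribution here would expose a bias that aggregate statistics such as
# equal acceptance rates can hide.
print(x.detach().mean(dim=0))
```

Comparing such canonical inputs across demographic groups is what turns the probe into a bias test: if the model's "ideal candidates" systematically resemble only one group, its decision logic is suspect even when headline acceptance rates look fair.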

The research also highlighted that AI systems readily interact with one another, and that bias in one or more of those systems can trigger problematic feedback loops. Dr. Mazijn gives an example: "A police department might use AI to determine which streets need extra patrols. With more patrols deployed in those streets, more infractions are detected there. When that data is fed back into the AI system, any existing bias is amplified, creating a self-fulfilling prophecy."
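The amplification is easy to reproduce in a hypothetical simulation. Below, two districts have identical true infraction rates, the historical records start with a small skew, and patrols are concentrated super-linearly on the apparent hot spot; the allocation rule, the exponent, and every number are illustrative assumptions, not results from the dissertation.

```python
TRUE_RATE = 0.5          # identical underlying rate in both districts
ALPHA = 1.5              # >1: patrols concentrate on the apparent hot spot
recorded = [55.0, 45.0]  # slightly skewed historical records

for year in range(1, 9):
    # Allocate patrol hours in proportion to recorded counts, raised to
    # ALPHA, so apparent hot spots receive disproportionate attention.
    weights = [r ** ALPHA for r in recorded]
    total = sum(weights)
    patrols = [100 * w / total for w in weights]  # % of patrol hours
    # Detections scale with patrol presence times the (equal) true rate,
    # and the new records become next year's training data.
    recorded = [p * TRUE_RATE for p in patrols]
    print(f"year {year}: district A receives {patrols[0]:.1f}% of patrols")
```

Under these assumptions, district A's share of patrols climbs from about 57% to over 99% within eight simulated years, even though the two districts never differ in underlying behaviour.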

The core message is clear: we must deal with AI systems thoughtfully and consider their long-term effects. To make her research findings accessible, Dr. Mazijn also formulated policy recommendations explaining how the technical and social insights from her PhD can be put into practice.

Her dissertation, titled "Black Box Revelation: Interdisciplinary Perspectives on Bias in AI," was published on September 7, 2023, and was supervised by Prof. Dr. Vincent Ginis and Prof. Dr. Jan Danckaert.