
A new system can measure the hidden bias in otherwise secret algorithms

Researchers at Carnegie Mellon University have developed a new system for detecting bias in otherwise opaque algorithms. In a paper presented today at the IEEE Symposium on Security and Privacy, the researchers laid out a new method for assessing the impact of an algorithm's various inputs, potentially providing a crucial tool for corporations or governments that want to prove a given algorithm isn't inadvertently discriminatory. "These measures provide a foundation for the design of transparency reports that accompany system decisions," the paper reads, "and for testing tools useful for internal and external oversight."

Called "Quantitative Input Influence," or QII, the system would test a given algorithm through a range of different inputs. Based on that data, the QII system could then effectively estimate which inputs or sets of inputs had the greatest causal effect on a given outcome. In the case of a credit score algorithm, the result might tell you that 80 percent of the variation in your credit score was the result of a specific outstanding bill, providing crucial insight into an otherwise opaque process. The same tools could also be used to test whether an algorithm is biased against a specific class of participants.
