QUB researcher develops algorithm to tackle bias in artificial intelligence systems

  • A researcher at Queen's University has developed a new system that could help eliminate bias against gender, race, and other attributes when developing AI models.

    The promise of artificial intelligence and machine learning is that it can reproduce and automate tasks that currently demand a great deal of time from trained experts. The technology has already been shown to be more accurate than human experts in areas such as diagnosing cancer from scan images, but AI models are prone to inheriting bias from the vast data sets they're trained on.

    AI is currently being misapplied in fields such as recruitment, where unproven proprietary models are used to screen CVs and even rate automated video interviews. A model that introduces gender or racial bias into the recruitment process could land companies in legal difficulties, and eliminating such bias is a problem that all AI researchers face when developing new models.

    Dr Deepak Padmanabhan from Queen’s University Belfast has now developed a new algorithm that he hopes will combat this fairness issue when analysing massive data sets such as large volumes of CCTV footage. Training an accurate model relies on correct pre-classification of the training data, but with huge volumes of data manual labelling can be impractical.

    Working with colleagues Savitha Abraham and Sowmya Sundaram at the Indian Institute of Technology Madras, Dr Padmanabhan developed a system to tackle discrimination within clustering algorithms. These are algorithms designed to help speed up the classification process by grouping similar data together into clusters so that the user can then approve or reject entire clusters at a time.

    Clustering saves time but can introduce bias along protected characteristics. Dr Padmanabhan explained the problem: "AI techniques for data processing, known as clustering algorithms, are often criticised as being biased in terms of ‘sensitive attributes’ such as race, gender, age, religion and country of origin. It is important that AI techniques are fair while aiding shortlisting decisions, to ensure that they are not discriminatory on such attributes."
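    To make the problem concrete, the short sketch below (toy data and hypothetical column meanings, not code from the research) clusters a set of CV-style records with ordinary k-means and then compares the proportion of a sensitive attribute inside each cluster with the proportion in the whole data set. A large gap between the two is exactly the kind of disparity a fair clustering method aims to reduce.

```python
# Illustrative sketch only: synthetic data, not the researchers' code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy "CV" features (e.g. years of experience, test score) plus a 0/1 sensitive attribute.
n = 200
features = rng.normal(size=(n, 2))
gender = rng.integers(0, 2, size=n)

# Standard, fairness-unaware k-means clustering.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

overall = gender.mean()  # proportion of the attribute in the full data set
for c in range(4):
    in_cluster = gender[labels == c]
    print(f"cluster {c}: size={in_cluster.size:3d}, "
          f"proportion={in_cluster.mean():.2f} vs overall {overall:.2f}")
```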

    Previous research has produced fair clustering techniques that can prevent bias with respect to a single chosen attribute, such as gender, but they could not control for multiple attributes at once. “Our fair clustering algorithm, called FairKM, can be invoked with any number of specified sensitive attributes, leading to a much fairer process," explained Dr Padmanabhan. "In a way, FairKM takes a significant step towards algorithms assuming the role of ensuring fairness in shortlisting, especially in terms of human resources. With a fairer process in place, the selection committees can focus on other core job-related criteria."
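    As a rough illustration of the general idea behind fairness-regularised clustering, the sketch below modifies a k-means-style assignment step so that a point is placed not only by distance to a centroid but also by how far the assignment would push the cluster's mix of sensitive attributes away from the mix in the whole data set. This is a simplified, hypothetical heuristic written for this article; it is not the published FairKM formulation, whose objective and optimisation differ.

```python
# Hypothetical sketch of fairness-regularised clustering over several sensitive
# attributes. NOT the published FairKM algorithm; an illustration of the idea only.
import numpy as np

def fair_kmeans_sketch(X, sensitive, k=4, lam=1.0, n_iter=20, seed=0):
    """X: (n, d) float features; sensitive: (n, s) 0/1 columns, one per sensitive attribute."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    centroids = X[rng.choice(n, size=k, replace=False)].astype(float)
    global_mix = sensitive.mean(axis=0)        # target proportions for each attribute
    labels = np.zeros(n, dtype=int)

    for _ in range(n_iter):
        # Assignment step: distance cost plus a penalty for drifting the cluster's
        # running attribute proportions away from the data-set-wide proportions.
        counts = np.zeros(k)
        sums = np.zeros((k, sensitive.shape[1]))
        for i in rng.permutation(n):
            dists = ((centroids - X[i]) ** 2).sum(axis=1)
            new_mix = (sums + sensitive[i]) / (counts + 1)[:, None]
            penalty = np.abs(new_mix - global_mix).sum(axis=1)
            c = int(np.argmin(dists + lam * penalty))
            labels[i] = c
            counts[c] += 1
            sums[c] += sensitive[i]
        # Update step: ordinary k-means centroid update.
        for c in range(k):
            if counts[c] > 0:
                centroids[c] = X[labels == c].mean(axis=0)
    return labels
```

    In this sketch the `lam` parameter trades cluster compactness against balance across the listed sensitive attributes; setting it to zero recovers plain k-means behaviour. The real FairKM instead builds fairness into a joint clustering objective rather than a per-point greedy penalty, which is part of what lets it handle any number of sensitive attributes in a principled way.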

    Source: Written based on a press release

    About the author

    Brendan is a Sync NI writer with a special interest in the gaming sector, programming, emerging technology, and physics. To connect with Brendan, feel free to send him an email or follow him on Twitter.