Fixing Three Sources of Bias in AI

Reading time: 3 min

With the increased interest in, and use of, AI in technology solutions comes the question of the fairness and ethics of these algorithms.

It may come as a surprise that machines can be biased.

We read with interest a blog post that talks about this very topic.

The first source of AI bias: unintentionally uploading the implicit human biases that pervade our culture.

Next comes the second source of AI bias: poorly selected training data for machine learning, or poorly reasoned rules.

And the third source of AI bias: evil programmers, or corporations, or governments.
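To make the second source concrete, here is a minimal sketch, using entirely hypothetical data, of how a model trained on skewed historical records simply reproduces the bias in those records:

```python
# Toy illustration (hypothetical data): a "model" that learns hiring rates
# from historical records will mirror any bias baked into those records.
from collections import defaultdict

historical = [
    # (group, hired) -- group A was historically favoured over group B
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [hired count, total count]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    # The "learned" score per group is just its historical hire rate
    return {g: h / t for g, (h, t) in counts.items()}

model = train(historical)
print(model)  # {'A': 0.75, 'B': 0.25} -- the historical disparity, unchanged
```

Nothing in the algorithm is malicious; the disparity comes entirely from the data it was given, which is why careful selection and auditing of training data matters.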

How can we practise responsibility in countering AI bias?

dcHR.tech's perspective:

The biases highlighted resonate with us. It is certainly true that we 'unintentionally upload the implicit human biases that pervade our culture.' This tendency needs to be recognised and controlled, particularly in recruitment and selection systems, where the development of the AI is often heavily influenced by the experience, views, and philosophy of the founder(s).


We also can't agree enough that AI should not be treated as a mystical black box, with technology too 'deep' to be decoded or exempt from explanation. Echoing the author, Joanna Bryson, we must 'insist on the right to explanation, on due process. All algorithms that affect people's lives should be subject to audit.'


Read on to learn how we can be more responsible with the use of AI.

[Article migrated from dcHR.tech]