NOTE: Article in Progress
One of the goals of machine learning algorithms is to detect patterns in the available data and then predict similar patterns in new data.
Human biases and prejudices are well known. Laws and regulations have been created around the world to tackle such bias; anti-discrimination laws ensure fair treatment of every individual. But what happens if our biases and prejudices also appear in the technologies that we create? The impacts of such prediction technologies on society are numerous.
Disadvantaged groups often bear the brunt of these impacts.
The current generation of algorithms is trained to build models from the data fed to them, and the results may not help: they may simply confirm our human biases and prejudices rather than help build technology that serves a diverse community.
Data diversity: is there enough data for each group? Is the overall data quantity sufficient?
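The per-group data check above can be sketched in a few lines. This is a minimal illustration, not the article's own method; the records, the "group" field, and the minimum-sample threshold are all assumptions made up for the example.

```python
from collections import Counter

# Hypothetical records: each carries a demographic "group" label (assumed schema).
records = [
    {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0},
    {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0},
]

MIN_SAMPLES_PER_GROUP = 2  # illustrative threshold, not a recommended value

# Count how many samples each group contributes.
counts = Counter(r["group"] for r in records)

# Flag groups with too little data to support reliable patterns.
underrepresented = [g for g, n in counts.items() if n < MIN_SAMPLES_PER_GROUP]

print(dict(counts))       # samples per group
print(underrepresented)   # groups below the threshold
```

A model trained on this toy dataset would see three times as many examples from group A as from group B, so any pattern it learns is dominated by A.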
Human beings have evolved over the years; the assumption that our past can predict our future will not help the progress of humanity.
Sample data used for analysis must cover a wide range of groups and scenarios.
Correlation does not mean causation.
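The point above can be demonstrated with a small simulation: a hidden confounder drives two variables that have no causal link to each other, yet they come out strongly correlated. All names and numbers here are illustrative assumptions.

```python
import random

random.seed(0)

# A hidden confounder z drives both x and y; x does NOT cause y.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

# Strongly positive, even though neither variable causes the other.
print(pearson(x, y))
```

A learner given only x and y would happily "detect" this relationship, which is exactly why a correlation found in the data is not evidence of a causal mechanism.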
'Discrimination-aware' data mining: principles and steps to take to ensure fairness, especially in matters of race, nationality, sex, gender identity, etc.
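One concrete check used in the discrimination-aware data mining literature is the disparate-impact ratio: compare positive-outcome rates across groups, with a ratio below 0.8 (the "four-fifths rule" from US employment guidelines) commonly treated as a warning sign. The sketch below assumes made-up predictions and group names; it is an illustration of the check, not the article's prescribed procedure.

```python
# Hypothetical model outputs: (group label, predicted positive outcome).
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(group):
    """Fraction of members of `group` receiving a positive prediction."""
    outcomes = [y for g, y in predictions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate("group_a")
rate_b = positive_rate("group_b")

# Disparate-impact ratio: disadvantaged rate over advantaged rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(ratio)  # a value below 0.8 suggests possible disparate impact
```

Here group_a is approved three times as often as group_b, so the ratio falls well under the 0.8 threshold and the model would warrant a closer fairness review.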