Was this AI trained on an unbalanced data set? (Only black folks?) Or has it only been used to identify photos of black people? I have so many questions: some technical, some on media sensationalism
It’s probably the opposite. The AI was likely trained on a dataset of mostly white people, and is thus better at distinguishing between white faces.
It’s a well-documented problem in ML, especially for companies based in the US, where it’s simply easier to collect a large number of photos of white people than of people with other skin tones.
It’s not unlike how humans work, either: people are generally better at telling apart two individuals of a race they grew up around, and make more mistakes identifying people of races they’re less familiar with.
The problem is when the police use these tools as an authoritative matching algorithm.
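If you wanted to check for that kind of bias yourself, the usual approach is to measure error rates per demographic group rather than one overall accuracy number. A rough sketch of what that looks like, assuming a hypothetical embedding function `embed()` and labeled image pairs (none of this is a real API):

```python
import numpy as np

# Hypothetical setup: `embed(img)` returns a unit-norm face embedding from some model,
# and `pairs` is a list of (img_a, img_b, same_person, group) tuples.
def false_match_rate_by_group(pairs, embed, threshold=0.6):
    """How often the matcher wrongly calls two *different* people a match, per group."""
    errors, totals = {}, {}
    for img_a, img_b, same_person, group in pairs:
        if same_person:
            continue  # only pairs of different people can produce a false match
        sim = np.dot(embed(img_a), embed(img_b))  # cosine similarity for unit-norm vectors
        totals[group] = totals.get(group, 0) + 1
        if sim >= threshold:  # matcher incorrectly says "same person"
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}
```

A matcher trained mostly on one group will typically show a noticeably higher false match rate for the underrepresented groups, which is exactly the failure mode that matters when police treat a "match" as authoritative.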
Also makes me wonder whether the fact that our standard digital color spaces are poor at representing darker shades contributes as well.
I thought they would have trained it on mugshots. Either way, it should never be used to make direct arrests. I feel like its best use would be something like an anonymous tip line that leads to an investigation.
Using mugshots to train AI without consent feels illegal. Plus, it wouldn’t even make a very good training set, since the AI would only learn to identify faces in perfectly straight, well-lit photos.
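For what it’s worth, the standard mitigation for that brittleness is data augmentation: showing the model rotated, re-lit, and re-framed versions of each training image so it doesn’t only recognize straight-on, ideally lit shots. A minimal sketch using torchvision (assuming an ordinary image training pipeline, not whatever this vendor actually did):

```python
from torchvision import transforms

# Augmentations applied at training time so the model sees imperfect photos too.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                  # tilted heads / cameras
    transforms.ColorJitter(brightness=0.4, contrast=0.4),   # varied lighting conditions
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),    # different framing / distance
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```

It helps with pose and lighting robustness, but it obviously doesn’t fix the consent problem or an unbalanced dataset.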