'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley


Algorithms don't make decisions. The people who write algorithms make decisions. So it's essential that the industry be diverse, and utterly unsurprising that it's not.

Major tech corporations have launched AI “ethics” boards that not only lack diversity, but sometimes include powerful people with interests that don’t align with the ethics mission. The result is what some see as a systemic failure to take AI ethics concerns seriously, despite widespread evidence that algorithms, facial recognition, machine learning and other automated systems replicate and amplify biases and discriminatory practices.

