(Harvard Data Science Review)
Machine learning techniques are increasingly used throughout society to predict individuals' life outcomes. However, research published in the Proceedings of the National Academy of Sciences raises questions about the accuracy of these predictions. Led by researchers at Princeton University, this mass collaboration involved 160 teams of data and social scientists building statistical and machine learning models to predict six life outcomes for children, parents, and families. None of the teams produced very accurate predictions, despite using advanced techniques and having access to a rich dataset.
This interview compares "black box" methods (where the inputs and outputs of an AI model cannot be understood or explained by a human) with explainable ML and with traditional four-variable regression. The findings? There was hardly any difference in these methods' effectiveness at predicting individual life outcomes. While black box AI has been shown to work well for technologies like computer vision, here it performed poorly at predicting individual human behaviour and metrics of social success such as high-school graduation or GPA. The challenge marks an important contribution to the ongoing debate about the ability of AI, explainable or not, to make meaningful and ethical decisions about individual livelihoods and futures. - Faun Rice
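The comparison described above can be sketched in miniature: fit a simple regression on a handful of predictors and a black-box model on many, then score both on held-out data. This is an illustrative sketch only, using synthetic data and scikit-learn models chosen for the example; the variable names, data, and model choices are assumptions, not those of the actual study.

```python
# Illustrative sketch: a four-variable linear regression versus a
# "black box" gradient-boosted model, scored on held-out R^2.
# All data here is synthetic; nothing is drawn from the real challenge.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
# Twenty candidate predictors; only the first four carry (weak) signal.
X = rng.normal(size=(n, 20))
# Outcome driven weakly by four variables plus heavy noise, mimicking
# the low predictability of individual life outcomes.
y = X[:, :4] @ np.array([0.3, 0.2, 0.1, 0.1]) + rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Traditional" baseline: linear regression on four variables only.
simple = LinearRegression().fit(X_tr[:, :4], y_tr)
# Black-box model with access to all predictors.
blackbox = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

r2_simple = r2_score(y_te, simple.predict(X_te[:, :4]))
r2_blackbox = r2_score(y_te, blackbox.predict(X_te))
print("4-variable regression R^2:", round(r2_simple, 3))
print("gradient boosting R^2:   ", round(r2_blackbox, 3))
```

With noisy outcomes like these, both scores come out low and close together, which is the shape of the study's headline result: added model complexity buys little predictive accuracy.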