Our weird behavior during the pandemic is messing with AI models

www.technologyreview.com

Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should.

This... is not at all surprising. The article gives some interesting examples of models producing bad predictions, which are worth reading through and thinking about.

What I think is more interesting, though, is what capabilities we lose when we outsource a decision-making process to ML today. Sure, we might gain predictive power under a normal range of inputs, and we certainly gain granularity along with reductions in decision-making time and cost.

But we also completely lose the ability to think causally. As Judea Pearl has so clearly pointed out in his recent work, modern ML doesn't reason causally; it fits correlations in historical data. And when the world sees a massive change, the only type of reasoning that can continue to make predictions about the future is causal.
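To make that failure mode concrete, here is a minimal, hypothetical sketch (mine, not from the article or from Pearl's work): a regression is trained on a proxy feature that merely correlates with the true cause of the outcome, works fine under normal conditions, and then breaks when a shock decouples the proxy from the cause. All variable names and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal times": demand is driven by foot traffic (the true cause),
# but the model only sees a proxy -- transit ridership -- which
# correlates with foot traffic under normal conditions.
n = 1000
foot_traffic = rng.normal(100, 10, n)           # hidden cause of demand
ridership = foot_traffic + rng.normal(0, 2, n)  # correlated proxy feature
demand = 3.0 * foot_traffic + rng.normal(0, 5, n)

# Fit a purely correlational model: demand ~ ridership.
X = np.column_stack([np.ones(n), ridership])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

# "Pandemic": the proxy decouples from the cause. In this toy shock,
# ridership collapses 80% while demand falls only 10%.
ridership_shock = ridership * 0.2
demand_shock = 3.0 * foot_traffic * 0.9

pred = np.column_stack([np.ones(n), ridership_shock]) @ coef
rmse_normal = np.sqrt(np.mean((X @ coef - demand) ** 2))
rmse_shock = np.sqrt(np.mean((pred - demand_shock) ** 2))
print(f"RMSE before shock: {rmse_normal:.1f}")
print(f"RMSE after shock:  {rmse_shock:.1f}")  # error explodes
```

A model that encoded the causal structure (demand depends on foot traffic, and ridership is only a symptom of it) could adjust its predictions after the shock; the correlational model has no way to.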

As such, the more we bake ML into the core of our supply chains, financial flows, and so on, the more fragile we make them to systemic shocks. This was a new thought for me.
