Induction, Inductive Biases, and Infusing Knowledge into Learned Representations

sgfin.github.io

Really fascinating, but if you click through, make sure you're ready to engage your brain. My favorite part is encapsulated by two quotes:

In his 1980 report The Need for Biases in Learning Generalizations, Tom M. Mitchell argues that inductive biases constitute the heart of generalization and indeed a key basis for learning itself.
A key challenge of machine learning, therefore, is to design systems whose inductive biases align with the structure of the problem at hand.

Essentially: since all ML reasons by induction, removing bias is not a desirable goal for the field (induction is just the application of learned biases to new contexts). Rather, the goal of ML is to design systems with appropriate (useful, desirable) biases.

While the term "bias" here is being used in a slightly different way than when we talk about "algorithmic bias", I thought it was a really interesting point. What we actually want is not unbiased algorithms; it's algorithms that are biased in desirable ways.
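To make that concrete, here's a minimal sketch (my own toy example, not from the linked post, using NumPy and scikit-learn) of two models whose inductive biases treat the same data differently: a linear model assumes the relationship is globally linear, while k-nearest neighbors assumes that nearby inputs have similar outputs. On data that really is linear, the model whose bias matches the structure extrapolates far better.

```python
# Toy illustration: an inductive bias that matches the data's structure
# (here, linearity) pays off when generalizing beyond the training range.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Training data with a simple linear structure plus a little noise.
X_train = rng.uniform(0, 1, size=(50, 1))
y_train = 3.0 * X_train.ravel() + rng.normal(scale=0.1, size=50)

# Test data outside the training range (extrapolation).
X_test = np.linspace(1.5, 2.0, 20).reshape(-1, 1)
y_test = 3.0 * X_test.ravel()

# Linear regression's bias ("the relationship is globally linear") matches the data.
linear = LinearRegression().fit(X_train, y_train)

# k-NN's bias ("nearby points have similar targets") can't extrapolate past the
# training range; it just repeats the values of the nearest training points.
knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)

print("linear model extrapolation error:", np.abs(linear.predict(X_test) - y_test).mean())
print("k-NN extrapolation error:        ", np.abs(knn.predict(X_test) - y_test).mean())
```

Neither model is "unbiased"; they just encode different assumptions, and the one whose assumptions align with the problem generalizes better.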
