One of the concerns that this article lays out is that new restrictions on the use of personal data will limit the size of the datasets used to train effective AI algorithms. An additional concern, though, is what happens if "opting out" correlates with other demographic or social factors, analogous to selection bias in opinion polling. The result would be algorithms that are more biased, not less.
We think one possible solution (among others) would be to develop new differential privacy techniques that add calibrated noise to the dataset, making individuals harder--ideally impossible--to identify without meaningfully changing the statistical properties of the dataset itself. Most current anonymization techniques can be reversed with only a few auxiliary data points, so the field will need to advance to satisfy both sides.
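To make the idea concrete, here is a minimal sketch of the classic Laplace mechanism, one standard differential privacy technique for the kind of noise addition described above. The function and variable names are our own illustrative choices, not from any particular library, and a production system would need far more care (privacy budgeting, floating-point hardening, etc.):

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse-CDF sampling."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Return a count with Laplace noise calibrated to the query's sensitivity.

    Adding or removing a single person changes the true count by at most 1
    (sensitivity = 1), so noise with scale 1/epsilon gives this query
    epsilon-differential privacy. Smaller epsilon means more noise and
    stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)
```

The key property is that the noisy count barely depends on any one individual, so an analyst can still learn the aggregate statistic while no single opt-in or opt-out decision is exposed:

```python
ages = [23, 35, 45, 52, 61, 29]
noisy = private_count(ages, lambda a: a > 40, epsilon=1.0)
# noisy is close to the true count of 3, but randomized on each run
```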