Are we in an AI overhang?

Following up on my recent GPT-3 posts: this article is fascinating, and fundamentally quite practical:

"I am worried we're in an overhang right now. I think we right now have the ability to build an orders-of-magnitude more powerful system than we already have, and I think GPT-3 is the trigger for 100x larger projects at Google, Facebook and the like, with timelines measured in months."

The article centers on a thought experiment: "What if, by scaling up existing GPT-3-style NLP by a factor of 1000, we could achieve human-level performance on a wide range of tasks?"
