The Hacker Learns to Trust

Wow, this was quite a saga—it really captured a lot of attention in the AI community. Here's what happened:

  • Back in February, OpenAI announced that they had trained a groundbreaking language model called GPT-2. Uncharacteristically for them, they decided not to release the full model out of concern over what malevolent actors could do with it.
  • On June 6th, Connor Leahy wrote this post announcing that he had replicated the OpenAI result and would be releasing the full model.
  • On June 13th, Connor wrote this followup stating that he had spoken to a bunch of industry folks (including OpenAI) and that he wouldn't be releasing the model after all.

These specific events (and the linked posts) are not that interesting in themselves. What is interesting is the very active conversation surrounding AI safety that they sparked. There's far more to say on this topic than I can get into here, but the amount of attention these events drew over the past week made it clear how seriously at least parts of the AI community are starting to take this issue.