OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless

www.technologyreview.com

(MIT Technology Review)

“Playing with GPT-3 feels like seeing the future,” Arram Sabeti, a San Francisco–based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI. OpenAI first described GPT-3 in a research paper published in May. But last week it began drip-feeding the software to selected people who requested access to a private beta.

GPT-3 is the most powerful language model ever. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of different styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2’s already vast 1.5 billion. And with language models, size really does matter.

Sabeti linked to a blog post where he showed off short stories, songs, press releases, technical manuals, and more that he had used the AI to generate. GPT-3 can also produce pastiches of particular writers. Mario Klingemann, an artist who works with machine learning, shared a short story called “The importance of being on Twitter,” written in the style of Jerome K. Jerome, which starts: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.”

Talking Points:

GPT-3 ("Generative Pre-trained Transformer 3," a language model trained with machine learning) is certainly a great achievement and a huge improvement over previous models. However, it is nowhere near genuine intelligence, or "strong AI," yet. In this article from MIT Technology Review, Will Douglas Heaven comments that as "the result of excellent engineering," GPT-3 is a better model that speaks less robotically than previous versions, but the formula is still familiar. By studying patterns of words and phrases in existing text, the model generates new words by predicting those with the highest probabilities learned from its training material. As for the article's concerns about racist and sexist language, GPT-3 is just a tool that tries to mimic human patterns. Biased sentences are generated not because GPT-3 is inherently racist or sexist, but because those sentiments are present in the human-written text it was trained on. We should watch our own behaviors rather than blame the tool. - Olivia Lin
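The prediction idea described above can be sketched with a toy example. GPT-3 itself is a transformer conditioned on long contexts, not a bigram model, and the corpus and function names below are hypothetical; but the core mechanic is the same: count patterns in training text, then propose the next word with the highest learned probability.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; GPT-3 trained on hundreds of billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word seen in training."""
    followers = bigram_counts[word]
    total = sum(followers.values())
    best, count = followers.most_common(1)[0]
    return best, count / total

word, prob = most_likely_next("the")
print(word, prob)  # "cat" follows "the" in 2 of its 4 occurrences
```

A model like this can only echo its training data, which is why biases in the source text surface directly in the output.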
