Tech Twitter kept itself busy over the weekend, pumping out layout generators, board meeting itineraries, reams of code, and even Shakespeare sonnets. Who said engineers don’t appreciate the arts?
But the catch is that it wasn’t engineers making these things - at least, not directly. All of this was made possible by GPT-3, OpenAI’s latest artificial intelligence language model. GPT-3 represents a massive breakthrough in AI that has some salivating at the world of possibilities it opens, and others chilled by the jobs this sort of AI could replace.
holy fucking shit

I fed GPT-3 the first half of my Sword Health memo I have on my website...

And it actually generated a few paragraphs of relatively cohesive follow-on... including a section on risk and long-term strategy

WE ARE FUCKIN FUCKED YOOOOOOOOO

— delian (@zebulgar), July 17, 2020
Like, really chilled.
OpenAI, a startup co-founded by Elon Musk and Sam Altman, made the tool available to a limited number of users last week, and it has kept Twitter buzzing ever since. Since OpenAI first published research on GPT-3 in May, its Twitter following has grown by 16,000 - and 5,000 of those new followers arrived in the last five days alone, just after the first viral tweets about GPT-3’s potential started popping up.
I keep seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd collect a thread of them.

— Kaj Sotala (@xuenay), July 15, 2020
While GPT-3 is both remarkable and frightening, discussion has already started popping up about the tool’s limitations. In a particularly entertaining read, Kevin Lacker put GPT-3 through a Turing test and explained that GPT-3 is essentially a very sophisticated text predictor: trained on hundreds of billions of words gathered from across the internet, it guesses what the next logical piece of a sentence or request might be.
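To make the “text predictor” idea concrete, here is a deliberately tiny sketch of the same principle: count which word tends to follow each word in a corpus, then generate text by repeatedly picking the most likely next word. GPT-3 does something vastly more sophisticated (a neural network over billions of parameters rather than raw counts), and the corpus below is made up for illustration - but the core loop of “predict the next word, append it, repeat” is the same.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for this illustration.
corpus = (
    "the model predicts the next word . "
    "the model reads the prompt and predicts the next word ."
).split()

# Bigram counts: next_counts[w] tallies every word seen right after w.
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

def generate(start, length=5):
    """Greedily extend `start` one predicted word at a time."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # continues "the" with its most likely followers
```

The gap between this sketch and GPT-3 is, of course, the entire point: where the bigram counter can only parrot exact word pairs it has seen, GPT-3 conditions on long stretches of preceding text, which is what lets it continue an investment memo with a plausible section on risk and long-term strategy.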
But those limitations haven’t stopped techies across social media from clamoring to get access to the tool - proof that OpenAI’s strategy of letting GPT-3 speak for itself is working. On GitHub, GPT-3 has already racked up more than 100,000 stars from users, with 255 “watching” the page for updates.
GPT-3 is clearly leaps and bounds more sophisticated than its predecessor, GPT-2. But OpenAI was hesitant to release even the full version of GPT-2 back in February of last year, worried that it could be used to cause harm. With a tool this advanced generating equal parts excitement and concern this soon after its limited release, we’ll have to wait and see whether GPT-3 ever gets a full public release. For the time being, though, it seems everyone is eager to see where it pops up next.