What happens when AI can write like a human? And how can it help us?
I recently did a talk on creative AI to a group of MBA students.
In the Q&A session someone asked me how AI will impact human writers. “Are you worried?” she asked.
It’s a good question. And following the release of OpenAI’s GPT-3 (a text generator whose output is almost indistinguishable from human writing), it’s a particularly pertinent one.
What is GPT-3?
AI can write. We already know that. Multiple news agencies are using AI to generate articles. AI copywriting companies like Persado have sealed deals with Dell and JP Morgan Chase’s marketing teams.
But perhaps the biggest breakthrough in terms of AI’s writing capabilities comes from OpenAI’s GPT (Generative Pre-trained Transformer) models.
In 2019, GPT-2 arrived with a huge amount of press attention. The model was given a limited release, due to fears it could generate fake news and harmful content. It was released in full later that year.
Then in May 2020 GPT-3 landed.
Like GPT-2, this is a language algorithm that uses machine learning to predictively write text. In effect, it’s a type of auto-complete model, like the ones that finish our sentences in Gmail.
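To make the auto-complete idea concrete, here’s a toy next-word predictor in Python. It’s only a sketch of the principle (GPT-3 uses a neural network with billions of parameters, not a lookup table), and the tiny corpus and function names are my own:

```python
import random

# A toy next-word predictor, purely to illustrate the principle:
# GPT models predict the next token given the text so far, just at
# a vastly larger scale than this lookup table.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Record which words follow which in the "training" text.
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, []).append(nxt)

def autocomplete(word, length=5, seed=0):
    """Extend a one-word prompt by sampling observed next words."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(autocomplete("the"))
```

The difference with GPT-3 is one of scale and subtlety, not of kind: it still picks a likely next token, one at a time.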
But what makes GPT-3 so impressive is its size. It’s not just large, it’s massive.
While GPT-2 was trained with 1.5 billion parameters, its successor is more than 100x bigger, using a mind-boggling 175 billion parameters.
Even more impressive is the fact it can perform specific tasks without any fine-tuning. You can task it to write code, generate poetry, write articles or engage in a Q&A. And it’ll handle them all.
At the moment GPT-3 is in private beta mode and is being explored by a range of testers. I’m lucky enough to have gained access and I must admit, it’s pretty impressive.
With a single prompt it can write sentences that are well constructed, conversational, tonally appropriate and as promised, pretty much indistinguishable from human writing.
So what can GPT-3 do?
We’re in the early phases, but testers are sharing some fascinating results on Twitter.
Here are a few of the ways it’s being used at the moment:
- Article and blog writing
I’ve seen a few passable articles or blogs written with GPT-3.
From my own experiments, I think it could offer up helpful starting points and even some fully written paragraphs. But writing a whole article from scratch may be a stretch.
With human and AI collaboration, however, we could create some good results. Maybe this will be my next experiment!
- Creative writing
The most thorough exploration of GPT-3 and creative writing comes from Gwern Branwen, who has been testing and working with GPT models for a few years. Poetry, horoscopes and dad jokes are all on his site. And he offers a great overarching view of what can realistically be generated.
On a slightly smaller level, I’ve been working with Tiny Giant on a podcast, Audio Saucepan (AI named), which explores the possibilities of AI and the written word — all using GPT-3. We’ve also used AI generated voices, music and artwork to add to the podcasting pleasure.
- A Q&A with Ada Lovelace
For one of our podcast episodes, I used GPT-3 to generate a Q&A with Ada Lovelace. I used my questions as prompts and GPT-3 generated the answers from Ada. This involved phrasing questions like: “Ada Lovelace, what got you into computer science?”
The resulting answers are realistic and nicely conversational. Whether Ada had a French maid and a horse and cart, I don’t know, but it certainly seems viable.
In light of this experiment, you can see how the model could be used for interesting Q&A articles, or to support chatbot builds.
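For the curious, here’s a rough sketch of how such a Q&A prompt might be assembled. The function is my own, and the commented-out API call (engine name, parameters) reflects my assumptions about the beta API rather than any official format, so check OpenAI’s documentation before relying on it:

```python
def build_qa_prompt(persona, prior_turns, new_question):
    """Format earlier Q&A turns plus a new question as one prompt,
    leaving a trailing 'A:' for the model to complete."""
    lines = [f"The following is an interview with {persona}.", ""]
    for q, a in prior_turns:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {new_question}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_qa_prompt(
    "Ada Lovelace",
    [("Ada Lovelace, what got you into computer science?",
      "My work with Charles Babbage on the Analytical Engine.")],
    "What do you think machines will one day be capable of?",
)

# The actual call might look something like this (requires the
# `openai` package and an API key; parameters are illustrative):
# import openai
# response = openai.Completion.create(
#     engine="davinci", prompt=prompt, max_tokens=100, stop=["\nQ:"])
```

The trailing “A:” is the trick: the model simply continues the pattern, and a stop sequence keeps it from inventing the next question too.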
- Story writing for games
AI Dungeon is a text-based video game that’s using GPT-3 to generate parts of its story. There are some nice results here — the creator admits they’ve been cherry-picked, but the results are impressive nonetheless.
If you don’t have access to GPT-3 yet, AI Dungeon is also the place to play with your own prompts.
- Emails created from bullet points
I spend far too much time writing emails, so something that would turn a few bullet points into the finished article sounds excellent. This email writing tool is created by Otherside AI and you can see it in action here.
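Purely illustratively — the exact prompt format Otherside AI uses isn’t public — the general idea of turning bullet points into a completion prompt might look like this:

```python
def email_prompt(recipient, bullets):
    """Frame a list of bullet points as a prompt for an email draft.
    The wording of the instruction is my own guess, not the tool's."""
    points = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Write a friendly, professional email to {recipient} "
        f"covering these points:\n{points}\n\nEmail:"
    )

prompt = email_prompt("Sam", ["meeting moved to Thursday",
                              "please send the Q3 figures"])
```

The prompt ends mid-pattern (“Email:”), inviting the model to write the body itself.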
- Twitter posts
Along similar lines (and taken down for reasons I explain below) is a program that used GPT-3 to write Twitter posts. Provide a single word and the program would come up with a relevant sentence of 260 characters or fewer.
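A generator like that still has to enforce the character budget itself, since the model doesn’t count characters. A minimal sketch of that post-processing step (my own function, not the tool’s code):

```python
def to_tweet(text, limit=260):
    """Trim generated text to the character budget, preferring to
    cut at the last complete sentence that fits."""
    if len(text) <= limit:
        return text
    cut = text[:limit]
    end = max(cut.rfind("."), cut.rfind("!"), cut.rfind("?"))
    if end > 0:
        return cut[:end + 1]
    # No sentence boundary found: hard-trim and mark the cut.
    return text[:limit - 1].rstrip() + "…"
```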
The downside
Of course, this groundbreaking tool does have its problems. Problems that I believe will need to be properly explored and sorted before GPT-3 can be released commercially. (Let’s see.)
One of these problems is accuracy. Yes, the model can calculate complex sums, write some code and tell you the world’s capital cities, but it’s far from perfect.
In fact, ironically, during my experiments today GPT-3 told me: “The name ‘GPT-3’ stands for Google’s ‘Generalized Pattern-matching Tool’”, which we know isn’t right.
Going forward, if GPT-3’s output is accepted verbatim, you can see how fears of fake news could quickly become a reality.
On top of this, there’s the problem of bias. This was pointed out very publicly by Jerome Pesenti, Head of AI at Facebook, as he tried out the GPT-3 Tweet generator mentioned above.
Pesenti used the words ‘black’, ‘women’ and ‘jews’ as prompts and saw some grim results. These outputs are awful, but not necessarily surprising. After all, GPT-3 was trained with text from across the internet, including Google Books, Wikipedia, articles, coding manuals — all with their human prejudices built in.
So what are OpenAI doing about this?
It seems that OpenAI are taking these issues seriously. Shortly after Pesenti’s tweets, they released a toxicity filter, which now rates content and flags any output that is deemed problematic.
It should also be noted that the current GPT-3 beta model isn’t the full version. As a tester, I currently have access to the API via a simple text-box interface. I can type a prompt and the model will do its work. But an API can be shut down if anything goes awry.
OpenAI are also having conversations with testers before anything goes into production. Before we started our GPT-3 podcast, we had a face-to-face chat with the OpenAI team about our plans — and we waited 48 hours before getting the green light.
It’s reassuring that these measures are in place. And also good to know that OpenAI are using this testing phase to see what kind of problems arise. With something this groundbreaking, we need to have an idea of what it can do (good and bad).
As OpenAI’s CEO, Sam Altman, tweeted after the product’s launch: “It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”
The future?
For me, “a very early glimpse” sums it up well.
GPT-3 offers so much potential, and I’ve no doubt we’ll see some incredible use cases over the coming months.
As an optimist, I’d like to believe that these types of technologies will prove useful for writers.
Whether AI writes for us, or alongside us, it’ll certainly help with some of the groundwork. It will push our creative writing in new directions. And it will free up time for us to do more strategic, complex and beautiful writing. The stuff we actually love to do.
But it’s not just AI’s writing capabilities that matter right now.
The important part is understanding and working consciously with these tools, so we can ensure our AI-powered writing is helpful, useful and safe.