March 20, 2023
KUALA LUMPUR – One of the publications I write for features occasional contributors who submit opinion articles. Last week, my editor told me, “I don’t think this week’s article was written by the usual guy. I think it was written by ChatGPT”.
For those who have avoided the news in recent months, ChatGPT is a human-friendly interface to an artificial-intelligence (AI) backend. I wrote about it last year, describing GPT-3 as a neural network machine-learning model, trained on Internet data to generate any type of text (“The intelligence of AI is limited by its creators, us dumb humans”, The Star, May 15, 2022).
Which Internet data? GPT-3 was trained on some 45 terabytes of text data, including about 410 billion tokens of text scraped from the web, 67 billion tokens from books, and three billion tokens from Wikipedia.
The ability to easily have a conversation with an AI is what has captured the public’s imagination. The fact that it’s helping people, specifically white-collar workers, with their work is what is revolutionising the workplace.
For example, if you ask it to “write me a paragraph explaining how ChatGPT helps writers”, it will give you this answer as a start: “The popularity of ChatGPT has been stunning. Since its inception, there have been millions of accounts. This is frequently compared with services like Twitter and Netflix, although neither of them combine the productivity impact and ease of accessibility of ChatGPT.”
Putting aside the AI’s massive ego, you could argue that someone like me is on the verge of losing my job, as all I do is write paragraph after paragraph about nothing in particular.
In fact, it is already making an impact on my day-to-day work. For example, I used to help people proofread their articles and emails; now they use ChatGPT instead, and it can deliver better results than I do, in less time.
The problem with popularity is, of course, that at some point, there will be pushback. People now say things like “That piece was so badly written that ChatGPT could have done better”.
So how can something like this be at once a poster child for AI and a productivity hack, and also the latest form of shade being thrown?
I think it’s because we still view creative writing as a completely human endeavour, especially when so much of our other work has been taken over by machines.
Let me compare this to doing calculations in your head. In the past, it was somewhat laborious, but with the invention of tools, from the abacus all the way to the calculator, it has become much easier.
It feels like the same thing is happening now with language and writing. I think it started with spell-checkers, to the point that spelling errors in documents now are considered unforgivable. (Somewhat embarrassingly, an email I sent earlier this week had not one but two spelling errors in it. At this point I feel compelled to point out, “to err is human!”)
Nevertheless, early spell-checkers didn’t necessarily understand context; they wouldn’t know, for example, that “Your welcome” should be “You’re welcome”. Then came grammar-checkers, which would still miss the mistakes in a mangled sentence like, “The company should big improve grammar-check”.
Despite their shortcomings, each of these steps has enhanced the work of writing rather than devalued it. Instead of spending time hunting for spelling errors and grammatical mistakes, you can actually try to improve the content.
This incarnation of automation has taken the next step by generating text from scratch, text that makes so much sense it’s difficult to tell whether a computer or a human wrote it.
This is the essence of the Turing Test, which a machine passes if it can hold a conversation with a human without the human being able to tell whether a human or a computer is on the other end. The Lovelace Test is a similar assessment, but of creativity: a machine passes if it can create an original piece of work that its programmers cannot explain, in that they cannot determine how the original code led to the new output.
Max Woolf, a data scientist at news and entertainment portal BuzzFeed, claimed in December 2022 that ChatGPT had passed the Turing Test. Whether it passes the Lovelace Test is less clear since, unfortunately, most modern AI models are designed in such a way that they are inherently opaque.
If ChatGPT can almost pass as a human being, how can my editor still suspect when an article is AI-generated? Well, in this particular case, it was because the writing was too good, it seems. But when it comes to creativity, it isn’t really about stringing words together in an interesting way; it’s about whether you have anything interesting to say.
As a matter of curiosity, I ran the prompt “write me a few paragraphs about how ChatGPT is both the death knell and saviour of creativity in writing” through ChatGPT. It responded with many of the ideas I present in this article, from the risk that it will homogenise writing, to how AI helps writers focus on content rather than form. It also said it can help with brainstorming ideas to kickstart the creative process.
But in an effort to be original, let me write something here that ChatGPT did not suggest: that AI in writing will help us find original ideas by reminding us of the thinking we have already presented.
If the Internet captures the sum of human knowledge to date, then ChatGPT represents a map of what we know as humankind. For me, the interesting things to explore are the bits where it’s still dark, those areas that old maps used to mark with “here be dragons”.
Ultimately, human endeavour should be, in a large part, about exploring the unknown and reaching out for new experiences. Biologically speaking, the ultimate objective is to create new life itself – but we also have a duty to add to human culture, to help us get the most out of the life we have, ideally by presenting novel experiences.
Even as I’m typing this, it has just been announced that GPT-4, which is even more powerful and has more features, has been released, and similar generative AI is being incorporated into products like Google Workspace, enabling users to draft complete documents and emails with the prompt of a few words.
This means once this article you’re reading gets published, it’ll probably be digested by the next AI model, and perhaps appear in some future output a few years from now. But until then, I will take some heart that this essay, at least, was written by me, myself, and I.
In his fortnightly column, Contradictheory, mathematician-turned-scriptwriter Dzof Azmi explores the theory that logic is the antithesis of emotion but people need both to make sense of life’s vagaries and contradictions. Write to Dzof at email@example.com. The views expressed here are entirely the writer’s own.