
This article first appeared in Digital Edge, The Edge Malaysia Weekly on March 27, 2023 - April 2, 2023

A few weeks back, an old friend asked me this over some pints. “Do you think ChatGPT will replace us one day?” Both of us are in the creative industry, an area initially assumed to be technologically resilient. But like truck drivers and self-driving vehicles, it is now our heads that may be on the chopping block.

By now, most readers would be well aware of the disruptive capabilities of ChatGPT. Its creator OpenAI and its counterparts have been making inroads into the generative artificial intelligence (AI) space, mainly for text, images and code. Is the rapid development in this area a cause for concern for the average knowledge worker?

New guns, same battlefield

Despite what pundits might say, generative AI in its current state will not replace creators but serve as a supplementary tool within existing workflows. Article-writing using ChatGPT, for instance, still needs human involvement to craft deliberate prompts and scrutinise the output, which includes fact-checking.

To put things in perspective, asking ChatGPT to replace the writing process is like having a Thermomix or Instant Pot replace my mother’s cooking. It is like believing master craftsmen of traditional Japanese knives will be put out to pasture just because a factory has opened up nearby, producing thousands of knives in a single day.

I agree that ChatGPT will affect large swathes of the content creation industry, but mass-produced, low-quality content has always been primed for disruption. Serious writers have grappled with content mills, where regiments of writers get paid by the cent per word. Similarly, reputable artists have also struggled with copyright issues and low-effort copycats.

The threat of mass-produced content is not new; it has merely shifted to the digital landscape. It is wrong to assume that clients are motivated solely by the lowest price in a race to the bottom. Many will pay more for quality content, but they need a reason to do so. Grammatically correct articles produced by ChatGPT tend to be generic, lack substance and are often factually incorrect. Although the competitive landscape has drastically intensified, it is not a battleground creators are unfamiliar with.

How the sausage is made

The truth is that AI has long been disruptive in niche industries. GitHub Copilot, an AI pair programmer that plugs into Microsoft's Visual Studio Code editor, auto-completes code and was trained on publicly available code hosted on the popular code-hosting platform GitHub. Auto-transcribing services built into Google Meet and Microsoft Teams have also put manual transcribers out of business, even disrupting dedicated auto-transcription services like Otter.ai.

ChatGPT is not unique in this matter. It stands out because of the leap in capabilities over its predecessors and, more importantly, because it has brought AI tools to the mainstream, thanks to its high accessibility and low learning curve. Anybody can sign up for an account and start conversing with ChatGPT immediately. Its responses seem intelligent, giving the illusion of sentience, but it is not sentient.

In overly simplified terms, ChatGPT is a language model that conducts a series of predictions based on given user prompts and the conversational history. Think of it as repeatedly tapping the auto-suggested word on your smartphone when typing a text message, except that the sentence generated actually makes sense.
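The "repeatedly tapping the auto-suggested word" analogy can be made concrete with a toy sketch. This is emphatically not how ChatGPT works under the hood (it uses a neural network over learnt word embeddings, not raw word counts), but it shows the same core idea: generate text by repeatedly picking the statistically most likely next word.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" standing in for the internet-scale text
# a real language model learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word, steps=4):
    """Repeatedly pick the most likely next word, like tapping
    the top suggestion on a phone keyboard over and over."""
    out = [word]
    for _ in range(steps):
        if out[-1] not in followers:
            break
        out.append(followers[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # → "the cat sat on the"
```

The model never "understands" cats or mats; it only knows that "cat" most often followed "the" in its training data. Scale the counts up to billions of parameters and contexts far longer than one word, and you get something closer to ChatGPT's fluency.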

ChatGPT is incapable of “understanding” the semantics behind the prompts it has been given and the words it has generated. To it, words are just “ones” and “zeros” transformed using word embedding techniques and pieced together using statistical likelihoods. It appears intelligent because it excels at prediction, learnt from vast amounts of training data scraped from the internet. It is not an understanding machine but a predictive one.

Therefore, ChatGPT struggles with fact-checking because it does not comprehend what “truth” is (this might change with the integration of additional modules). It is also why conversations with ChatGPT can become weird and even outright disturbing as they drag on, as the accumulated context gradually pushes the model off track.

While I’m still assured of my job security, my opinion might change drastically depending on the state of AI language model development.

Essentially, technological life cycles exist on a sigmoid curve. If we are at the middle or tail end of technological innovation, our circumstances would be like the internet in the 1990s. This infrastructure will give rise to companies competing to develop applications leveraging it while the technology gradually matures and stabilises. Use cases might differ, but the AI equivalent of Napster and AOL will rise and fall, paving the way for even larger corporations in the future.

However, if we are still at the early growth stage of this technology, its potential for disruption will cause heavy whiplash. The companies with the most resources will reign supreme. AI will benefit many, impoverish most and enrich a few.

I have mentioned the idea of integrating fact-checking modules into ChatGPT, but this will open a can of worms. Who gets to decide what is factually correct and what is not? How will the results inform and direct human decisions? The topic of ethical AI has been stuck in the regulatory inbox for the longest time; hopefully, ChatGPT has forced the issue, making it too prominent for authorities globally to ignore.

Adapting to change

So what does this mean for the man on the street and small and medium companies?

For professional knowledge workers, I highly recommend embracing premium digital tools for everyday use. In fact, I would argue that AI development within the productivity space is far more interesting than what ChatGPT has to offer.

As an example, Reclaim.ai and Motion auto-schedule your daily tasks into your calendar based on the task’s priority, estimated duration and deadline. If a task takes longer than usual, or a meeting gets cancelled at the last minute, the algorithm will readjust your calendar accordingly, letting you know what needs to be done at any given time.
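The scheduling logic these tools apply can be illustrated with a toy sketch. This is a hypothetical simplification, not the actual algorithm Reclaim.ai or Motion uses; it simply slots tasks earliest-deadline-first, breaking ties by priority, which captures the spirit of auto-scheduling by priority, duration and deadline.

```python
from datetime import datetime, timedelta

# Hypothetical task list; priority 1 is the most urgent.
tasks = [
    {"name": "Write report", "hours": 2, "priority": 1, "deadline": datetime(2023, 3, 29)},
    {"name": "Review budget", "hours": 1, "priority": 2, "deadline": datetime(2023, 3, 28)},
    {"name": "Email clients", "hours": 1, "priority": 1, "deadline": datetime(2023, 3, 28)},
]

def build_schedule(tasks, start):
    """Slot tasks back to back: earliest deadline first,
    higher priority (lower number) breaks ties."""
    slots, clock = [], start
    for t in sorted(tasks, key=lambda t: (t["deadline"], t["priority"])):
        slots.append((clock, t["name"]))
        clock += timedelta(hours=t["hours"])
    return slots

schedule = build_schedule(tasks, datetime(2023, 3, 27, 9))
for when, name in schedule:
    print(when.strftime("%a %H:%M"), name)
```

If a meeting is cancelled or a task overruns, rebuilding the schedule is just a matter of calling the function again with the updated task list and start time; the real products do this readjustment continuously behind the scenes.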

For avid note-takers, Mem.ai uses natural language processing (NLP) to surface notes similar to the one you are working on, introducing a folder-less and tag-less way of organising thousands of notes. It is also the first platform to offer GPT functionality grounded in your own data, not just the open internet. This means it can offer personalised book recommendations based on your book reviews or come up with unique marketing campaigns based on existing meeting notes.

Companies should also shorten their cycles of tool adoption and abandonment. With the industry's shift from perpetual licences to monthly and yearly subscriptions, it is now easier and, in fact, cheaper to do so. This involves empowering small teams with the agency to choose the software best suited for the job.

For instance, my current department uses Notion for project management and as the single source of truth. Canva replaced Adobe Suite as our design tool of choice, while Miro and Mindmeister became the defaults for brainstorming and post-mortem purposes. We also migrated away from locally stored phone contacts to a proper customer relationship management (CRM) tool.

Just introducing these tools into the workplace is not enough. Change management involves introducing liberal yet comprehensive policies that balance work efficiency, data security and future-proofing, not to mention securing employee buy-in and running training and migration exercises.

Although the initial process may seem tedious, having an agile tool adoption environment pays dividends in the long run. In fact, many of the tools my department adopted do not even have built-in AI functionalities.

That is because well-designed software provides more holistic features and a greater user experience than the status quo. It makes teammates more productive and more comfortable with adopting AI tools in the future. More importantly, it makes working five days a week a more pleasant experience. If Asian countries can fuss over elegant stationery for everyday paperwork, why can’t knowledge workers be fussy over the digital tools we use?

I emphasise this point because many companies I have come across insist on utilising legacy tools. While the industry has already shifted to the cloud for most document handling, some are insistent on using software built in the 1990s. It is understandable if there are compelling reasons to do so, such as backwards compatibility or regulatory requirements, but more often than not, this is not the case. Why force farmers to till the land with hoes when the industry has transitioned to tractors?

Many believe that technology is developing faster than humans can adapt to it, but I disagree. Humans have always excelled at adaptation, the key reason we are successful as a species. The cycles of tool adoption are getting shorter, with ChatGPT reaching 100 million users in a matter of months, faster than any consumer application before it. These 100 million users are not geniuses, but everyday folk.

Knowledge workers today are no longer measured by just the skills they have or the experience they have garnered, but also by their ability to embrace and scrutinise new tools while unlearning, learning and relearning the workflows and technologies at hand. That is how we adapt to the new age of AI-powered digital tools.


Jotham Lim is marketing manager at Innov8tif Solutions, an AI provider specialising in ID verification and transaction authentication
