
ChatGPT - Revolution or Regression?

The blurred line between man and machine has often been the heart of great science fiction. And to evoke a cliché, truth is often stranger than fiction. Launched in November last year, ChatGPT has already amassed 20 million users, and developer OpenAI's value has soared to $29 billion. While we haven't seen anything as dramatic as HAL's breakdown in 2001: A Space Odyssey, ChatGPT's integration with Bing has already been limited to five chats per session after it repeatedly told a user it loved him and expressed a desire to steal nuclear secrets. And although the model has no conscious understanding of its own output, it has been accused of possessing a “woke” agenda for refusing to use a racial slur even to save the world.

When I tried to use the chatbot earlier this week, I was greeted with an unavailability message and an explanation of ChatGPT's status in the style of Shakespeare. While execution can be shaky, its ability to produce succinct copy in a matter of seconds is both impressive and disturbing. A 1,000-word essay on any academic subject can be concocted in moments, yet the model also stumbles on basic maths problems and makes glaring factual errors. If this is the next technological revolution, what does it mean?


What Separates ChatGPT?


While previous iterations of chatbots were mostly used for customer service, OpenAI's large language model draws on a gigantic database of text to formulate original copy. Every time the letter “t” is typed into a phone with predictive text, a language model draws upon a huge dataset of content and calculates that the letters “th” are the most likely to follow. The same principle is applied to predict the most likely next word in a sentence, and ChatGPT's algorithm then uses a neural network, loosely inspired by the human brain, to construct full sentences and paragraph structures.
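The prediction step described above can be sketched in a few lines of Python. This is only a toy illustration with a made-up corpus, counting which character most often follows another; real language models apply the same frequency-driven idea over word tokens with vastly larger datasets and neural networks rather than raw counts:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only
corpus = "the theory that the thing thrives there"

# Count which character follows each character in the corpus
follow_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    follow_counts[current][following] += 1

def most_likely_next(char):
    """Return the character most frequently seen after `char`."""
    return follow_counts[char].most_common(1)[0][0]

print(most_likely_next("t"))  # 'h' — "th" is by far the most common pair
```

A phone's predictive text does essentially this at the word level, which is why typing "t" so readily suggests words beginning "th".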

ChatGPT's sophistication also comes from reinforcement learning from human feedback, where labellers within the company have ranked its most convincing outputs and fed that information back to the model to improve its future content. This technology could potentially spell the death of human copywriting and journalism, and completely alter our relationship with information consumption.
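The ranking idea can be made concrete with a minimal sketch. In reinforcement learning from human feedback, a labeller's preference between two outputs is commonly turned into a training signal via a pairwise (Bradley-Terry style) loss that rewards the model for scoring the preferred answer higher; the function below is a simplified illustration of that loss, not OpenAI's actual implementation:

```python
import math

def pairwise_loss(score_preferred, score_rejected):
    """Pairwise preference loss: -log(sigmoid(preferred - rejected)).

    Small when the model already scores the labeller-preferred
    output higher, large when it gets the ranking backwards.
    """
    margin = score_preferred - score_rejected
    return -math.log(1 / (1 + math.exp(-margin)))

# Correct ranking is penalised far less than an inverted one
print(pairwise_loss(2.0, 0.0) < pairwise_loss(0.0, 2.0))  # True
```

Minimising this loss across many human-ranked pairs is what nudges the model towards the outputs labellers found most convincing.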


Search engines may never be used in the same way again, and Google's hasty reveal of its competitor “Bard” earlier this month demonstrated that other large language models are susceptible to basic mistakes too (Bard stated that the James Webb Space Telescope took the first image of a planet outside our solar system, even though another telescope achieved this some 17 years earlier).


Gift Or Curse?


How much of the hype will translate into real-world application is difficult to tell; however, large language models look like they're here to stay. Information consumption could become almost instantaneous, but as the rise of the internet illustrates, this doesn't correlate with an improvement in critical thought. Information has never been as accessible as it is today, yet misinformation, conspiracy theories and distrust of experts are all on the rise. Deciphering truth from garbage online is more difficult than ever, and if social media companies have struggled with all-encompassing content moderation policies, will OpenAI fare any better?


ChatGPT is built to dismiss contentious questioning; however, users are already trying to “jailbreak” the software into producing racist and offensive material. No man-made system is free of political bias, and the chatbot has already been accused of discriminating against minorities and the LGBT community by the left, and of bias against conservatives by the right. Personalised chatbots have been hinted at, but how can these be regulated? And with large language models now being widely developed, how long until we see another system that will happily create inflammatory content?


ChatGPT has already faced bans in some American schools, amid fears that such a powerful writing tool will dampen creativity and facilitate cheating. The software may be able to produce an analytical summary of how we can fight climate change, but will it really help young people nurture the critical thinking skills needed to solve the same problem?


Tools are already being developed to detect AI-generated content and prevent plagiarism, and even though ChatGPT's output is impressive, there is still at times something eerily inhuman about its tone. Long-form answers tend to resemble every high school essay ever written, and while that's fine for a quick answer, ChatGPT shouldn't be relied upon for nuance. Literary magazines have been inundated with unpublishable AI-generated fiction in the wake of ChatGPT's launch, and its poor quality raises the question - what's the point of an abundance of copy if it's a bit crap?


As with all emerging technologies, we have to ask what value the tech brings and whether it enhances or dampens our experience. With generative AI in its infancy, there's no clear roadmap for its implementation, and policymakers are still speculating on the best approach to governance, with no comprehensive legislation currently in place. OpenAI's creators have openly called for government involvement in regulating their products, and large language models will need a period of refinement before they are truly incorporated into everyday use. In the meantime, we need the input of businesses, governments, and the public to work out how to manage this powerful yet dangerous tool.



References


  • Sparkes, M. (2023) Sci-fi magazine overwhelmed by hundreds of AI-generated stories. Available at: https://www.newscientist.com/article/2360672-sci-fi-magazine-overwhelmed-by-hundreds-of-ai-generated-stories/.

  • Simons, J. (2023) The Creator of ChatGPT Thinks AI Should Be Regulated. Available at: https://time.com/6252404/mira-murati-chatgpt-openai-interview/.

  • Greenaway, A. (2023) LinkedIn post on AI and the future of tech. Available at: https://www.linkedin.com/posts/andy-greenaway_ai-future-tech-activity-7031069851699871744-Enxj/?utm_source=share.

  • Bird, A. (2022) Privacy, Cyber & Data Strategy Advisory: AI Regulation in the U.S.: What’s Coming, and What Companies Need to Do in 2023. Available at: https://www.alston.com/en/insights/publications/2022/12/ai-regulation-in-the-us.

  • Mihalcik, C. (2023) Google ChatGPT Rival Bard Flubs Fact About NASA’s Webb Space Telescope. Available at: https://www.cnet.com/science/space/googles-chatgpt-rival-bard-called-out-for-nasa-webb-space-telescope-error/.



