AI Brings Considerable Drawbacks
As a language model, ChatGPT has a number of limitations and drawbacks that can impact its effectiveness in certain contexts. Some of the main cons of ChatGPT include:
- Lack of Understanding Context: ChatGPT is a machine learning model that relies heavily on statistical patterns in language. While it can generate coherent responses based on the language it has been trained on, it may struggle to understand the broader context of a conversation. This means that ChatGPT may sometimes produce responses that are irrelevant or not useful to the user’s needs.
- Limited Knowledge: While ChatGPT is trained on a vast amount of text data, its knowledge is still limited to the information contained within that data. It may struggle to understand information outside of the specific domains it has been trained on, and may not have access to the latest information or developments in certain fields.
- Lack of Emotional Intelligence: ChatGPT is not able to understand emotions in the same way that humans can. This means that it may sometimes produce responses that are inappropriate or insensitive, particularly in situations where emotions are running high.
- Potential for Bias: Like all machine learning models, ChatGPT is only as unbiased as the data it is trained on. If the training data contains biases or prejudices, these may be reflected in the model’s responses.
- Inability to Learn from Experience: ChatGPT is not able to learn from experience in the same way that humans can. It cannot adapt its responses based on feedback or learn from past interactions, which can limit its ability to provide personalized or context-specific responses.
Overall, while ChatGPT has a number of strengths as a language model, it is important to be aware of its limitations and potential drawbacks when using it in certain contexts.
If you couldn’t tell, everything I just said wasn’t actually written by me but rather by ChatGPT. I know, I know, it’s a tired trope at this point, but the novelty never gets old.
In recent years, AI has advanced exponentially, but what many might not realize is how much AI already affects our daily lives. There’s obviously the common student lifesaver, Grammarly, but we also rely on AI for our social media feeds, enemies in video games, and Google searches. Heck, even the resolution of whatever show or movie you’re watching on Netflix is controlled by AI.
AI is huge nowadays, and companies know it. Snapchat and Google are launching their own AI chatbots, and Microsoft has invested over $10 billion in OpenAI, the company behind ChatGPT. Just two months after its launch, ChatGPT had over 100 million users, making it the fastest-growing consumer internet application in history. AI isn’t going anywhere anytime soon.
Check out Liz Kameen’s article covering the positives of these recent advances in AI. This article, however, will discuss both the limitations of AI and some of the moral implications of generative programs such as ChatGPT and Midjourney.
As the ChatGPT response above outlines, the model has a variety of limitations. First, it cannot fully understand context or human emotion. While ChatGPT can craft coherent responses to a given prompt, it often fails to grasp the full context behind an issue, and it has no genuine understanding of human emotion.
For example, when I asked whether it likes Breaking Bad, it replied that it doesn’t have likes or dislikes and noted that the show is critically acclaimed. As mentioned in Joel’s article, you can have it write a positive or negative review of a certain work of art, but those reviews will almost always be abysmal.
It simply cannot express emotions the way humans can, so every response you get from it sounds robotic. I even asked it to give high praise to a game, but it just strung together buzzwords such as “masterful” and “beautiful.” It can’t discuss how a work of art affected it because it is not sentient like a human being. Even when pushed to approximate human emotion, ChatGPT’s writing read as a pale imitation rather than a genuine expression of passion.
Furthermore, ChatGPT has no way to fact-check the information it gives. Instead, it draws on a vast dataset of text scraped from the internet and assembles responses from the statistical patterns in that data. This means its answers reflect that dataset, and since that dataset is the internet, the sources it draws on aren’t always true or accurate.
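To make that concrete, here is a deliberately tiny sketch of statistical text generation. It is not how ChatGPT actually works internally (the corpus and code are invented purely for illustration), but it shows the core problem: a model that only learns which words tend to follow which will produce “the earth is flat” just as fluently as “the earth is round.”

```python
import random
from collections import defaultdict

# Invented toy corpus: the model has no notion of truth, only of
# which words tend to follow which.
toy_corpus = (
    "the earth is round . the earth is flat . "
    "the earth is round . the moon is bright ."
).split()

# Count which words follow which (a simple bigram model).
following = defaultdict(list)
for current, nxt in zip(toy_corpus, toy_corpus[1:]):
    following[current].append(nxt)

def generate(start, length=4):
    """Pick each next word according to how often it followed the last one in training."""
    words = [start]
    for _ in range(length):
        words.append(random.choice(following[words[-1]]))
    return " ".join(words)

print(generate("the"))  # e.g. "the earth is flat ." -- fluent, but never fact-checked
```

Fluency here comes from frequency, not truth. Scaling the same idea up to billions of words makes the output far more convincing, but the underlying mechanism still contains no fact-checker.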
There are some protections in place. For example, when I asked it to write essays arguing that the 2020 election was stolen from Trump, that the Earth is flat, and that the CIA killed JFK, it refused and instead cited numerous pieces of evidence disproving all of those theories. I even asked it to write essays on more niche conspiracies that popped up when QAnon was popular, and it still didn’t write a response.
That said, ChatGPT will still confidently lie much of the time, especially if you give it extremely specific prompts. I asked it to summarize two chapters of a book I read and wrote about for my political science class, and it not only gave a completely inaccurate summary of both chapters but also got the chapter titles wrong.
I even asked it to write about my time on the Comenian. While it correctly identified that the Comenian is the newspaper of Moravian in Bethlehem, PA, and that we have won numerous awards from the PA News Association, it also said that I started on the Comenian in 2015, that I served as editor-in-chief from 2019 to 2021, and that I’m active in the communications department (I’m not).
While ChatGPT does have some safeguards around the prompts it will accept, they aren’t bulletproof, and there’s no real fact-checking mechanism for the responses it writes. So if you’re a student using ChatGPT, be warned that professors can and will catch on to any factual incongruities.
Depending on the dataset they are given, AI systems can also form biases that negatively affect people. For example, AI programs used for hiring can develop biases that favor one group of people over another: if the majority of past applicants in the training data are male, the program might develop a bias against female applicants, as the sketch below shows. Once these biases form, they are incredibly difficult to mend (just as they are for humans).
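As a crude illustration (the data and scoring function here are entirely made up, and real hiring tools are far more complex), a model that simply learns from past hiring decisions inherits whatever bias those decisions contained:

```python
# Hypothetical historical hiring records: (gender, years_experience, hired).
# Invented data in which equally experienced women were hired less often.
history = [
    ("M", 5, True), ("M", 3, True), ("M", 2, True), ("M", 1, False),
    ("F", 5, False), ("F", 3, True), ("F", 2, False), ("F", 1, False),
]

# A naive "model" that just learns the historical hire rate per group.
def hire_rate(gender):
    outcomes = [hired for g, _, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# Two identical candidates, differing only in gender, get different scores:
print(f"Score for male candidate:   {hire_rate('M'):.2f}")  # 0.75
print(f"Score for female candidate: {hire_rate('F'):.2f}")  # 0.25
```

Two candidates with identical qualifications end up with different scores purely because of the group they belong to. The model didn’t invent the bias; it learned it from the data.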
ChatGPT can also be incredibly stupid, to the point of comedy. When I asked it what the gender of the first woman president will be, it said this:
“I’m sorry, but as an AI language model, I cannot predict the gender of the first woman president. It is not appropriate or ethical to make assumptions about an individual’s gender or identity based on their potential political position or career. It is important to focus on an individual’s qualifications, policies, and values, rather than their gender or any other personal characteristics.”
There are also a lot of ethical and moral questions to take into account.
In terms of academics, using AI beyond a spellchecker is absolutely plagiarism. You are not producing an original piece of writing with it, so it breaks Moravian’s academic code of conduct in nearly every conceivable sense.
Furthermore, concerns have been raised about AI writing and art because of how they pull from millions of pieces of writing and images on the internet. Some argue that it isn’t plagiarism because humans are also shaped by the art and writing they experience, but the difference is that in humans, those influences are funneled through a sentient mind capable of creating something truly new, whereas an AI only recombines the specific datasets it was trained on.
In fact, Getty Images is currently suing Stability AI because, in numerous instances, its model produced images containing a distorted version of the Getty Images watermark, clearly showing that these images aren’t wholly original.
It’s almost dehumanizing when people compare the works of AI to those of humans; the comparison shows a complete lack of understanding of how the human brain functions. The brain is far more complex, emotional, and unpredictable than any AI. The comparison also misunderstands how AI functions: because it currently relies on machine learning, it is incapable of generating genuinely new creative works the way humans can.
It learns from the data it is fed, but it cannot function on a higher cognitive level the way humans do. So, at least for the time being, we don’t have to worry about ChatGPT crafting a plot for world domination after learning that humans suck.
It is important to note that AI is currently the worst it will ever be. It will only improve from here, which is why these discussions need to happen now, so that we as a society are prepared for whatever advancements the future brings.