10 cases where you should not use ChatGPT
Many of us rely on chatbots, especially the popular ChatGPT, every day for everything from work tasks to personal matters.
But the truth is that ChatGPT sometimes hallucinates, fabricating information and presenting it as fact with complete confidence, even when it is wrong.
These flaws matter more as the stakes rise. So, if you're unsure when it might be risky to use ChatGPT, here are 10 instances when it shouldn't be used, according to a report by CNET, a technology news website, reviewed by Al Arabiya Business.
1- Diagnosing Health Issues
AI cannot physically examine a patient or order lab tests, and it lacks the accumulated clinical expertise physicians draw on to diagnose and assess cases, so it can't be relied upon for medical diagnosis.
On the other hand, ChatGPT can be used in healthcare in a different way. It can help formulate questions to ask a doctor during a medical appointment, translate complex medical terms, or organize a timeline of symptoms so the user arrives better prepared. This can make the visit more productive and less stressful.
2- Mental Health Care
ChatGPT can suggest immediate calming and support techniques, but it cannot provide real help during a severe psychological crisis.
ChatGPT may be somewhat helpful in overcoming grief as long as the user is aware of its limitations. However, it is no substitute for a real therapist; it is merely a pale imitation at best, and downright dangerous at worst.
ChatGPT lacks practical experience, cannot read a user's body language or tone of voice, and lacks any capacity for genuine empathy. Its advice can be misguided, ignore warning signs, or unintentionally reinforce biases embedded in its training data.
3- Making Instant Safety Decisions
If your carbon monoxide alarm starts going off, don't open ChatGPT and ask it if you're in real danger. Instead, get out first and ask questions later.
Large language models like ChatGPT can't smell gas, detect smoke, or dispatch an emergency team. The chatbot can only operate with the little information you provide, which is often insufficient or delayed in emergency situations.
In an emergency, every second you spend typing is a second you're not evacuating or calling 911. A chatbot should therefore be treated as a source of explanation after the fact, not as a first responder.
4- Financial or Tax Planning
ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, your tax bracket, your retirement goals, or your risk tolerance.
Because its training data may not include the current tax year or the most recent rate hikes, its guidance may be outdated and inaccurate.
Therefore, a chatbot cannot replace a certified public accountant. You should also keep in mind that anything you share with a chatbot is likely to become part of its training data, including your income, Social Security number, and bank account information.
5- Handling Confidential or Regulated Data
It should be understood that anything entered into ChatGPT as a prompt leaves the user's control and is stored on third-party servers, where it could be used to train future AI models.
Because of this, caution should be exercised when handling customer contracts, medical records, tax returns, birth certificates, driver's licenses, passports, and any other sensitive information.
6- Cheating in School
Using AI to cheat on exams or homework may seem easy, but Turnitin and similar detection systems are constantly improving their ability to detect AI-generated texts.
Getting caught could result in expulsion from school or university and the revocation of degrees or licenses.
7- Following Breaking News and Information
Since OpenAI launched ChatGPT's search feature in late 2024, the bot can pull up recent web pages, stock quotes, gas prices, sports scores, and other real-time data on request, with clickable links so you can verify the sources.
However, the chatbot doesn't provide continuous automatic updates; you'll need to enter a new prompt each time you want the information refreshed.
Therefore, when speed is essential, such as in disaster coverage, market coverage, or breaking news, ChatGPT isn't the best choice.
8- Drafting a Will or Any Contract
ChatGPT is adept at explaining basic legal concepts, but asking it to draft an actual legal document is a real risk.
9- Artistic Creation
Artificial intelligence should not be used to produce art, given that art should be the artist's own creation, a human expression derived from lived experience rather than the output of an algorithm. Some may consider the use of AI in artistic creation unethical.