Why ChatGPT Can’t Do Basic Math

Rafael Moscatel
Tomorrow’s Jobs Today
3 min read · Oct 3, 2023


I recently had an opportunity to sit down with my 12-year-old to help him develop his math skills. I thought it might be useful to leverage OpenAI's ChatGPT tool to aid us. But while it could spell out explanations and scenarios for selected computations, I began noticing a concerning pattern: it generated basic errors when asked to solve easy questions involving simple fractions and long division with single-digit remainders.

ChatGPT doesn't use a calculator :)

Upon further inquiry, I confirmed that the model is frequently prone to computational errors. Any parent, educational institution, or instructor teaching math should consider it entirely unreliable. The reason is apparent: rather than relying on a calculator, ChatGPT produces its math answers from patterns in natural language and documents, many of which are themselves incorrect for various reasons.
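For contrast, here is a minimal sketch in Python (my own illustration; the numbers are hypothetical, not the actual homework problems) of the kind of deterministic arithmetic a calculator performs and a pattern-matching chatbot does not:

```python
from fractions import Fraction

# A calculator (or a few lines of Python) does long division deterministically.
# Hypothetical example of the kind of problem my son was practicing: 7,453 ÷ 6.
quotient, remainder = divmod(7453, 6)
print(f"7453 ÷ 6 = {quotient} remainder {remainder}")  # 1242 remainder 1

# The same goes for simple fractions: exact arithmetic, no guessing from text.
total = Fraction(1, 3) + Fraction(1, 4)
print(f"1/3 + 1/4 = {total}")  # 7/12

# A quick way to check any answer the chatbot gives (this one is deliberately wrong):
claimed = (1242, 3)
print("Correct!" if claimed == divmod(7453, 6) else "Wrong.")
```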

Can you solve this, ChatGPT?

Of course, we know this is the nature of AI, which is why I suspected it was throwing the errors in the first place. But I hadn't considered, as a parent, a citizen, and a consumer, how ill-suited and ineffective the tool is for mathematics, and how serious the ramifications are for society.

If you consider how many times ChatGPT has probably been used to complete homework, or even by well-meaning teachers to prepare work for students, it's almost horrifying. But beyond the bad habits this technology is instilling in future generations, think about the other ways a reliance on AI could affect critical disciplines, from safety in construction projects to pharmaceutical chemistry and other health-related fields! And it's not just science. Not so long ago, an attorney was caught using artificial intelligence to respond to discovery because the case law it cited did not exist! Luckily, he was sanctioned. But how many others have gotten a pack of lies past the eyes of an unsuspecting jurist?

And while we know that #AI tools have been shown to be politically biased in favor of liberal causes (developers have admitted as much), that alone should be a warning to slow down in every other field. Yes, we have to accept that bad data quality is the nature of the beast: garbage in, garbage out. But when it comes to educating our kids, training employees, serving customers, and whatever other essential functions these erroneous calculations might affect, it should give everybody cause for concern.

We reasonably worry about the power of AI in terms of what it will do to the workforce, but one of the bigger red flags is a question of basic proficiency. I would urge OpenAI's developers to recognize and address this deficiency in ChatGPT before it gets worse.


Author of The Bastard of Beverly Hills, Tomorrow's Jobs Today and The Little Girl with the Big Voice