Coding is not Going Anywhere: Disagreeing with Jensen Huang

There has been a significant push from online courses, influencers, and governments promoting the idea that everyone should learn to code. The push is fueled by the booming popularity of courses promising to help anyone change careers: anyone can become a software engineer, software developer, software tester, et cetera.

The Turning Point

Recently, Jensen Huang, CEO of Nvidia, made headlines with his statement: "Don't learn to code." At the World Government Summit in Dubai, he furthered this narrative by suggesting that AI will replace coders, stating, "Everyone is now a programmer."

Firstly, I want to clarify that I'm not opposed to AI. It's helpful, it saves me time, and it may well ease the labor shortages caused by aging societies over the next 20 years, but it won't kill programmers.

LLMs: A Limitation

The current approach to AI, particularly Large Language Models (LLMs), does not embody true "Artificial Intelligence." LLMs lack consciousness, reasoning, logic, and understanding. They excel at processing vast amounts of data and mapping patterns to generate answers. However, problem-solving is not a simple copy-and-paste operation; it requires a deep understanding of the problem itself. Of course, if you feed LeetCode-style problems to an LLM, it will be able to solve them, because these problems are reused over and over again, much like university admissions exams: there are only a few original problems, only the wording changes, and the solutions are all over the internet.
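To make that concrete, here is the kind of problem an LLM handles effortlessly. This is my own illustration, not from any specific model's output: the one-pass hash-map solution to LeetCode's "Two Sum" appears in countless tutorials and repositories, so reproducing it is pattern recall, not evidence of understanding.

```python
# The classic "Two Sum" LeetCode problem. Its standard one-pass hash-map
# solution is all over the internet, so an LLM can reproduce it from
# memorized patterns rather than by reasoning about the problem.
def two_sum(nums: list[int], target: int) -> list[int]:
    seen: dict[int, int] = {}          # value -> index of values seen so far
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:         # found a pair that sums to target
            return [seen[complement], i]
        seen[value] = i
    return []                          # no pair found

print(two_sum([2, 7, 11, 15], 9))      # [0, 1]
```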

[Image: a screenshot of an LLM being asked about the made-up name "King Mantisoraya"]
Here is an example I put together: I invented the name "King Mantisoraya" and asked an LLM, "Who is King Mantisoraya?" The LLM's replies suggested that Mantisoraya is a character from the anime and manga "Yu-Gi-Oh!". However, when I searched Google, I found nothing even remotely related to a character of that name in the Yu-Gi-Oh! anime.

LLMs look promising and feel promising because the premise is that all their outputs are human-like responses; they have been designed to do exactly that. They might replace your Tinder girlfriends. They are best at knowing what you want to hear as a response, but worst at truth-seeking and problem-solving.

LLMs lack true understanding. That's why they can never guarantee the correctness of a response based on any internal understanding. Because they are built on statistics, LLMs are bound to always sound correct, which is why they will always hallucinate. Hallucination is baked into their core. That alone puts them in a position where they cannot replace humans.
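The following is a minimal sketch, with made-up numbers, of the statistical generation I mean: the model picks a continuation because it is likely to follow the prompt in its training data, not because it is true. The candidate continuations and probabilities below are invented purely for illustration.

```python
import random

# A toy model of how an LLM continues text: sample the next continuation
# from a learned probability distribution. The probabilities here are
# fabricated for illustration; note that "I don't know" is just another
# continuation, and often not the most likely one.
next_token_probs = {
    "a character from Yu-Gi-Oh!": 0.55,   # plausible-sounding, but false
    "a legendary 14th-century king": 0.30,  # also plausible-sounding, also false
    "I don't know": 0.15,                 # truthful, but statistically unlikely
}

tokens, weights = zip(*next_token_probs.items())
answer = random.choices(tokens, weights=weights, k=1)[0]
print(f"Who is King Mantisoraya? -> {answer}")
```

The model optimizes for a plausible continuation, so a confident, fluent, wrong answer is the expected behavior, not a bug.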

A Future Perspective

In the future, there might be new ways to approach AI. One of them may eventually be an AI that replaces programmers. But sorry, LLMs, it's not today.