There is a chorus of voices telling us that soon we will be able to hand all of our coding tasks over to an AI agent powered by an LLM. It will do all the tedious boring things for us, and we can focus on the important stuff. While the new generative AIs are amazing in their capabilities, I for one think we shouldn’t be so quick to dismiss the value of learning to code. Granted, I make my living teaching people to write software, so maybe I should call this “Why I don’t quit my job and tell everyone to use ChatGPT to write their code”, because I believe that learning to code is still necessary and good.
In early 2024, NVIDIA founder and CEO Jensen Huang participated in a discussion at the World Governments Summit that inspired countless blog posts and videos with titles like “Jensen Huang says kids shouldn’t learn to code!” What he actually said is a bit different, but the message is essentially the same [click here to watch for yourself, it’s the last 4 minutes or so of the interview]: “It’s our job to make computing technology such that nobody has to program … and that the programming language is human. Everybody in the world is now a programmer…”

Instead of learning to code, he says we should focus on the Life Sciences because that’s the richest domain for discovery, and he thinks that developing nations that want to rise on the world stage should encourage their children to do the same. Oh, and he says we should build lots of infrastructure (with lots of NVIDIA chips, of course).
There is a core part of his message I actually agree with. At Diller Digital, and at Enthought where our roots are, we have always believed it’s easier to add programming skills to a scientist, engineer, or other domain expert than it is to train a computer scientist in one of the hard sciences. That’s why if you’ve taken one of our courses, you’ve no doubt seen the graphic below, touting the scientific credentials of the development staff. And for that reason, I agree that becoming an expert in one of the natural sciences or engineering disciplines is personally and socially valuable.

At Enthought, almost no one was formally trained as a developer. Most of us (including me) studied some other domain (in my case it was Mechanical Engineering, the Thermal and Fluid Sciences, and Combustion in particular) but fell in love with writing software. And although there is a special place in my heart for the BASIC I learned on my brother’s TI-99/4A, or the Pascal for the Macintosh 512K my Dad brought home to use for work, or C, which I taught myself in high school and college, it was really Python that let me do useful stuff in engineering. Python has become a leading language for scientific computing, and a rich ecosystem has developed around the SciPy and NumPy packages and the SciPy conference. One of the main reasons is that it is pragmatic and easy to learn, and it is expressive for scientific computing.
And that brings me to my first beef with Huang’s message. While the idea of using “human language”, by which I believe he means “human language that we use to communicate with other humans” otherwise known as natural language, to write software has some appeal, it ignores the fact that we already use human language to program computers. If we were writing software using computer language, we’d be writing with 1s and 0s or hexadecimal codes. Although there are still corners of the world where specialists do that, it hasn’t been mainstream practice since the days of punch cards.

Modern computer languages like Python are designed to be expressive in the domain space, and they allow you to write code that is clear and unambiguous. For example, do you have any doubts about what is happening in this code snippet borrowed from the section on Naming Variables in our Software Engineering for Scientists & Engineers course?
gold_watch_orders = 0
for employee in employee_list:
    gold_watch_orders += will_retire(employee.name)

Even a complete newcomer could see that we’re checking to see who’s about to retire, and we’re preparing to order gold watches. It is clearly for human consumption, but there are also decisions about data types and data structures that had to be made. The act of writing the code causes you to think about your problem more clearly. The language supports a way of thinking about the problem. When you give up learning the language, you inevitably give up learning a particular way of thinking.
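For readers who want to run the snippet themselves, here is a minimal, self-contained sketch. The `Employee` dataclass, the `will_retire` helper, the ages table, and the retirement cutoff are all hypothetical stand-ins I’ve invented for illustration; only the three-line loop is from the original example:

```python
from dataclasses import dataclass

RETIREMENT_AGE = 65  # assumed cutoff, purely for illustration


@dataclass
class Employee:
    name: str
    age: int


# Hypothetical lookup table standing in for a real HR system
AGES = {"Ada": 64, "Grace": 66, "Alan": 41}


def will_retire(name: str) -> bool:
    """Return True if the named employee is within a year of retiring."""
    return AGES[name] >= RETIREMENT_AGE - 1


employee_list = [Employee(name, age) for name, age in AGES.items()]

gold_watch_orders = 0
for employee in employee_list:
    # bool is a subclass of int in Python, so += True adds 1
    gold_watch_orders += will_retire(employee.name)

print(gold_watch_orders)  # 2: Ada and Grace are about to retire
```

Note the small design decision hiding in plain sight: summing booleans to count matches works because Python treats `True` as `1`. That is exactly the kind of choice about data types that writing the code forces you to think about.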
This brings me to my second beef with the idea that we don’t need to learn programming. Using what Huang calls a “human language” in fact devolves pretty quickly into an exercise called “prompt engineering”, where the new skill is knowing how to precisely specify your goal to a generative model using a language that is not really designed for that. You end up working through another layer of abstraction that doesn’t necessarily help, or that is useful right up to the point where it isn’t, and then you’re stuck.
I often point my students to an article by Joel Spolsky called “The Law of Leaky Abstractions”, in which the author talks about “what computer scientists like to call an abstraction: a simplification of something much more complicated that is going on under the covers.” His point is that abstractions are useful and allow us to do all sorts of amazing things, like send messages across the internet, or, to our point, use a chat agent to write code. His central premise is that there is no perfect abstraction.
All non-trivial abstractions, to some degree, are leaky.
Joel Spolsky
By that, he means that eventually the abstraction fails, and you are required to understand what’s going on beneath it to solve some tricky problem that emerges. By the time he wrote the article in 2002, there was already a long history of code generation tools attempting to abstract away the complexity of getting a computer to do the thing you want it to do. But inevitably the abstraction leaks, and to move forward you have to understand what’s happening behind it.
… the abstractions save us time working, but they don’t save us time learning.
Joel Spolsky
And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.
For example, I’m grateful for the WYSIWYG editor WordPress provides for producing this blog post, but without understanding the underlying HTML tags it produces and the CSS it relies on, I’d be frustrated by some of the formatting problems I’ve had to solve. The WYSIWYG abstraction leaks, so I learn how HTML works and how to find the offending CSS class, and that makes solving the image alignment problem much, much easier.
But it’s not only the utility of the tool. There’s a cognitive benefit to learning to code. In my life as a consultant for Enthought, and especially during my tenure as a Director of Digital Transformation Services, I would frequently recommend that Managers, and even sometimes Directors, take our Python Foundations for Scientists & Engineers course, not because they needed to learn to code, but because they needed to learn how to think about what software can and can’t do. And with Diller Digital, the story is the same. Especially in the Machine Learning and Deep Learning classes, managers join because they want to know how it works, what’s hype and what’s real, and they want to know how to think about the class of problems those technologies address. People are learning to code as a way of learning how to think about problems.
The best advice I’ve heard says this:
Learn to code manually first, then use a tool to save time.
I’ll say in summary, the best reason to learn to code, especially for the scientists, engineers, and analysts who take our classes, is that you are learning how to solve problems in a clear, unambiguous way. And even more so, you learn how to think about a problem, what’s possible, and what’s realistic. Don’t give that up. See also this article by Nathan Anacone.
What do you think? Let me know in the comments.