“Protecting the Training Period”

I recently finished reading Nobel laureate Daniel Kahneman’s “Thinking, Fast and Slow” and found it to be thought-provoking and more than a little humbling in light of the current generative AI revolution and the role of cognitive effort in learning. From the notion of System 1 and System 2 (about which I’ve written before) to the value of formulas over intuition (linear models!), and the efficiency-seeking (a.k.a. laziness) of the human brain when it comes to cognitive tasks, the book articulates many of the principles that guide us at Diller Digital and that guided Enthought’s training for many years before that.

[Image: cover of Daniel Kahneman’s book “Thinking, Fast and Slow”.]

One of the most important takeaways for me is that in educational environments, one must carefully manage cognitive effort. In the age of LLMs, I increasingly see the role of an instructor as that of a sherpa: guiding students to the most productive use of cognitive effort during a focused period, and steering them away from wasted effort on trivialities that don’t contribute to long-term success. This is particularly true for students in mid-career, who have limited blocks of time for training and whose other demands don’t magically disappear during a week of class.

Among the humbling insights from the book was that the way System 1 and System 2 work, coupled with our blind spots (“What You See Is All There Is”), can lead us into poor decision making. This plays out in many ways, of course, but I’m thinking in particular of those captains of industry who assure us that we can stop writing code now. Sam Altman, for example, recently said he felt a little useless because ChatGPT had come up with a better idea than he had. I was dismayed by how easily he seemed to give up his sense of worth, or to measure his value by his productivity.

So my attention was caught by “Betting blind on AI and the scientific mind”, an article by Tim Requarth, a professor at NYU, who was struggling, as many educators are these days, with the question of AI’s use in the classroom. On one hand, there are those who assert that AI causes brain rot; skill atrophy in the presence of automation is a well-documented phenomenon. This side takes as its banner the statement “writing is thinking” and points to the likes of Richard Feynman and his laboratory notebooks as evidence that there are no shortcuts around cognitive effort. On the other side are the AI evangelists who glibly point out how much they were able to accomplish with the aid of an LLM, assert that prompt engineering and discussion with other people are just as legitimate forms of thinking as writing, and wonder who cares about the details of APIs anyway. What’s missed in these glib statements is the importance of cognitive effort for training. Both the AI-reactionary and AI-evangelist voices have valid points to make, but they are too often presented in the extreme. We’ll try to hold the middle ground here.

Given that we generally agree on the importance of cognitive effort for learning (see Derek Muller’s “Effort Is the Algorithm” for a fun take), a pair of assertions Prof. Requarth made intrigued me. First, and this is very much in line with Kahneman’s decision-making traps, we are very bad at perceiving and judging our own cognitive state. Said another way: thinking hard hurts, and we don’t like pain, so it’s very difficult for us to differentiate beneficial cognitive effort from mere tedium.

Second, and this follows directly from the first, “any tool that preserves productive struggle will probably lose in the marketplace to tools that eliminate it.” This is the tricky piece: because our brains tend not to like effortful thinking, we instinctively avoid it.


The solution to the conundrum is to recognize and curate the training experience. In an academic setting, this means banning the use of LLMs to aid in writing doctoral research proposals and dissertations, or essays in middle and high school. In continuing education like Diller Digital provides, it means taking the time to work through the fundamentals and spending the cognitive effort, at least once, to learn how a list differs from a tuple or a NumPy ndarray and why it matters in scientific computing. In my years as a consultant at Enthought, for example, I saw many managers go through our training and emerge with an understanding that greatly enhanced their leadership of technical projects, even though they never wrote another line of code after completing the class. A well-trained individual can exercise discernment about whether to employ a tool, because not every email or every piece of shim code needs a human’s full attention. But if they choose to use the tool, it will not be because they lacked the skill, or because it was too cognitively painful to do it themselves.
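To make that concrete, here is a minimal sketch of the kind of distinction worth sweating through once. It uses plain Python plus NumPy; the variable names are my own illustration, not from any course material:

    import numpy as np

    # A list is mutable and can hold mixed types; "multiplying" it repeats it.
    readings = [1.0, 2.0, 3.0]
    readings.append(4.0)        # lists grow in place
    print(readings * 2)         # repetition: [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0]

    # A tuple is immutable: a fixed record that cannot be extended or modified.
    point = (1.0, 2.0, 3.0)
    # point.append(4.0)         # would raise AttributeError: tuples have no append

    # A NumPy ndarray is a homogeneous, fixed-type buffer with element-wise math.
    arr = np.array([1.0, 2.0, 3.0, 4.0])
    print(arr * 2)              # arithmetic: [2. 4. 6. 8.]
    print(arr.mean())           # vectorized reduction, executed in compiled code

The same `*` produces repetition on a list but arithmetic on an ndarray. Working through that difference once, by hand, is cheap insurance against the subtle bugs an LLM will happily paper over.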

Ultimately, I agree with Requarth’s recommendation to “use AI responsibly” by protecting the training period, managing the “friction” of the tools, and speaking about the use of LLMs and other “AI” technology with nuance.
