NumPy Indexing — the lists and tuples Gotcha

In a recent session of Python Foundations for Scientists & Engineers, a question came up about indexing a NumPy ndarray. Beyond getting and setting single values, NumPy enables some powerful efficiencies through slicing, which produces views of an array’s data without copying, and fancy indexing, which allows use of more-complex expressions to extract portions of arrays. We have written on the efficiency of array operations, and the details of slicing are pretty well covered, from the NumPy docs on slicing, to this chapter of “Beautiful Code” by the original author of NumPy, Travis Oliphant.

Slicing is pretty cool because it allows fast, efficient computation of things like finite differences for, say, computing numerical derivatives. Recall that the derivative of a function describes the change in one variable with respect to another:

\frac{dy}{dx}

And in numerical computations, we can use a discrete approximation:

\frac{dy}{dx} \approx \frac{\Delta y}{\Delta x}

And to find the derivative at any particular location i, you compute the ratio of differences:

\frac{\Delta y}{\Delta x}\big|_i = \frac{y_{i+1} - y_{i}}{x_{i+1} - x_{i}}

NumPy allows you to use slicing to avoid setting up costly-for-Python for loops by specifying start, stop, and step values in the array indices. This lets you subtract all of the i elements from the i+1 elements at the same time by specifying one slice that starts at element 1 and goes to the end (the i+1 elements), and another that starts at 0 and goes up to but not including the last element. No copies are made during the slicing operations. I use examples like this to show how you can get 2, and sometimes 3 or more, orders of magnitude speedup over the same operation written with for loops.

>>> import numpy as np

>>> x = np.linspace(-np.pi, np.pi, 101)
>>> y = np.sin(x)

>>> dy_dx = (
...     (y[1:] - y[:-1]) /
...     (x[1:] - x[:-1])
... )
>>> np.sum(dy_dx - np.cos(x[:-1] + (x[1]-x[0]) / 2))  # compare to cos(x)
np.float64(-6.245004513516506e-16)  # This is pretty close to 0
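
If you want to see the speedup for yourself, here is a rough sketch (not from the class materials) that continues the session above and compares an explicit Python for loop against the sliced version using the standard-library timeit module. The outputs are omitted here because the timings depend on your machine, and the gap grows with the size of the array:

>>> def dy_dx_loop(x, y):
...     """Finite differences computed with an explicit Python for loop."""
...     result = np.empty(len(y) - 1)
...     for i in range(len(y) - 1):
...         result[i] = (y[i + 1] - y[i]) / (x[i + 1] - x[i])
...     return result

>>> import timeit
>>> timeit.timeit(lambda: dy_dx_loop(x, y), number=1000)            # loop version
>>> timeit.timeit(lambda: (y[1:] - y[:-1]) / (x[1:] - x[:-1]), number=1000)  # sliced version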

Fancy indexing is also well documented (though the NumPy docs now use the more staid term “Advanced Integer Indexing”), but I wanted to draw attention to a “Gotcha” that has bitten me a couple of times. With fancy indexing, you can either make a mask of Boolean values, typically using some kind of Boolean operator:

>>> a = np.arange(10)
>>> evens_mask = a % 2 == 0
>>> odds_mask = a % 2 == 1
>>> print(a[evens_mask])
[0 2 4 6 8]

>>> print(a[odds_mask])
[1 3 5 7 9]

Or you can specify the indices you want using tuples or lists, and this is the Gotcha: the behavior is different depending on which one you use. Let’s construct an example like one we use in class. We’ll make a 2-D array b and construct a positional fancy index that specifies elements on a diagonal. Notice that it’s a tuple, as shown by the enclosing parentheses, and each element is a list of coordinates in the array.

>>> b = np.arange(25).reshape(5, 5)
>>> print(b)
[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13 14]
 [15 16 17 18 19]
 [20 21 22 23 24]]
>>> upper_diagonal = (
...     [0, 1, 2, 3],  # row indices
...     [1, 2, 3, 4],  # column indices
... )
>>> print(b[upper_diagonal])
[ 1  7 13 19]

In this case, the tuple has as many elements as there are dimensions, and each element is a list (or tuple, or array) of the indices into that dimension. So in the example above, the first element comes from b[0, 1], the second from b[1, 2], and so on, pair-wise through the lists of indices. The result is substantially different if you try to construct a fancy index from a list instead of a tuple:

>>> upper_diagonal_list = [
...     [0, 1, 2, 3],
...     [1, 2, 3, 4],
... ]
>>> b_with_a_list = b[upper_diagonal_list]
>>> print(b_with_a_list)
[[[ 0  1  2  3  4]
  [ 5  6  7  8  9]
  [10 11 12 13 14]
  [15 16 17 18 19]]

 [[ 5  6  7  8  9]
  [10 11 12 13 14]
  [15 16 17 18 19]
  [20 21 22 23 24]]]

What just happened?? In many places, lists and tuples have similar behaviors, but not here. With the list version, NumPy is doing something entirely different: it is in fact a form of broadcasting, where we’re selecting (and repeating) whole rows. Look at the shape of b_with_a_list:

>>> print(b_with_a_list.shape)
(2, 4, 5)

Notice that its dimension 0 has 2 elements, which is the same as the number of items in upper_diagonal_list. Notice that dimension 1 has 4 elements, corresponding to the size of each element in upper_diagonal_list. Then notice that dimension 2 matches the size of the rows of b, and hopefully it will be clear what’s happening. With upper_diagonal_list we’re constructing a new array by specifying the rows to use, so the first element of b_with_a_list (seen as the first block above) consists of rows 0, 1, 2, and 3 from b, and the second element consists of the rows named by the second element of upper_diagonal_list. Let’s print it again with comments:

>>> print(b_with_a_list)
[[[ 0  1  2  3  4]   # b[0] \
  [ 5  6  7  8  9]   # b[1]  | indices are first element of
  [10 11 12 13 14]   # b[2]  | upper_diagonal_list
  [15 16 17 18 19]]  # b[3] /

 [[ 5  6  7  8  9]   # b[1] \
  [10 11 12 13 14]   # b[2]  | indices are second element of
  [15 16 17 18 19]   # b[3]  | upper_diagonal_list
  [20 21 22 23 24]]] # b[4] /
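
To make the equivalence concrete: in recent versions of NumPy, a list index like this is treated as if it were a single integer array index into the first dimension, and converting the list to a tuple recovers the pair-wise diagonal behavior. A quick check (continuing the session above):

>>> print(np.array_equal(b[upper_diagonal_list], b[np.array(upper_diagonal_list)]))
True
>>> print(b[tuple(upper_diagonal_list)])  # a tuple recovers the pair-wise behavior
[ 1  7 13 19]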

Forgetting this convention has bitten me more than once, so I hope this explanation helps you resolve some confusion if you should ever run into it.

On Software Craftsmanship

Last week I found myself engaged with a group of students from Los Alamos National Laboratory in our Software Engineering for Scientists & Engineers class, known informally as Software Craftsmanship. Apart from the epic New Mexico skies, the grand vistas, and the welcome relief from the heat and humidity of my hometown of Austin, what I particularly loved about the week was the focus on craftsmanship.

This class had a high proportion of people I’d worked with before on learning Python programming, data analysis, and/or machine learning, so it was easy to build rapport. Questions and dialog flowed easily. One student had this to say:

The interactivity of the in-person class, paired with the detailed course slides, was very effective. The source control (git), readable code, refactoring, and unit testing sections were all very useful and will be directly impactful to my work. There were multiple instances throughout the week where I learned something that would have saved me significant time on a problem I had encountered within the last 6 months.

One of the things we cover in the class is code review, the practice of submitting your code for review and critique before it’s accepted into a project, in some ways similar to the academic peer-review process. At Diller Digital, we try to model this process by submitting and responding to feedback on the course materials. In response to a session of Software Engineering earlier this year, students suggested we cover source control and the details of git at the start of the class and then use them in a workflow typical of small teams in an R&D environment. Diller Digital has a git server (powered by Gitea, a close analog to GitHub and GitLab), and we created a class repository and developed a couple of small libraries that can serve as best-practice examples of variable naming, use of the logging library, Sphinx-ready documentation, unit testing, and packaging using standard tooling. One of the many jokes about git is that you can learn how to do 90% of what you’ll need while only understanding 10% of what’s actually going on. I’m not sure about the numbers there, but I do know that using and practicing what you’ve learned makes all the difference.

The in-person, instructor-led format makes engagement much easier and lowers the barriers to asking questions and providing individualized help. But one of the important principles behind that is the role of effortful thinking in learning. I like the way Derek Muller (of Veritasium fame) explains in this video how we have two systems in our brain, one fast — for instinctive, rapid-fire processing of the kind you’re using to parse the words on this page — and one slow — the effortful, brain-taxing system required for understanding something.

It’s probably that effortful system you’re using trying to understand my point, and you’ll surely use it trying to tell whether 437 is evenly divisible by 7 in your head. It’s not quite as simple as two distinct systems, as the author of that idea, Daniel Kahneman, makes clear in his book, Thinking, Fast and Slow, but it gives us a useful mental model for talking about software craftsmanship, and why we teach the way we do at Diller Digital. One of the main takeaway points is that effortful thinking is necessary for learning, but not all effortful thinking results in useful learning.

One of the first ideas we introduce in Software Engineering is that of cognitive load and its management. Cognitive load is a measure of effortful thinking — it’s the effort required to understand something, and we would like that effort to be spent on important things like the business logic of an algorithm and not on trivial things like indentation and syntax. That’s the purpose of using a coding standard — once your brain gets used to seeing code that’s formatted in a common way (for Python it’s embodied in PEP8), the syntax becomes transparent (it’s handed off to the fast-thinking part of our brain), and you can see through it to the logic of the code and spend your effort understanding that. Code that’s not formatted that way introduces a small cognitive tax on each line that adds up to measurable fatigue over time. If you want an example of that kind of fatigue, try this little game.
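
As a small, contrived illustration (not from the course materials), here is the same logic written twice. The first version is legal Python but taxes the reader on every token; the second follows PEP8 conventions and lets the logic show through:

def avg(xs):return sum( xs )/len(xs) if len(xs)!=0 else 0   # legal, but taxing to read


def mean(values):
    """Return the arithmetic mean, or 0 for an empty sequence."""
    if not values:
        return 0
    return sum(values) / len(values)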

So managing cognitive load informs choices of layout, use of white space, and selecting the names of Python objects, and this is one of the important things we teach in Software Engineering. But it also informs the way we design our courses. We introduce ideas and demonstrate them and then have our students spend effort internalizing them, first in a simple “Give It A Try” exercise and eventually in a comprehensive exercise. The goal is to direct our students’ effort to increasingly independent tasks, in what is sometimes called a “fading scaffold”, where early effort is guided closely, and in later efforts, students have more room to make and recover from mistakes. This is also the thinking behind the presence in some courses of “Live Coding” scripts, where demos and exercises are set up already, and the student only has to focus on the learning goal and not on typing all of the supporting code around it. These have proven to be especially popular in our Machine Learning and Deep Learning classes.

This also suggests a strategy for the effective use of Large Language Models for coding. Use them to reduce effort where it’s not critical to gain understanding or to gain a skill. But don’t let them replace effortful thinking where it counts most — in learning and in crafting your scientific, engineering, or other analytical workflow. And if you want a guide in your learning journey, we’re here to help. Click here for the course schedule.

I have taken four courses with Diller Digital and this [Software Engineering] is by far the most useful one. Many of us have learned programming as a need to do research, but we do not have any formal background in computational programming. I think this course takes basic Python programming skills to a more formal level, aligned with people with programming background allowing us to improve the quality of code we produce, the efficiency in the implementation and collaboration. 
Also, hosting the course in person made a big difference for me. I was easily engaged the entire day, the exercises and the possibility to ask in person made the entire course smoother.

I think this course material is incredibly helpful for people who don’t have professional software engineering experience. Of all the courses I took from Diller Digital, I found this the most foundational and immediately useful.

On the Usefulness of LLMs and Other Deep Learning Models

Lately I’ve been thinking a lot about the state of “AI” and its implications for us embodied “human intelligences”. Hardly a week (or even a day) goes by without some Silicon Valley titan proclaiming that “AI is smarter than humans” and arguing about whether this is good or bad for us as a species. “It’s a white-collar apocalypse”, “There will be all kinds of new jobs!”, “We are now confident we know how to build AGI as we have traditionally understood it.” What’s missing from these statements are clear definitions for terms like “smarter” and “intelligent”, and when they are provided, they conflict with what we already know. Consider Sam Altman’s definition of AGI: “AI systems that can perform most economically valuable work as well as or better than humans.” Or think about the well-respected Turing Test, which judges machine intelligence by a human’s ability to distinguish the behavior of a machine from that of a human on specific intellectual tasks. That reduces human intelligence to competence at tasks that can be completed at a keyboard. I find the narrow scope of such definitions unsatisfying.

I recently returned from a mission trip to Guatemala, where I worked side-by-side with local masons, who mix concrete and plaster by hand and improvise solutions to deal with tricky build sites and keep homes dry in the rainy season. That was a humbling lesson in the limits of the kind of “intelligence” my PhD and digital skills afford me. Those guys are performing intelligent, economically valuable work. Then there are the nurses at the clinics I’ve visited recently whose reading of a patient’s physical and emotional state include levels of cultural and social nuance in addition to the complex medical conditions of the human body. In fact, scientists, engineers, technicians, nurses, farmers, and floral designers who solve problems all the time in environments full of uncertainty and human need are applying forms of intelligence and performing economically valuable work that no LLM can touch. These are embodied, culturally embedded, and morally aware practices—not lines of text on a screen.

“But wait,” you say, “LLMs like ChatGPT and Claude are amazing! Why are you being such a curmudgeon?” I agree. In fact, ChatGPT helped me draft this piece, and although I ended up throwing away most of what it wrote, its ability to do research and summarization is excellent. It also pointed me to some resources faster than I would have found them on my own. So is ChatGPT “smarter” than me? I think the more interesting question is “When does ChatGPT, or any other LLM or AI, have an advantage over me?”

What started me down this path was a couple of articles I came across recently. Bruce Schneier and Will Anderson wrote at The Conversation about 4 axes, what they call “The 4 S’s”, of technology’s advantages over humans. The article is not long and worth a read; in short they point out that AI often has an advantage over humans when it comes to speed, scale, scope, and sophistication. When those are the barriers, it can make sense to implement AI. When they’re not, introduction of AI can feel gratuitous, or even downright annoying; witness auto-completion for text messages, or the many customer service chatbots. Schneier and Anderson point out that companies implement them seeking to benefit from scale, but customers don’t see benefits from speed or sophistication, and they suffer from the loss of human communication in terms of empathy, sincerity, context, and problem solving ability. But there are many contexts where AIs are able to surpass the performance of humans, such as when playing Chess or Go, analyzing protein folding structures, and identifying promising materials for engineering applications.

However, there are contexts and situations where the perception of speed-up is actually illusory. In July 2025, the folks at Model Evaluation & Threat Research (METR) published a study of 16 experienced senior developers of large open-source software projects in which they recorded and analyzed the developers’ activity as they resolved issues from the issue tracker on their projects. The study controlled whether they could use the AI tool of their choice. The key finding was that the developers generally reported believing that AI had sped them up by 20% or more, when in fact it took them on average 19% longer to resolve the issues. They point out that the benchmarks often used to measure the productivity gains of AI coding tools don’t reflect the kinds of tasks found “in the wild” and thus aren’t helpful. Even self-reporting by experienced developers is not a reliable guide to productivity impacts. Also of interest is this white paper from GitClear on the decline in code quality with the use of AI coding tools.

Developers generally reported believing that AI had sped them up by 20% or more, when in fact it took them on average 19% longer to resolve issues from large, mature, open-source projects.

Furthermore, there are limits to the level of sophistication even “reasoning” models can attain. In a refreshingly honest piece from Apple, published in June 2025, the authors discuss the strengths and weaknesses of standard models (LLMs) and large reasoning models (LRMs) in performing tasks of varying complexity. They find a hard limit on the complexity of problems for which LLMs and LRMs are capable of finding solutions, even given arbitrarily more computing power.

The real danger of technology is not that it will become too intelligent and take over, but that it will become too convenient and seduce us into delegating the most human parts of our lives.

Andy Crouch, The Life We’re Looking For

So what’s my point in all of this? It’s surely not to reject the amazing tools available to us in the era of LLMs. It’s to recognize them as tools with strengths and weaknesses. And it’s also to remember something that Andy Crouch, an author whose commentary on the relationship of humans to technology I respect, talks about in his book The Life We’re Looking For, that superpowers often take something of our humanity when we assume them. When we step on an airplane to assume the superpower of crossing a continent in a matter of hours, we have to remain very still and give up exercise and mobility for the time it takes to travel. When by using our mobile phone we assume the superpower of navigating a city we’ve never been to before, we erode our human ability to find our way on our own (with consequences for cognitive decline, as it turns out, see this book and this article among others for nuance on the subject and what to do about it). And perhaps most relevant for this post, when you hand over the job of writing (code, or blog posts, or novels) to an LLM, you are eroding your ability to think about problems. As I’ve said before, learning to code is really learning to think about problems, and writing code is actively engaging with the problem in constructive ways.

This is why I founded Diller Digital, and why I still passionately believe in teaching coding skills. This principle guides the way we teach: starting with foundational principles and building up practical knowledge through examples and exercises with increasing independence. This is why by the end of a class, we are teaching you how to find out the answers to your questions for yourself using the knowledge framework we’ve developed together. We value human intelligence—not because it’s flawless, but because it’s rooted in judgment, context, and a lived understanding of the world. We believe machine learning is most powerful when it extends what humans can already do well. We build our courses to empower you to apply these tools responsibly, creatively, and critically.

Meet Your Instructors Series – Logan

This month we are featuring our knowledgeable instructor, Logan Thomas.

1) What is your name and where are you currently located?

My name is Logan Thomas, and I’m currently based in Austin, Texas.

2) How did you end up in engineering education?

I came into engineering education through a natural progression from industry into teaching. I started my career applying machine learning and data science in domains ranging from digital media to mechanical engineering to biotech. Along the way, I discovered how much I enjoy breaking down complex concepts and helping others level up their skills. This led me to teaching roles at Enthought and mentoring opportunities through conferences like SciPy, where I’ve served as the Tutorials Chair. I love helping others build confidence in technical topics.

3) How do you stay current with the latest advancements in engineering technology and industry practices?

I stay up-to-date through a combination of hands-on work, professional communities, and continuous learning. I regularly contribute to and attend conferences like SciPy, stay active on GitHub, and follow key publications and blogs in data science, machine learning, and software engineering. I also enjoy experimenting with new tools and libraries in side projects and applying them in my role as a Data Science Software Engineer at Fullstory.

4) Can you describe your teaching philosophy and how it aligns with Diller Digital’s mission and values?

My teaching philosophy is rooted in curiosity, empathy, and empowerment. I believe the best learning happens when students feel safe to ask questions, explore, and make mistakes. I aim to connect abstract concepts to real-world problems and encourage students to become confident problem-solvers. I bring not just technical depth but a coaching mindset that helps learners develop independence.

5) What engineering software and tools do you have experience with, and how do you incorporate them into your teaching?

I have extensive experience with Python, TensorFlow, PyTorch, PySpark, and data science libraries like numpy, pandas, scikit-learn, and matplotlib. I’ve also worked with engineering tools like MATLAB, Abaqus, and simulation platforms during my earlier mechanical engineering roles. In teaching, I use these tools to build hands-on labs and project workflows that mirror industry applications—for example, guiding students through feature engineering in Python or designing reproducible machine learning pipelines.

6) How do you balance theoretical knowledge with practical, hands-on learning in your classes?

I try to lead with intuition, then reinforce with both theory and practice. I introduce concepts through stories or visuals, connect them to math and science fundamentals, and then move into code or simulation exercises. I often use real-world datasets and scenarios to bridge the gap between textbook theory and professional problem-solving.

7) Can you discuss your experience with project-based learning and how you guide students through the data analysis workflow?

Project-based learning is at the core of how I teach. I’ve led corporate hackathons, taught project-based machine learning courses, and mentored students through the entire data science lifecycle—from framing the problem and wrangling data to building models and interpreting results. I emphasize documentation, version control, and modular design to instill good engineering habits while keeping things collaborative and fun.

8) What strategies do you use to assess student understanding and provide constructive feedback on their work?

I focus heavily on active engagement and asking probing questions to gauge where students are in their understanding. During live coding sessions, I pause frequently to ask why a certain approach might work or what might happen if we changed a parameter—this helps surface both strengths and misconceptions in real time. I also use “coding karaoke,” where students follow along and fill in missing pieces of code to reinforce concepts and promote deeper learning. These interactive techniques give me a window into their thought process, which is often more insightful than a finished project. When giving feedback, I keep it specific, kind, and actionable—usually highlighting what they did well and nudging them to reflect on one or two key areas to improve. I also encourage self-assessment and goal-setting to build metacognitive skills and confidence over time.

9) What strategies do you use to communicate complex engineering concepts to students with varying levels of understanding?

I rely on analogies, visual aids, interactive demos, and scaffolding. I check in often, ask open-ended questions, and adjust based on the group’s energy and comprehension. I also try to normalize “not knowing”—creating an environment where curiosity is more important than correctness. Teaching, for me, is more about coaching than lecturing.

10) What is your favorite way to spend a Saturday? Favorite meal?

My favorite Saturday is one where I get some good coffee, play outside with my wife and two boys, and maybe sneak in a run or catch a baseball game. In the evening, nothing beats a homemade meal—especially if I can grill it in the backyard with friends and family around.

Logan and family in Austin

Thanks for your answers to these questions, Logan, so we can get to know you better as one of our respected instructors.

Meet Your Instructors Series – Alex

This month we are featuring our knowledgeable instructor, Alex Chabot-Leclerc.

1) What is your name and where are you currently located?

My name is Alexandre Chabot-Leclerc, but I go by Alex. I live in Burlington, VT.

2) How did you end up in engineering education?

The two things I enjoyed doing most in grad school were teaching and programming (writing papers, not so much). I thought the only way to combine these two things was to become a professor. Thankfully, I found a perfect role I didn’t know was possible: trainer & scientific software developer at Enthought. I started there in 2016 and taught more than 800 students during my time there.

3) How do you stay current with the latest advancements in engineering technology and industry practices?

I have a rather voracious information diet. I subscribe to hundreds of RSS feeds that I read and scan regularly. I don’t recommend this to everyone (anyone?), but it works for me. I’m also part of various groups of people with varied interests that always bring up interesting new things.

4) Can you describe your teaching philosophy and how it aligns with Diller Digital’s mission and values?

I believe everyone can learn. And that everyone likes learning. But for learning to be fun, it has to be just hard enough to be satisfying. My job as a trainer is to stay in that zone for everyone at once (that’s the very tricky part!). 

I also believe learning without a solid foundation is like building on sand. It’ll last for a little while, but then it will crumble. Therefore, I spend a lot of time querying students, paying close attention to their reactions, and developing an understanding of what they know. What’s solid that I can build on? 

I also believe that you learn by doing. We’ve all had this experience of listening to a teacher, nodding our heads in agreement, and then trying to solve the exercise on our own and realizing we don’t actually know how to do the thing. Well, that’s why our classes have so much hands-on content.

5) What engineering software and tools do you have experience with, and how do you incorporate them into your teaching?

I “grew up” using MATLAB, and even though I haven’t used it in a little more than a decade, I remember enough about how it works to be useful when teaching. Otherwise, the tools I use regularly are all the usual suspects from the Python scientific computing and PyData ecosystems: NumPy, Pandas, Matplotlib, scikit-learn, plus some more domain-specific packages.

6) How do you balance theoretical knowledge with practical, hands-on learning in your classes?

As much as I love theoretical knowledge, in classes most of the value is in the doing. Therefore, all the theoretical knowledge I teach students is at the service of the hands-on work they will do in class and, especially, when they return to work.

7) Can you discuss your experience with project-based learning and how you guide students through the data analysis workflow?

I have extensive experience with project-based learning, both as a student and as a trainer. My entire undergraduate degree in electrical engineering used project-based learning, and I loved it. Learning was always at the service of doing something, of solving a problem. It helped connect each piece of learning to all the ones required to get to that point and all the ones that came after.

Later, as a trainer at Enthought, I created classes and developed a whole program based on project-based learning. Major theories of “transfer of learning” suggest that it’s easier to apply what one has learned after practicing and when the learning experience and the new situation are similar. Therefore, my goal whenever I design a project is to make it as realistic as possible and use the tools that students will likely use in their work.

8) What strategies do you use to assess student understanding and provide constructive feedback on their work?

I ask questions (nicely!) until I’m satisfied with the answer. I’m looking for a correct and “solid” answer; something they really know. If the knowledge is shaky, it’s hard to build on.

To be effective, I must build trust with students. I need them to be honest when answering questions, even if it means showing everyone else there’s something they don’t know.

9) What strategies do you use to communicate complex engineering concepts to students with varying levels of understanding?

I use simplifications and analogies, often multiple ones. Usually, they’re analogies from the physical world or accessible things, like cooking. To help me, I also ask every student about their experience at the beginning of class. That way, I understand where they’re coming from: their domain of work, which programming languages they’ve used, etc. I’ll use that knowledge to provide examples, concepts, and comparisons to things I know they’re familiar with.

I’ll also try to reveal complexity in a progressive manner. I’ll start with a simple analogy or explanation for people who are maybe less experienced so they at least know that this thing exists or have a good mental model for it. Then, I’ll dig in deeper and deeper, closer to the details of how things work for the more advanced people in the class.

10) What is your favorite way to spend a Saturday? Favorite meal?

There are so many ways to spend a good Saturday! A good one is a tasty brunch with family and friends, outdoor physical activities like a bike ride, a nice happy hour, and a tasty dinner. What’s for dinner? Some Japanese food (not sushi, even though it’s lovely) would be great.

Thanks for your answers to these questions, Alex, so we can get to know you better as one of our respected instructors.

Batching and Folding in Machine Learning – What’s the Difference?

In a recent session of Machine Learning for Scientists & Engineers, we were talking about the use of folds in cross-validation, and a student did one of my favorite things — he asked a perceptive question. “How is folding related to the concept of batching I’ve heard about for deep learning?” We had a good discussion about batching and folding in machine learning and what the differences and similarities are.

What is Machine Learning?

Terms like “AI” and “machine learning” have become nearly meaningless in casual conversation and advertising media—especially since the arrival of large language models like ChatGPT. At Diller Digital, we define AI (that is, “artificial intelligence”) as computerized decision-making, covering areas from robotics and computer vision to language processing and machine learning.

Machine learning refers to the development of predictive models that are configured, or trained, by exposure to sample data rather than by explicitly encoded interactions. For example, you can develop a classification model that sorts pictures into dogs and cats by showing it a lot of examples of photos of dogs and cats. (Sign up for the class to learn the details of how to do this.)

Or you can develop a regression model to predict the temperature at my house tomorrow by training the model on the last 10 years’ worth of measurements of temperature, pressure, humidity, etc. from my personal weather station.

Classical vs Deep Learning

Broadly speaking, there are two kinds of machine learning: what we at Diller Digital call classical machine learning and deep learning. Classical machine learning is characterized by relatively small data sets, and it requires a skilled modeler to do feature engineering to make the best use of the available (and limited) training data. This is the subject of our Machine Learning for Scientists & Engineers class. Deep learning is a subset of machine learning that makes use of many-layered models that function in a rough analog to how the neurons in a human brain function. Training such models requires much more data but less manual feature engineering by the modeler. The skill in deep learning is that of configuring the architecture of the model, and that is the subject of our Deep Learning for Scientists & Engineers class.

Parameters and Hyperparameters

There is one more pair of definitions we need to cover before we can talk about folding versus batching: parameters and hyperparameters.

At the heart of both kinds of machine learning is the adjustment of a model’s parameters, sometimes also called coefficients or weights. Simply stated, these are the coefficients of what boils down to a linear regression problem.

Each model also has what are called hyperparameters, or parameters that govern how the model behaves algorithmically. These might include things like how you score your model’s performance or what method you use to update the model weights.

The process of training a model is the process of adjusting the parameters until you get the best possible predictions from your model. For this reason, we typically divide our training data into two parts: one (the training data set) for adjusting the weights, the other (the testing data set) for assessing the performance of the model. It’s important to score your model on data that was not used in the training step because you’re testing its predictive power on things it hasn’t seen before.
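
As a minimal sketch of these ideas (using scikit-learn and toy data purely for illustration; this is not taken from the course materials), here alpha is a hyperparameter you choose, fitting adjusts the parameters (the coefficients), and scoring happens on held-out data:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                    # toy features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Hold out 20% of the data; the model never sees it during fitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = Ridge(alpha=1.0)      # alpha is a hyperparameter: you choose it
model.fit(X_train, y_train)   # fitting adjusts the parameters (model.coef_)
print(model.score(X_test, y_test))   # score on data the model has not seen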

What is Folding?

So this brings us finally to the subject of folding and batching. Folding typically arises in the context of cross-validation, when you’re trying to decide on the best hyperparameters to use for your model. That process involves fitting your model with different sets of hyperparameters and seeing which combination gives the best results. How can you do that without using your test data set? (If we used the test data set during training, that would be cheating because it would sacrifice the ability of your model to generalize for the short-term gain of a better result.) We divide our training data into folds and hold each fold back as a “mini-test” data set and train on the others. We successively hold each fold back and then average the scores across the folds. That becomes our cross-validation score and gives us a way to score that set of hyperparameters without dipping into the test data set.

Folds divide a training data set into sections, one of which is held out as a mini “test” section for scoring a combination of hyperparameters in cross-validation.
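
In scikit-learn terms, this is what cross_val_score does for you. A minimal sketch (with toy data, not taken from the course materials) might look like this:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                    # toy features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

# cv=5 splits the training data into 5 folds; each fold takes a turn as the
# mini "test" set, and the mean of the 5 scores evaluates this particular
# choice of hyperparameters (here, alpha=10.0) without touching the test set.
scores = cross_val_score(Ridge(alpha=10.0), X, y, cv=5)
print(scores.mean())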

What is Batching?

Batching looks a lot like folding but is a distinct concept used in a different context. Batching arises in the context of training deep models, and it serves two purposes. First, training a deep learning model typically requires a lot of training data (orders of magnitude more data than classical methods), and except for trivial cases you can’t fit all the training data into working memory at the same time. You solve that problem by dividing the training data into batches in much the same way that you would divide it into folds for cross-validation, and then iteratively update the model parameters using each batch of data until you have used the entire training data set. One full pass through all of the batches is called an epoch. Training a deep learning model typically takes multiple epochs.

A training data set is divided into batches to reduce memory requirements and provide variation for model parameter refinement. Each batch is used once per training epoch.
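
A minimal sketch of that loop in plain NumPy (array sizes and names here are just illustrative, and deep learning frameworks provide their own batching utilities) looks something like this:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))   # training inputs
y = rng.normal(size=10_000)         # training targets

batch_size = 256
n_epochs = 3

for epoch in range(n_epochs):                       # one epoch = one full pass
    order = rng.permutation(len(X))                 # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        X_batch, y_batch = X[batch], y[batch]
        # ... compute the loss on this batch and update the model parameters ...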

Beyond considerations of working memory, there’s a second important reason to train a deep model on batches: because there are so many model parameters with so many possible configurations, and because of the way the layers of the model insulate some of the parameters from information in the training data, it’s helpful that smaller batches are “noisier” and provide more variation for the training algorithm to use to adjust the model parameters. As a physical analogy, think of the way that shaking a pan while you pour sand into it helps the sand settle into a flat surface more quickly than waiting for gravity to do the work; without shaking, you might end up with lumps and bumps.

So hopefully, by this point you can see how folding is similar to batching and how they are distinct concepts. They both similarly divide training data into segments. Folding is used in cross-validation for optimizing hyperparameters, and batching is used in training deep learning models to limit memory requirements and improve convergence for fitting model parameters.

Diller Digital offers Machine Learning for Scientists & Engineers and Deep Learning for Scientists & Engineers at least once per quarter. Sign up to join us, and bring your curiosity, questions, and toughest problems and see what you can learn! Maybe you’ll join the chorus of those who leave glowing feedback.

Meet Your Instructors Series – Tim

Hello! This is Rachel and I will be hosting a series of Q+As with your valued instructors so that you can get a glimpse into their specific career backgrounds, teaching styles and processes.

We will be starting with our President and Founder of Diller Digital, Tim Diller, pictured here.


 
1) What is your name and where are you currently located? 

My name is Tim Diller, and I live in Austin, TX with my wife Hannah and my dog Stella.  I have three adult children who are all out of the house now. 

2) How did you end up in engineering education?

For me it has been a long, winding, and nearly closed-loop path.  My reference point is my father, who has spent his entire career in academia, combining research and teaching in Biomedical Engineering at The University of Texas at Austin.  From him and others in my family, I developed a high regard for education, and from an early age I aspired to professorship, and that vision plus a deep-seated curiosity about mechanical things (especially cars and airplanes) guided my steps through high school, my bachelor’s degree in Mechanical Engineering at The University of Texas, and into the first semester of a Master’s degree program at MIT, where I hit an academic wall, nearly failing out of the program.  On academic probation, I did some deep soul searching and realized that the math-heavy robotics program I had been pursuing was not a good fit for my natural talents and inclinations.  Instead, with some guidance, I pivoted to project-based courses in manufacturing and production systems design, where I thrived.  That led me to my first “real” job, at the Michelin Americas R&C Corporation. 

My time at Michelin helped me realize a few more things:  I love the collaborative team environment, I love learning about what people are doing in industry (working for a Tier 1 supplier in the automotive industry is great for that), and I love teaching (I had the opportunity to pick up, revamp, and deliver Michelin’s course on tire performance for vehicle handling during my time there).  I also found myself gravitating to software development projects and had my first exposure to Python at that time.  I spent 5 years there before a growing sense of “unfinished business” led me to return to graduate school for a doctoral degree at The University of Texas. 

During my doctoral program, I spent a lot of time instrumenting an engine and analyzing data on exhaust gases (if you’ve taken a class from me or read some of my other posts, you’ll notice I use a lot of automotive references). I had many opportunities to teach, substituting for my advising professor from time to time and delivering guest lectures on tire performance for another professor in the department. By the time I had finished my degree and was working as a postdoctoral researcher, this was a regular occurrence. During those years, I also made the transition from MATLAB and became thoroughly hooked on Python. At the same time that I was seeking employment in academia, my love of using software for scientific computing was growing. Thus it was that during another round of academic hiring (this was during the period after the downturn in 2008), I found that Enthought was hiring, and right in Austin, where I was located.

The opportunity at Enthought included working on interesting coding and consulting projects across a broad swath of industry, working collaboratively in small teams, and teaching a 40-hr, week-long class called Python for Scientists & Engineers, which I would eventually teach over 50 times for Enthought. 

When in 2023 Enthought retired their training department during a reorganization, I founded Diller Digital to provide continuity of service to the customers they had served for decades.  I get a lot of joy working with smart, motivated engineers, scientists, and analysts to help them increase their digital skills in scientific computing. 

3) How do you stay current with the latest advancements in engineering technology and industry practices?

I read papers and a lot of tech-oriented news sources. It’s a lot of fun for me to do that. I buy (paper!) books on the topics I teach about and mark them up, code the demo examples, and play around. For example, at present (mid 2025) I’m in the middle of reading and coding my way through Sebastian Raschka’s Build a Large Language Model (From Scratch). From time to time I will do small consulting jobs to stay engaged. And it turns out I learn a lot from my students when they ask good questions. I’ve learned that often the best answer is “I don’t know, but let me look into that”, and I’ll do enough of a deep dive to get an answer the next day, but often I’ll keep going. And sometimes I’ll incorporate new material into the course based on that. Or post about it.

4) Can you describe your teaching philosophy and how it aligns with Diller Digital’s mission and values?

I believe that technology should be used to elevate the value and dignity of humanity’s work.  I also believe that a thorough understanding of fundamental principles is critical as a solid foundation for future self-learning in scientific computing and solving problems with software.  Because of that, I emphasize lots of hands-on experience, getting students to do basic things by hand and on their own before teaching them how to automate work with higher-level tools.  Although they might articulate it a little differently, this is close to the philosophy Enthought used to develop the materials we deliver at Diller Digital.  Enthought was clearly formative for me in my approach to teaching. 

5) What engineering software and tools do you have experience with, and how do you incorporate them into your teaching? 

My day-to-day coding work takes place in two contexts.  In the classroom, I use Jupyter Lab, which is just about the perfect tool for that environment— it’s simple enough to get everybody on the same platform quickly, even if someone has never used it before.  For maintaining the demos, exercises, automation scripts, or any other more-involved coding work, I’ll use VS Code with the Flake8 and Diff extensions installed. 

In the past, I liked Sublime Text because of its multiple-cursor and block-editing capabilities, and before that I was a proud (and probably obnoxious) fan of emacs, which I’ll still use on occasion when logged into a server with text-only interface.  But for that environment, I have come to appreciate the lighter-weight nano editor, which is available in pretty much every text-only environment I use these days. 

In addition to Python and the scientific computing libraries we teach, I’ve spent substantial time with MATLAB, C, LabView, and Visual Basic.  My first programming language was BASIC for the TI-99/4A, whose CALL SPRITE was the key technology that let me write my own video games. 

6) How do you balance theoretical knowledge with practical, hands-on learning in your classes? 

I try to teach in a way that theory and practice complement each other.  I use theory to explain the “Why”, and practical, hands-on learning to explain the “How”.  For example, when it comes to talking about lists and sets, the theory is important for understanding why sets have such faster look-up times, but I make sure that knowledge is accessible by demonstrating and having students follow along with %timeit commands. 

7) Can you discuss your experience with project-based learning and how you guide students through the data analysis workflow? 

As I talked about earlier, in graduate school I really struggled with theory-heavy instruction and thrived in project-based classes. In addition, I have watched my father (an engineering professor) develop a course for graduate students in designing inquiry-based instruction, which is closely related to project-based learning. Over the decades, we have had many long discussions about the subject and have developed a shared passion for it.

With that in mind, I start each class by asking students what goals they have in mind and then probing as much as time allows. On the one hand, I’m genuinely interested in finding out what people do, but on the other hand, getting a student to articulate their problem and the value they are bringing to their organization is critical to creating context for learning. That was a big part of what I did as a consultant for Enthought, and I have carried that into the classroom.

Once context is established, I assume that the students need to see and hear, to follow along on their own machine, then to do on their own and exercise recall before they’ll master a concept. We do this at multiple scales, typically building up from using and mastering data types, moving on to useful code segments, and ending with some kind of realistic capstone project that ties everything together. Each of Python Foundations, Data Analysis with Pandas, Machine Learning, and Deep Learning follows this arc, and in choosing exercises, we have worked hard to make sure the problem is simple enough to be tractable in the relatively short time we have in class yet also complex enough to provide experience that will be useful in day-to-day work.

8) What strategies do you use to assess student understanding and provide constructive feedback on their work? 

We have lots of small “Give It A Try” exercises that are designed to cement understanding and surface any confusion.  In virtual courses, it’s a bit more of a challenge, because I have to rely on self-reporting, and not everyone likes to unmute and ask a question.  For in-person classes, I walk the room during such exercises and look for the pink background of error messages.  The barrier to asking questions in that environment is lower.  But in either case, I tend to get good questions. 

The Give It A Try exercises are conducive to having students share their code, so sometimes I’ll use someone’s answer to explain the solution and ask for peer-suggestions. 

9) What strategies do you use to communicate complex engineering concepts to students with varying levels of understanding? 

This is the real challenge, because there is always a good diversity of backgrounds and experience.  One thing I do is try to provide a meta-level discussion, letting students know how important the following concept is and whether understanding it is critical or they can ignore it if needed. 

Another thing I do is to treat every question like pure gold, no matter the level.  If they ask about something fundamental I’ve already explained 5 times, great!  Because in that case, they finally have the context to get it, and by asking the question, they’ve owned the concept.  And if they ask a real stumper, and I have to do homework afterward to figure it out, that’s great too. 

Finally, I use a lot of physical analogies.  Students who have been in my classes may recall I tend to use a lot of automotive analogies, referring to “popping the hood” or talking about engines, brakes, and clutches.  But if someone mentions something like owning a hobby farm during introductions, we’ll talk about examples from the farm during class. 
 

10) What is your favorite way to spend a Saturday? Favorite meal?

My ideal Saturday starts in the yard, mowing, weeding, or trimming. Once that’s done and the house is clean, maybe my wife and I will take our dog Stella for a walk on the local greenbelt trail. After that, I work in the shop, where I like to build or restore furniture, make picture frames, or turn a bowl on my lathe. Double the points if one of my kids is working with me. If I can fire up the smoker and keep some ribs (when there’s more time) or fish (if there’s less) going while I’m in the shop, that completes the perfect Saturday.

Tim Diller with his Family

Thanks for your answers to these questions, Tim, so we can get to know you better as one of our respected instructors.

“Popping the Hood” in Python

One man holds the hood of a car open while he and his friend look at the engine together.

Last weekend found me elbow-deep in the guts of my car, re-aligning the timing chain after replacing a cam sprocket. As I reflected on the joys of working on a car with only 4 cylinders and a relatively spacious engine bay, I found myself reflecting on one of the things I love best about the Python programming language — that is the ability to proverbially “pop the hood” and see what’s going on behind the abstractions. (With a background in Mechanical Engineering, car metaphors come naturally to me.)

As an Open Source, well-documented scripting language, Python is already accessible. But there are some tools that let you get pretty deeply into the inner workings in case you want to understand how things work or to optimize performance.

Use the Source!

The first and easiest way to see what’s going on is to look at the inline help using Python’s built-in help() function, which displays the docstring using a pager. But I almost always prefer using ? and ?? in IPython or Jupyter to display just the docstring or all of the source code, if available. For example, consider the relatively simple parseaddr function from email.utils:

In [1]: import email

In [2]: email.utils.parseaddr?
Signature: parseaddr(addr, *, strict=True)
Docstring:
Parse addr into its constituent realname and email address parts.

Return a tuple of realname and email address, unless the parse fails, in
which case return a 2-tuple of ('', '').

If strict is True, use a strict parser which rejects malformed inputs.
File:      /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/email/utils.py
Type:      function

In our Python Foundations course, I can usually elicit some groans by encouraging my students to “Use the Source” with the ?? syntax, which displays the source code, if available:

In [3]: email.utils.parseaddr??
Signature: parseaddr(addr, *, strict=True)
Source:   
def parseaddr(addr, *, strict=True):
    """
    Parse addr into its constituent realname and email address parts.

    Return a tuple of realname and email address, unless the parse fails, in
    which case return a 2-tuple of ('', '').

    If strict is True, use a strict parser which rejects malformed inputs.
    """
    if not strict:
        addrs = _AddressList(addr).addresslist
        if not addrs:
            return ('', '')
        return addrs[0]

    if isinstance(addr, list):
        addr = addr[0]

    if not isinstance(addr, str):
        return ('', '')

    addr = _pre_parse_validation([addr])[0]
    addrs = _post_parse_validation(_AddressList(addr).addresslist)

    if not addrs or len(addrs) > 1:
        return ('', '')

    return addrs[0]
File:      /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/email/utils.py
Type:      function

Looking at the next-to-last line, you see there’s a path to the source code. That’s available programmatically in the module’s .__file__ attribute, so you could open and print the contents if you want. If we do that for Python’s this module, we can expose a fun little Easter Egg.

In [4]: import this
# <output snipped - but try it for yourself and see what's there.>

In [5]: with open(this.__file__, 'r') as f:
   ...:     print(f.read())
   ...: 
s = """Gur Mra bs Clguba, ol Gvz Crgref

Ornhgvshy vf orggre guna htyl.
Rkcyvpvg vf orggre guna vzcyvpvg.
Fvzcyr vf orggre guna pbzcyrk.
Pbzcyrk vf orggre guna pbzcyvpngrq.
Syng vf orggre guna arfgrq.
Fcnefr vf orggre guna qrafr.
Ernqnovyvgl pbhagf.
Fcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf.
Nygubhtu cenpgvpnyvgl orngf chevgl.
Reebef fubhyq arire cnff fvyragyl.
Hayrff rkcyvpvgyl fvyraprq.
Va gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff.
Gurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg.
Nygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu.
Abj vf orggre guna arire.
Nygubhtu arire vf bsgra orggre guna *evtug* abj.
Vs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn.
Vs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn.
Anzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!"""

d = {}
for c in (65, 97):
    for i in range(26):
        d[chr(i+c)] = chr((i+13) % 26 + c)

print("".join([d.get(c, c) for c in s]))

Another way to do this is to use the inspect module from Python’s standard library. Among its many useful functions is getsource, which returns the source code as a string:

In [6]: import inspect
In [7]: my_source_code_text = inspect.getsource(email.utils.parseaddr)

This works for libraries and functions that are written in Python, but there is a class of functions, called builtins, that are implemented in C (in the most popular implementation of Python, known as CPython). Source code is not available for those in the same way. The len function is an example:

In [8]: len??
Signature: len(obj, /)
Docstring: Return the number of items in a container.
Type:      builtin_function_or_method

For these functions, it takes a little more digging, but this is Open Source Software, so you can go to the Python source code on Github, and look in the module containing the builtins (called bltinmodule.c). Each of the builtin functions is defined there with the prefix builtin_, and the source code for len is at line 1866 (at least in Feb 2025 when I wrote this):

static PyObject *
builtin_len(PyObject *module, PyObject *obj)
/*[clinic end generated code: output=fa7a270d314dfb6c input=bc55598da9e9c9b5]*/
{
    Py_ssize_t res;

    res = PyObject_Size(obj);
    if (res < 0) {
        assert(PyErr_Occurred());
        return NULL;
    }
    return PyLong_FromSsize_t(res);
}

There you can see that most of the work is done by another function PyObject_Size(), but you get the idea, and now you know where to look.

Step by Step

To watch the Python interpreter step through the code a line at a time and explore code execution, you can use the Python Debugger pdb, or its tab-completed and syntax-colored cousin ipdb. These allow you to interact with the code as it runs and execute arbitrary code in the context of any frame of execution, including printing out the value of variables. They are the basis for most of the Python debuggers built in to IDEs like Spyder, PyCharm, or VS Code. Since they are best demonstrated live, and since we walk through their use in our Software Engineering for Scientists & Engineers class, I’ll leave it at that.

Inside the Engine

Like Java and Ruby, Python runs in a virtual machine, commonly known as the “Interpreter” or “runtime”. So in contrast to compiling code in, say, C, where the result is an executable object file consisting of system- and machine-level instructions that can be run as an application by your operating system, when you execute a script in Python, your code gets turned into bytecode. Bytecode is a set of instructions for the Python virtual machine. It’s what we would write if we were truly writing for the computer (see my comments on why you still need to learn programming).

But while it’s written for the virtual machine, it’s not entirely opaque, and it can sometimes be instructive to take a look. In my car metaphor, this is a bit like removing the valve cover and checking the timing marks inside. Usually we don’t have to worry about it, but it can be interesting to see what’s going on there, as I learned when producing an answer for a Stack Overflow question.

In the example below, we make a simple function add. The bytecode is visible in the add.__code__.co_code attribute, and we can disassemble it using the dis library and turn the bytecode into something slightly more friendly for human eyes:

In [9]: import dis
In [10]: def add(x, y):
    ...:     return x + y
    ...: 
In [11]: add.__code__.co_code
Out[11]: b'\x95\x00X\x01-\x00\x00\x00$\x00'
In [12]: dis.disassemble(add.__code__)
  1           RESUME                   0

  2           LOAD_FAST_LOAD_FAST      1 (x, y)
              BINARY_OP                0 (+)
              RETURN_VALUE

In the output of disassemble, the number in the first column is the line number in the source code. The middle column shows the bytecode instruction (see the docs for their meaning), and the right-hand side shows the arguments. For example, on line 2, LOAD_FAST_LOAD_FAST pushes references to x and y onto the stack, and the next line, BINARY_OP, executes the + operation on them.

Incidentally, if you’ve ever noticed files with the .pyc extension or folders called __pycache__ (which are full of .pyc files) in your project directory, that’s where Python stores (or caches) bytecode when a module is imported so that next time, the import is faster.
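As a small illustration of that caching (my own, using only standard-library calls and a made-up module name), you can ask Python where the cached bytecode for a source file would live, or compile it explicitly yourself:

import importlib.util
import py_compile

# Where would the cached bytecode for this (hypothetical) module live?
print(importlib.util.cache_from_source("my_module.py"))
# e.g. __pycache__/my_module.cpython-313.pyc (the tag depends on your version)

# Compile it now instead of waiting for the first import:
# py_compile.compile("my_module.py")  # writes the .pyc into __pycache__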

In Conclusion

There’s obviously a lot more to say about bytecodes, the execution stack, the memory heap, etc. But my goal here is not so much to give a lesson in computer science as to give an appreciation for the accessibility of the Python language to curious users. Much as I think it’s valuable to be able to pop the hood on your car and point to the engine, the oil dipstick, the brake fluid reservoir, and the air filter, I believe it’s valuable to understand some of what’s going on “under the hood” of the Python code you may be using for data analysis or other kinds of scientific computing.

You Still Need to Learn to Write Code in the Age of LLMs

Can we really delegate most or all of our coding tasks to LLMs? Should we tell our kids not to study computer science? What are the reasons we should still learn to write code in the age of LLMs?

There is a chorus of voices telling us that soon we will be able to hand all of our coding tasks over to an AI agent powered by an LLM. It will do all the tedious boring things for us, and we can focus on the important stuff. While the new generative AIs are amazing in their capabilities, I for one think we shouldn’t be so quick to dismiss the value of learning to code. Granted, I make my living teaching people to write software, so maybe I should call this “Why I don’t quit my job and tell everyone to use ChatGPT to write their code”, because I believe that learning to code is still necessary and good.

In early 2024 the founder and CEO of NVIDIA, Jensen Huang, participated in a discussion at the World Governments Forum that inspired countless blog posts and videos with titles like “Jensen Huang says kids shouldn’t learn to code!”. What he actually said is a bit different, but the message is essentially the same [click here to watch for yourself; it’s the last 4 minutes or so of the interview]: “It’s our job to make computing technology such that nobody has to program … and that the programming language is human. Everybody in the world is now a programmer…

Photo of Jensen Huang speaking at the 2024 World Governments Forum.

He suggests that instead of learning to code, we should focus on the Life Sciences because that’s the richest domain for discovery, and he thinks that developing nations that want to rise on the world stage should encourage their children to do the same. Oh, and he says we should build lots of infrastructure (with lots of NVIDIA chips, of course).

There is a core part of his message I actually agree with. At Diller Digital, and at Enthought where our roots are, we have always believed it’s easier to add programming skills to a scientist, engineer, or other domain expert than it is to train a computer scientist in one of the hard sciences. That’s why, if you’ve taken one of our courses, you’ve no doubt seen the graphic below, touting the scientific credentials of the development staff. And for that reason, I agree that becoming an expert in one of the natural sciences or engineering disciplines is personally and socially valuable.

Image from the About Enthought slide in Enthought Academy course material showing that 85% of the developers have advanced degrees, and 70% hold a PhD.

At Enthought, almost no one was formally trained as a developer. Most of us (including me) studied some other domain (in my case it was Mechanical Engineering, the Thermal and Fluid Sciences, and Combustion in particular) but fell in love with writing software. And although there is a special place in my heart for the BASIC I learned on my brother’s TI-99/4A, or Pascal for the Macintosh 512K my Dad brought home to use for work, or C, which I taught myself in high school and college, it was really Python that let me do useful stuff in engineering. Python has become a leading language for scientific computing, and a rich ecosystem has developed around the NumPy and SciPy packages and the SciPy conference. One of the main reasons is that it is pragmatic and easy to learn, and it is expressive for scientific computing.

And that brings me to my first beef with Huang’s message. While the idea of using “human language” (by which I believe he means the natural language we use to communicate with other humans) to write software has some appeal, it ignores the fact that we already use human language to program computers. If we were writing software using computer language, we’d be writing with 1s and 0s or hexadecimal codes. Although there are still corners of the world where specialists do that, it hasn’t been mainstream practice since the days of punch cards.

Image of human hands holding a stack of punch cards.  Image originally appears on IBM's web page describing the history of the punch card.

Modern computer languages like Python are designed to be expressive in the domain space, and they allow you to write code that is clear and unambiguous. For example, do you have any doubts about what is happening in this code snippet borrowed from the section on Naming Variables in our Software Engineering for Scientists & Engineers?

gold_watch_orders = 0
for employee in employee_list:
    gold_watch_orders += will_retire(employee.name)

Even a complete newcomer could see that we’re checking to see who’s about to retire, and we’re preparing to order gold watches. It is clearly for human consumption, but there are also decisions about data types and data structures that had to be made. The act of writing the code causes you to think about your problem more clearly. The language supports a way of thinking about the problem. When you give up learning the language, you inevitably give up learning a particular way of thinking.
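To make that concrete, here is one purely hypothetical way the pieces behind that snippet might be filled in (the Employee record, the retirement rule, and the data are all mine, invented for illustration); notice how many small decisions about types and structures it forces:

from dataclasses import dataclass

RETIREMENT_AGE = 65  # invented threshold, purely for illustration


@dataclass
class Employee:
    name: str
    age: int


employee_list = [Employee("Ada", 64), Employee("Grace", 65)]
ages_by_name = {e.name: e.age for e in employee_list}


def will_retire(name):
    """Return True if the named employee has reached retirement age."""
    return ages_by_name[name] >= RETIREMENT_AGE


gold_watch_orders = 0
for employee in employee_list:
    gold_watch_orders += will_retire(employee.name)  # True adds 1, False adds 0

print(gold_watch_orders)  # 1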

This brings me to my second beef with the idea that we don’t need to learn programming. Using what Huang calls a “human language” in fact devolves pretty quickly into an exercise called “prompt engineering”, where the new skill is knowing how to precisely specify your goal to a generative model using a language that is not really designed for that. You end up needing to work through another layer of abstraction that doesn’t necessarily help. Or that is useful right up to the point where it isn’t, and then you’re stuck.

I often point my students to an article by Joel Spolsky called “The Law of Leaky Abstractions“, in which the author talks about “what computer scientists like to call an abstraction: a simplification of something much more complicated that is going on under the covers.” His point is that abstractions are useful and allow us to do all sorts of amazing things, like send messages across the internet, or, to our point, use a chat agent to write code. His central premise is that there is no perfect abstraction.

All non-trivial abstractions, to some degree, are leaky.

Joel Spolsky

By that, he means that eventually the abstraction fails, and you are required to understand what’s going on beneath the abstraction to solve some tricky problem that eventually emerges. By the time he wrote the article in 2002, there was already a long history of code generation tools attempting to abstract away the complexity of getting a computer to do the thing you want to do. But inevitably, the abstraction fails, and to move forward you have to understand what’s going on behind the abstraction.

… the abstractions save us time working, but they don’t save us time learning.
And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.

Joel Spolsky

For example, I’m grateful for the WYSIWYG editor WordPress provides for producing this blog post, but without understanding the underlying HTML tags it produces and the CSS it relies on, I’d be frustrated by some of the formatting problems I’ve had to solve. The WYSIWYG abstraction leaks, so I learn how HTML works and how to find the offending CSS class, and that makes solving the image alignment problem much, much easier.

But it’s not only the utility of the tool. There’s a cognitive benefit to learning to code. In my life as a consultant for Enthought, and especially during my tenure as a Director of Digital Transformation Services, I would frequently recommend that Managers, and even sometimes Directors, take our Python Foundations for Scientists & Engineers, not because they needed to learn to code, but because they needed to learn how to think about what software can and can’t do. And with Diller Digital, the story is the same. Especially in the Machine Learning and Deep Learning classes, managers join because they want to know how it works, what’s hype and what’s real, and they want to know how to think about the class of problems those technologies address. People are learning to code as a way of learning how to think about problems.

The best advice I’ve heard says this:

Learn to code manually first, then use a tool to save time.

I’ll say in summary, the best reason to learn to code, especially for the scientists, engineers, and analysts who take our classes, is that you are learning how to solve problems in a clear, unambiguous way. And even more so, you learn how to think about a problem, what’s possible, and what’s realistic. Don’t give that up. See also this article by Nathan Anacone.

What do you think? Let me know in the comments.

Managing Pandas’ deprecation of the Series first() and last() methods.

Have you stumbled across this warning in your code after updating Pandas: “FutureWarning: last is deprecated and will be removed in a future version. Please create a mask and filter using `.loc` instead“? In this post, we’ll explore how that method works and how to replace it.

I’ve always loved the use-case driven nature of methods and functions in the Pandas library. Pandas is such a workhorse in scientific computing in Python, particularly when it comes to timeseries data and calendar-labeled data. So it was with a touch of frustration and puzzlement that I discovered that the last() method had been deprecated, and its removal from Pandas’ Series and DataFrame types is planned. In the Data Analysis with Pandas course, we used to have an in-class exercise where we recommended getting the last 4 weeks’ data using something like this:

In [1]: import numpy as np
   ...: import pandas as pd
   ...: rng = np.random.default_rng(42)
   ...: measurements = pd.Series(
   ...:     data=np.cumsum(rng.choice([-1, 1], size=350)),
   ...:     index=pd.date_range(
   ...:         start="01/01/2025",
   ...:         freq="D",
   ...:         periods=350,
   ...:     ),
   ...: )
In [2]: measurements.last('1W')
<ipython-input-5-ec16e51fe7ce>:1: FutureWarning: last is deprecated and will be removed in a future version. Please create a mask and filter using `.loc` instead
  measurements.last('1W')
Out[2]:
2025-12-15   -7
2025-12-16   -8
Freq: D, dtype: int64

This has the really useful behavior of selecting data based on where it falls in a calendar period. Thus the command above returns the two elements from our Series that fall in the last calendar week, which (following the ISO convention) begins on Monday, Dec 15.

The deprecation warning says “FutureWarning: last is deprecated and will be removed in a future version. Please create a mask and filter using .loc instead.” Because .last() is a useful feature, I wanted to take a closer look to see if I could understand what’s going on and what the best way to replace it would be.

Poking into the code a bit, we can see that the .last() method is a convenience function that uses pd.tseries.frequencies.to_offset() to turn '1W' (technically a period designation) into an offset, which is subtracted from the last element of the DatetimeIndex, yielding the starting point for a slice on the index. From the definition of last:

 ...
    offset = to_offset(offset)

    start_date = self.index[-1] - offset
    start = self.index.searchsorted(start_date, side="right")
    return self.iloc[start:]

Note that side='right' in searchsorted() finds the first index greater than start_date. We could wrap all of this into an equivalent statement that yields no FutureWarning thus:

In [3]: start = measurements.index[-1] - pd.tseries.frequencies.to_offset('1W')
In [4]: measurements.loc[measurements.index > start]
Out[4]:
2025-12-15   -7
2025-12-16   -8
Freq: D, dtype: int64

There’s a better option, though, which is to use pd.DateOffset. It’s a top-level import, and it gives you control over when the week starts, which to_offset does not. Remember we are using ISO standards, so Monday is day 0:

In [5]: start = measurements.index[-1] - pd.DateOffset(weeks=1, weekday=0)
In [6]: measurements.loc[measurements.index > start]
Out[6]:
2025-12-15   -7
2025-12-16   -8
Freq: D, dtype: int64

Slicing also works, even if the start point doesn’t coincide with a location in the index. Mixed offset specifications are possible, too:

In [7]: measurements.loc[measurements.index[-1] - pd.DateOffset(days=1, hours=12):]
Out[7]:
2025-12-15   -7
2025-12-16   -8
Freq: D, dtype: int64

The strength of pd.DateOffset is that it is calendar aware, so you can specify the day of the month, for example:

In [8]: measurements.loc[measurements.index[-1] - pd.DateOffset(day=13):]
Out[8]:
2025-12-13   -7
2025-12-14   -6
2025-12-15   -7
2025-12-16   -8
Freq: D, dtype: int64

There’s also the non-calendar-aware pd.Timedelta you can use to count back a set time period without taking day-of-week or day-of-month into account. Note: as with all label-based slicing in Pandas (anything that goes through .loc), it is endpoint inclusive, so 1 week yields 8 days’ measurements:

In [9]: measurements.loc[measurements.index[-1] - pd.Timedelta(weeks=1):]
Out[9]:
2025-12-09   -9
2025-12-10   -8
2025-12-11   -7
2025-12-12   -8
2025-12-13   -7
2025-12-14   -6
2025-12-15   -7
2025-12-16   -8
Freq: D, dtype: int64

You may have noticed I prefer slicing notation, whereas the deprecation message suggests using a mask array. There’s a performance advantage to using slicing, and the notation is more compact than the mask expression, though not as compact as the .last() method. In IPython or Jupyter, we can use %timeit to quantify the difference:

In [10]: %timeit measurements.loc[measurements.index[-1] - pd.DateOffset(day=13):]
45.7 μs ± 2.36 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [11]: %timeit measurements.last('4D')
56.3 μs ± 14.9 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [12]: %timeit measurements.loc[measurements.index >= measurements.index[-1] - pd.DateOffset(day=13)]
89.2 μs ± 6.31 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

After spending some time with git blame and the pandas-dev source code repository, I found that the reasons for the deprecation of the first and last methods make sense:

  • there is unexpected behavior when passing certain kinds of offsets
  • they don’t behave analogously to SeriesGroupBy.first() and SeriesGroupBy.last()
  • they don’t respect time zones properly
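If you use this pattern often, nothing stops you from wrapping the replacement in a small helper of your own. Here is a minimal sketch (last_period is my name, not part of Pandas), built on the same to_offset logic shown in the definition of last above; the caveats that motivated the deprecation still apply to it:

from pandas.tseries.frequencies import to_offset


def last_period(obj, offset):
    """Return the tail of a datetime-indexed Series or DataFrame,
    like the deprecated .last(), using a positional slice on the index."""
    start = obj.index[-1] - to_offset(offset)
    start_pos = obj.index.searchsorted(start, side="right")
    return obj.iloc[start_pos:]


# e.g. last_period(measurements, '1W') should match measurements.last('1W')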

Hopefully this has been a useful exploration of pd.Series.last (and .first), their deprecation, and how to replace them in your code with the more-explicit and better-defined masks and slices. Happy Coding!