NumPy Indexing — the Lists and Tuples Gotcha

In a recent session of Python Foundations for Scientists & Engineers, a question came up about indexing a NumPy ndarray. Beyond getting and setting single values, NumPy enables some powerful efficiencies through slicing, which produces views of an array’s data without copying, and fancy indexing, which allows use of more-complex expressions to extract portions of arrays. We have written on the efficiency of array operations, and the details of slicing are pretty well covered, from the NumPy docs on slicing, to this chapter of “Beautiful Code” by the original author of NumPy, Travis Oliphant.

Slicing is pretty cool because it allows fast, efficient computation of things like finite differences for, say, computing numerical derivatives. Recall that the derivative of a function describes the change in one variable with respect to another:

\frac{dy}{dx}

And in numerical computations, we can use a discrete approximation:

\frac{dy}{dx} \approx \frac{\Delta y}{\Delta x}

And to find the derivative at any particular location i, you compute the ratio of differences:

\frac{\Delta y}{\Delta x}\big|_i = \frac{y_{i+1} - y_{i}}{x_{i+1} - x_{i}}

NumPy allows you to use slicing to avoid setting up costly-for-Python for loops by specifying start, stop, and step values in the array indices. This lets you subtract all of the i elements from the i+1 elements at the same time by specifying one slice that starts at element 1 and goes to the end (the i+1 indices), and another that starts at 0 and goes up to but not including the last element. No copies are made during the slicing operations. I use examples like this to show how you can get two, and sometimes three or more, orders of magnitude speedup over the same operation written with for loops.

>>> import numpy as np

>>> x = np.linspace(-np.pi, np.pi, 101)
>>> y = np.sin(x)

>>> dy_dx = (
...     (y[1:] - y[:-1]) /
...     (x[1:] - x[:-1])
... )
>>> np.sum(dy_dx - np.cos(x[:-1] + (x[1]-x[0]) / 2))  # compare to cos(x)
np.float64(-6.245004513516506e-16)  # This is pretty close to 0
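For comparison, here's a sketch (mine, not from the course materials) of the for-loop version that the slicing replaces, along with a check that the two agree:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 101)
y = np.sin(x)

# Loop version: one Python-level iteration per element
dy_dx_loop = np.empty(len(x) - 1)
for i in range(len(x) - 1):
    dy_dx_loop[i] = (y[i + 1] - y[i]) / (x[i + 1] - x[i])

# Sliced version: a single vectorized expression, no Python loop
dy_dx_sliced = (y[1:] - y[:-1]) / (x[1:] - x[:-1])

print(np.allclose(dy_dx_loop, dy_dx_sliced))  # True
```

Timing the two with %timeit in IPython is a nice way to see the speedup on your own machine.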

Fancy indexing is also well documented (though the NumPy docs now use the more staid term “Advanced Integer Indexing”), but I wanted to draw attention to a “Gotcha” that has bitten me a couple of times. With fancy indexing, you can either make a mask of Boolean values, typically using some kind of Boolean expression:

>>> a = np.arange(10)
>>> evens_mask = a % 2 == 0
>>> odds_mask = a % 2 == 1
>>> print(a[evens_mask])
[0 2 4 6 8]

>>> print(a[odds_mask])
[1 3 5 7 9]
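Masks get even more useful when you combine them with NumPy's element-wise Boolean operators, or use them to assign values in place. A small sketch of mine:

```python
import numpy as np

a = np.arange(10)
evens_mask = a % 2 == 0

# Combine conditions element-wise with & (and), | (or), ~ (not).
# The parentheses are required: & binds more tightly than comparisons.
big_evens = a[(a % 2 == 0) & (a > 4)]
print(big_evens)  # [6 8]

# Masks also work on the left-hand side for in-place assignment
b = a.copy()
b[evens_mask] = -1
print(b)  # [-1  1 -1  3 -1  5 -1  7 -1  9]
```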

Or you can specify the indices you want directly, using tuples or lists, and this is the Gotcha: the behavior is different depending on which you use. Let’s construct an example like one we use in class. We’ll make a 2-D array b and construct a positional fancy index that specifies the elements on a diagonal. Notice that it’s a tuple, as shown by the parentheses and comma, and each element is a list of coordinates into the array.

>>> b = np.arange(25).reshape(5, 5)
>>> print(b)
[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13 14]
 [15 16 17 18 19]
 [20 21 22 23 24]]
>>> upper_diagonal = (
...     [0, 1, 2, 3],  # row indices
...     [1, 2, 3, 4],  # column indices
... )
>>> print(b[upper_diagonal])
[ 1  7 13 19]
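To convince yourself of what the tuple form does, note that it pairs the row and column lists, so it's equivalent to looking up each (row, column) coordinate one at a time (the names here are mine):

```python
import numpy as np

b = np.arange(25).reshape(5, 5)
rows = [0, 1, 2, 3]
cols = [1, 2, 3, 4]

fancy = b[rows, cols]  # same as b[(rows, cols)]

# Equivalent pair-wise lookups: b[0, 1], b[1, 2], b[2, 3], b[3, 4]
pairwise = np.array([b[i, j] for i, j in zip(rows, cols)])

print(fancy)                             # [ 1  7 13 19]
print(np.array_equal(fancy, pairwise))   # True
```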

In this case, the tuple has as many elements as there are dimensions, and each element is a list (or tuple, or array) of the indices into that dimension. So in the example above, the first element comes from b[0, 1], the second from b[1, 2], and so on, pair-wise through the lists of indices. The result is substantially different if you try to construct a fancy index from a list instead of a tuple:

>>> upper_diagonal_list = [
...     [0, 1, 2, 3],
...     [1, 2, 3, 4]
... ]
>>> b_with_a_list = b[upper_diagonal_list]
>>> print(b_with_a_list)
[[[ 0  1  2  3  4]
  [ 5  6  7  8  9]
  [10 11 12 13 14]
  [15 16 17 18 19]]

 [[ 5  6  7  8  9]
  [10 11 12 13 14]
  [15 16 17 18 19]
  [20 21 22 23 24]]]

What just happened?? In many places, lists and tuples behave similarly, but not here. With the list version, NumPy treats the whole nested list as a single integer-array index into the first axis, so we’re selecting (and repeating) whole rows. Look at the shape of b_with_a_list:

>>> print(b_with_a_list.shape)
(2, 4, 5)

Notice that its dimension 0 has 2 elements, which is the same as the number of items in upper_diagonal_list. Notice that dimension 1 has 4 elements, corresponding to the size of each element in upper_diagonal_list. Then notice that dimension 2 matches the size of the rows of b, and hopefully it will be clear what’s happening. With upper_diagonal_list we’re constructing a new array by specifying the rows to use, so the first element of b_with_a_list (seen as the first block above) consists of rows 0, 1, 2, and 3 from b, and the second element is the rows given by the second element of upper_diagonal_list. Let’s print it again with comments:

>>> print(b_with_a_list)
[[[ 0  1  2  3  4]   # b[0] \
  [ 5  6  7  8  9]   # b[1]  | indices are first element of
  [10 11 12 13 14]   # b[2]  | upper_diagonal_list
  [15 16 17 18 19]]  # b[3] /

 [[ 5  6  7  8  9]   # b[1] \
  [10 11 12 13 14]   # b[2]  | indices are second element of
  [15 16 17 18 19]   # b[3]  | upper_diagonal_list
  [20 21 22 23 24]]] # b[4] /
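If you find yourself holding a list like this but want the pair-wise (diagonal) behavior, one simple remedy (a sketch of mine, not from the class materials) is to convert it to a tuple before indexing:

```python
import numpy as np

b = np.arange(25).reshape(5, 5)
upper_diagonal_list = [
    [0, 1, 2, 3],
    [1, 2, 3, 4],
]

# As a list: treated as one integer-array index along axis 0 (rows)
print(b[upper_diagonal_list].shape)   # (2, 4, 5)

# As a tuple: one index per dimension, paired element-wise
print(b[tuple(upper_diagonal_list)])  # [ 1  7 13 19]
```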

Forgetting this convention has bitten me more than once, so I hope this explanation helps you resolve some confusion if you should ever run into it.

On Software Craftsmanship

Last week I found myself engaged with a group of students from Los Alamos National Laboratory in our Software Engineering for Scientists & Engineers class, known informally as Software Craftsmanship. Apart from the epic New Mexico skies, the grand vistas, and the welcome relief from the heat and humidity of my hometown of Austin, what I particularly loved about the week was the focus on craftsmanship.

This class had a high proportion of people I’d worked with before on Python programming, data analysis, and/or machine learning, so it was easy to build rapport. Questions and dialog flowed easily. One student had this to say:

The interactivity of the in-person class, paired with the detailed course slides, was very effective. The source control (git), readable code, refactoring, and unit testing sections were all very useful and will be directly impactful to my work. There were multiple instances throughout the week where I learned something that would have saved me significant time on a problem I had encountered within the last 6 months.

One of the things we cover in the class is code review, the practice of submitting your code for review and critique before it’s accepted into a project, in some ways similar to the academic peer-review process. At Diller Digital, we try to model this process by submitting and responding to feedback on the course materials. In response to a session of Software Engineering earlier this year, students suggested we cover source control and the details of git at the start of the class and then use it in a workflow typical of small teams in an R&D environment. Diller Digital has a git server (powered by Gitea, a close analog to GitHub and GitLab), and we created a class repository and developed a couple of small libraries that can serve as best-practice examples of variable naming, use of the logging library, Sphinx-ready documentation, unit testing, and packaging using standard tooling. One of the many jokes about git is that you can learn how to do 90% of what you’ll need while understanding only 10% of what’s actually going on. I’m not sure about the numbers there, but I do know that using and practicing what you’ve learned makes all the difference.

The in-person, instructor-led format makes engagement much easier and lowers the barriers to asking questions and providing individualized help. But one of the important principles behind that is the role of effortful thinking in learning. I like the way Derek Muller (of Veritasium fame) explains in this video how we have two systems in our brain: one fast, for the instinctive, rapid-fire processing of the kind you’re using to parse the words on this page, and one slow, the effortful, brain-taxing system required for understanding something.

It’s probably that effortful system you’re using to try to understand my point, and you’ll surely use it when trying to tell whether 437 is evenly divisible by 7 in your head. It’s not quite as simple as two distinct systems, as the author of that idea, Daniel Kahneman, makes clear in his book Thinking, Fast and Slow, but it gives us a useful mental model for talking about software craftsmanship and why we teach the way we do at Diller Digital. One of the main takeaways is that effortful thinking is necessary for learning, but not all effortful thinking results in useful learning.

One of the first ideas we introduce in Software Engineering is that of cognitive load and its management. Cognitive load is a measure of effortful thinking: it’s the effort required to understand something, and we would like that effort to be spent on important things like the business logic of an algorithm and not on trivial things like indentation and syntax. That’s the purpose of using a coding standard. Once your brain gets used to seeing code that’s formatted in a common way (for Python, that’s embodied in PEP8), the syntax becomes transparent (it’s handed off to the fast-thinking part of our brain), and you can see through it to the logic of the code and spend your effort understanding that. Code that’s not formatted that way introduces a small cognitive tax on each line that adds up to measurable fatigue over time. If you want an example of that kind of fatigue, try this little game.
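To make that concrete, here is a contrived before-and-after of my own, doing the same temperature conversion both ways:

```python
# Cognitively taxing: cramped spacing, opaque names
def f(x,y):return(x*9/5)+32 if y=='f' else x

# Easier on the reader: PEP8 spacing and descriptive names
def convert_temperature(celsius, target_scale):
    """Convert a Celsius temperature to Fahrenheit if requested."""
    if target_scale == 'fahrenheit':
        return celsius * 9 / 5 + 32
    return celsius

print(convert_temperature(100, 'fahrenheit'))  # 212.0
```

Both functions do the same thing, but only one of them lets your fast system skate over the syntax and get straight to the logic.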

So managing cognitive load informs choices of layout, use of white space, and the naming of Python objects, and this is one of the important things we teach in Software Engineering. But it also informs the way we design our courses. We introduce ideas, demonstrate them, and then have our students spend effort internalizing them, first in a simple “Give It A Try” exercise and eventually in a comprehensive exercise. The goal is to direct our students’ effort to increasingly independent tasks, in what is sometimes called a “fading scaffold”: early effort is guided closely, and in later efforts students have more room to make and recover from mistakes. This is also the thinking behind the “Live Coding” scripts present in some courses, where demos and exercises are already set up, and the student only has to focus on the learning goal and not on typing all of the supporting code around it. These have proven especially popular in our Machine Learning and Deep Learning classes.

This also suggests a strategy for the effective use of Large Language Models for coding: use them to reduce effort where it’s not critical to gain understanding or build a skill, but don’t let them replace effortful thinking where it counts most, in learning and in crafting your scientific, engineering, or other analytical workflow. And if you want a guide in your learning journey, we’re here to help. Click here for the course schedule.

I have taken four courses with Diller Digital and this [Software Engineering] is by far the most useful one. Many of us have learned programming as a need to do research, but we do not have any formal background in computational programming. I think this course takes basic Python programming skills to a more formal level, aligned with people with programming background allowing us to improve the quality of code we produce, the efficiency in the implementation and collaboration. 
Also, hosting the course in person made a big difference for me. I was easily engaged the entire day, the exercises and the possibility to ask in person made the entire course smoother.

I think this course material is incredibly helpful for people who don’t have professional software engineering experience. Of all the courses I took from Diller Digital, I found this the most foundational and immediately useful.

Meet Your Instructors Series – Logan

This month we are featuring our knowledgeable instructor, Logan Thomas.

1) What is your name and where are you currently located?

My name is Logan Thomas, and I’m currently based in Austin, Texas.

2) How did you end up in engineering education?

I came into engineering education through a natural progression from industry into teaching. I started my career applying machine learning and data science in domains ranging from digital media to mechanical engineering to biotech. Along the way, I discovered how much I enjoy breaking down complex concepts and helping others level up their skills. This led me to teaching roles at Enthought and mentoring opportunities through conferences like SciPy, where I’ve served as the Tutorials Chair. I love helping others build confidence in technical topics.

3) How do you stay current with the latest advancements in engineering technology and industry practices?

I stay up-to-date through a combination of hands-on work, professional communities, and continuous learning. I regularly contribute to and attend conferences like SciPy, stay active on GitHub, and follow key publications and blogs in data science, machine learning, and software engineering. I also enjoy experimenting with new tools and libraries in side projects and applying them in my role as a Data Science Software Engineer at Fullstory.

4) Can you describe your teaching philosophy and how it aligns with Diller Digital’s mission and values?

My teaching philosophy is rooted in curiosity, empathy, and empowerment. I believe the best learning happens when students feel safe to ask questions, explore, and make mistakes. I aim to connect abstract concepts to real-world problems and encourage students to become confident problem-solvers. I bring not just technical depth but a coaching mindset that helps learners develop independence.

5) What engineering software and tools do you have experience with, and how do you incorporate them into your teaching?

I have extensive experience with Python, TensorFlow, PyTorch, PySpark, and data science libraries like numpy, pandas, scikit-learn, and matplotlib. I’ve also worked with engineering tools like MATLAB, Abaqus, and simulation platforms during my earlier mechanical engineering roles. In teaching, I use these tools to build hands-on labs and project workflows that mirror industry applications—for example, guiding students through feature engineering in Python or designing reproducible machine learning pipelines.

6) How do you balance theoretical knowledge with practical, hands-on learning in your classes?

I try to lead with intuition, then reinforce with both theory and practice. I introduce concepts through stories or visuals, connect them to math and science fundamentals, and then move into code or simulation exercises. I often use real-world datasets and scenarios to bridge the gap between textbook theory and professional problem-solving.

7) Can you discuss your experience with project-based learning and how you guide students through the data analysis workflow?

Project-based learning is at the core of how I teach. I’ve led corporate hackathons, taught project-based machine learning courses, and mentored students through the entire data science lifecycle—from framing the problem and wrangling data to building models and interpreting results. I emphasize documentation, version control, and modular design to instill good engineering habits while keeping things collaborative and fun.

8) What strategies do you use to assess student understanding and provide constructive feedback on their work?

I focus heavily on active engagement and asking probing questions to gauge where students are in their understanding. During live coding sessions, I pause frequently to ask why a certain approach might work or what might happen if we changed a parameter—this helps surface both strengths and misconceptions in real time. I also use “coding karaoke,” where students follow along and fill in missing pieces of code to reinforce concepts and promote deeper learning. These interactive techniques give me a window into their thought process, which is often more insightful than a finished project. When giving feedback, I keep it specific, kind, and actionable—usually highlighting what they did well and nudging them to reflect on one or two key areas to improve. I also encourage self-assessment and goal-setting to build metacognitive skills and confidence over time.

9) What strategies do you use to communicate complex engineering concepts to students with varying levels of understanding?

I rely on analogies, visual aids, interactive demos, and scaffolding. I check in often, ask open-ended questions, and adjust based on the group’s energy and comprehension. I also try to normalize “not knowing”—creating an environment where curiosity is more important than correctness. Teaching, for me, is more about coaching than lecturing.

10) What is your favorite way to spend a Saturday? Favorite meal?

My favorite Saturday is one where I get some good coffee, play outside with my wife and two boys, and maybe sneak in a run or catch a baseball game. In the evening, nothing beats a homemade meal—especially if I can grill it in the backyard with friends and family around.

Logan and family in Austin

Thanks for your answers to these questions, Logan, so we can get to know you better as one of our respected instructors.

“Popping the Hood” in Python


Last weekend found me elbow-deep in the guts of my car, re-aligning the timing chain after replacing a cam sprocket. As I reflected on the joys of working on a car with only 4 cylinders and a relatively spacious engine bay, my mind turned to one of the things I love best about the Python programming language: the ability to proverbially “pop the hood” and see what’s going on behind the abstractions. (With a background in Mechanical Engineering, car metaphors come naturally to me.)

As an open-source, well-documented, interpreted language, Python is already accessible. But there are some tools that let you get pretty deep into the inner workings, in case you want to understand how things work or to optimize performance.

Use the Source!

The first and easiest way to see what’s going on is to look at the inline help using Python’s built-in help() function, which displays the docstring in a pager. But I almost always prefer using ? and ?? in IPython or Jupyter to display just the docstring or all of the source code, if available. For example, consider the relatively simple parseaddr function from email.utils:

In [1]: import email

In [2]: email.utils.parseaddr?
Signature: parseaddr(addr, *, strict=True)
Docstring:
Parse addr into its constituent realname and email address parts.

Return a tuple of realname and email address, unless the parse fails, in
which case return a 2-tuple of ('', '').

If strict is True, use a strict parser which rejects malformed inputs.
File:      /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/email/utils.py
Type:      function

In our Python Foundations course, I can usually elicit some groans by encouraging my students to “Use the Source” with the ?? syntax, which displays the source code, if available:

In [3]: email.utils.parseaddr??
Signature: parseaddr(addr, *, strict=True)
Source:   
def parseaddr(addr, *, strict=True):
    """
    Parse addr into its constituent realname and email address parts.

    Return a tuple of realname and email address, unless the parse fails, in
    which case return a 2-tuple of ('', '').

    If strict is True, use a strict parser which rejects malformed inputs.
    """
    if not strict:
        addrs = _AddressList(addr).addresslist
        if not addrs:
            return ('', '')
        return addrs[0]

    if isinstance(addr, list):
        addr = addr[0]

    if not isinstance(addr, str):
        return ('', '')

    addr = _pre_parse_validation([addr])[0]
    addrs = _post_parse_validation(_AddressList(addr).addresslist)

    if not addrs or len(addrs) > 1:
        return ('', '')

    return addrs[0]
File:      /Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/email/utils.py
Type:      function

Looking at the next-to-last line, you see there’s a path to the source code. That’s available programmatically in the module’s .__file__ attribute, so you could open and print the contents if you want. If we do that for Python’s this module, we can expose a fun little Easter Egg.

In [4]: import this
# <output snipped - but try it for yourself and see what's there.>

In [5]: with open(this.__file__, 'r') as f:
   ...:     print(f.read())
   ...: 
s = """Gur Mra bs Clguba, ol Gvz Crgref

Ornhgvshy vf orggre guna htyl.
Rkcyvpvg vf orggre guna vzcyvpvg.
Fvzcyr vf orggre guna pbzcyrk.
Pbzcyrk vf orggre guna pbzcyvpngrq.
Syng vf orggre guna arfgrq.
Fcnefr vf orggre guna qrafr.
Ernqnovyvgl pbhagf.
Fcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf.
Nygubhtu cenpgvpnyvgl orngf chevgl.
Reebef fubhyq arire cnff fvyragyl.
Hayrff rkcyvpvgyl fvyraprq.
Va gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff.
Gurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg.
Nygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu.
Abj vf orggre guna arire.
Nygubhtu arire vf bsgra orggre guna *evtug* abj.
Vs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn.
Vs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn.
Anzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!"""

d = {}
for c in (65, 97):
    for i in range(26):
        d[chr(i+c)] = chr((i+13) % 26 + c)

print("".join([d.get(c, c) for c in s]))

Another way to do this is to use the inspect module from Python’s standard library. Among its many useful functions is getsource, which returns the source code as a string:

In [6]: import inspect
In [7]: my_source_code_text = inspect.getsource(email.utils.parseaddr)

This works for libraries and functions that are written in Python, but there is a class of functions, called builtins, that are implemented in C (in the most popular implementation of Python, known as CPython). Source code is not available for those in the same way. The len function is an example:

In [8]: len??
Signature: len(obj, /)
Docstring: Return the number of items in a container.
Type:      builtin_function_or_method

For these functions, it takes a little more digging, but this is Open Source Software, so you can go to the Python source code on GitHub and look in the module containing the builtins (called bltinmodule.c). Each of the builtin functions is defined there with the prefix builtin_, and the source code for len is at line 1866 (at least in Feb 2025 when I wrote this):

static PyObject *
builtin_len(PyObject *module, PyObject *obj)
/*[clinic end generated code: output=fa7a270d314dfb6c input=bc55598da9e9c9b5]*/
{
    Py_ssize_t res;

    res = PyObject_Size(obj);
    if (res < 0) {
        assert(PyErr_Occurred());
        return NULL;
    }
    return PyLong_FromSsize_t(res);
}

There you can see that most of the work is done by another function, PyObject_Size(), but you get the idea, and now you know where to look.

Step by Step

To watch the Python interpreter step through the code a line at a time and explore code execution, you can use the Python debugger pdb, or its tab-completed and syntax-colored cousin ipdb. These allow you to interact with the code as it runs and execute arbitrary code in the context of any frame of execution, including printing out the values of variables. They are the basis for most of the Python debuggers built into IDEs like Spyder, PyCharm, and VS Code. Since they are best demonstrated live, and since we walk through their use in our Software Engineering for Scientists & Engineers class, I’ll leave it at that.
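I will offer one teaser, though: the built-in breakpoint() function (Python 3.7 and later) drops you into pdb wherever you place it, after which commands like n (next), s (step), p <expr> (print), and c (continue) let you walk through the code. A minimal sketch of mine:

```python
def average(values):
    """Average a sequence of numbers."""
    total = sum(values)
    # breakpoint()  # uncomment to pause here; try `p total`, then `c`
    return total / len(values)

print(average([1, 2, 3]))  # 2.0
```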

Inside the Engine

Like Java and Ruby, Python runs in a virtual machine, commonly known as the “Interpreter” or “runtime”. So in contrast to compiling code in, say, C, where the result is an executable object file consisting of system- and machine-level instructions that can be run as an application by your operating system, when you execute a script in Python, your code gets turned into bytecode. Bytecode is a set of instructions for the Python virtual machine. It’s what we would write if we were truly writing for the computer (see my comments on why you still need to learn programming).

But while it’s written for the virtual machine, it’s not entirely opaque, and it can sometimes be instructive to take a look. In my car metaphor, this is a bit like removing the valve cover and checking the timing marks inside. Usually we don’t have to worry about it, but it can be interesting to see what’s going on there, as I learned when producing an answer to a Stack Overflow question.

In the example below, we make a simple function add. The bytecode is visible in the add.__code__.co_code attribute, and we can disassemble it using the dis library and turn the bytecode into something slightly more friendly for human eyes:

In [9]: import dis
In [10]: def add(x, y):
    ...:     return x + y
    ...: 
In [11]: add.__code__.co_code
Out[11]: b'\x95\x00X\x01-\x00\x00\x00$\x00'
In [12]: dis.disassemble(add.__code__)
  1           RESUME                   0

  2           LOAD_FAST_LOAD_FAST      1 (x, y)
              BINARY_OP                0 (+)
              RETURN_VALUE

In the output of disassemble, the number in the first column is the line number in the source code. The middle column shows the bytecode instruction (see the docs for their meanings), and the right-hand side shows the arguments. For example, on line 2, LOAD_FAST_LOAD_FAST pushes references to x and y onto the stack, and the next line, BINARY_OP, executes the + operation on them.

Incidentally, if you’ve ever noticed files with the .pyc extension or folders called __pycache__ (which are full of .pyc files) in your project directory, that’s where Python stores (or caches) bytecode when a module is imported so that next time, the import is faster.
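You can trigger that caching yourself with the standard library’s py_compile module and see exactly where the bytecode lands (the precise file name encodes your interpreter version):

```python
import pathlib
import py_compile
import tempfile

# Write a tiny module, byte-compile it, and see where the .pyc lands
with tempfile.TemporaryDirectory() as tmp:
    module_path = pathlib.Path(tmp) / "hello.py"
    module_path.write_text("GREETING = 'hi'\n")

    pyc_path = py_compile.compile(str(module_path))
    print(pyc_path)  # ends in __pycache__/hello.cpython-XY.pyc
```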

In Conclusion

There’s obviously a lot more to say about bytecodes, the execution stack, the memory heap, etc. But my goal here is not so much to give a lesson in computer science as to give an appreciation for the accessibility of the Python language to curious users. Much as I think it’s valuable to be able to pop the hood on your car and point to the engine, the oil dipstick, the brake fluid reservoir, and the air filter, I believe it’s valuable to understand some of what’s going on “under the hood” of the Python code you may be using for data analysis or other kinds of scientific computing.