On Python environments, pip, and uv

Often at the start of classes I teach, I get a question about virtual environments — why do we use them? What are they? How do they work? What is your opinion about uv (or the latest packaging tool)? And as I may have mentioned before on this blog, I enjoy a good question, so I thought I’d share a bit on why we do what we do with virtual environments at Diller Digital.

Virtual Environments

First of all, what is a virtual environment? For most applications on a computer or mobile device, you just install it once to get started and then download periodic updates to keep it current. Creative suites with lots of modules and extensions like Photoshop have a single executable application with a single collection of libraries that extend their capabilities. The MathWorks’ MATLAB works this way, too, with a single runtime interpreter and a collection of libraries curated by The MathWorks. In this kind of monolithic setup, the publisher of the software implicitly guarantees the interoperability of the component parts by controlling the software releases and the conditions of whatever extension marketplace they offer.

The world of Python is different: there is no single entity curating the extensions; different libraries are added for different use cases, resulting in different sets of dependencies for different applications; and there is no guarantee of backward compatibility. In fact, managing dependencies and preserving working environments is part of the price to pay for Free Open Source Software (FOSS). And that’s where virtual environments come into play. If the world of FOSS can feel like the Wild West, then think of virtual environments as a way to make a safe “corral” where you have a Python interpreter (runtime) and a set of libraries that work together and can control when and if they are updated so they don’t break code that is working.

As a practical example, let’s consider the TensorFlow library, which Diller Digital uses in one version of our Deep Learning for Scientists & Engineers course (we also provide a version that uses PyTorch). TensorFlow has components that are compiled in C/C++, and when Python 3.12 was released in October 2023, it included several internal changes that caused many C-extension builds to fail, including TensorFlow’s. At that time, the latest version of TensorFlow was 2.14, and it could only be run with Python versions between 3.8 and 3.11. It was early 2024 before TensorFlow 2.16 was released, which finally enabled use of Python 3.12. By that time, Python 3.13 was being released, and the cycle started again. Thus, users of TensorFlow have specific version requirements for their Python runtime, and those requirements sometimes vary by hardware.

Consider, though, that a user of TensorFlow may also have another unrelated project that could really benefit from a recent release of another library, say Pandas, that is only available (or only provides wheels for) newer versions of Python. In this case, the TensorFlow dependencies conflict with the Pandas dependencies, and if Python were a monolithic application, you would have to choose between one or the other.

Virtual environments ease that dilemma by creating independent runtime environments, each with its own Python executable (potentially of different versions) and set of libraries. Recent versions of Python (if downloaded and installed from Python.org) can sit next to each other in an operating system and run independently. On macOS, these are generally found in the /Library/Frameworks/ directory and can be accessed with python3 (which is an alias to the latest one installed) or by using the full version number.

❯ which python3
/Library/Frameworks/Python.framework/Versions/3.13/bin/python3
❯ which python3.12
/Library/Frameworks/Python.framework/Versions/3.12/bin/python3.12
❯ which python3.13
/Library/Frameworks/Python.framework/Versions/3.13/bin/python3.13
❯ which python3.14
/Library/Frameworks/Python.framework/Versions/3.14/bin/python3.14

Note how each minor version (the 12 or 13 in 3.12 or 3.13) has its own runtime.

In Windows it’s similar, but the locations are different.

C:\Users\tim> where python
C:\Users\tim\AppData\Local\Programs\Python\Python314\python.exe
C:\Users\tim\AppData\Local\Programs\Python\Python313\python.exe
C:\Users\tim\AppData\Local\Programs\Python\Python312\python.exe
C:\Users\tim\AppData\Local\Microsoft\WindowsApps\python.exe

That last entry in the Windows example is sneaky — execute it and you’re taken to the Windows App store. I don’t recommend installing that way.

What we often do in Diller Digital classes is to create a virtual environment dedicated to the class. This makes sure we don’t break anything already set up and working, and if future changes to the libraries break the examples we used in class, there will at least be a working version in that environment. The native-Python way to do this (by which I mean “the way that uses only standard libraries that ship with Python”) is using the venv library. Many Python libraries have a “main” mode that you can invoke with the -m flag, so setting up a new Python environment looks like this:

python -m venv my_new_environment

Which python you use to invoke venv determines the version of the runtime that gets installed into the virtual environment. In Windows, if I wanted something other than the version that appears first in the list, I’d have to specify the entire path. So, for example, to create a version that used Python 3.12, I’d type:

C:\Users\tim\AppData\Local\Programs\Python\Python312\python.exe -m venv my_new_environment

In macOS as I showed above, I’d type

python3 -m venv my_new_environment 

and it would use version 3.13. The virtual environment amounts to a new directory in the place where I invoked the command with contents like this (for macOS environments):

my_new_environment
├── bin
├── etc
├── include
├── lib
├── pyvenv.cfg
└── share

There may be some minor variations across platforms, but critically, there’s a lib/python3.13/site-packages directory where libraries get installed, and a bin directory with executables (it’s the Scripts directory on Windows). Note that the Python runtime executables are actually just links to the Python executable used to create the virtual environment. When you “activate” a virtual environment, two things happen:
1 – The prompt changes, so that my_new_environment now appears at the start of the command prompt, and
2 – Some path-related environment variables are modified such that my_new_environment/bin is added to the system PATH, and Python’s import path resolves to my_new_environment/lib/python3.13/site-packages.

The environment is activated with a script located at my_new_environment/bin/activate (macOS) or my_new_environment\Scripts\activate (Windows).

❯ source my_new_environment/bin/activate

my_new_environment ❯ which python
/Users/timdiller/my_new_environment/bin/python

my_new_environment ❯ python
Python 3.13.9 (v3.13.9:8183fa5e3f7, Oct 14 2025, 10:27:13) [Clang 16.0.0 (clang-1600.0.26.6)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/Library/Frameworks/Python.framework/Versions/3.13/lib/python313.zip', '/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13', '/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/lib-dynload', '/Users/timdiller/my_new_environment/lib/python3.13/site-packages']

That’s it. That’s the magic. The fun begins when you start installing packages…

PIP, the Package Installer for Python

In Python, the standard installation tool is called pip, the Package Installer for Python. For many use cases, pip works just fine, and if I were only ever to use a Mac with an SSD, I would probably just stick with pip. Although you can use pip to install packages from anywhere (including local code or a code repository like GitHub), pip’s default repository is PyPI, the Python Package Index, a central repository for community-contributed libraries. All of the major science and engineering packages, including NumPy, SciPy, and Pandas, are published through PyPI.

When someone uses pip install to install a package into their Python environment, broadly speaking, two things happen:
1 – pip resolves the dependencies for the package to be installed (properly listing dependencies is up to the package author(s)) and computes a list of everything that needs to be installed or updated. This is a non-trivial task, involving a mini-language for specifying acceptable versions and a SAT-style solver, and accomplishing it efficiently was an early distinctive advantage of 3rd-party packaging tools like Enthought’s EDM and Continuum’s (now Anaconda’s) conda. Since version 20.3, pip generally handles dependency resolution as well as any of its competitors (more on this below).
2 – pip places copies of the packages into the site-packages directory, downloading new versions as needed. pip keeps a cache of libraries so that it doesn’t have to download each one from PyPI every time. Unlike some of the newer tools, pip does not try to de-duplicate installed files across environments with hard links or shared package stores. Each environment is independent and isolated.

pip defines the canonical package management behavior for Python: safe, debug-friendly, and transparent, if somewhat inefficient in terms of file system utilization. In practice, I’ve found little to complain about using pip on macOS, but in Windows, the file-copying step can be painfully slow, especially if there is any kind of malware scanning tool involved.

Enter the 3rd-party Package Managers

In the early days of scientific computing in Python, Enthought published the Enthought Python Distribution (EPD), a semantically versioned Python distribution with a runtime and a reasonably complete set of libraries that were extensively tested and guaranteed to “play nicely” together. In 2011, Continuum brought a competing product to market, Anaconda, with a focus on broad access to open source packages, including the newest releases. Anaconda shipped with a command line tool, conda, for managing its own version of virtual environments. Meanwhile, Enthought focused on rigorously tested, curated package sets with security-conscious institutional customers in mind and evolved EPD into the Enthought Deployment Manager (EDM), which also defined “applications” that could be deployed within an institution using the edm command-line tool. This involved managing virtual environments in a registry or database that could be synced with an institutional server. Both edm and conda implement managed environments and employ shared package stores and hard links where possible to reduce the storage footprint of virtual environments. Notably, they also both follow a cultural norm of “playing nicely” (more or less — edm is substantially better about this than conda) with pip and of following (more or less) the same command-line interface. conda has the notable disadvantage of a substantially more invasive installation. In my experience, conda environments are harder to debug (I do this a lot on the first days of class), and it can take substantial effort to reverse an installation of Anaconda and remove all of the “magic” it contributes.

Meanwhile, improvements to pip over the last several years, especially since late 2020, effectively erased the dependency-solving advantages offered by edm and conda; their main remaining advantages are space efficiency and familiarity. Enthought’s packages continue to be rigorously tested and curated, and Anaconda’s conda-forge remains a trusted source for the latest versions of packages. But neither of them was particularly optimized for the fast, ephemeral environments needed for continuous integration workflows.

Astral – uv

That brings us to a newer player, uv, published by Astral, whose focus is on high-performance tooling for Python written in Rust. (Rust is a compiled language that prioritizes memory and thread safety and is often found in web services and system software.) uv addresses the needs presented by continuous integration environments, where source code repositories like GitHub enable automatic actions for testing and building code as part of the regular development process. Many providers of cloud services like GitHub charge for compute time, so minimizing the time to set up environments for testing translates directly to cost savings. Depending on the situation, uv can run 10 to 100 times faster than pip.

That performance comes with tradeoffs, however. uv intentionally diverges from some long-standing pip conventions in order to optimize for speed, reproducibility, and CI-friendly workflows. Most notably, it decouples installs from shell activation, aggressively deduplicates packages across environments, and treats dependency resolution as a first-class, cached artifact in a lock file.

For example, whereas venv is quite explicit about which runtime is used and what the virtual environment is named, uv will implicitly create an environment with a default name as needed, and activation is often unnecessary. Caching is another example: uv’s aggressive use of references to a global package store deliberately discards the independent isolation of venv environments; environments are no longer self-contained as they are under venv management.

These choices optimize for speed and reproducibility in CI pipelines, but come at the cost of some of the simplicity and transparency that make venv and pip effective teaching tools. Those design choices make a lot of sense for automated build systems that create and destroy environments repeatedly, but they are less compelling in interactive or instructional settings, where environment creation happens infrequently and clarity matters more than raw speed. They also add a lot of additional conceptual “surface area” to cover during class.

Conclusion

We acknowledge the curated safety and testing of edm environments, the ubiquity of conda environments and the conda-forge ecosystem, and the impressive engineering and benefit to modern CI pipelines of uv. However, we generally prefer to stick with the standard, simple, tried-and-true venv + pip tooling for teaching and day-to-day development. It’s the easiest foundation to build on if students want to explore the other options.

NumPy Indexing — the lists and tuples Gotcha

In a recent session of Python Foundations for Scientists & Engineers, a question came up about indexing a NumPy ndarray. Beyond getting and setting single values, NumPy enables some powerful efficiencies through slicing, which produces views of an array’s data without copying, and fancy indexing, which allows use of more-complex expressions to extract portions of arrays. We have written on the efficiency of array operations, and the details of slicing are pretty well covered, from the NumPy docs on slicing, to this chapter of “Beautiful Code” by the original author of NumPy, Travis Oliphant.

Slicing is pretty cool because it allows fast, efficient computation of things like finite differences for, say, computing numerical derivatives. Recall that the derivative of a function describes the change in one variable with respect to another:

\frac{dy}{dx}

And in numerical computations, we can use a discrete approximation:

\frac{dy}{dx} \approx \frac{\Delta y}{\Delta x}

And to find the derivative at any particular location i, you compute the ratio of differences:

\frac{\Delta y}{\Delta x}\big|_i = \frac{y_{i+1} - y_{i}}{x_{i+1} - x_{i}}

NumPy allows you to use slicing to avoid setting up costly-for-Python for loops by specifying start, stop, and step values in the array indices. This lets you subtract all of the i elements from the i+1 elements at the same time by specifying one slice that starts at element 1 and goes to the end (the i+1 elements), and another that starts at 0 and goes up to but not including the last element (the i elements). No copies are made during the slicing operations. I use examples like this to show how you can get 2, and sometimes 3 or more, orders of magnitude speedup over the same operation with for loops.

>>> import numpy as np

>>> x = np.linspace(-np.pi, np.pi, 101)
>>> y = np.sin(x)

>>> dy_dx = (
...     (y[1:] - y[:-1]) /
...     (x[1:] - x[:-1])
... )
>>> np.sum(dy_dx - np.cos(x[:-1] + (x[1]-x[0]) / 2))  # compare to cos(x)
np.float64(-6.245004513516506e-16)  # This is pretty close to 0
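To illustrate that speedup claim, here is a small timing sketch comparing a pure-Python loop with the sliced version (the function names are just for this example, and absolute timings will vary by machine):

```python
import numpy as np
from timeit import timeit

x = np.linspace(-np.pi, np.pi, 100_001)
y = np.sin(x)

def loop_derivative(x, y):
    # Pure-Python loop: one subtraction and division per element.
    return [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]

def sliced_derivative(x, y):
    # Vectorized: whole-array subtraction and division; slicing makes no copies.
    return (y[1:] - y[:-1]) / (x[1:] - x[:-1])

t_loop = timeit(lambda: loop_derivative(x, y), number=10)
t_slice = timeit(lambda: sliced_derivative(x, y), number=10)
print(f"loop: {t_loop:.3f} s  slice: {t_slice:.3f} s  speedup: {t_loop / t_slice:.0f}x")
```

The two versions produce the same numbers; only the time to compute them differs.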

Fancy indexing is also well documented (though the NumPy docs now use the more staid term “Advanced Integer Indexing”), but I wanted to draw attention to a “Gotcha” that has bitten me a couple of times. With fancy indexing, you can either make a mask of Boolean values, typically using some kind of Boolean operator:

>>> a = np.arange(10)
>>> evens_mask = a % 2 == 0
>>> odds_mask = a % 2 == 1
>>> print(a[evens_mask])
[0 2 4 6 8]

>>> print(a[odds_mask])
[1 3 5 7 9]

Or you can specify the indices you want with tuples or lists, and (this is the Gotcha) the behavior is different depending on which one you use. Let’s construct an example like one we use in class. We’ll make a 2-D array b and construct a positional fancy index that specifies elements on a diagonal. Notice that it’s a tuple, as shown by the surrounding parentheses, and each element is a list of coordinates in the array.

>>> b = np.arange(25).reshape(5, 5)
>>> print(b)
[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13 14]
 [15 16 17 18 19]
 [20 21 22 23 24]]
>>> upper_diagonal = (
...     [0, 1, 2, 3],  # row indices
...     [1, 2, 3, 4],  # column indices
... )
>>> print(b[upper_diagonal])
[ 1  7 13 19]

In this case, the tuple has as many elements as there are dimensions, and each element is a list (or tuple, or array) of the indices into that dimension. So in the example above, the first element comes from b[0, 1], the second from b[1, 2], and so on, pair-wise through the lists of indices. The result is substantially different if you try to construct a fancy index from a list instead of a tuple:

>>> upper_diagonal_list = [
    [0, 1, 2, 3],
    [1, 2, 3, 4]
]
>>> b_with_a_list = b[upper_diagonal_list]
>>> print(b_with_a_list)
[[[ 0  1  2  3  4]
  [ 5  6  7  8  9]
  [10 11 12 13 14]
  [15 16 17 18 19]]

 [[ 5  6  7  8  9]
  [10 11 12 13 14]
  [15 16 17 18 19]
  [20 21 22 23 24]]]

What just happened?? In many places, lists and tuples have similar behaviors, but not here. The list is treated as a single integer-array index into the first axis, so we’re selecting (and repeating) whole rows of b. Look at the shape of b_with_a_list:

>>> print(b_with_a_list.shape)
(2, 4, 5)

Notice that its dimension 0 has 2 elements, which is the same as the number of items in upper_diagonal_list. Notice that dimension 1 has 4 elements, corresponding to the size of each element in upper_diagonal_list. Then notice that dimension 2 matches the size of the rows of b, and hopefully it becomes clear what’s happening. With upper_diagonal_list we’re constructing a new array by specifying the rows to use, so the first element of b_with_a_list (seen as the first block above) consists of rows 0, 1, 2, and 3 from b, and the second element consists of the rows given by the second element of upper_diagonal_list. Let’s print it again with comments:

>>> print(b_with_a_list)
[[[ 0  1  2  3  4]   # b[0] \
  [ 5  6  7  8  9]   # b[1]  | indices are first element of
  [10 11 12 13 14]   # b[2]  | upper_diagonal_list
  [15 16 17 18 19]]  # b[3] /

 [[ 5  6  7  8  9]   # b[1] \
  [10 11 12 13 14]   # b[2]  | indices are second element of
  [15 16 17 18 19]   # b[3]  | upper_diagonal_list
  [20 21 22 23 24]]] # b[4] /
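A compact way to remember the rule, sketched with the same b as above: a tuple distributes its elements across the axes, while a list (or array) is treated as a single integer-array index into axis 0:

```python
import numpy as np

b = np.arange(25).reshape(5, 5)
indices = [[0, 1, 2, 3], [1, 2, 3, 4]]

# As a tuple: the two inner lists pair up element-wise across axes 0 and 1.
print(b[tuple(indices)])    # [ 1  7 13 19]

# As a list: one integer-array index into axis 0, selecting whole rows.
print(b[indices].shape)     # (2, 4, 5)

# The list behaves exactly like the equivalent integer array.
assert np.array_equal(b[indices], b[np.array(indices)])
```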

Forgetting this convention has bitten me more than once, so I hope this explanation helps you resolve some confusion if you should ever run into it.

Batching and Folding in Machine Learning – What’s the Difference?

In a recent session of Machine Learning for Scientists & Engineers, we were talking about the use of folds in cross-validation, and a student did one of my favorite things — he asked a perceptive question. “How is folding related to the concept of batching I’ve heard about for deep learning?” We had a good discussion about batching and folding in machine learning and what the differences and similarities are.

What is Machine Learning?

Terms like “AI” and “machine learning” have become nearly meaningless in casual conversation and advertising media—especially since the arrival of large language models like ChatGPT. At Diller Digital, we define AI (that is, “artificial intelligence”) as computerized decision-making, covering areas from robotics and computer vision to language processing and machine learning.

Machine learning refers to the development of predictive models that are configured, or trained, by exposure to sample data rather than by explicitly encoded interactions. For example, you can develop a classification model that sorts pictures into dogs and cats by showing it a lot of examples of photos of dogs and cats. (Sign up for the class to learn the details of how to do this.)

Or you can develop a regression model to predict the temperature at my house tomorrow by training the model on the last 10 years’ worth of measurements of temperature, pressure, humidity, etc. from my personal weather station.

Classical vs Deep Learning

Broadly speaking, there are two kinds of machine learning: what we at Diller Digital call classical machine learning and deep learning. Classical machine learning is characterized by relatively small data sets, and it requires a skilled modeler to do feature engineering to make the best use of the available (and limited) training data. This is the subject of our Machine Learning for Scientists & Engineers class. Deep Learning is a subset of machine learning that makes use of many-layered models that function in a rough analog to how the neurons in a human brain function. Training such models requires much more data but less manual feature engineering by the modeler. The skill in deep learning is that of configuring the architecture of the model, and that is the subject of our Deep Learning for Scientists & Engineers.

Parameters and Hyperparameters

There is one more pair of definitions we need to cover before we can talk about folding versus batching: parameters and hyperparameters.

At the heart of both kinds of machine learning is the adjustment of a model’s parameters, sometimes also called coefficients or weights. Simply stated, these are the coefficients of what boils down to a linear regression problem.

Each model also has what are called hyperparameters, or parameters that govern how the model behaves algorithmically. These might include things like how you score your model’s performance or what method you use to update the model weights.

The process of training a model is the process of adjusting the parameters until you get the best possible predictions from your model. For this reason, we typically divide our training data into two parts: one (the training data set) for adjusting the weights, the other (the testing data set) for assessing the performance of the model. It’s important to score your model on data that was not used in the training step because you’re testing its predictive power on things it hasn’t seen before.
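As a minimal sketch of that split (with made-up data; scikit-learn’s train_test_split implements the same idea with more options), it amounts to shuffling indices and holding some back:

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded for reproducibility

# Hypothetical data: 100 samples with 3 features each, plus a target value.
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

# Shuffle the indices, then hold out 20% for testing.
indices = rng.permutation(len(X))
n_test = len(X) // 5
test_idx, train_idx = indices[:n_test], indices[n_test:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(X_train.shape, X_test.shape)  # (80, 3) (20, 3)
```

Because the indices are disjoint, no sample used to adjust the weights ever appears in the test score.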

What is Folding?

So this brings us finally to the subject of folding and batching. Folding typically arises in the context of cross-validation, when you’re trying to decide on the best hyperparameters to use for your model. That process involves fitting your model with different sets of hyperparameters and seeing which combination gives the best results. How can you do that without using your test data set? (If we used the test data set during training, that would be cheating because it would sacrifice the ability of your model to generalize for the short-term gain of a better result.) We divide our training data into folds and hold each fold back as a “mini-test” data set and train on the others. We successively hold each fold back and then average the scores across the folds. That becomes our cross-validation score and gives us a way to score that set of hyperparameters without dipping into the test data set.

Folds divide a training data set into sections, one of which is held out as a mini “test” section for scoring a combination of hyperparameters in cross-validation.
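The bookkeeping behind the folds can be sketched in plain NumPy like this (in practice, scikit-learn’s KFold and cross_val_score wrap up the splitting, fitting, and score averaging; the kfold_indices helper here is just for illustration):

```python
import numpy as np

def kfold_indices(n_samples, n_folds):
    """Yield (train_idx, validation_idx) index pairs, one per fold."""
    indices = np.arange(n_samples)
    for fold in np.array_split(indices, n_folds):
        held_out = np.isin(indices, fold)
        yield indices[~held_out], indices[held_out]

# 10 samples, 5 folds: each fold of 2 samples is held out exactly once.
for train_idx, val_idx in kfold_indices(10, 5):
    print("train:", train_idx, "validate:", val_idx)
```

Averaging the model’s score over the held-out fold from each pass gives the cross-validation score for one set of hyperparameters.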

What is Batching?

Batching looks a lot like folding but is a distinct concept used in a different context. Batching arises in the context of training deep models, and it serves two purposes. First, training a deep learning model typically requires a lot of training data (orders of magnitude more data than classical methods), and except for trivial cases you can’t fit all the training data into working memory at the same time. You solve that problem by dividing the training data into batches in much the same way that you would divide it into folds for cross-validation, and then iteratively update the model parameters using each batch of data until you have used the entire training data set. One full pass through all of the batches is called an epoch. Training a deep learning model typically takes multiple epochs.

A training data set is divided into batches to reduce memory requirements and provide variation for model parameter refinement. Each batch is used once per training epoch.
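In NumPy-flavored pseudocode, the epoch/batch loop boils down to something like this (the commented-out update_parameters call is hypothetical, standing in for whatever optimizer step your framework provides):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 8))  # hypothetical training data
batch_size = 64
n_epochs = 3

for epoch in range(n_epochs):
    # Reshuffle each epoch so the batches differ from epoch to epoch.
    order = rng.permutation(len(X))
    n_batches = 0
    for start in range(0, len(X), batch_size):
        batch = X[order[start:start + batch_size]]
        # update_parameters(model, batch)  # hypothetical optimizer step
        n_batches += 1
    print(f"epoch {epoch}: {n_batches} batches")  # ceil(1000/64) -> 16 batches
```

Only one batch needs to be in working memory at a time, and every sample is used exactly once per epoch.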

Beyond considerations of working memory, there’s a second important reason to train a deep model on batches: because there are so many model parameters with so many possible configurations, and because of the way the layers of the model insulate some of the parameters from information in the training data, it’s helpful that smaller batches are “noisier” and provide more variation for the training algorithm to use to adjust the model parameters. As a physical analogy, you might think of the way that shaking a pan while pouring sand into it helps the sand settle into a flat surface more quickly than just waiting for gravity to do the work for you; without shaking, you might end up with lumps and bumps.

So hopefully, by this point you can see how folding is similar to batching and how they are distinct concepts. They both similarly divide training data into segments. Folding is used in cross-validation for optimizing hyperparameters, and batching is used in training deep learning models to limit memory requirements and improve convergence for fitting model parameters.

Diller Digital offers Machine Learning for Scientists & Engineers and Deep Learning for Scientists & Engineers at least once per quarter. Sign up to join us, and bring your curiosity, questions, and toughest problems and see what you can learn! Maybe you’ll join the chorus of those who leave glowing feedback.