
Have you ever waited minutes, or sometimes even hours, for FID scores to compute, only to realize the wait is slowing down your entire project? Chances are many of you have run into this at some point. And let us tell you, it is quite annoying!
Yes, FID is an essential metric for evaluating GAN performance. But that doesn't mean you should be stuck waiting every time it runs, right?
Well, most people using the default settings face the same problem. Fortunately, there are several ways to optimize and speed up this process. And that’s exactly what we’ll cover here.
In this guide, we'll discuss how to speed up TorchMetrics FID through a handful of practical methods, so you can improve FID speed without sacrificing accuracy. Let's get started!
What Is the Main Reason for FID Being Slow?
Before learning how to speed up TorchMetrics FID, let's first clarify our concepts about FID:
- What is it?
- Why is it slow?
FID (Fréchet Inception Distance) is a widely used metric for evaluating the quality of images generated by GANs (Generative Adversarial Networks). It measures how similar generated images are to real ones by comparing the statistics (means and covariances) of their Inception feature representations. But why exactly is it slow?
Simply put, FID requires computing the square root of a product of large covariance matrices, a computationally expensive step often performed using CPU-based routines. But that's not the only reason FID is slow. The following factors further slow down the whole process:
- Inefficient threading.
- Memory constraints.
- Suboptimal implementations.
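To make the expensive step concrete, here is a minimal sketch of the FID formula itself. For simplicity it assumes diagonal covariances (given as variance vectors), so the matrix square root collapses to an element-wise square root; the real metric works with full 2048x2048 covariance matrices of Inception features, which is exactly where the costly square-root call comes from.

```python
import numpy as np

def fid_from_stats(mu_r, var_r, mu_g, var_g):
    """FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2)).
    Simplified here to diagonal covariances, so the matrix square
    root reduces to an element-wise sqrt."""
    covmean = np.sqrt(var_r * var_g)
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.sum(var_r + var_g - 2 * covmean))

# Two 2-D Gaussians with identical unit variances, means one unit apart.
mu_r, var_r = np.array([0.0, 0.0]), np.array([1.0, 1.0])
mu_g, var_g = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(fid_from_stats(mu_r, var_r, mu_g, var_g))  # 1.0
```

With full covariance matrices, that `np.sqrt(var_r * var_g)` becomes a true matrix square root of the product of the two covariances, and that single operation dominates the runtime.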
Various Effective Techniques for Speeding Up TorchMetrics FID
Now that you understand what FID is and why it often gets slow, let's see how to speed up TorchMetrics FID. If it is slow for you, use the following techniques to accelerate the whole thing:
Strategy # 1: Optimize CPU Threading
As we’ve mentioned earlier, one of the main reasons for slow FID computations is excessive CPU thread usage. But you can prevent unnecessary overhead by limiting the number of threads used by key libraries, such as the following:
- MKL.
- OMP.
- OpenBLAS.
But how do you do this?
Well, before running your script, you can set these environment variables:
export MKL_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
export OMP_NUM_THREADS=1
With these limits in place, the libraries stop oversubscribing your CPU cores, which keeps execution fast and smooth.
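If you'd rather not depend on your shell setup, a small sketch like the one below sets the same variables from inside Python. Note that this only works if it runs before numpy or torch are imported, since the BLAS and OpenMP runtimes read these variables once at library load time.

```python
import os

# Must run BEFORE importing numpy/torch: the BLAS and OpenMP runtimes
# read these variables once, when the libraries are first loaded.
for var in ("MKL_NUM_THREADS", "OPENBLAS_NUM_THREADS", "OMP_NUM_THREADS"):
    os.environ[var] = "1"
```

PyTorch users can additionally call `torch.set_num_threads(1)` after import to cap intra-op parallelism at runtime.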
Strategy # 2: Run FID Calculation Separately From Training
Let’s say that you are computing FID during model training. In that case, you may face the issue of resource conflicts, which often cause system crashes or slowdowns. So, what’s the solution here?
Well, you can execute FID computation on a dedicated GPU or in a separate process. Doing so will help avoid this issue, especially in distributed training settings.
If you're a PyTorch Lightning user, you can also schedule FID evaluation at the end of each epoch rather than during training steps, which reduces interference with model updates.
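The decoupling pattern looks roughly like this. For brevity, the sketch uses a background thread and fake per-batch updates; in a real setup the worker would be a separate process (or a script pinned to a dedicated GPU) holding a torchmetrics FrechetInceptionDistance instance, and the queue would carry image batches. Everything beyond the queue-and-worker structure is a placeholder:

```python
import threading
import queue

def fid_worker(in_q, results):
    # Hypothetical worker: in a real setup this would hold a torchmetrics
    # FrechetInceptionDistance instance on a dedicated GPU (e.g. "cuda:1"),
    # call metric.update() for each received batch, and metric.compute()
    # once training signals it is done.
    total = 0
    while (item := in_q.get()) is not None:  # None is the stop sentinel
        total += item                        # stand-in for metric.update(batch)
    results["fid"] = total                   # stand-in for metric.compute()

in_q = queue.Queue()
results = {}
worker = threading.Thread(target=fid_worker, args=(in_q, results))
worker.start()

for batch in [1, 2, 3]:   # stand-in for generated-image batches
    in_q.put(batch)       # the training loop never blocks on evaluation
in_q.put(None)            # tell the worker that training has finished
worker.join()
print(results["fid"])     # prints 6
```

The key design point is that the training loop only enqueues work; the expensive covariance and square-root math happens off the critical path.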
Strategy # 3: Upgrade to the Latest TorchMetrics Version
If you're using an older version of TorchMetrics, one of the easiest wins for speeding up TorchMetrics FID is simply upgrading to the latest release. Doing so can bring significant performance improvements.
Version 0.9 and later introduced optimizations that reduce the number of calls to the update method, which cuts down the computation time. So, simply upgrade by running the following command:
pip install --upgrade torchmetrics
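A quick way to confirm which release you're on (without assuming TorchMetrics is even installed) is to ask `importlib.metadata`:

```python
from importlib import metadata

# Returns the installed TorchMetrics version, or None if it is absent.
try:
    tm_version = metadata.version("torchmetrics")
except metadata.PackageNotFoundError:
    tm_version = None
print(tm_version)
```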
Strategy # 4: Use a GPU-Based Matrix Square Root Computation
The default FID computation in many implementations relies on SciPy's `scipy.linalg.sqrtm` function, which runs on the CPU and can be slow for large feature covariance matrices. Instead, you can use a GPU-accelerated version that speeds up this step significantly.
A great alternative is the faster-pytorch-fid package, which leverages PyTorch's built-in operations to compute the matrix square root on the GPU. Install it with:
pip install faster-pytorch-fid
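If you'd rather avoid an extra dependency, the core idea (replacing `scipy.linalg.sqrtm` with an iteration built purely from matrix multiplications) can be sketched directly. The Newton-Schulz iteration below is written in NumPy for readability; since it uses nothing but matmuls, swapping the arrays for CUDA torch tensors moves the entire square root onto the GPU. This is a simplified illustration, not the package's exact implementation.

```python
import numpy as np

def sqrtm_newton_schulz(mat, iters=20):
    """Approximate the principal square root of a symmetric
    positive-definite matrix via Newton-Schulz iterations: nothing but
    matrix multiplications, so the same code runs on the GPU if the
    arrays are replaced by CUDA torch tensors."""
    dim = mat.shape[0]
    norm = np.linalg.norm(mat)   # Frobenius norm, used to scale into
    Y = mat / norm               # the iteration's convergence region
    Z = np.eye(dim)              # Z converges to the inverse square root
    I = np.eye(dim)
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T
        Z = T @ Z
    return Y * np.sqrt(norm)     # undo the initial scaling

# Sanity check on a small symmetric positive-definite matrix.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
S = sqrtm_newton_schulz(A)
print(np.allclose(S @ S, A, atol=1e-6))  # True
```

In the FID setting, `mat` would be the product of the real and generated feature covariance matrices, and 15-20 iterations are typically enough for convergence.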
Concluding Remarks
Computing TorchMetrics FID efficiently is crucial for evaluating GAN-generated images without slowing your workflow. By decoupling FID from training, limiting CPU threads, switching to GPU-based matrix operations, and upgrading TorchMetrics, you can significantly reduce computation time while maintaining accuracy. So, if you’ve been struggling to speed up TorchMetrics FID, now is the time to streamline your process and make your evaluations faster and more efficient. All you have to do is try one of these techniques in your next project and experience the difference yourself!