How to restrict PyTorch to specific GPUs with environment variables?
Feb 18, 8:40pm

To restrict PyTorch to use only specific GPUs via environment variables, you can use the CUDA_VISIBLE_DEVICES environment variable. This variable controls which GPUs are visible to PyTorch (and other CUDA-based applications).

How to Set CUDA_VISIBLE_DEVICES

1. Linux/macOS (Bash)

Set the environment variable before running your Python script:

```bash
CUDA_VISIBLE_DEVICES=0,1 python your_script.py
```
  • This restricts PyTorch to GPU 0 and GPU 1. Set as a prefix like this, the variable applies only to that single command.
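To restrict every command in the current shell session rather than a single one, export the variable first. A minimal sketch (`your_script.py` is the same hypothetical script name as above):

```shell
# Restrict every subsequent command in this shell session
export CUDA_VISIBLE_DEVICES=0,1
echo "$CUDA_VISIBLE_DEVICES"   # prints: 0,1
# python your_script.py        # would now see only GPUs 0 and 1
```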

2. Windows (Command Prompt)

```cmd
set CUDA_VISIBLE_DEVICES=0,1
python your_script.py
```

3. Programmatically in Python

To set it from Python, do so before the first CUDA call (safest: before importing torch), because the variable is read when CUDA initializes:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # Restrict to GPUs 0 and 1
```

How PyTorch Interprets GPU Indices

  • After setting CUDA_VISIBLE_DEVICES=0,1, PyTorch will see:
    • cuda:0 → Actual GPU 0
    • cuda:1 → Actual GPU 1
  • If you set CUDA_VISIBLE_DEVICES=1, PyTorch will only see one GPU (cuda:0), which corresponds to the actual GPU 1.
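The remapping can be illustrated with a small helper. This function is hypothetical and for illustration only; the CUDA driver performs the equivalent mapping internally:

```python
import os


def visible_to_physical(visible_index):
    """Map a visible device index (cuda:N) to a physical GPU index.

    Hypothetical helper for illustration; not part of PyTorch.
    """
    value = os.environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        # No restriction: visible and physical indices coincide.
        return visible_index
    physical = [int(i) for i in value.split(",") if i.strip()]
    return physical[visible_index]


os.environ["CUDA_VISIBLE_DEVICES"] = "1"
print(visible_to_physical(0))  # cuda:0 maps to physical GPU 1
```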

Example in Code

```python
import torch

# Check visible GPUs (after setting CUDA_VISIBLE_DEVICES)
print(f"Available GPUs: {torch.cuda.device_count()}")

# Use the first visible GPU, falling back to CPU if none is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tensor = torch.randn(3, 3).to(device)
```

Notes

  • If you don’t set CUDA_VISIBLE_DEVICES, all GPUs on the machine are visible to PyTorch (it still only uses the devices your code explicitly targets).
  • This method works for both single and multi-GPU setups.
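A related trick: setting the variable to an empty string hides every GPU, which is a quick way to force CPU-only execution for a run (`your_script.py` is the same hypothetical script name as above):

```shell
# An empty value hides all GPUs; torch.cuda.is_available() returns False
CUDA_VISIBLE_DEVICES= sh -c 'echo "visible: [$CUDA_VISIBLE_DEVICES]"'   # prints: visible: []
# CUDA_VISIBLE_DEVICES= python your_script.py   # would run on CPU only
```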