To restrict PyTorch to specific GPUs, use the `CUDA_VISIBLE_DEVICES` environment variable. It controls which GPUs are visible to PyTorch (and to any other CUDA-based application).
Set `CUDA_VISIBLE_DEVICES` before running your Python script.

On Linux/macOS:

```bash
CUDA_VISIBLE_DEVICES=0,1 python your_script.py
```

On Windows (cmd), set the variable and run the script as two separate commands:

```cmd
set CUDA_VISIBLE_DEVICES=0,1
python your_script.py
```
If you want to set it programmatically in Python, do so before importing `torch` (or at least before the first CUDA call, since the variable is only read when CUDA initializes):

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # Restrict to GPUs 0 and 1
```
With `CUDA_VISIBLE_DEVICES=0,1`, PyTorch will see:

- `cuda:0` → actual GPU 0
- `cuda:1` → actual GPU 1

With `CUDA_VISIBLE_DEVICES=1`, PyTorch will see only one GPU (`cuda:0`), which corresponds to the actual GPU 1.

```python
import torch

# Check available GPUs (after setting CUDA_VISIBLE_DEVICES)
print(f"Available GPUs: {torch.cuda.device_count()}")

# Use a specific GPU
device = torch.device("cuda:0")  # Refers to the first visible GPU
tensor = torch.randn(3, 3).to(device)
```
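Because the visible devices are renumbered from zero, it can be useful to recover which physical GPU sits behind a given `cuda:N` index. A minimal sketch, using a hypothetical helper `visible_to_physical` (not part of PyTorch) that simply parses the variable, so no GPU or `torch` import is needed:

```python
import os

def visible_to_physical(visible_index: int) -> int:
    """Map a visible device index (the N in cuda:N) to the physical
    GPU id, based on the current CUDA_VISIBLE_DEVICES value."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        # Variable unset: visible indices match physical ids
        return visible_index
    physical_ids = [int(part) for part in value.split(",") if part.strip()]
    return physical_ids[visible_index]

os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"
print(visible_to_physical(0))  # → 2
print(visible_to_physical(1))  # → 3
```

Note this only reflects what CUDA will do at initialization time; changing the variable after CUDA has started has no effect on the mapping.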
If `CUDA_VISIBLE_DEVICES` is not set, PyTorch will see (and may use) all available GPUs.
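Relatedly, setting the variable to an empty string hides every GPU, which is a quick way to force a CPU-only run when debugging. A small sketch (`python3 -c` stands in for your real script here, since the value just needs to reach the child process):

```shell
# An empty value hides every GPU, so torch.cuda.is_available() returns False
# and the script falls back to CPU. python3 -c stands in for a real script.
CUDA_VISIBLE_DEVICES="" python3 -c 'import os; print(repr(os.environ["CUDA_VISIBLE_DEVICES"]))'
# → ''
```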