To artificially fill GPU memory on Linux, you can use tools like PyTorch, TensorFlow, or CUDA to allocate tensors or arrays on the GPU. Here's how to do it with PyTorch, with CUDA directly, and with TensorFlow:
Install PyTorch (if not already installed):
```bash
pip install torch
```
Run a Python script to allocate GPU memory:
```python
import torch

# Check if CUDA (GPU) is available
if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
else:
    raise RuntimeError("CUDA is not available")

# Allocate a large tensor on the GPU
# Adjust the size (e.g., 10000x10000) to fill the desired amount of memory
tensor_size = (10000, 10000)
tensor = torch.randn(tensor_size, device=device)
print(f"Allocated tensor of size {tensor_size} on GPU")
print(f"Memory allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GB")
print(f"Memory reserved: {torch.cuda.memory_reserved() / 1024**3:.2f} GB")

# Keep the tensor in memory until the user releases it
input("Press Enter to release memory...")
```
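A single huge tensor can fail to allocate even when enough total memory is free (e.g., due to fragmentation), so a common variant is to fill memory in fixed-size chunks. Below is a minimal sketch; `chunks_needed` and `fill_gpu` are hypothetical helper names, and the 256 MiB chunk size is an arbitrary choice:

```python
import math

CHUNK_MIB = 256  # allocation granularity; arbitrary, adjust as needed

def chunks_needed(target_gib, chunk_mib=CHUNK_MIB):
    """How many fixed-size chunks cover the target footprint."""
    return math.ceil(target_gib * 1024 / chunk_mib)

def fill_gpu(target_gib, chunk_mib=CHUNK_MIB):
    """Allocate float32 chunks on the GPU until ~target_gib GiB are held."""
    import torch  # deferred so the sizing helper works without torch installed
    if not torch.cuda.is_available():
        raise RuntimeError("CUDA is not available")
    elems = chunk_mib * 1024**2 // 4  # float32 = 4 bytes per element
    # Keep references to every chunk so the memory stays allocated
    return [torch.empty(elems, device="cuda")
            for _ in range(chunks_needed(target_gib, chunk_mib))]

print(chunks_needed(2.0))  # 8 chunks of 256 MiB ≈ 2 GiB
```

Holding the returned list keeps the memory occupied; dropping it (or exiting the process) releases it.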
If you prefer using CUDA directly, you can write a simple C/C++ program:
Install CUDA Toolkit (if not already installed):
Follow the NVIDIA CUDA Installation Guide.
Compile and run a CUDA program:
```cpp
#include <cuda_runtime.h>
#include <stdio.h>

int main() {
    size_t size = 10000 * 10000 * sizeof(float);  // Adjust size as needed
    float *d_array;
    cudaError_t err = cudaMalloc(&d_array, size);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Allocated %zu bytes on GPU\n", size);

    // Keep the memory allocated until a key is pressed
    getchar();

    cudaFree(d_array);
    return 0;
}
```
Compile with:
```bash
nvcc fill_gpu_memory.cu -o fill_gpu_memory
./fill_gpu_memory
```
Install TensorFlow (if not already installed):
```bash
pip install tensorflow
```

(The separate `tensorflow-gpu` package is deprecated; recent TensorFlow releases include GPU support in the main `tensorflow` package.)
Run a Python script:
```python
import tensorflow as tf

# Fail fast if no GPU is visible to TensorFlow
if tf.config.list_physical_devices('GPU'):
    print("Using GPU")
else:
    raise RuntimeError("GPU not available")

# Allocate a large tensor (placed on the GPU by default when one is available)
tensor = tf.random.normal((10000, 10000))
print("Allocated tensor on GPU")

# Keep the tensor in memory until the user releases it
input("Press Enter to release memory...")
```
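Keep in mind that a 10000x10000 float32 tensor only occupies about 0.37 GiB, so you may need a much larger shape to fill a modern GPU. A small sizing helper (hypothetical name `tensor_bytes`) makes the arithmetic explicit:

```python
def tensor_bytes(shape, bytes_per_elem=4):
    """Memory footprint of a dense tensor; float32 uses 4 bytes per element."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_elem

# The 10000x10000 example from above:
gib = tensor_bytes((10000, 10000)) / 1024**3
print(f"{gib:.2f} GiB")  # ≈ 0.37 GiB
```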
Adjust the tensor size (e.g., 10000x10000 in the examples) to fill the desired amount of GPU memory, and monitor usage by running nvidia-smi in another terminal.
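If you want to monitor programmatically instead of eyeballing nvidia-smi, you can parse its CSV query output. This is a sketch assuming the `nvidia-smi` CLI is on your PATH; `parse_used_mib` and `gpu_memory_used_mib` are hypothetical helper names:

```python
import subprocess

def parse_used_mib(csv_text):
    """Parse `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits`
    output into a list of per-GPU used-memory values in MiB."""
    return [int(line) for line in csv_text.split() if line]

def gpu_memory_used_mib():
    """Query used GPU memory for all GPUs (requires nvidia-smi on PATH)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return parse_used_mib(out.stdout)

print(parse_used_mib("1024\n2048\n"))  # [1024, 2048]
```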