Install TensorFlow GPU on WSL2/Ubuntu 24.04 (Windows 10/11) | CUDA, cuDNN, TensorRT & PyTorch - GPU

July 04, 2024 • Harsh Shinde

Prerequisites

  • Windows 10 (Build 19044 or higher) or Windows 11
  • WSL2 installed and running Ubuntu 24.04
  • NVIDIA GPU with compute capability 3.5 or higher
  • Latest NVIDIA drivers installed on Windows

Step 1: System Update

First, update your system and install essential build tools:

sudo apt update
sudo apt upgrade -y
sudo apt install build-essential -y
Update system packages and install build tools

Step 2: Install Miniconda

Install Miniconda for managing Python environments:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
Download and install Miniconda

Step 3: Install CUDA

Download and install the NVIDIA CUDA Toolkit:

wget https://developer.download.nvidia.com/compute/cuda/12.1.1/local_installers/cuda_12.1.1_530.30.02_linux.run
sudo sh cuda_12.1.1_530.30.02_linux.run
Download and install CUDA 12.1.1

Environment Setup

Add CUDA to your PATH and LD_LIBRARY_PATH by editing ~/.bashrc:

export PATH=/usr/local/cuda-12.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Add these lines to ~/.bashrc
source ~/.bashrc
Apply the changes

Configure dynamic linker run-time bindings:

/usr/local/cuda-12.1/lib64
Add this line to /etc/ld.so.conf
sudo ldconfig
Update the dynamic linker

Verify the CUDA installation:

echo $PATH
echo $LD_LIBRARY_PATH
sudo ldconfig -p | grep cuda
nvcc --version
Verify CUDA installation and paths
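If you prefer checking from Python, the same linker lookup that `ldconfig -p | grep cuda` performs can be sketched with `ctypes`: if the loader can resolve `libcudart.so`, your `LD_LIBRARY_PATH`/`ld.so.conf` setup is working. This is an illustrative alternative, not part of the original guide; the library name is the standard CUDA runtime soname.

```python
import ctypes


def cuda_runtime_visible(libname="libcudart.so"):
    """Return True if the dynamic linker can load the CUDA runtime library."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        # The loader could not find the library on its search path.
        return False


print("CUDA runtime visible:", cuda_runtime_visible())
```

If this prints `False` after a successful install, revisit the `LD_LIBRARY_PATH` and `ldconfig` steps above.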

Step 4: Install cuDNN

Note

Download cuDNN from the NVIDIA cuDNN Archive (requires free NVIDIA Developer account).

tar -xvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
cd cudnn-linux-x86_64-8.9.7.29_cuda12-archive
sudo cp include/cudnn*.h /usr/local/cuda-12.1/include
sudo cp lib/libcudnn* /usr/local/cuda-12.1/lib64
sudo chmod a+r /usr/local/cuda-12.1/include/cudnn*.h /usr/local/cuda-12.1/lib64/libcudnn*
cd ..
Install cuDNN 8.9.7

Optional: Test cuDNN Installation

Create a test file test_cudnn.c with the following content:

#include <stdio.h>
#include <cudnn.h>

int main() {
    cudnnHandle_t handle;
    cudnnStatus_t status = cudnnCreate(&handle);
    if (status == CUDNN_STATUS_SUCCESS) {
        printf("cuDNN successfully initialized.\n");
        cudnnDestroy(handle);
    } else {
        printf("cuDNN initialization failed.\n");
    }
    return 0;
}
Test code for cuDNN
gcc -o test_cudnn test_cudnn.c -I/usr/local/cuda-12.1/include -L/usr/local/cuda-12.1/lib64 -lcudnn
./test_cudnn
Compile and run cuDNN test

Step 5: Install TensorRT

Note

Download TensorRT from NVIDIA TensorRT page.

tar -xzvf TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz
sudo mv TensorRT-8.6.1.6 /usr/local/TensorRT-8.6.1
Extract and install TensorRT

Update your environment variables:

export PATH=/usr/local/cuda-12.1/bin:/usr/local/TensorRT-8.6.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64:/usr/local/TensorRT-8.6.1/lib:$LD_LIBRARY_PATH
Add these lines to ~/.bashrc
source ~/.bashrc
Apply the changes

Step 6: Create Conda Environment and Install TensorFlow

conda create --name tf python=3.9 -y
conda activate tf
python -m pip install tensorflow[and-cuda]
Create environment and install TensorFlow
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
Verify TensorFlow GPU support
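Listing physical devices only confirms that TensorFlow can see the GPU; a stronger check is to actually run an op and see where it lands. The sketch below (a hedged addition, not part of the original steps) runs a small matmul and reports the placement device, degrading gracefully if TensorFlow is missing or no GPU is visible:

```python
def tf_gpu_smoke_test():
    """Run a small matmul and report which device TensorFlow placed it on."""
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow not installed"
    gpus = tf.config.list_physical_devices("GPU")
    if not gpus:
        return "no GPU visible to TensorFlow"
    # Eager ops default to the GPU when one is available.
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)
    return f"matmul ran on {c.device}"


print(tf_gpu_smoke_test())
```

On a working install the reported device string should contain `GPU:0`.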

Step 7: Install TensorRT Python Bindings

cd /usr/local/TensorRT-8.6.1/python
pip install tensorrt-8.6.1-cp39-none-linux_x86_64.whl
pip install tensorrt_dispatch-8.6.1-cp39-none-linux_x86_64.whl
pip install tensorrt_lean-8.6.1-cp39-none-linux_x86_64.whl
Install TensorRT Python packages
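To confirm the wheels installed against the right interpreter, you can import the binding and read its version attribute. This small check is an addition to the guide; `tensorrt.__version__` is the standard version string exposed by the Python package:

```python
def tensorrt_python_version():
    """Return the installed TensorRT Python binding version, or None if absent."""
    try:
        import tensorrt as trt
    except ImportError:
        return None
    return trt.__version__


print(tensorrt_python_version())
```

With the wheels above installed, this should print `8.6.1`.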

Step 8: Install JupyterLab

pip install jupyterlab
jupyter lab
Install and launch JupyterLab

Step 9: Install PyTorch with GPU Support

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
Install PyTorch with CUDA support
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
Verify PyTorch GPU support
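A common pattern in PyTorch code is to select CUDA when available and fall back to CPU, so scripts keep working on machines without a GPU. The sketch below (an illustrative addition, not from the original post) applies that pattern and confirms tensor creation on the chosen device:

```python
def select_device():
    """Create a tensor on CUDA if available, otherwise fall back to CPU."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.ones(3, 3, device=device)
    return f"tensor created on {x.device}"


print(select_device())
```

On a correctly configured system this reports `cuda:0`.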

Troubleshooting

If you encounter any issues:

  • Ensure your NVIDIA drivers are up to date on Windows
  • Check if WSL2 can see your GPU: nvidia-smi
  • Verify environment variables: echo $LD_LIBRARY_PATH
  • Make sure you're using the correct CUDA version for your GPU
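The checks above can be gathered into a single diagnostic report. This is a hedged sketch, not part of the original guide: it looks for `nvidia-smi` on the PATH, echoes `LD_LIBRARY_PATH`, and greps the linker cache for CUDA libraries, mirroring the manual steps listed above:

```python
import os
import shutil
import subprocess


def wsl_gpu_diagnostics():
    """Collect the troubleshooting checks into one report dict."""
    report = {}
    # Can WSL2 see the GPU? (nvidia-smi is provided by the Windows driver.)
    report["nvidia_smi_found"] = shutil.which("nvidia-smi") is not None
    # Are the CUDA paths exported?
    report["LD_LIBRARY_PATH"] = os.environ.get("LD_LIBRARY_PATH", "(unset)")
    # Does the dynamic linker know about any CUDA libraries?
    try:
        out = subprocess.run(["ldconfig", "-p"],
                             capture_output=True, text=True).stdout
        report["cuda_libs_registered"] = "cuda" in out.lower()
    except FileNotFoundError:
        report["cuda_libs_registered"] = False
    return report


for key, value in wsl_gpu_diagnostics().items():
    print(f"{key}: {value}")
```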

Performance Tips

To optimize GPU performance:

  • Use TensorRT for inference optimization
  • Enable mixed precision training when possible
  • Monitor GPU memory usage with nvidia-smi
  • Consider using Docker containers for isolated environments
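For the mixed-precision tip, TensorFlow's Keras API exposes a global policy switch: `mixed_float16` makes layers compute in float16 while keeping variables in float32 for numerical stability. A minimal sketch of enabling it, with a graceful fallback when TensorFlow is absent:

```python
def enable_mixed_precision():
    """Set the Keras global policy to mixed_float16 and return its name."""
    try:
        from tensorflow.keras import mixed_precision
    except ImportError:
        return "tensorflow not installed"
    # Layers built after this call compute in float16, store weights in float32.
    mixed_precision.set_global_policy("mixed_float16")
    return mixed_precision.global_policy().name
```

Call this once before building your model; on GPUs with Tensor Cores it can substantially speed up training.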