This setup keeps the development platform fairly consistent with what would run the exact same system on a server, and, more importantly, it lets us work with multiple versions of PyTorch and NVIDIA drivers seamlessly without having to change any configuration on the host system.
There are a few too many layers of abstraction here, and even the short description (PyTorch on Docker on WSL on Windows) only hints at the underlying complexities of each of Docker, WSL, and Windows, not to mention the NGC containers that the abbreviated explanation leaves out entirely.
This is a highly opinionated workflow that I have developed because I work on multiple different tech stacks at the same time, which is largely enabled by Docker. Since running Docker on Windows requires a virtualization backend, WSL is the natural choice for that. Development within these containerized environments is usually done using VSCode devcontainers, but you can use any shell-based editor within the container directly if needed.
Using this stack I have been able to run multiple personal, research, and company projects; hopefully it enables you to do the same.
Follow the instructions from https://docs.docker.com/desktop/setup/install/windows-install/, but essentially just run the following command from your Windows terminal:
wsl --install
Next, install Ubuntu by following the instructions here: https://documentation.ubuntu.com/wsl/latest/howto/install-ubuntu-wsl2/; essentially, run the following commands in a Windows terminal:
wsl --install Ubuntu-24.04
wsl.exe --set-default Ubuntu-24.04
You can install Docker CE (Community Edition) directly within your Ubuntu-24.04 distro on WSL, but for convenience's sake you may just want to use Docker Desktop.
You can install Docker Desktop by following the instructions from: https://docs.docker.com/desktop/setup/install/windows-install/#install-docker-desktop-on-windows
If you wish to install Docker CE instead, you can follow the instructions from: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
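For reference, at the time of writing the repository-based Docker CE install on that page boils down to roughly the following; treat this as a sketch and defer to the docs if they have changed:
# Add Docker's official GPG key and apt repository (per the linked docs)
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker CE and the compose plugin
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin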
This is mandatory for CUDA support within containers, which is essential for any serious ML workload; you obviously need an NVIDIA GPU to use this.
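Before installing the toolkit, you can confirm that the Windows NVIDIA driver is already visible inside WSL (WSL maps the host driver in automatically, so you should not install a separate Linux driver inside the distro):
# Run inside the WSL Ubuntu terminal; this uses the Windows host driver
nvidia-smi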
- Install the toolkit by following instructions from https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#with-apt-ubuntu-debian
- Attach the toolkit to Docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#configuring-docker
- Run the sample workload and ensure you can see your GPU(s) as shown in the sample: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html (the relevant commands are consolidated after this list)
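At the time of writing, the configuration and smoke-test steps from those pages look roughly like this; defer to the linked docs if they have changed:
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
# Sample workload: you should see your GPU(s) in the nvidia-smi output
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
If you are using Docker Desktop rather than Docker CE, the runtime configuration step may not apply (restart Docker Desktop from Windows instead); the sample workload is still a useful smoke test.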
Assuming you are going to use VSCode, install the Remote Development extension pack: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack. This installs multiple remote development extensions; we will be using the WSL and container ones right away, and you may use the SSH one at some point in the future.
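If you prefer the command line, the same pack can be installed with the code CLI (the extension ID is taken from the marketplace URL above):
code --install-extension ms-vscode-remote.vscode-remote-extensionpack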
Install git in your WSL Ubuntu using
sudo apt-get update && sudo apt-get install -y git
Create a new SSH key by following instructions from here: https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Associate the generated SSH key with your GitHub account by following instructions from here: https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account
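In case it is useful, the key generation and verification steps from those GitHub docs boil down to roughly the following; substitute your own email address, and check the linked pages for platform specifics:
# Generate a new Ed25519 key pair (use the email tied to your GitHub account)
ssh-keygen -t ed25519 -C "you@example.com"
# Start the ssh-agent and add the new key to it
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
# After adding the public key to your GitHub account, verify the connection
ssh -T git@github.com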
- git clone this simple repo of mine https://github.com/rijulg/numpynn by running
git clone git@github.com:rijulg/numpynn.git
in a WSL terminal. Note this uses the SSH-based git access we just set up.
- Open this repo in VSCode; if you are opening it from within WSL you can just run code numpynn or code <path_of_folder>.
- You should get a prompt to open the folder in a devcontainer, but you can also open the command palette (Ctrl+Shift+P) and then search for "Dev Containers: Reopen in Container"
This should now build a new image based on PyTorch and then, using the configuration stored in .devcontainer/docker-compose.yml, start a container with your GPU(s) attached.
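Once the container is up, you can sanity check the GPU wiring from a terminal inside it; a quick check, assuming the image ships with PyTorch on the path:
# Inside the devcontainer: confirm the GPU is visible to the driver...
nvidia-smi
# ...and to PyTorch itself
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"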