Ready-to-use `pyproject.toml` templates for PyTorch-based neural network projects, optimized for both local development and future PyPI distribution.
- `pyproject-cuda128.toml` — installs PyTorch with CUDA 12.8 support from the official PyTorch wheel index.
- `pyproject-cpu.toml` — pure CPU-only version using standard PyPI wheels.
Both files are pre-configured with `package-mode = false` for local experimentation (no need to structure your code as a package).
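The relevant top of each template looks roughly like this (a sketch, not the exact file contents; the Python constraint is illustrative):

```toml
[tool.poetry]
package-mode = false  # local experimentation; no package layout needed

[tool.poetry.dependencies]
python = "^3.11"  # illustrative constraint, not necessarily the template's pin
```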
- Check which CUDA version your GPU supports. You can do it here: CUDA Compatibility Table
- Make sure that CUDA is installed on your PC. Guide: CUDA Quick Start | Windows | Linux
- In `pyproject-cuda128.toml`, change the source `url = "https://download.pytorch.org/whl/cu128"` to match your version. For example, for CUDA 12.7 the end of the URL becomes `cu127`.
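In the CUDA template, the wheel index is wired up through a Poetry source block along these lines (the source name is illustrative; check the actual block in the file):

```toml
[[tool.poetry.source]]
name = "pytorch-cu128"  # illustrative name
url = "https://download.pytorch.org/whl/cu128"
priority = "explicit"

[tool.poetry.dependencies]
torch = { version = "*", source = "pytorch-cu128" }
```

To target CUDA 12.7, only the `url` suffix (and, if you like, the source name) needs to change to `cu127`.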
- Choose your version:

```bash
# For NVIDIA GPU (CUDA 12.8)
cp pyproject-cuda128.toml pyproject.toml
# OR for CPU-only
cp pyproject-cpu.toml pyproject.toml
```
- Initialize the Poetry environment:

```bash
poetry install
```
- Run:

```bash
poetry env activate
```
- Start coding! No `src/` layout or package structure required.
- Set `package-mode = true`
- Uncomment this line:

```toml
# packages = [{ include = "your_package_name", from = "src" }]
```

- Move your code into `src/your_package_name/`
- Ensure your project has a valid `__init__.py`
- Run `poetry build` and `poetry publish`
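With a hypothetical package named `your_package_name`, the resulting layout would look like:

```
project-root/
├── pyproject.toml
└── src/
    └── your_package_name/
        ├── __init__.py
        └── ...
```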
Keep `package-mode = false` during research/prototyping. Switch to `true` only when preparing a distributable library.
- `torch`, `torchaudio`, `torchvision` (CUDA or CPU)
- `tensorboard`
- `numpy`, `scikit-learn`
- `matplotlib`, `seaborn`
- `tqdm`
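In `pyproject.toml` terms, the dependency table of the CPU template looks roughly like this (the version constraints are placeholders, not the pins actually used in the templates):

```toml
[tool.poetry.dependencies]
python = "^3.11"
torch = "*"
torchaudio = "*"
torchvision = "*"
tensorboard = "*"
numpy = "*"
scikit-learn = "*"
matplotlib = "*"
seaborn = "*"
tqdm = "*"
```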
Just remove it:

```bash
poetry remove unnecessary-dependency-name
```

MIT — feel free to fork, modify, and use in your projects.