NVIDIA drops AITune – auto-selects fastest inference backend for PyTorch models
NVIDIA just open-sourced AITune, a toolkit that benchmarks and automatically picks the fastest inference backend for a given PyTorch model. Instead of manually trying TensorRT, ONNX Runtime, and so on, AITune tests multiple options and selects the best-performing one.
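The post doesn't show AITune's actual API, but the core benchmark-and-select idea is straightforward. Here's a minimal sketch in plain Python with hypothetical stand-in callables for the backends (the names `benchmark`, `select_fastest`, and the dummy workloads are illustrative, not AITune's real interface):

```python
import time

def benchmark(fn, warmup=3, iters=20):
    """Time a callable: run warmup iterations, then average timed runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

def select_fastest(backends):
    """Benchmark each backend and return (best_name, all_timings)."""
    timings = {name: benchmark(fn) for name, fn in backends.items()}
    best = min(timings, key=timings.get)
    return best, timings

# Hypothetical stand-ins for real backends (eager PyTorch, ONNX Runtime,
# TensorRT); a real tool would wrap compiled/exported versions of one model.
backends = {
    "eager":    lambda: sum(i * i for i in range(10_000)),
    "onnxrt":   lambda: sum(i * i for i in range(5_000)),
    "tensorrt": lambda: sum(i * i for i in range(1_000)),
}

best, timings = select_fastest(backends)
print(f"selected backend: {best}")
```

In practice the tricky parts a real tool handles are correctness checks (do all backends produce numerically close outputs?) and warmup/variance control, not the argmin itself.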