NVIDIA just open-sourced AITune, a toolkit that benchmarks inference backends for your PyTorch model and automatically picks the fastest one. Instead of manually trying TensorRT, ONNX Runtime, and the rest, AITune runs each candidate on your model and hardware and selects the best performer. Useful for anyone optimizing LLM or vision workloads without deep infra-tuning expertise.
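The core idea, benchmarking several backend candidates on the same input and keeping the fastest, can be sketched in a few lines. This is an illustrative sketch only: `pick_fastest_backend`, its parameters, and the dummy backends below are hypothetical names for this example, not AITune's actual API.

```python
import time

def pick_fastest_backend(backends, inputs, warmup=2, iters=10):
    """Time each backend callable on the same inputs; return the fastest.

    `backends` maps a backend name to a callable. The warm-up runs
    reduce cold-start bias (JIT compilation, cache warming) before
    the timed iterations.
    """
    timings = {}
    for name, fn in backends.items():
        for _ in range(warmup):
            fn(inputs)                    # untimed warm-up runs
        start = time.perf_counter()
        for _ in range(iters):
            fn(inputs)                    # timed runs
        timings[name] = (time.perf_counter() - start) / iters
    best = min(timings, key=timings.get)  # lowest mean latency wins
    return best, timings

# Dummy stand-ins for real backends (e.g. TensorRT vs. ONNX Runtime)
def slow_backend(x):
    time.sleep(0.005)
    return x

def fast_backend(x):
    return x

best, timings = pick_fastest_backend(
    {"slow": slow_backend, "fast": fast_backend}, inputs=None, warmup=1, iters=3
)
print(best)  # the backend with the lowest average latency
```

In a real setting the callables would wrap compiled model variants (a TensorRT engine, an ONNX Runtime session, eager PyTorch) all fed the same sample batch, and the selection would typically also check output agreement, not just speed.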