vLLM-Lens: Fast Interpretability Tooling That Scales to Trillion-Parameter Models

TL;DR: vLLM-Lens is a vLLM plugin for top-down interpretability techniques[1] such as probes, steering, and activation oracles. In our benchmarks it is 8–44× faster than existing alternatives for single-GPU use, though we note a planned version of nnsight…