How to Install SillyTavern in a RunPod Instance | Runpod Blog
Want to upgrade from basic chat UIs? SillyTavern offers a more interactive interface for AI conversations. Here’s how to install it on your own RunPod instance.
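As a rough sketch of what the setup involves: assuming a RunPod pod that already has git and Node.js available (the specific pod template is an assumption, not a Runpod default), the standard SillyTavern installation from its public repository looks like this:

```shell
# Clone the official SillyTavern repository (release branch)
git clone https://github.com/SillyTavern/SillyTavern.git -b release
cd SillyTavern

# The bundled launcher installs the Node dependencies and starts the server.
# SillyTavern listens on port 8000 by default, so that port needs to be
# exposed on your pod to reach the UI from your browser.
./start.sh
```

To accept connections from outside the pod, you will likely also need to set `listen: true` (and review the whitelist settings) in SillyTavern's `config.yaml` before exposing the port publicly.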