LocalLLaMA

Intel B70 with Qwen3.5 35B

Intel recently released support for Qwen3.5: https://github.com/intel/llm-scaler/releases/tag/vllm-0.14.0-b8.1 Anyone with a B70 willing to run a llama-benchy benchmark with the below settings on the 35B model? uvx llama-benchy --base-url $URL --model $MODEL --…

Pre-1900 LLM Relativity Test

Wanted to share one of my personal projects, since similar work has been shared here. TLDR is that I trained an LLM from scratch on pre-1900 text to see if it could come up with quantum mechanics and relativity. The model was too small to do mean…
