Released today on HF. Built by L'Électron Rare (https://github.com/L-electron-Rare), the team behind our local-first AI platform FineFab. The training toolkit went public the day before: https://github.com/L-electron-Rare/KIKI-Mac_tunner (MLX for Mac Studio, distills Claude Opus into Mistral Large 123B). The full pipeline is open, not just the artifact.

**Architecture**

- Domain router → top-4 selection among 35 LoRA stacks
- Base: Qwen3.5-35B-A3B (MoE, 256 experts, 3B active/token)
- LoRA rank 16 on q/k/v/o, top-2 routing per stack
- Null-space projection between stacks to mitigate catastrophic forgetting
- Negotiator (CAMP + Catfish) arbitrates conflicting stack outputs
- Anti-bias layer (KnowBias + RBD) before output
- Aeon memory (Atlas graph + Trace log) for cross-session persistence

**Specs**

- GGUF Q4_K_M; runs in llama.cpp, Ollama, and LM Studio
- Context: 262K tokens
- License: Apache 2.0
- French + English interleaved

**35 domains**

chat-fr, reasoning, python, typescript, cpp, rust, html-css, shell, sql, yaml-json, lua-upy, docker, devops, llm-orch, llm-ops, ml-training, kicad-dsl, kicad-pcb, spice, electronics, components, power, emc, dsp, embedded, stm32, iot, platformio, freecad, web-frontend, web-backend, music-audio, math, security

**Dataset** (also released, Apache 2.0)

489K instruction-following examples:

- 50,116 real Claude CLI sessions from our 5-node P2P mesh during embedded consulting work (GrosMac M5, Tower 28t, CILS i7, KXKM-AI RTX 4090, VM)
- 2,529 Codex/Copilot sessions
- 364,045 from 19 filtered open HF datasets (CodeFeedback, French-Alpaca, Electronics StackExchange, stm32-hal-dataset, JITX components…)
- Opus teacher distillation for chat-fr + reasoning
- 32 original curated seed sets

**Honest caveats**

- No external reproducible benchmark yet; internal held-out eval only. On the v4 roadmap.
- Aeon memory needs external backends (Qdrant, Neo4j) for production use.
- Max 4 concurrent stacks; stack combinations matter, and some are far better exercised than others.
- Solo/small-team project, two weeks, consumer hardware. Not a lab release.

Model: https://huggingface.co/clemsail/micro-kiki-v3
Dataset: https://huggingface.co/datasets/clemsail/micro-kiki-v3-dataset
Training toolkit (MLX, Mac Studio): https://github.com/L-electron-Rare/KIKI-Mac_tunner
Ecosystem: https://github.com/L-electron-Rare

Feedback, forks, and negative benchmarks are all welcome.
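Appendix for the curious: the post mentions "null-space projection between stacks" without detail, so here is a toy numpy sketch of the general idea only: constrain a new stack's weight update to the null space of activations that earlier stacks rely on, so their inputs pass through unchanged. The function name, the SVD-based construction, and the `eps` cutoff are my assumptions, not code from the repo.

```python
import numpy as np

def null_space_projector(prev_activations: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Projector onto the null space of the subspace spanned by earlier
    stacks' input activations (n_samples x d_model): P = I - V V^T,
    where V holds the significant right-singular directions."""
    _, s, vt = np.linalg.svd(prev_activations, full_matrices=False)
    keep = s > eps * s.max()           # directions actually used by earlier stacks
    v = vt[keep].T                     # d_model x k basis of the occupied subspace
    return np.eye(prev_activations.shape[1]) - v @ v.T

# Toy check: earlier-stack activations occupy a 2-D subspace of R^4.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 4))
P = null_space_projector(acts)

delta = rng.normal(size=(4, 4))        # candidate weight update from a new stack
safe_delta = delta @ P                 # only acts on the unused directions
print(f"max residual on old activations: {np.abs(acts @ safe_delta.T).max():.2e}")
```

The residual is numerically zero: for any input `x` in the old subspace, `P @ x ≈ 0`, so `safe_delta` cannot change what earlier stacks see. The real system presumably does this per target matrix (q/k/v/o) during LoRA training rather than on raw weight deltas.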