My newest hobby is fine-tuning a Chinese open-source LLM to generate Pierre Menard, Author of the Quixote (originally by Borges). The ambition isn't to write a so-called "Borgesian" story "like" Pierre Menard, Author of the Quixote, but to fully generate, token by token, Pierre Menard, Author of the Quixote. Importantly, this can't be a mere act of machine transcription, or even of memorizing the story in the weights [to-do: attach paper]. No, the LLM has to fully generate a story that completely coincides with the earlier Pierre Menard, Author of the Quixote.

Initially, I tried to create the conditions under which the model could write Pierre Menard, Author of the Quixote afresh. One strategy proposed on X was to situate Borges inside Kimi K2.5-Thinking by putting Borges' entire life history and literary influences into Kimi's system prompt. Unfortunately, I ran into the problem that the 256K-token context window is a tad too small, by about five orders of magnitude or so.

I then considered more advanced fine-tuning to reproduce Borges' intellectual influences and life trajectory: start with machine unlearning to erase everything post-1939, follow with sparse autoencoders to isolate the "Jorge Luis Borges" feature in Kimi's latent space, then clamp that feature aggressively until the model believed it was Borges. After much reflection, I (in consultation with my advisor Claude Code) shelved this plan as inelegant and unaesthetic. No, it's not enough to merely generate a Pierre Menard, Author of the Quixote as Borges would've written it. The central conceit is generating Pierre Menard, Author of the Quixote from the perspective of a 2026-era LLM, and so-called "contamination" by Borges himself is constitutive of the semantic space any modern-day LLM draws from.

I'll spare you the boring technical details, but after much angst and many false starts, I've slowly and painstakingly gotten Kimi to generate small snippets of Pierre Menard, Author of the Quixote, though outputting the full text still eludes me. The few excerpts I have managed to render so far have vastly exceeded my expectations. With no exaggeration, I think the result might set the benchmark for LLM-generated fiction by an open-source model to date, and it is already far better than the vast majority of Borges' own (honestly quite mid) fiction. Borges, for example, wrote the following: [link]
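P.S. For the technically curious: the "feature clamping" step in the abandoned plan would look roughly like the sketch below. This is a minimal illustration under invented assumptions: the SAE, the hooked layer, the feature index, and the clamp value are all hypothetical placeholders, not details from my actual setup.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal ReLU SAE of the kind used for dictionary learning
    on a transformer's residual stream."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)
        self.dec = nn.Linear(d_features, d_model)

    def encode(self, x):
        return torch.relu(self.enc(x))

    def decode(self, f):
        return self.dec(f)

BORGES_FEATURE = 31337  # hypothetical index of the "Jorge Luis Borges" feature
CLAMP_VALUE = 10.0      # "aggressive" clamp: pin the feature far above its usual range

def make_clamp_hook(sae: SparseAutoencoder):
    """Forward hook that pins one SAE feature at every token position."""
    def hook(module, inputs, output):
        resid = output[0] if isinstance(output, tuple) else output
        with torch.no_grad():
            feats = sae.encode(resid)
            err = resid - sae.decode(feats)    # what the SAE fails to model
            feats[..., BORGES_FEATURE] = CLAMP_VALUE
            steered = sae.decode(feats) + err  # change only the clamped feature
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Hypothetical usage on some mid-stream decoder layer:
# sae = SparseAutoencoder(d_model=7168, d_features=65536)  # load trained SAE weights here
# handle = model.model.layers[40].register_forward_hook(make_clamp_hook(sae))
```

Adding back the reconstruction error term keeps the intervention surgical: only the clamped feature's direction changes, while everything the SAE fails to reconstruct passes through untouched.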