Yesterday I saw an impressive demonstration of Qwen 3.6 27B's SVG capabilities on this sub. To push the model's SVG generation as far as possible, I put together a closed-loop harness with the help of Claude and Codex and plugged Qwen3.6-27b into it. The loop uses the Agno framework for specifications and Pi as the coding agent. It renders the output SVG, feeds the resulting PNG back to Qwen Vision, and runs a two-round judging pass to identify problems; that feedback then drives the next iteration.

Attached are the SVG renders for the same prompts as in the referenced post. I used Qwen3.6-27B-UD-Q5_K_XL in the loop. If anyone would like to experiment with the harness, it is available here. Long context is a must.

The prompts are from the original post above:

- Create svg image of a pelican riding a bicycle
- Create svg image of a capybara wearing a kimono drinking matcha tea
- Create svg image of a flamingo knitting a colorful sweater
- Create svg image of a sushi roll wearing sunglasses driving a go-kart
- Create svg image of a Victorian-era robot reading a newspaper in a cafe
- Create a svg image of a time-lapse composite showing a flower blooming, wilting, and transforming into butterflies across four seasons, all in one frame with seasonal lighting
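For anyone curious about the shape of the loop before opening the repo, here is a minimal sketch of the control flow: generate an SVG, rasterize it, have a vision judge critique it in two passes, and feed the complaints back into the next generation round. All function names here (`refine_svg`, the `toy_*` stand-ins) are hypothetical; the real harness wires these slots to Agno, Pi, and Qwen Vision, and rasterizes with an actual SVG renderer rather than a stub.

```python
def refine_svg(prompt, generate, render, judge, max_rounds=5):
    """Closed-loop SVG refinement: generate -> render -> judge -> feed back."""
    feedback = None
    svg = ""
    for _ in range(max_rounds):
        svg = generate(prompt, feedback)          # coding-agent call
        png = render(svg)                         # rasterize SVG to PNG bytes
        # Two judging passes, mirroring the post's two-round judging setup.
        issues = judge(prompt, png, pass_no=1) + judge(prompt, png, pass_no=2)
        if not issues:                            # judge is satisfied: done
            break
        feedback = "; ".join(issues)              # drives the next iteration
    return svg

# Toy stand-ins to demonstrate the control flow only.
calls = {"n": 0}

def toy_generate(prompt, feedback):
    calls["n"] += 1
    return f"<svg><!-- v{calls['n']}: {prompt} | fix: {feedback or 'none'} --></svg>"

def toy_render(svg):
    return b"png-bytes"  # a real harness would rasterize, e.g. with cairosvg

def toy_judge(prompt, png, pass_no):
    # Pretend the vision judge stops complaining on the third draft.
    return [] if calls["n"] >= 3 else [f"pass {pass_no}: pelican not on bicycle"]
```

With these stubs, `refine_svg("pelican riding a bicycle", toy_generate, toy_render, toy_judge)` iterates three times and returns the third draft; swapping in real model calls gives the actual harness behavior.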