Qwen3.5-4B-Base-ZitGen-V1

Hello LocalLLaMAs,

I'd like to share a fine-tuned model I've been working on:

Model: https://huggingface.co/lolzinventor/Qwen3.5-4B-Base-ZitGen-V1

I thought some of you might find it interesting. It is an image captioning fine-tune optimized for Stable Diffusion prompt generation (i.e., image-to-prompt).
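
If you want to try it, usage should look roughly like the sketch below, assuming the fine-tune exposes a standard image-text-to-text interface through transformers. The pipeline task, instruction wording, and image path are illustrative assumptions; check the model card for the exact usage:

    # Hedged usage sketch: assumes the model loads via the generic
    # image-text-to-text pipeline in a recent transformers release.
    # The instruction wording and image path are placeholders.
    from transformers import pipeline

    captioner = pipeline(
        "image-text-to-text",
        model="lolzinventor/Qwen3.5-4B-Base-ZitGen-V1",
    )

    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "url": "target.png"},
            {"type": "text", "text": "Write a Stable Diffusion prompt "
                                     "that reproduces this image."},
        ],
    }]

    out = captioner(text=messages, max_new_tokens=256)
    print(out[0]["generated_text"][-1]["content"])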

What Makes This Unique

What makes this fine-tune unique is that the dataset (images + prompts) was generated entirely by LLMs tasked with regenerating a target image.

The Process

The process is as follows (a code sketch follows the list):

  1. The target image and the last generated image (blank if it's the first step) are provided to an LLM with a comparison prompt.
  2. The LLM outputs a detailed description of each image and the key differences between them.
  3. The comparison results and the last generated prompt (empty if it's the first step) are provided to an LLM, which is instructed to write a revised SD generation prompt.
  4. The revised prompt is sent to the ComfyUI API (a Z-Image Turbo workflow), and the resulting image is captured.
  5. Repeat N times.
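
Here is a simplified sketch of that loop, not the actual pipeline: the LLM calls are stubbed behind an llm client object, and the ComfyUI call assumes a Z-Image Turbo workflow exported with "Save (API Format)" where node "6" is the positive-prompt text encoder. The node ID, file names, and round count are illustrative:

    # Simplified sketch of the compare-and-refine loop above.
    # Assumptions: ComfyUI on its default port, a Z-Image Turbo workflow
    # exported via "Save (API Format)" to zimage_turbo_api.json, and
    # node "6" being the positive-prompt CLIPTextEncode node.
    import json
    import time
    import requests

    COMFY = "http://127.0.0.1:8188"

    def generate_image(prompt_text):
        """Render prompt_text through the ComfyUI HTTP API; return PNG bytes."""
        with open("zimage_turbo_api.json") as f:
            workflow = json.load(f)
        workflow["6"]["inputs"]["text"] = prompt_text  # patch the prompt node
        pid = requests.post(f"{COMFY}/prompt",
                            json={"prompt": workflow}).json()["prompt_id"]
        while True:  # poll until the job appears in history
            hist = requests.get(f"{COMFY}/history/{pid}").json()
            if pid in hist:
                break
            time.sleep(1)
        # grab the first image of the first node that produced images
        img = next(v for v in hist[pid]["outputs"].values()
                   if "images" in v)["images"][0]
        return requests.get(f"{COMFY}/view", params=img).content

    def build_pair(target_png, llm, rounds=5):  # 4-6 rounds in practice
        prompt, image = "", None  # empty prompt / blank image on step 1
        for _ in range(rounds):
            comparison = llm.compare(target_png, image)  # steps 1-2
            prompt = llm.refine(comparison, prompt)      # step 3
            image = generate_image(prompt)               # step 4
        return prompt, image  # one candidate prompt-image pair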

Training Details

The system employed between 4 and 6 rounds of comparison and correction to generate each prompt-image pair. In theory, this process adapts the prompt to minimize the difference between the target image and the generated image, thereby tailoring the prompt to the specific SD model being used.

The prompts were then ranked and filtered to remove occasional LLM errors, such as residuals from the original prompt or undesirable artifacts (e.g., watermarks). Finally, the prompts and images were formatted into the ShareGPT dataset format (an example record is sketched below) and used to fine-tune Qwen3.5-4B-Base.
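
For illustration, one ShareGPT-style record looks roughly like this. The field names follow the common ShareGPT convention, and the <image> placeholder and instruction text are examples rather than the exact dataset strings:

    # Illustrative ShareGPT-format record for one image/prompt pair.
    # Field names follow the usual ShareGPT convention; the <image>
    # placeholder and wording are examples, not the released dataset text.
    import json

    record = {
        "images": ["images/000001.png"],
        "conversations": [
            {"from": "human",
             "value": "<image>\nWrite a Stable Diffusion prompt that "
                      "reproduces this image."},
            {"from": "gpt",
             "value": "photo of a red fox in a snowy birch forest, soft "
                      "morning light, shallow depth of field, 85mm"},
        ],
    }

    with open("dataset.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")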

Dataset

Given that all of the data used to create the fine-tune was generated synthetically, is it free from any copyright issues?

submitted by /u/lolzinventor