Shipped v2.3.0 this week. Biggest things:
- new models: GLM 5.1, Qwen 3.5, and Gemma 4 support added. GLM 5.1 was integrated on release day because I was curious how it performs, and honestly it's pretty solid for its size
- hardware-aware onboarding: the app now detects your GPU's VRAM on first launch and recommends models that actually fit. no more guessing whether a 70B will run on your 8GB card (it won't, lol)
- model bundles: one-click install for chat + image + video models matched to your hardware
- ComfyUI plug & play: downloads, installs, and launches ComfyUI with the right checkpoints automatically. no manual workflow setup needed
- FramePack i2v: image-to-video generation running on 6GB of VRAM. still experimenting with it, but the results are surprisingly usable
- img2img: basic image-to-image pipeline, nothing fancy, but it works
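For anyone wondering how the VRAM-fit recommendation can work in principle, here's a minimal sketch. The function names, quantization assumption (~4.5 bits/weight, GGUF-style Q4), and the 20% overhead factor are all illustrative guesses, not the app's actual code:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float = 4.5,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate for a quantized model.

    params_b: parameter count in billions.
    Weights take params * bits/8 bytes; add ~20% for KV cache
    and activations (both numbers are rough assumptions).
    """
    weight_gb = params_b * bits_per_weight / 8
    return weight_gb * overhead

def fits(params_b: float, vram_gb: float) -> bool:
    """True if the model is expected to fit in the detected VRAM."""
    return estimate_vram_gb(params_b) <= vram_gb

# An 8B model at ~Q4 fits on an 8 GB card; a 70B does not.
print(fits(8, 8))    # True
print(fits(70, 8))   # False
```

The real detection step would query the GPU first (e.g. via `nvidia-smi` on NVIDIA cards) and feed the reported VRAM into a check like this.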
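And the general shape of a ComfyUI plug & play step, sketched as a dry-run command builder. This is my guess at the flow (clone upstream, install requirements, launch), not the repo's actual implementation; the upstream repo URL and the `--listen` flag are real ComfyUI, everything else is hypothetical:

```python
import subprocess
from pathlib import Path

COMFY_REPO = "https://github.com/comfyanonymous/ComfyUI"  # real upstream repo

def comfy_commands(install_dir: Path) -> list[list[str]]:
    """Build the clone / install / launch command sequence."""
    return [
        ["git", "clone", COMFY_REPO, str(install_dir)],
        ["pip", "install", "-r", str(install_dir / "requirements.txt")],
        ["python", str(install_dir / "main.py"), "--listen"],
    ]

def plug_and_play(install_dir: Path, dry_run: bool = True) -> list[list[str]]:
    """Run (or just return, when dry_run) the setup commands in order."""
    cmds = comfy_commands(install_dir)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # stop on the first failure
    return cmds

# Dry run: inspect the planned commands without touching the system.
for cmd in plug_and_play(Path("ComfyUI")):
    print(" ".join(cmd))
```

Dropping the right checkpoints into `ComfyUI/models/checkpoints` before launch is the other half of the job; that part is omitted here.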
It's a standalone app for running local AI stuff (chat, image gen, video gen) in one place. Runs on Windows and Linux, no Docker needed.
repo: https://github.com/PurpleDoubleD/locally-uncensored
happy to answer questions if anyone's curious about the implementation