Commit Graph

263 Commits

Author SHA1 Message Date
leejet
cb1d975e96 feat: add wan2.1/2.2 support (#778)
* add wan vae support

* add wan model support

* add umt5 support

* add wan2.1 t2i support

* make flash attn work with wan

* make wan a little faster

* add wan2.1 t2v support

* add wan gguf support

* add offload params to cpu support

* add wan2.1 i2v support

* crop image before resize

* set default fps to 16

* add diff lora support

* fix wan2.1 i2v

* introduce sd_sample_params_t

* add wan2.2 t2v support

* add wan2.2 14B i2v support

* add wan2.2 ti2v support

* add high noise lora support

* sync: update ggml submodule url

* avoid build failure on linux

* avoid build failure

* update ggml

* update ggml

* fix sd_version_is_wan

* update ggml, fix cpu im2col_3d

* fix ggml_nn_attention_ext mask

* add cache support to ggml runner

* fix the issue of illegal memory access

* unify image loading processing

* add wan2.1/2.2 FLF2V support

* fix end_image mask

* update to latest ggml

* add GGUFReader

* update docs
2025-09-06 18:08:03 +08:00
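The "crop image before resize" step in the commit above can be sketched as a center crop to the target aspect ratio followed by a resize, so the resize never distorts the image. This is an illustrative Python sketch only (the function name `crop_to_aspect` is an assumption; the repository's actual implementation works on decoded image buffers in C++):

```python
def crop_to_aspect(src_w, src_h, dst_w, dst_h):
    """Return the centered crop box (left, top, right, bottom) matching
    the destination aspect ratio, so a subsequent resize keeps proportions."""
    src_aspect = src_w / src_h
    dst_aspect = dst_w / dst_h
    if src_aspect > dst_aspect:
        # source is too wide: crop the sides
        crop_w = round(src_h * dst_aspect)
        left = (src_w - crop_w) // 2
        return (left, 0, left + crop_w, src_h)
    else:
        # source is too tall: crop top and bottom
        crop_h = round(src_w / dst_aspect)
        top = (src_h - crop_h) // 2
        return (0, top, src_w, top + crop_h)
```

For example, a 1920x1080 frame cropped for a square 512x512 target keeps the central 1080x1080 region.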
Wagner Bruna
2eb3845df5 fix: typo in the verbose long flag (#783) 2025-09-04 00:49:01 +08:00
stduhpf
4c6475f917 feat: show usage on unknown arg (#767) 2025-09-01 21:38:34 +08:00
SmallAndSoft
f0fa7ddc40 docs: add compile option needed by Ninja (#770) 2025-09-01 21:35:25 +08:00
SmallAndSoft
a7c7905c6d docs: add missing dash to docs/chroma.md (#771) 2025-09-01 21:34:34 +08:00
Wagner Bruna
eea77cbad9 feat: throttle model loading progress updates (#782)
Some terminals have slow display latency, so frequent output
during model loading can actually slow down the process.

Also, since tensor loading times can vary a lot, the progress
display now shows the average across past iterations instead
of just the last one.
2025-09-01 21:32:01 +08:00
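The two ideas in the commit message above — rate-limiting terminal updates, and reporting the average per-tensor time over all iterations instead of the noisy last one — can be sketched as follows. This is a hedged Python illustration (class and method names are hypothetical; the real implementation is in the project's C++ loading code):

```python
import time

class ThrottledProgress:
    """Emit a progress line at most once per `interval` seconds, reporting
    the average per-item time across all iterations so far rather than
    the highly variable time of just the last item."""

    def __init__(self, total, interval=0.25, now=time.monotonic):
        self.total = total
        self.interval = interval
        self.now = now            # injectable clock, eases testing
        self.start = now()
        self.last_print = -float("inf")
        self.done = 0

    def step(self):
        """Call once per loaded tensor; returns a status line or None."""
        self.done += 1
        t = self.now()
        # throttle: skip updates that arrive too soon, but always show the last one
        if t - self.last_print < self.interval and self.done < self.total:
            return None
        self.last_print = t
        avg = (t - self.start) / self.done
        return f"{self.done}/{self.total} (avg {avg * 1000:.1f} ms/tensor)"
```

Injecting the clock makes the throttle deterministic to test; in production the default monotonic clock is used.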
NekopenDev
0e86d90ee4 chore: add Nvidia 30 series (cuda arch 86) to build 2025-09-01 21:21:34 +08:00
leejet
5900ef6605 sync: update ggml, make cuda im2col a little faster 2025-08-03 01:29:40 +08:00
Daniele
5b8996f74a Conv2D direct support (#744)
* Conv2DDirect for VAE stage

* Enable only for Vulkan, reduced duplicated code

* Cmake option to use conv2d direct

* conv2d direct always on for opencl

* conv direct as a flag

* fix merge typo

* Align conv2d behavior to flash attention's

* fix readme

* add conv2d direct for controlnet

* add conv2d direct for esrgan

* clean code, use enable_conv2d_direct/get_all_blocks

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>
2025-08-03 01:25:17 +08:00
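"Conv2D direct" in the commit above refers to computing each output pixel straight from the kernel loops instead of first materializing a large im2col matrix and running a GEMM; this trades matrix-multiply throughput for much lower memory traffic, which matters in the VAE stage. A minimal sketch of the direct access pattern (naive Python over nested lists, purely illustrative — the real kernels are optimized Vulkan/OpenCL code in ggml):

```python
def conv2d_direct(x, w):
    """Naive direct 2D convolution with no im2col buffer.
    x: input  [C_in][H][W] as nested lists
    w: weight [C_out][C_in][KH][KW]; valid padding, stride 1."""
    c_in, h, wid = len(x), len(x[0]), len(x[0][0])
    c_out, kh, kw = len(w), len(w[0][0]), len(w[0][0][0])
    oh, ow = h - kh + 1, wid - kw + 1
    out = [[[0.0] * ow for _ in range(oh)] for _ in range(c_out)]
    for oc in range(c_out):
        for oy in range(oh):
            for ox in range(ow):
                acc = 0.0
                # accumulate directly from the input window; no staging buffer
                for ic in range(c_in):
                    for ky in range(kh):
                        for kx in range(kw):
                            acc += x[ic][oy + ky][ox + kx] * w[oc][ic][ky][kx]
                out[oc][oy][ox] = acc
    return out
```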
Wagner Bruna
f7f05fb185 chore: avoid setting GGML_MAX_NAME when building against external ggml (#751)
An external ggml will most likely have been built with the default
GGML_MAX_NAME value (64), which would be inconsistent with the value
set by our build (128). That would be an ODR violation, and it could
easily cause memory corruption issues due to the different
sizeof(struct ggml_tensor) values.

For now, when linking against an external ggml, we demand it has been
patched with a bigger GGML_MAX_NAME, since we can't check against a
value defined only at build time.
2025-08-03 01:24:40 +08:00
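The ODR hazard described above arises because the tensor name buffer is embedded inline in the tensor struct, so two translation units compiled with different `GGML_MAX_NAME` values disagree on `sizeof(struct ggml_tensor)` and on every field offset after the name. The effect can be modeled with a simplified `ctypes` layout (the fields below are a stand-in, not ggml's real struct definition):

```python
import ctypes

def tensor_layout(max_name):
    """Simplified stand-in for a ggml-style tensor struct: the fixed-size
    name buffer is embedded inline, so its length changes sizeof()."""
    class Tensor(ctypes.Structure):
        _fields_ = [
            ("ne", ctypes.c_int64 * 4),          # dimensions
            ("data", ctypes.c_void_p),           # payload pointer
            ("name", ctypes.c_char * max_name),  # inline name buffer
        ]
    return Tensor

# A library built with GGML_MAX_NAME=64 and an application built with 128
# would disagree on struct size by exactly the buffer-length difference.
size_64 = ctypes.sizeof(tensor_layout(64))
size_128 = ctypes.sizeof(tensor_layout(128))
```

Passing such structs across the library boundary then reads or writes past the smaller layout, which is the memory-corruption risk the commit message warns about.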
Seas0
6167e2927a feat: support build against system installed GGML library (#749) 2025-08-02 11:03:18 +08:00
leejet
f6b9aa1a43 refactor: optimize the usage of tensor_types 2025-07-28 23:18:29 +08:00
Wagner Bruna
7eb30d00e5 feat: add missing models and parameters to image metadata (#743)
* feat: add new scheduler types, clip skip and vae to image embedded params

- If a non default scheduler is set, include it in the 'Sampler' tag in the data
embedded into the final image.
- If a custom VAE path is set, include the vae name (without path and extension)
in embedded image params under a `VAE:` tag.
- If a custom Clip skip is set, include that Clip skip value in embedded image
params under a `Clip skip:` tag.

* feat: add separate diffusion and text models to metadata

---------

Co-authored-by: one-lithe-rune <skapusniak@lithe-runes.com>
2025-07-28 22:00:27 +08:00
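The `Sampler`, `Clip skip:`, and `VAE:` tags described in the commit above follow the common comma-separated parameter-string convention for embedded image metadata. A hedged sketch of assembling such a line (tag names taken from the commit message; the function name and exact ordering are assumptions, not the project's actual code):

```python
import os

def build_params_line(sampler, scheduler=None, clip_skip=None, vae_path=None):
    """Assemble a comma-separated metadata parameter string; non-default
    values are appended as `Key: value` pairs."""
    # a non-default scheduler is folded into the Sampler tag
    sampler_tag = sampler if scheduler is None else f"{sampler} {scheduler}"
    parts = [f"Sampler: {sampler_tag}"]
    if clip_skip is not None:
        parts.append(f"Clip skip: {clip_skip}")
    if vae_path is not None:
        # store only the file name, without directory or extension
        vae_name = os.path.splitext(os.path.basename(vae_path))[0]
        parts.append(f"VAE: {vae_name}")
    return ", ".join(parts)
```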
stduhpf
59080d3ce1 feat: change image dimensions requirement for DiT models (#742) 2025-07-28 21:58:17 +08:00
R0CKSTAR
8c3c788f31 feat: upgrade musa sdk to rc4.2.0 (#732) 2025-07-28 21:51:11 +08:00
leejet
f54524f620 sync: update ggml 2025-07-28 21:50:12 +08:00
leejet
eed97a5e1d sync: update ggml 2025-07-24 23:04:08 +08:00
Ettore Di Giacinto
fb86bf4cb0 docs: add LocalAI to README's UIs (#741) 2025-07-24 22:39:26 +08:00
leejet
bd1eaef93e fix: convert f64 to f32 and i64 to i32 when loading weights 2025-07-24 00:59:38 +08:00
Erik Scholz
ab835f7d39 fix: correct head dim check and L_k padding of flash attention (#736) 2025-07-24 00:57:45 +08:00
Daniele
26f3f61d37 docs: add sd.cpp-webui as an available frontend (#738) 2025-07-23 23:51:57 +08:00
Oleg Skutte
1896b28ef2 fix: make --taesd work (#731) 2025-07-15 00:45:22 +08:00
leejet
0739361bfe fix: avoid macOS build failure 2025-07-13 20:18:10 +08:00
leejet
ca0bd9396e refactor: update c api (#728) 2025-07-13 18:48:42 +08:00
stduhpf
a772dca27a feat: add Instruct-Pix2pix/CosXL-Edit support (#679)
* Instruct-p2p support

* support 2 conditionings cfg

* Do not re-encode the exact same image twice

* fixes for 2-cfg

* Fix pix2pix latent inputs + improve inpainting a bit + fix naming

* prepare for other pix2pix-like models

* Support sdxl ip2p

* fix reference image embeddings

* Support 2-cond cfg properly in cli

* fix typo in help

* Support masks for ip2p models

* unify code style

* delete unused code

* use edit mode

* add img_cond

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>
2025-07-12 15:36:45 +08:00
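The "2 conditionings cfg" in the commit above refers to classifier-free guidance with two scales, as introduced by InstructPix2Pix: the denoiser is evaluated unconditionally, with only the image conditioning, and with both image and text conditioning, and the three predictions are blended. A minimal sketch over plain lists as stand-ins for latents (function and argument names are illustrative, not the repository's API):

```python
def cfg_two_cond(e_uncond, e_img, e_full, scale_img, scale_txt):
    """Combine three denoiser predictions with two guidance scales,
    InstructPix2Pix-style:
      e_uncond: prediction with neither conditioning
      e_img:    prediction with only the reference-image conditioning
      e_full:   prediction with both image and text conditioning"""
    return [u + scale_img * (i - u) + scale_txt * (f - i)
            for u, i, f in zip(e_uncond, e_img, e_full)]
```

With `scale_img = scale_txt = 1` this reduces to the fully conditioned prediction; raising either scale pushes the result toward the corresponding conditioning.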
Wagner Bruna
6d84a30c66 feat: overriding quant types for specific tensors on model conversion (#724) 2025-07-08 00:11:38 +08:00
stduhpf
dafc32d0dd feat: add support for f64/i64 and clip_g diffusers model (#681) 2025-07-06 23:24:55 +08:00
idostyle
225162f270 fix: mark encoder.embed_tokens.weight as unused tensor (#721) 2025-07-06 23:10:10 +08:00
leejet
b9e4718fac fix: correct --chroma-enable-t5-mask argument 2025-07-06 11:11:47 +08:00
leejet
1ce1c1adca feat: make lora graph size variable 2025-07-05 22:44:22 +08:00
stduhpf
19fbfd8639 feat: override text encoders for unet models (#682) 2025-07-04 22:19:47 +08:00
Wagner Bruna
76c72628b1 fix: fix a few typos on cli help and error messages (#714) 2025-07-04 22:15:41 +08:00
vmobilis
3bae667f3d fix: break the line after skipping tensors in VAE (#591) 2025-07-03 22:50:42 +08:00
stduhpf
8d0819c548 fix: actually use embeddings with SDXL (#657) 2025-07-03 22:39:57 +08:00
Binozo
7a8ff2e819 docs: add golang cgo bindings to README (#635) 2025-07-02 23:19:49 +08:00
rmatif
0927e8e322 docs: add Android app to README (#647) 2025-07-02 23:18:16 +08:00
stduhpf
83ef4e44ce feat: add T5 with llama.cpp naming convention support (#654) 2025-07-02 23:13:00 +08:00
leejet
7dac89ad75 refactor: reuse some code 2025-07-01 23:33:50 +08:00
stduhpf
9251756086 feat: add CosXL support (#683) 2025-07-01 23:13:04 +08:00
leejet
ecf5db97ae chore: fix windows build and release 2025-07-01 23:05:48 +08:00
stduhpf
ea46fd6948 fix: force zero-initialize output of tiling (#703) 2025-07-01 23:01:29 +08:00
leejet
23de7fc44a chore: avoid warnings when building on linux 2025-06-30 23:49:52 +08:00
rmatif
d42fd59464 feat: add OpenCL backend support (#680) 2025-06-30 23:32:23 +08:00
Wagner Bruna
0d8b39f0ba fix: avoid crash on sdxl loras (#658)
Some SDXL LoRAs (e.g. PCM) can exceed 12k nodes.
2025-06-30 23:29:32 +08:00
R0CKSTAR
539b5b9374 fix: fix musa docker build (#662)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-06-30 23:27:40 +08:00
Wagner Bruna
b1fc16b504 fix: allow resetting clip_skip to its default value (#697) 2025-06-30 23:23:21 +08:00
leejet
d6c87dce5c docs: add chroma doc 2025-06-29 23:58:15 +08:00
leejet
a28d04dd81 fix: fix the issue in parsing --chroma-disable-dit-mask 2025-06-29 23:52:36 +08:00
leejet
45d0ebb30c style: format code 2025-06-29 23:40:55 +08:00
stduhpf
b1cc40c35c feat: add Chroma support (#696)
---------

Co-authored-by: Green Sky <Green-Sky@users.noreply.github.com>
Co-authored-by: leejet <leejet714@gmail.com>
2025-06-29 23:36:42 +08:00