ggml

llama.cpp

8 vulnerabilities found.

Note: this list may be incomplete. Data is provided as-is in its original format, without guarantee.
  • EPSS 0.15%
  • Published 01.04.2026 16:59:59
  • Last modified 03.04.2026 16:10:52

llama.cpp provides inference for several LLM models in C/C++. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process m...
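The class of bug can be sketched as follows; the struct and function names are illustrative, not the actual llama.cpp RPC code. The hardened check validates the tensor's data range unconditionally and rejects the `buffer == 0` case outright:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: a tensor descriptor received over the wire,
 * with a buffer handle plus an offset/size into that buffer. */
typedef struct {
    uint64_t buffer;  /* 0 meant "no buffer" and skipped all checks */
    uint64_t offset;
    uint64_t size;
} rpc_tensor;

/* Hardened check: reject a missing buffer and verify the range with
 * overflow-safe arithmetic (no offset + size that can wrap). */
static int tensor_range_ok(const rpc_tensor *t, uint64_t buf_size) {
    if (t->buffer == 0) return 0;                 /* no backing buffer */
    if (t->offset > buf_size) return 0;           /* start past the end */
    if (t->size > buf_size - t->offset) return 0; /* end past the end */
    return 1;
}
```

Subtracting the offset before comparing, rather than computing `offset + size`, is what keeps the end-of-range check itself immune to wraparound.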

  • EPSS 0.04%
  • Published 24.03.2026 00:01:40
  • Last modified 24.03.2026 15:53:48

llama.cpp provides inference for several LLM models in C/C++. Prior to b7824, an integer overflow in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. This cau...
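A minimal sketch of this overflow class, assuming nothing about the real `ggml_nbytes` beyond what the summary states: multiplying tensor dimensions and an element size in `size_t` can wrap around, so a crafted tensor reports a tiny byte count and passes a "does it fit?" check. An overflow-checked version refuses to wrap:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Overflow-checked multiply: fails instead of wrapping around. */
static int checked_mul(size_t a, size_t b, size_t *out) {
    if (a != 0 && b > SIZE_MAX / a) return 0;
    *out = a * b;
    return 1;
}

/* Illustrative replacement for a naive ne[0]*ne[1]*ne[2]*ne[3]*type_size;
 * returns 0 on overflow (a genuinely zero-sized tensor is treated the
 * same here, which is acceptable for a sketch). */
static size_t tensor_nbytes(const size_t ne[4], size_t type_size) {
    size_t n = type_size;
    for (int i = 0; i < 4; i++) {
        if (!checked_mul(n, ne[i], &n)) return 0;
    }
    return n;
}
```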

  • EPSS 0.01%
  • Published 12.03.2026 16:39:37
  • Last modified 12.03.2026 21:07:53

llama.cpp provides inference for several LLM models in C/C++. Prior to b8146, gguf_init_from_file_impl() in gguf.cpp is vulnerable to an integer overflow, leading to an undersized heap allocation. The subsequent fread() then writes 528+ bytes of att...
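The undersized-allocation pattern can be illustrated as follows (the record type and names are hypothetical, not the actual gguf.cpp code): a count-times-size multiplication that wraps hands malloc() a tiny request, and the later read overflows it. `calloc()` performs the overflow check internally and returns NULL instead:

```c
#include <assert.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical record standing in for a GGUF key/value entry. */
typedef struct { uint64_t key; uint64_t val; } kv_entry;

/* malloc(n_kv * sizeof(kv_entry)) can wrap for attacker-chosen n_kv,
 * yielding a tiny buffer; calloc(n, size) checks the multiplication
 * and fails cleanly with NULL. */
static kv_entry *alloc_kv(size_t n_kv) {
    return calloc(n_kv, sizeof(kv_entry));
}
```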

Exploit
  • EPSS 0.36%
  • Published 07.01.2026 23:37:59
  • Last modified 02.02.2026 19:12:36

llama.cpp provides inference for several LLM models in C/C++. In commits 55d4206c8 and prior, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation to ensure it is non-negative. When a...
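The missing validation can be sketched as a simple guard; the parameter name mirrors the summary, but the function itself is hypothetical, not the server's code. A negative `n_discard` taken straight from client JSON would otherwise flow into cache-shift arithmetic with its sign intact:

```c
#include <assert.h>

/* Hypothetical guard for a client-supplied n_discard before it is
 * used as a discard count against n_cached entries. */
static int validate_n_discard(long long n_discard, long long n_cached) {
    if (n_discard < 0) return 0;        /* negative flips the shift direction */
    if (n_discard > n_cached) return 0; /* can't discard more than is cached */
    return 1;
}
```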

Exploit
  • EPSS 0.08%
  • Published 24.06.2025 03:21:19
  • Last modified 27.08.2025 14:01:31

llama.cpp provides inference for several LLM models in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) results in unintended behav...
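The signed-vs-unsigned pitfall named here is a general C/C++ hazard and can be shown without the tokenizer itself: when a signed `int` meets a `size_t` in a comparison, the int is converted to unsigned, so a negative or wrapped length sails past the check. A sketch of the safe narrowing, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* In an expression like `(int)len < limit_sz` the int operand is
 * converted to unsigned, so a negative value compares as a huge
 * number. Gate the narrowing instead: only lengths that fit a
 * signed int32 may leave size_t. */
static int fits_int32(size_t len) {
    return len <= (size_t)INT32_MAX;
}
```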

Media report
  • EPSS 0.22%
  • Published 17.06.2025 20:04:40
  • Last modified 27.08.2025 13:48:14

llama.cpp provides inference for several LLM models in C/C++. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama.cpp/src/voc...
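A copy helper of the kind the summary describes can be hardened by checking the destination capacity before copying; this is a generic sketch, not the actual _try_copy implementation:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* Illustrative bounded copy: refuses when the source (e.g. a token
 * string from an attacker-supplied vocabulary) exceeds the
 * destination's capacity, instead of overflowing it. */
static int try_copy(char *dst, size_t dst_cap, const char *src, size_t src_len) {
    if (src_len > dst_cap) return 0;  /* would overflow the buffer */
    memcpy(dst, src, src_len);
    return 1;
}
```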

  • EPSS 0.1%
  • Published 22.07.2024 18:15:04
  • Last modified 27.08.2025 16:20:20

llama.cpp provides LLM inference in C/C++. Prior to b3427, llama.cpp contains a null pointer dereference in gguf_init_from_file. This vulnerability is fixed in b3427.
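The null-dereference class is the classic unchecked-handle pattern; a minimal sketch (not the actual gguf_init_from_file code) showing the check that prevents it:

```c
#include <assert.h>
#include <stdio.h>

/* Read the 4-byte magic from a file; returns 0 on any failure. The
 * fopen NULL check is exactly what a loader that goes on to
 * dereference the handle must not skip. */
static int load_magic(const char *path, char magic[4]) {
    FILE *f = fopen(path, "rb");
    if (f == NULL) return 0;  /* missing/unreadable file: no dereference */
    size_t n = fread(magic, 1, 4, f);
    fclose(f);
    return n == 4;
}
```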

  • EPSS 0.21%
  • Published 26.04.2024 21:15:49
  • Last modified 02.09.2025 18:30:15

llama.cpp provides LLM inference in C/C++. There is a use-of-uninitialized-heap-variable vulnerability in gguf_init_from_file; the code later frees this uninitialized variable. In a simple PoC, this directly causes a crash. If the file is carefully c...
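The uninitialized-then-freed pattern can be sketched with a shared cleanup path; the names are illustrative. Initializing every pointer to NULL makes the error-path free() a harmless no-op instead of a free of stack garbage:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative loader skeleton: all pointers start at NULL, so the
 * shared `fail:` cleanup may free them on any path (free(NULL) is a
 * no-op). Leaving them uninitialized and reaching `fail:` early is
 * the crash/exploit pattern described above. */
static int init_ctx(int fail_early) {
    char *kv = NULL, *tensors = NULL;
    if (fail_early) goto fail;  /* e.g. bad magic before any allocation */
    kv = malloc(16);
    if (!kv) goto fail;
    tensors = malloc(32);
    if (!tensors) goto fail;
    free(tensors);
    free(kv);
    return 1;
fail:
    free(tensors);  /* still NULL when we failed early: safe */
    free(kv);
    return 0;
}
```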