vllm-project

vLLM

22 vulnerabilities found.

Note: This list may be incomplete. Data is provided without guarantee, in its original format.
  • EPSS 0.08%
  • Published 02.02.2026 23:16:06
  • Last modified 23.02.2026 18:19:12

vLLM is an inference and serving engine for large language models (LLMs). From 0.8.3 to before 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error. vLLM returns this error to the client, leaking a heap address. Wi...
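The leak pattern above comes from forwarding a library's raw exception text to the client: Python object reprs embed memory addresses (`<... object at 0x7f...>`). A minimal sketch of the mitigation pattern, with a stand-in decoder instead of PIL (`decode_image` and `safe_decode` are hypothetical names, not vLLM's API):

```python
class _Obj:
    """Stand-in object whose default repr contains a heap address."""
    pass

def decode_image(data: bytes):
    # Stand-in for the real image decoder: fail with a message that
    # embeds an object repr, as library errors sometimes do.
    raise RuntimeError(f"cannot identify image in {_Obj()!r}")

def safe_decode(data: bytes):
    """Decode an image, mapping any internal error to a generic one."""
    try:
        return decode_image(data)
    except Exception:
        # Never forward str(exc) to the client: reprs like
        # '<_Obj object at 0x7f...>' expose heap addresses.
        raise ValueError("invalid image input") from None
```

The generic message is returned to the client; the original exception can still be logged server-side for debugging.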

Exploit
  • EPSS 0.02%
  • Published 27.01.2026 22:01:13
  • Last modified 30.01.2026 14:41:25

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in the `MediaConnector` class within the vLLM project's multimodal feature set. The load_from...
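A common mitigation pattern for SSRF in URL-fetching code like this is to resolve the host and refuse private, loopback, and link-local addresses before issuing the request. A minimal sketch (the function name is illustrative, not vLLM's actual fix):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def check_media_url(url: str) -> None:
    """Reject URLs that would let a fetch reach internal services."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    if parsed.hostname is None:
        raise ValueError("missing host")
    # Resolve the host and block internal address ranges.
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"blocked internal address: {addr}")
```

Note this check alone does not defeat DNS rebinding; a robust fix also pins the resolved address for the actual connection.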

  • EPSS 0.06%
  • Published 21.01.2026 21:13:11
  • Last modified 30.01.2026 14:43:22

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowi...
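The gating pattern the fix describes can be sketched as follows: any config requesting dynamic code via `auto_map` is refused unless the operator explicitly opted in. The function name and config handling here are illustrative, not the actual vLLM code path:

```python
def load_model_cls(config: dict, trust_remote_code: bool = False):
    """Resolve a model from its config, gating dynamic module loading."""
    if "auto_map" in config:
        if not trust_remote_code:
            # auto_map points at Python modules shipped with the model
            # repo; importing them executes arbitrary code.
            raise PermissionError(
                "model config requests dynamic code (auto_map); "
                "pass trust_remote_code=True to allow it"
            )
        # ...only here would the remote module actually be imported...
    return config.get("architectures", [])
```

The key property is that the check happens before any import, during model resolution, rather than after the module is already loaded.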

Exploit
  • EPSS 0.02%
  • Published 10.01.2026 06:39:02
  • Last modified 27.01.2026 21:03:47

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially craf...

  • EPSS 0.21%
  • Published 01.12.2025 22:45:42
  • Last modified 03.12.2025 17:52:26

vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entr...

  • EPSS 0.06%
  • Published 21.11.2025 01:22:37
  • Last modified 04.12.2025 17:40:47

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g...
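The bug class here is validating only the number of dimensions, which lets a tensor of the right rank but wrong shape reach the engine and crash it. A minimal sketch of full-shape validation on the shape tuple (the hidden size 4096 and the function name are assumptions for illustration):

```python
def validate_embedding_shape(shape: tuple, hidden: int = 4096) -> None:
    """Reject embedding inputs whose shape does not match the model."""
    if len(shape) != 2:
        raise ValueError(f"expected 2-D embeddings, got ndim={len(shape)}")
    if shape[1] != hidden:
        # Checking only ndim would let (N, anything) through and crash
        # the engine later; reject the wrong trailing dim up front.
        raise ValueError(
            f"expected trailing dim {hidden}, got {shape[1]}"
        )
```

Validating at the API boundary turns an engine-killing crash into a per-request 4xx error.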

  • EPSS 0.07%
  • Published 21.11.2025 01:21:29
  • Last modified 04.12.2025 17:42:10

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, the /v1/chat/completions and /tokenize endpoints allow a chat_template_kwargs request parameter that is used in the code before it is prope...
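A standard remedy for request parameters that flow into internal code paths is an explicit allowlist, so client-supplied keys can never shadow internal arguments. A minimal sketch; the allowed key names and function name are illustrative, not vLLM's actual validation:

```python
# Only these request-supplied template kwargs are passed through
# (illustrative set, not vLLM's real allowlist).
ALLOWED_TEMPLATE_KWARGS = {"add_generation_prompt", "enable_thinking"}

def sanitize_template_kwargs(kwargs: dict) -> dict:
    """Drop-in validation before kwargs reach the template renderer."""
    unknown = set(kwargs) - ALLOWED_TEMPLATE_KWARGS
    if unknown:
        raise ValueError(
            f"unsupported chat_template_kwargs: {sorted(unknown)}"
        )
    return dict(kwargs)
```

Rejecting unknown keys (rather than silently filtering them) also makes probing attempts visible in server logs.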

  • EPSS 0.11%
  • Published 21.11.2025 01:18:38
  • Last modified 04.12.2025 17:14:20

vLLM is an inference and serving engine for large language models (LLMs). From version 0.10.2 to before 0.11.1, a memory corruption vulnerability that could lead to a crash (denial of service) and potentially remote code execution (RCE) exists in the Co...

Exploit
  • EPSS 0.53%
  • Published 07.10.2025 14:15:38
  • Last modified 16.10.2025 18:02:09

vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, the API key support in vLLM performs validation using a method that was vulnerable to a timing attack. API key validation uses a string comparison tha...
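The issue is that a plain `==` on strings returns as soon as the first byte differs, so response timing reveals how much of a guessed key is correct. The standard fix is a constant-time comparison, e.g. Python's `hmac.compare_digest` (the function name below is illustrative):

```python
import hmac

def check_api_key(provided: str, expected: str) -> bool:
    """Constant-time API key check.

    hmac.compare_digest takes time independent of where the inputs
    first differ, unlike `provided == expected`, which short-circuits
    and leaks prefix-match length through timing.
    """
    return hmac.compare_digest(provided.encode(), expected.encode())
```

`secrets.compare_digest` is an alias of the same primitive and works equally well here.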

  • EPSS 0.34%
  • Published 21.08.2025 14:41:03
  • Last modified 09.10.2025 18:04:53

vLLM is an inference and serving engine for large language models (LLMs). From 0.1.0 to before 0.10.1.1, a Denial of Service (DoS) vulnerability can be triggered by sending a single HTTP GET request with an extremely large header to an HTTP endpoint....