vllm-project

vllm

28 vulnerabilities found.

Note: This list may be incomplete. Data is provided as-is, in its original format, without warranty.
  • EPSS 0.03%
  • Published 06.04.2026 15:40:03
  • Last modified 07.04.2026 13:20:11

vLLM is an inference and serving engine for large language models (LLMs). From 0.1.0 to before 0.19.0, a Denial of Service vulnerability exists in the vLLM OpenAI-compatible API server. Due to the lack of an upper bound validation on the n parameter ...
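The class of fix for this issue is a server-side bound check on `n` before any per-sequence work is scheduled. A minimal sketch, assuming a hypothetical limit (the value and function name are illustrative, not vLLM's actual patch):

```python
# Hypothetical cap on parallel completions per request; vLLM's
# actual limit in 0.19.0 may differ.
MAX_N = 128

def validate_n(n: int) -> int:
    """Reject out-of-range values of the OpenAI-style `n` parameter
    before any per-sequence resources are allocated."""
    if not isinstance(n, int) or n < 1:
        raise ValueError("`n` must be a positive integer")
    if n > MAX_N:
        raise ValueError(f"`n` must be <= {MAX_N}")
    return n
```

Rejecting the request before allocation is the key point: the cost of validation is constant, while the cost of honoring an unbounded `n` scales with the attacker's input.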

  • EPSS 0.04%
  • Published 06.04.2026 15:38:53
  • Last modified 07.04.2026 13:20:11

vLLM is an inference and serving engine for large language models (LLMs). From 0.7.0 to before 0.19.0, the VideoMediaIO.load_base64() method at vllm/multimodal/media/video.py splits video/jpeg data URLs by comma to extract individual JPEG frames, but...
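Splitting a data URL naively on commas risks confusing the header/payload boundary with frame separators and leaves the frame count unbounded. A defensive parsing sketch under those assumptions (the `MAX_FRAMES` cap and the exact URL shape are illustrative, not vLLM's code):

```python
import base64

# Hypothetical cap; the safe limit depends on the deployment.
MAX_FRAMES = 64

def split_jpeg_frames(data_url: str) -> list[bytes]:
    # Strip the data-URL header at the FIRST comma only, so the
    # header/payload boundary is never confused with frame separators.
    header, sep, payload = data_url.partition(",")
    if not sep or not header.startswith("data:video/jpeg"):
        raise ValueError("expected a data:video/jpeg URL")
    chunks = payload.split(",")
    if len(chunks) > MAX_FRAMES:
        raise ValueError(f"too many frames (> {MAX_FRAMES})")
    # validate=True rejects chunks that are not well-formed base64.
    return [base64.b64decode(c, validate=True) for c in chunks]
```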

  • EPSS 0.04%
  • Published 06.04.2026 15:36:52
  • Last modified 07.04.2026 13:20:11

vLLM is an inference and serving engine for large language models (LLMs). From 0.16.0 to before 0.19.0, a server-side request forgery (SSRF) vulnerability in download_bytes_from_url allows any actor who can control batch input JSON to make the vLLM b...

  • EPSS 0.06%
  • Published 02.04.2026 18:59:49
  • Last modified 03.04.2026 16:10:23

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before version 0.18.0, Librosa defaults to using numpy.mean for mono downmixing (to_mono), while the international standard ITU-R BS.775-4 specifies a wei...
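The discrepancy at issue is between an unweighted channel average and a weighted downmix. A minimal NumPy sketch contrasting the two approaches (the weighted variant takes caller-supplied coefficients; no specific BS.775-4 values are asserted here):

```python
import numpy as np

def to_mono_mean(y: np.ndarray) -> np.ndarray:
    """Unweighted downmix, as librosa's to_mono effectively does:
    a plain average across the channel axis."""
    return np.mean(y, axis=0)

def to_mono_weighted(y: np.ndarray, weights) -> np.ndarray:
    """Normalized weighted downmix. The coefficients a caller would
    use are standard-dependent; none are hardcoded here."""
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * y).sum(axis=0) / w.sum()
```

With unequal channels and unequal weights, the two functions produce different mono signals for the same input, which is the mismatch the advisory describes.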

  • EPSS 0.03%
  • Published 26.03.2026 23:56:53
  • Last modified 30.03.2026 18:56:21

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's expli...
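The corresponding fix pattern is to thread the user's `trust_remote_code` decision through to every sub-component load rather than hardcoding `True`. A hedged sketch (the function, config keys, and return value are hypothetical, not vLLM's internals):

```python
def load_subcomponent(config: dict, *, trust_remote_code: bool = False) -> str:
    """Forward the caller's trust_remote_code setting instead of
    hardcoding True, so remote code execution always requires an
    explicit opt-in from the user."""
    if config.get("requires_remote_code", False) and not trust_remote_code:
        raise PermissionError(
            "model requires remote code execution; "
            "pass trust_remote_code=True to opt in explicitly"
        )
    # Placeholder for the actual dynamic import / class resolution.
    return config["architecture"]
```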

Exploit
  • EPSS 0.02%
  • Published 09.03.2026 21:16:15
  • Last modified 18.03.2026 18:36:10

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779 added in 0.15.1 can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer...
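Bypasses of this kind typically arise when the validation layer and the fetch layer parse the URL differently. A defensive sketch that derives the host from the single parse result that would also drive the fetch, so the two layers cannot disagree (the scheme allow-list is an assumption, not the actual patch):

```python
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"http", "https"}

def validated_host(url: str) -> str:
    """Parse once with urlsplit and validate the exact host that the
    fetch code would connect to, instead of re-parsing with a second,
    possibly divergent parser."""
    parts = urlsplit(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"scheme not allowed: {parts.scheme!r}")
    host = parts.hostname
    if not host:
        raise ValueError("URL has no host")
    return host
```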

  • EPSS 0.09%
  • Published 02.02.2026 23:16:06
  • Last modified 23.02.2026 18:19:12

vLLM is an inference and serving engine for large language models (LLMs). From 0.8.3 to before 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error. vLLM returns this error to the client, leaking a heap address. Wi...
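The generic mitigation for this leak class is to log the raw exception server-side and return only a fixed, non-reflective message to the client. A sketch of that pattern (not vLLM's actual fix; the response shape is illustrative):

```python
import logging

logger = logging.getLogger("multimodal")

def safe_error_response(exc: Exception) -> dict:
    """Log the raw decoder error for operators, but return only a
    generic, fixed message to the client, so internals such as heap
    addresses embedded in exception text are never echoed back."""
    logger.error("image decode failed: %r", exc)
    return {"error": {"message": "invalid image input",
                      "type": "invalid_request_error"}}
```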

Exploit
  • EPSS 0.02%
  • Published 27.01.2026 22:01:13
  • Last modified 30.01.2026 14:41:25

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in the `MediaConnector` class within the vLLM project's multimodal feature set. The load_from...
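A standard guard against this class of SSRF is to resolve the target host and reject loopback, private, and link-local addresses before fetching. An illustrative sketch (not the MediaConnector patch itself; a complete defense against DNS rebinding additionally requires connecting to the resolved address rather than re-resolving):

```python
import ipaddress
import socket

def assert_public_address(host: str) -> None:
    """Resolve `host` and raise if any resolved address is
    non-public, so requests cannot be steered at internal services."""
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_loopback or addr.is_private
                or addr.is_link_local or addr.is_reserved):
            raise PermissionError(f"blocked non-public address: {addr}")
```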

  • EPSS 0.02%
  • Published 21.01.2026 21:13:11
  • Last modified 30.01.2026 14:43:22

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowi...

Exploit
  • EPSS 0.02%
  • Published 10.01.2026 06:39:02
  • Last modified 27.01.2026 21:03:47

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially craf...