vLLM

24 vulnerabilities found.

Note: This list may be incomplete. Data is provided as-is in its original format, without guarantee.
Exploit
  • EPSS 0.37%
  • Published 07.10.2025 14:15:38
  • Last modified 16.10.2025 18:02:09

vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, the API key support in vLLM performs validation using a method that was vulnerable to a timing attack. API key validation uses a string comparison tha...
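The vulnerable pattern here is an ordinary equality check, and the standard fix is a constant-time comparison. A minimal sketch (the key value and function names are illustrative, not vLLM's actual code):

```python
import hmac

API_KEY = "expected-server-side-key"  # illustrative value

def check_api_key_naive(provided: str) -> bool:
    # Vulnerable pattern: '==' short-circuits at the first differing
    # byte, so response time leaks how many leading characters match.
    return provided == API_KEY

def check_api_key_safe(provided: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ, closing the timing side channel described above.
    return hmac.compare_digest(provided.encode(), API_KEY.encode())
```

An attacker measuring response times against the naive version can recover the key one prefix character at a time; the constant-time version gives them nothing to measure.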

  • EPSS 0.27%
  • Published 21.08.2025 14:41:03
  • Last modified 09.10.2025 18:04:53

vLLM is an inference and serving engine for large language models (LLMs). From 0.1.0 to before 0.10.1.1, a Denial of Service (DoS) vulnerability can be triggered by sending a single HTTP GET request with an extremely large header to an HTTP endpoint....
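The general mitigation class for this kind of DoS is to cap the header section before buffering or parsing it. A simplified sketch over a raw request buffer (the limit and names are illustrative; a real server enforces the cap while reading from the socket):

```python
MAX_HEADER_BYTES = 8 * 1024  # illustrative cap, similar to common server defaults

def split_headers(raw: bytes) -> bytes:
    """Return the header section of a raw HTTP request, rejecting
    oversized or unterminated header blocks before parsing them."""
    end = raw.find(b"\r\n\r\n")
    if end == -1 or end > MAX_HEADER_BYTES:
        raise ValueError("header section too large or unterminated")
    return raw[:end]
```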

Exploit
  • EPSS 0.32%
  • Published 30.05.2025 18:38:45
  • Last modified 01.07.2025 20:42:13

vLLM is an inference and serving engine for large language models (LLMs). In version 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern...
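The failure mode is unvalidated user input reaching the regex engine. A server should reject a malformed "pattern" field with a client error rather than crash; a hypothetical validation sketch (not vLLM's actual fix):

```python
import re

def validate_guided_pattern(pattern: object) -> str:
    # Reject non-string and syntactically invalid patterns up front so
    # a malformed request yields a 400-style error, not a server crash.
    if not isinstance(pattern, str):
        raise ValueError("'pattern' must be a string")
    try:
        re.compile(pattern)
    except re.error as exc:
        raise ValueError(f"invalid regex in 'pattern': {exc}") from None
    return pattern
```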

Exploit
  • EPSS 0.35%
  • Published 30.05.2025 17:36:16
  • Last modified 19.06.2025 00:55:27

vLLM, an inference and serving engine for large language models (LLMs), has a Regular Expression Denial of Service (ReDoS) vulnerability in the file `vllm/entrypoints/openai/tool_parsers/pythonic_tool_parser.py` of versions 0.6.4 up to but excluding ...
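ReDoS arises from regexes whose backtracking cost explodes on near-miss inputs, typically through nested quantifiers. An illustrative pair (not vLLM's actual expressions):

```python
import re

# Nested quantifiers such as (a+)+ can backtrack exponentially when a
# long input almost matches (e.g. "aaa...ab"): the engine retries every
# way of splitting the run of 'a's between the inner and outer '+'.
redos_prone = re.compile(r"^(a+)+$")

# The same language without nesting backtracks at most linearly:
linear = re.compile(r"^a+$")

# Both accept and reject the same strings; only their worst-case
# running time differs (inputs kept tiny here for that reason).
for text in ("aaaa", "aaab", ""):
    assert bool(redos_prone.match(text)) == bool(linear.match(text))
```

The usual fixes are rewriting the pattern to remove the nesting, or bounding the input length before matching.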

Exploit
  • EPSS 0.87%
  • Published 20.05.2025 17:32:27
  • Last modified 13.08.2025 16:35:57

vLLM, an inference and serving engine for large language models (LLMs), has an issue in versions 0.6.5 through 0.8.4 that ONLY impacts environments using the `PyNcclPipe` KV cache transfer integration with the V0 engine. No other configurations are a...

  • EPSS 1.31%
  • Published 06.05.2025 16:53:52
  • Last modified 31.07.2025 18:05:30

vLLM is an inference and serving engine for large language models. In a multi-node vLLM deployment using the V0 engine, vLLM uses ZeroMQ for some multi-node communication purposes. The secondary vLLM hosts open a `SUB` ZeroMQ socket and connect to an...

Exploit
  • EPSS 2.48%
  • Published 30.04.2025 00:25:00
  • Last modified 28.05.2025 19:12:58

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.6.5 and prior to 0.8.5, having vLLM integration with mooncake, are vulnerable to remote code execution due to using pickle based serializat...

Exploit
  • EPSS 0.57%
  • Published 30.04.2025 00:24:53
  • Last modified 28.05.2025 19:15:56

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. T...

Exploit
  • EPSS 0.45%
  • Published 30.04.2025 00:24:45
  • Last modified 14.05.2025 19:59:42

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.5.2 and prior to 0.8.5 are vulnerable to denial of service and data exposure via ZeroMQ on multi-node vLLM deployment. In a multi-node vLLM...

Exploit
  • EPSS 1.25%
  • Published 20.03.2025 10:10:40
  • Last modified 31.07.2025 14:48:32

vllm-project vllm version v0.6.2 contains a vulnerability in the MessageQueue.dequeue() API function. The function passes data received over the socket directly to pickle.loads, leading to a remote code execution vulnerability. An attacker can exploit this by...
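Because pickle.loads will reconstruct arbitrary objects, any attacker-controlled payload can execute code via `__reduce__` gadgets. A minimal hardening sketch (class and function names are illustrative, not the project's actual fix) that refuses every global lookup, so only primitive containers deserialize:

```python
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    # pickle resolves classes and functions through find_class; raising
    # here blocks every gadget that names a callable, while plain data
    # (dicts, lists, strings, numbers) still round-trips.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked unpickling of {module}.{name}"
        )

def safe_loads(data: bytes):
    return NoGlobalsUnpickler(io.BytesIO(data)).load()
```

For untrusted network peers, a stricter answer is to avoid pickle entirely and use a schema-checked format such as JSON or msgpack for the queue payloads.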