CVE-2025-25183 - How Predictable Hash Collisions in vLLM Can Let Attackers Interfere with AI Responses
Summary:
vLLM is a popular inference and serving engine used to run large language models (LLMs) efficiently. It supports advanced features like prefix caching, which reuses previously computed state for prompts that share a common prefix so repeated requests can be served faster. The vulnerability arises because the cache key was derived from a predictable hash, so an attacker who can craft colliding inputs may be able to interfere with which cached content is served.
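To make the vulnerability class concrete, here is a minimal sketch of a prefix cache keyed by a predictable, non-cryptographic hash. Everything below (`weak_hash`, `ToyPrefixCache`) is illustrative and assumed for this example; it is not vLLM's actual implementation, but it shows why a guessable hash lets an attacker craft a colliding prefix that is treated as identical to a victim's.

```python
def weak_hash(tokens):
    # Deliberately weak rolling hash with a tiny modulus: an attacker
    # who knows the formula can construct collisions cheaply. (A keyed
    # or cryptographic hash would not be predictable this way.)
    h = 0
    for t in tokens:
        h = (h * 31 + t) % 2**16
    return h


class ToyPrefixCache:
    """Toy cache that stores a computed result per prefix hash."""

    def __init__(self):
        self._store = {}

    def get_or_compute(self, tokens, compute):
        # BUG: the key is only the hash; the actual tokens are never
        # compared, so any colliding prefix gets the cached entry.
        key = weak_hash(tokens)
        if key not in self._store:
            self._store[key] = compute(tokens)
        return self._store[key]


cache = ToyPrefixCache()

victim = [1, 2, 3]
# Attacker-crafted prefix: shifting the last token by the modulus
# (2**16) produces the same weak_hash value as the victim's prefix.
attacker = [1, 2, 3 + 2**16]

r1 = cache.get_or_compute(victim, lambda t: f"response for {t}")
r2 = cache.get_or_compute(attacker, lambda t: f"response for {t}")
# r1 == r2: the attacker's different prompt is served the victim's
# cached result because the hashes collide.
```

The standard mitigation for this class of bug is to key the cache on a collision-resistant (or secretly keyed) hash, or to verify the full token sequence on a cache hit rather than trusting the hash alone.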
CVE-2022-26388 - Hard-Coded Passwords Threaten ELI Electrocardiographs
*Published June 2024*
*By SecurePulse Research Team*
Medical devices help save lives — but what happens when those same devices are left wide open for attackers?