
Hallucination Attacks in FastAPI with Basic Auth

Hallucination Attacks in FastAPI with Basic Auth — how this specific combination creates or exposes the vulnerability

A hallucination attack in an API context occurs when an LLM or AI-assisted component generates plausible but false information, such as fabricated endpoints, parameters, or data relationships. When a FastAPI service protected only by Basic Auth exposes an LLM-facing endpoint (for example, an OpenAI-compatible chat or completion route), the combination can amplify risk: authentication may be verified, but the logic that interprets user input and produces AI-generated responses is not grounded in the API’s actual schema and runtime behavior.

FastAPI makes it straightforward to add Basic Auth via HTTP dependencies, but if route handlers then pass unchecked user messages directly to an LLM and return the model’s raw response, hallucinations can lead to misleading instructions, invented operations, or suggested administrative actions. Because FastAPI generates an OpenAPI spec automatically, a developer might assume the spec fully describes the runtime surface; however, hallucinations produced by an integrated LLM are not captured in the spec, so the service can behave differently than documented.

Consider an endpoint that accepts a user query, forwards it to an LLM, and returns the model’s answer without validating the context against the API’s actual operations. If the LLM hallucinates an administrative endpoint such as /admin/reset or invents parameters that imply privilege escalation, a client trusting the response may be directed to take unintended actions. In black-box scanning, middleBrick’s LLM/AI Security checks probe such routes with active prompt injection and system prompt extraction techniques, checking whether unauthenticated or improperly scoped LLM endpoints reveal sensitive instructions or allow inference of credentials. Even when Basic Auth guards the route, a misconfigured dependency or a permissive CORS policy can allow an attacker to chain a crafted prompt with authenticated requests, increasing the likelihood of unauthorized guidance or data exfiltration.

Additionally, FastAPI applications that rely on OpenAPI/Swagger spec analysis with full $ref resolution may miss runtime deviations introduced by LLM outputs. middleBrick cross-references spec definitions with runtime findings to highlight mismatches, which is valuable when an LLM is used to dynamically suggest parameters or paths. Without strict input validation and output schema enforcement, hallucinated content can bypass intended constraints, making it essential to couple Basic Auth with rigorous request validation and controlled LLM context windows.
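One hedged sketch of output schema enforcement: require the model to answer in JSON and reject anything that does not parse into an expected shape before it reaches the client. The field names (answer, confidence) and the helper parse_llm_response are illustrative assumptions, not part of any framework:

```python
import json

# Hypothetical response contract agreed with the prompt ("respond in JSON with ...")
EXPECTED_KEYS = {"answer", "confidence"}

def parse_llm_response(raw: str) -> dict:
    """Parse and validate a JSON response from the model; raise ValueError otherwise."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("model did not return valid JSON") from exc
    if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
        raise ValueError("model response does not match the expected schema")
    if not isinstance(data["answer"], str) or not isinstance(data["confidence"], (int, float)):
        raise ValueError("model response has wrong field types")
    return data
```

Anything free-form, including a hallucinated instruction to call an invented endpoint, fails the parse and is never forwarded to the caller.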

Basic Auth-Specific Remediation in FastAPI — concrete code fixes

To mitigate hallucination and authorization risks in FastAPI when using Basic Auth, enforce strict input validation, constrain LLM context, and ensure authentication is applied consistently to all sensitive routes. Below are concrete, working code examples that demonstrate secure patterns.

First, implement HTTP Basic Auth with explicit dependencies and avoid relying on global assumptions about credentials:

from fastapi import FastAPI, Depends, HTTPException, status
from fastapi.security import HTTPBasic, HTTPBasicCredentials
import secrets

app = FastAPI()
security = HTTPBasic()

def get_current_credentials(credentials: HTTPBasicCredentials = Depends(security)):
    # Constant-time comparison avoids timing side channels; in production,
    # load credentials from configuration or a secrets store, never hardcode them
    correct_username = secrets.compare_digest(credentials.username, "admin")
    correct_password = secrets.compare_digest(credentials.password, "S3cur3P@ss!")
    if not (correct_username and correct_password):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid credentials",
            headers={"WWW-Authenticate": "Basic"},
        )
    return credentials
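The hardcoded strings above are only for illustration. A slightly more realistic sketch stores a salted PBKDF2 hash (provisioned out of band, e.g. from environment variables) and compares in constant time; verify_password and the variable names here are assumptions for illustration:

```python
import hashlib
import hmac
import os

def verify_password(submitted: str, salt: bytes, stored_hash: bytes) -> bool:
    """Constant-time check of a PBKDF2-SHA256 hash of the submitted password."""
    candidate = hashlib.pbkdf2_hmac("sha256", submitted.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

# Provisioning side (done once, not per request); in a real deployment the salt
# and hash would be read from configuration rather than generated here
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"S3cur3P@ss!", salt, 100_000)
```

The dependency above could then call verify_password instead of comparing plaintext literals, so the secret never appears in source code.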

Next, protect endpoints that invoke LLMs by combining the auth dependency with strict input sanitization and bounded prompts:

from fastapi import APIRouter, Depends, HTTPException
from fastapi.security import HTTPBasicCredentials
from pydantic import BaseModel, Field
from typing import List

router = APIRouter()

class UserQuery(BaseModel):
    # Pydantic v2 uses max_length/min_length for list size (v1 used max_items/min_items)
    messages: List[str] = Field(..., max_length=5, min_length=1)

@router.post("/chat")
def chat(query: UserQuery, creds: HTTPBasicCredentials = Depends(get_current_credentials)):
    # Validate and sanitize each message to reduce the injection and hallucination surface
    cleaned = [msg.strip() for msg in query.messages if msg.strip() and len(msg) <= 500]
    if not cleaned:
        raise HTTPException(status_code=400, detail="No valid messages provided")

    # Build a tightly scoped prompt that references only allowed operations
    system_prompt = "You are a helpful assistant for the /chat endpoint. Do not invent endpoints or parameters."
    # Here you would call your LLM client with cleaned messages and system_prompt, e.g.:
    # response = llm_client.chat.completions.create(
    #     model="...",
    #     messages=[{"role": "system", "content": system_prompt},
    #               {"role": "user", "content": "\n".join(cleaned)}],
    # )
    # return {"response": response.choices[0].message.content}
    return {"response": "[simulated response]"}
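To further reduce the chance that a hallucinated operation reaches a client, the model's text can be scanned for path-like tokens the API does not actually serve before it is returned. ALLOWED_PATHS and find_unknown_paths are illustrative names, not FastAPI APIs; a real service could derive the allowlist from app.routes:

```python
import re

# Hypothetical allowlist: the paths this service actually serves
ALLOWED_PATHS = {"/chat", "/health"}

def find_unknown_paths(llm_output: str) -> list:
    """Return path-like tokens in an LLM response that the API does not serve."""
    candidates = re.findall(r"(?<![\w/])(/[A-Za-z0-9_\-./]+)", llm_output)
    return [p for p in candidates if p.rstrip(".,") not in ALLOWED_PATHS]

# A hallucinated administrative endpoint is flagged before the response goes out
hallucinated = find_unknown_paths("To reset, call POST /admin/reset with ?force=true")
```

If the returned list is non-empty, the handler can refuse the response or strip the offending guidance rather than forwarding an invented /admin/reset to the caller.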

Finally, ensure that middleware or dependencies enforce secure CORS and content-type handling to prevent credential leakage in cross-origin requests:

from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://trusted.example.com"],
    allow_credentials=True,
    allow_methods=["POST"],
    allow_headers=["Content-Type", "Authorization"],
)
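It is worth remembering what Basic Auth actually puts on the wire: the Authorization header is only base64-encoded, not encrypted, which is why TLS and the strict CORS allowlist above carry the real protection. A quick stdlib illustration:

```python
import base64

# What a client sends in the Authorization header for Basic Auth
header_value = "Basic " + base64.b64encode(b"admin:S3cur3P@ss!").decode()

# Anyone who can read the header recovers the credentials directly,
# because base64 is an encoding, not encryption
decoded = base64.b64decode(header_value.split(" ", 1)[1]).decode()
```

Serving Basic Auth over plain HTTP, or reflecting it to untrusted origins, therefore leaks credentials outright.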

These patterns reduce the likelihood that hallucinated LLM outputs will misrepresent available endpoints or suggest unauthorized actions. By anchoring user prompts to a narrow, validated scope and consistently applying the Basic Auth dependency, you limit both information leakage and the impact of potential hallucinations.

Related CWEs (check category: llmSecurity)

CWE ID  | Name                                                 | Severity
CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM

Frequently Asked Questions

How does middleBrick detect hallucination risks in FastAPI endpoints using Basic Auth?
middleBrick runs LLM/AI Security checks that include active prompt injection probes and output scanning. It attempts to extract system prompts, override instructions, and elicit fabricated endpoint suggestions, then compares reported behaviors against the OpenAPI/Swagger spec with full $ref resolution to identify mismatches between documented and runtime behavior.
Can Basic Auth alone prevent hallucination attacks in FastAPI?
No. Basic Auth can verify client identity for the route, but it does not validate or constrain LLM-generated responses. Without input validation, bounded prompts, and output schema checks, hallucinations can still lead to misleading guidance or apparent privilege escalation suggestions.