HIGH prompt injectiondjangohmac signatures

Prompt Injection in Django with Hmac Signatures

Prompt Injection in Django with Hmac Signatures — how this specific combination creates or exposes the vulnerability

Prompt injection occurs when an attacker can influence the behavior of an LLM by injecting malicious instructions through untrusted input. In a Django application that uses Hmac Signatures to verify the integrity of requests, a common pattern is to sign a payload (such as a user message or API request) with a shared secret and pass the signature in a header. If the application uses the signed payload as a source for LLM prompts without revalidating intent, an attacker who can control part of the signed data may be able to shift the LLM’s behavior.

Consider this scenario: a Django endpoint accepts a JSON body with a user message and an Hmac-SHA256 signature in a header, verifies the signature, and then directly includes the user message in a system or user prompt sent to an LLM. Because the signature ensures integrity but not semantic safety, an attacker who discovers the signing key or exploits a weak verification implementation could craft signed payloads that embed jailbreak instructions, data exfiltration prompts, or role changes. The LLM sees these injected tokens as part of the intended instruction set, and the application’s trust in the signature does not protect against prompt-level manipulation.

Additionally, if the Django application exposes an unauthenticated endpoint that accepts user-controlled text and forwards it to an LLM—even when that text is later accompanied by a valid Hmac Signature—the LLM endpoint itself becomes a target for probing. Attackers may attempt to leak system prompts, override instructions, or exploit cost mechanisms through crafted inputs. Because Hmac Signatures protect transport integrity rather than prompt semantics, they do not prevent prompt injection unless the application explicitly treats all user-influenced fields as hostile and applies strict input validation and output encoding before constructing prompts.

In practice, the risk is heightened when developers assume that signature verification is sufficient for end-to-end security. Hmac Signatures prevent tampering with known fields, but they do not sanitize or constrain the content of those fields for LLM safety. An attacker might include newline-separated jailbreak sequences, role-playing overrides, or tool-invocation patterns inside a signed message. If the Django view concatenates these fields into a prompt without filtering or structured guidance, the LLM may comply with the injected instructions, leading to unauthorized behavior or information disclosure.

Hmac Signatures-Specific Remediation in Django — concrete code fixes

To mitigate prompt injection when using Hmac Signatures in Django, treat the signed payload as integrity-protected data rather than trusted instruction. Always validate and sanitize user-controlled fields before incorporating them into LLM prompts, and enforce strict schema constraints. Below is a secure pattern that verifies the Hmac signature and then applies allowlisting and escaping before building prompts.

import hmac
import hashlib
import json
from django.http import JsonResponse
from django.views import View
from django.conf import settings

def verify_hmac_signature(body: bytes, signature_header: str, secret: str) -> bool:
    """Verify that the signature matches the body using Hmac-SHA256."""
    expected = hmac.new(secret.encode('utf-8'), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

class SafeLLMView(View):
    secret = settings.SHARED_HMAC_SECRET

    def post(self, request):
        try:
            body = request.body
            signature = request.META.get('HTTP_X_SIGNATURE')
            if not signature or not verify_hmac_signature(body, signature, self.secret):
                return JsonResponse({'error': 'Invalid signature'}, status=400)

            data = json.loads(body)
            user_message = data.get('message', '')

            # Strict allowlist: only permit alphanumeric, basic punctuation, length limits
            if not isinstance(user_message, str) or len(user_message) > 500:
                return JsonResponse({'error': 'Invalid message'}, status=400)

            safe_message = user_message.replace('\n', ' ').strip()

            # Construct prompt with user message as a clearly separated user input
            system_prompt = 'You are a helpful assistant. Respond concisely.'
            user_prompt = f'User: {safe_message}'
            full_prompt = f'{system_prompt}\n{user_prompt}'

            # Here you would call your LLM client, e.g.
            # response = llm_client.complete(full_prompt)
            response_text = 'This is a simulated safe response.'

            return JsonResponse({'response': response_text})
        except json.JSONDecodeError:
            return JsonResponse({'error': 'Invalid JSON'}, status=400)
        except Exception:
            return JsonResponse({'error': 'Server error'}, status=500)

This example demonstrates signature verification, strict type and length checks, newline normalization to reduce prompt injection surface, and clear separation between system instructions and user-provided content. Do not use the raw user message directly in system instructions or concatenate it without constraints.

Additionally, secure your Django settings by storing the shared secret in environment variables and rotating it periodically. Combine this with Django’s built-in CSRF and input validation mechanisms, and consider adding rate limiting to reduce probing opportunities. Remember that Hmac Signatures protect integrity, but safety for LLM interactions requires explicit handling of user content at the prompt construction layer.

Related CWEs: llmSecurity

CWE IDNameSeverity
CWE-754Improper Check for Unusual or Exceptional Conditions MEDIUM

Frequently Asked Questions

Does using Hmac Signatures alone prevent prompt injection in Django?
No. Hmac Signatures ensure request integrity but do not sanitize user content. You must validate and constrain user-influenced fields before including them in LLM prompts to prevent prompt injection.
What additional measures should be taken beyond Hmac verification?
Apply allowlists, length limits, and input normalization; keep system prompts distinct from user input; avoid passing raw user messages into system roles; and implement rate limiting and output scanning where possible.