Prompt Injection in Grape with Mutual Tls
Prompt Injection in Grape with Mutual Tls
Grape is a Ruby API micro-framework commonly used to build RESTful endpoints. When you expose a Grape endpoint that accepts user input and forwards it to an LLM, and you also enforce Mutual TLS (mTLS) for client authentication, the combination can create conditions where an authenticated client can inject malicious instructions into the LLM prompt. mTLS ensures the client is known, but it does not constrain what that client is allowed to say.
Consider a Grape API that uses mTLS to authenticate clients and then passes client-supplied parameters to an LLM endpoint. An attacker with a valid client certificate can send crafted query parameters or JSON payloads designed to leak the system prompt or change the assistant’s behavior. For example, a parameter like user_input might be concatenated into a prompt string without strict sanitization or separation, enabling classic prompt injection patterns such as ignoring prior instructions or revealing the system role.
Because middleBrick includes an LLM/AI Security check category, it specifically tests for system prompt leakage and active prompt injection. In the context of Grape + mTLS, this means the scanner will probe the endpoint with sequential attacks—including system prompt extraction, instruction override, DAN jailbreak, data exfiltration, and cost exploitation—while the API is protected by client certificates. The scanner validates that mTLS is being presented correctly (the server requests and validates the client cert), but it also checks whether authenticated prompts can override system instructions or cause the model to output PII, API keys, or executable code.
Real-world patterns observed in Grape APIs include missing input delimiters between system and user content, weak regex filters that attackers bypass, and improper handling of role tags. Since mTLS only authenticates identity and not intent, the security risk remains high if the application logic does not treat user input as untrusted. middleBrick’s active prompt injection probes will attempt to trick the model through these logic flaws, regardless of the transport-layer assurance provided by mTLS.
Mutual Tls-Specific Remediation in Grape
Remediation focuses on ensuring that mTLS is correctly implemented and that LLM prompts are constructed in a way that user input cannot alter the intended instruction hierarchy. You should enforce client certificate validation, scope the allowed principals, and isolate user data from system prompts using strict formatting or separate API routes.
Below are concrete Grape code examples that demonstrate proper mTLS setup and safe prompt construction.
1. Enforcing Mutual TLS in a Grape API
Use Rack TLS settings to require and verify client certificates. The example shows how to configure the SSL context and validate the client certificate against a trusted CA.
# config.ru or a Rackup file
require 'openssl'
require 'grape'
ssl_options = {
verify_mode: OpenSSL::SSL::VERIFY_PEER,
cert_store: OpenSSL::X509::Store.new,
# Provide a CA bundle that signs allowed client certificates
ca_file: '/path/to/ca-bundle.pem'
}
# In a config.ru file
map '/api' do
run MyGrapeAPI
end
In your main API file, ensure the environment contains verified client certificate details and reject unauthenticated requests.
# api.rb
require 'grape'
class MyGrapeAPI < Grape::API
format :json
before do
unless env['SSL_CLIENT_VERIFY'] == 'SUCCESS'
error!('Client certificate required', 403)
end
# Optionally restrict to specific distinguished names
allowed_dn = 'CN=trusted-client,OU=Services,DC=example,DC=com'
client_dn = env['SSL_CLIENT_S_DN']
unless client_dn == allowed_dn
error!('Unauthorized client DN', 403)
end
end
resource :chat do
desc 'Send a message to the LLM assistant' do
params do
requires :message, type: String, desc: 'User message, must not contain instructions'
end
end
post do
user_message = params[:message]
# Safe prompt construction: keep user input as data, not instruction
system_prompt = 'You are a helpful assistant. Respond factually and briefly.'
final_prompt = <<~PROMPT.strip
<|system|>
#{system_prompt}
<|user|>
#{user_message}
PROMPT
# Call your LLM endpoint here with final_prompt
# Do not concatenate user_message directly into system instructions
{ response: "Processed securely" }
end
end
end
2. Prompt Construction Best Practices
- Never embed user input into the system role or instruction block.
- Use strict delimiters like
<|system|>and<|user|>and avoid dynamic role assignment based on user claims. - Validate and sanitize input even when mTLS is used; treat all user data as untrusted.
By combining mTLS for identity assurance with disciplined prompt engineering, you reduce the risk that authenticated clients can manipulate the LLM behavior. middleBrick’s LLM/AI checks will verify that your endpoints resist both prompt injection and system leakage under authenticated conditions.
Related CWEs: llmSecurity
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |