Cache Poisoning in Phoenix with Bearer Tokens
How This Specific Combination Creates or Exposes the Vulnerability
Cache poisoning in Phoenix occurs when an attacker manipulates cached responses so that subsequent requests receive malicious or incorrect data. When Bearer tokens are used for authorization, a common misconfiguration is to key the cache identically for authenticated and unauthenticated requests, so a response generated for a token-bearing user, possibly containing private data or the token itself, is stored once and then served to anyone.
Phoenix applications that authenticate via Bearer tokens in a plug pipeline can fall into this trap when the cache key omits the Authorization header. For example, consider a pipeline that authenticates requests with a token but builds its cache key from only the path and query parameters, without excluding authenticated responses from the cache:
```elixir
defmodule MyAppWeb.Plugs.CacheKey do
  @behaviour Plug

  def init(opts), do: opts

  def call(conn, _opts) do
    # Dangerous: the cache key ignores the Authorization header, so
    # authenticated and unauthenticated requests share one cache entry
    cache_key = "v1/items/" <> conn.params["id"]

    # `Plug.RedisCache` stands in for whatever caching layer the app uses
    Plug.RedisCache.lookup_or_store(conn, cache_key, fn ->
      # The stored response may include sensitive fields or a token
      MyApp.Repo.get(Item, conn.params["id"])
    end)
  end
end
```
If an authenticated request with a Bearer token receives a response that includes a token or private data, and that response is stored under the shared key, an unauthenticated request for the same resource can be served the cached copy, exposing the token or sensitive fields. Whenever authorization is involved, cached responses must be keyed by user context.
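Beyond application-level cache keys, standards-compliant shared caches can also be told to partition entries by the Authorization header via the Vary response header. A minimal sketch of such a plug follows; the module name is an assumption, not part of Phoenix:

```elixir
defmodule MyAppWeb.Plugs.VaryAuthorization do
  # Illustrative sketch: adds `Vary: authorization` so any shared cache
  # that honors Vary keys stored responses by the Authorization header too
  @behaviour Plug
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    put_resp_header(conn, "vary", "authorization")
  end
end
```

Adding this plug to an API pipeline is a cheap second line of defense even when cache keys are already authorization-aware.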
Additionally, if downstream APIs or GraphQL resolvers behind Phoenix use cached values that include authorization metadata, and the cache does not differentiate by token scope or audience, one token’s cached data could be returned to another token with lesser privileges. This is a BOLA/IDOR pattern enabled by caching decisions that ignore the Authorization header in routing or fragment caching keys.
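One way to make fragment or resolver caches scope-aware is to fold the token's sorted scope set into the cache key, so a broad-scope token and a narrow-scope token can never share an entry. A sketch under the assumption that your token verifier yields a list of scope strings; the module name is hypothetical:

```elixir
defmodule MyApp.ScopedCacheKey do
  # Hypothetical helper: derives a cache key that includes a digest of the
  # token's scope set, so cached data is never shared across scopes.
  # Sorting first makes the key independent of scope ordering.
  def cache_key(resource_path, scopes) when is_list(scopes) do
    scope_digest =
      :crypto.hash(:sha256, scopes |> Enum.sort() |> Enum.join(","))
      |> Base.url_encode64(padding: false)

    "scope/" <> scope_digest <> "/" <> resource_path
  end
end
```

The same idea extends to token audience: include whatever claims gate authorization decisions in the digest input.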
The LLM/AI Security checks in middleBrick specifically test for system prompt leakage and output exposure, which can indicate whether cached or shared responses inadvertently reveal tokens or PII. middleBrick scans your unauthenticated attack surface in 5–15 seconds and includes checks for Data Exposure and Authorization misconfigurations that commonly underpin cache poisoning scenarios.
To detect such risks, integrate the middleBrick CLI to scan endpoints and review per-category breakdowns. The CLI provides JSON output suitable for scripting and can be run frequently without credentials:
```console
$ middlebrick scan https://api.example.com/items/1
{
  "score": "C",
  "findings": [
    {
      "category": "Data Exposure",
      "severity": "high",
      "description": "Response may contain authorization token in cached payload"
    }
  ]
}
```
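Because the CLI emits JSON, it is straightforward to gate a CI job on the findings. A sketch that flags any high-severity finding, assuming Jason is available and the output shape shown above; the module name is hypothetical:

```elixir
defmodule MyApp.ScanGate do
  # Hypothetical CI helper: returns true when the scan report contains
  # any high-severity finding, so a build script can exit non-zero
  def high_findings?(report_json) do
    report_json
    |> Jason.decode!()
    |> Map.get("findings", [])
    |> Enum.any?(fn finding -> finding["severity"] == "high" end)
  end
end
```

A build script could then call `MyApp.ScanGate.high_findings?(File.read!("scan.json"))` and halt on `true`.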
Bearer Tokens-Specific Remediation in Phoenix — concrete code fixes
Remediate cache poisoning with Bearer tokens by ensuring cache keys incorporate authorization context and that responses containing tokens are never served to unauthorized roles. Below are concrete, realistic examples for Phoenix.
1. Include the Authorization header in the cache key
When caching per-user data, derive the cache key from a combination of resource identifier and the token or user ID extracted from the Authorization header. This prevents one user’s cached response from being served to another:
```elixir
defmodule MyAppWeb.Plugs.CacheKeyWithAuth do
  @behaviour Plug

  def init(opts), do: opts

  def call(conn, _opts) do
    cache_key =
      case get_auth_token(conn) do
        nil ->
          # Public cache key for unauthenticated requests
          "public/" <> conn.request_path

        token ->
          # Authenticated cache key includes a token digest (never the raw token)
          token_digest = :crypto.hash(:sha256, token) |> Base.url_encode64()
          "user/#{token_digest}/" <> conn.request_path
      end

    # Expose the key to the downstream caching layer
    Plug.Conn.assign(conn, :cache_key, cache_key)
  end

  defp get_auth_token(conn) do
    case Plug.Conn.get_req_header(conn, "authorization") do
      ["Bearer " <> token] -> token
      _ -> nil
    end
  end
end
```
2. Avoid caching responses that contain tokens or sensitive fields
Ensure that JSON serialization excludes sensitive fields when caching authenticated responses. Use Jason.Encoder derive options or explicit field filtering to strip tokens before the response can reach a cache:
```elixir
defmodule MyAppWeb.ItemView do
  use MyAppWeb, :view

  # Unauthenticated request: render only public fields
  def render("show.json", %{item: item, token: nil}), do: render_item(item)

  # Authenticated request: still strip the token and internal fields so a
  # cached copy can never leak them. Phoenix encodes the returned map, so
  # no manual Jason.encode!/1 is needed here.
  def render("show.json", %{item: item, token: _token}) do
    item
    |> Map.from_struct()
    |> Map.drop([:token, :internal_notes])
  end

  defp render_item(item) do
    item
    |> Map.from_struct()
    |> Map.drop([:token, :internal_notes])
  end
end
```
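As an alternative to dropping fields by hand, Jason can derive an encoder that excludes sensitive fields at the struct definition, so every serialization path is covered automatically. A sketch with an illustrative struct name:

```elixir
defmodule MyApp.PublicItem do
  # Jason's derived encoder omits these fields from every encode of this
  # struct, so a cached payload can never contain them
  @derive {Jason.Encoder, except: [:token, :internal_notes]}
  defstruct [:id, :name, :token, :internal_notes]
end
```

Declaring the exclusion once at the schema level is harder to forget than per-view filtering.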
3. Use varied cache-control headers per authorization scope
Set Cache-Control response headers so that caches never store responses containing authorization-sensitive data:
```elixir
defmodule MyAppWeb.ItemController do
  use MyAppWeb, :controller

  def show(conn, %{"id" => id}) do
    case MyApp.Items.get_item_with_auth(id, get_auth_token(conn)) do
      {:ok, item, nil} ->
        # Public data: safe for shared caches
        conn
        |> put_resp_header("cache-control", "public, max-age=3600")
        |> render("show.json", item: item, token: nil)

      {:ok, item, token} ->
        # Authorization-sensitive data: forbid storage in any cache
        conn
        |> put_resp_header("cache-control", "no-store")
        |> render("show.json", item: item, token: token)
    end
  end

  defp get_auth_token(conn) do
    case get_req_header(conn, "authorization") do
      ["Bearer " <> token] -> token
      _ -> nil
    end
  end
end
```
middleBrick’s GitHub Action can be added to CI/CD pipelines to fail builds if security scores drop, helping to catch cache misconfigurations before deployment. Its MCP Server enables scanning APIs directly from your IDE, supporting rapid verification during development.