
Hallucination Attacks in AdonisJS with Firestore

How this specific combination creates or exposes the vulnerability

Hallucination attacks in an AdonisJS application using Google Cloud Firestore occur when an AI-driven client or an AI-augmented backend generates plausible but false Firestore queries, paths, or document structures that do not exist in the deployed database. Because Firestore permissions often rely on loose wildcard rules in security rules and AdonisJS route models may resolve dynamic identifiers without strict validation, these hallucinated references can bypass intended access controls, leak unrelated documents, or trigger unintended operations.

In AdonisJS, route model binding typically loads a document by ID or a composite key. If an AI suggests an identifier that maps to a different document or a deeper path, the application may silently return or mutate data outside the expected scope. For example, a user-supplied parameter like projectId combined with an AI-generated subcollection name can lead to traversal beyond the intended tenant or dataset. Firestore’s recursive wildcard rules (match /{document=**}) can amplify this when combined with overly permissive read or write conditions, allowing hallucinated paths to satisfy rules that were meant to be restrictive.
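One way to close off hallucinated identifiers before they ever reach a Firestore path is to validate every route parameter against a strict pattern. The helper below is a sketch (the function name and pattern are illustrative, not part of any framework API): anything containing path separators, dots, or unexpected characters is rejected, so a hallucinated segment like `../` or `tenantB/projects` cannot widen the resolved path.

```javascript
// Hypothetical guard: reject any identifier that could alter the
// resolved Firestore path. Only short, flat, alphanumeric IDs pass.
const ID_PATTERN = /^[a-zA-Z0-9_-]{1,64}$/;

function assertSafeDocId(id) {
  if (typeof id !== 'string' || !ID_PATTERN.test(id)) {
    throw new Error(`Rejected unsafe document id: ${String(id)}`);
  }
  return id;
}

module.exports = { assertSafeDocId };
```

Calling this on every `projectId` (and similar params) at the top of a controller or in middleware means a hallucinated identifier fails fast instead of silently resolving to a different document.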

Additionally, Firestore’s schema-less nature means an AI can propose document fields or nested maps that appear valid but conflict with server-side business logic in AdonisJS services. If the application trusts client-supplied nested data and writes it directly to Firestore (e.g., via docRef.set(payload)), hallucinated fields may overwrite critical metadata or inject unexpected types, leading to authorization bypasses or inconsistent state. The lack of a strict schema enables these injected fields to pass basic validation if the code uses generic merge operations without explicit field allowlists.
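A minimal sketch of the allowlist-plus-type-check idea (the field names and expected types here are illustrative): unknown fields are dropped and known fields carrying hallucinated types are rejected before any merge write reaches Firestore.

```javascript
// Illustrative schema: only these fields, with these JS types, survive.
const FIELD_TYPES = { name: 'string', description: 'string', settings: 'object' };

function sanitizePayload(input) {
  const out = {};
  for (const [field, expected] of Object.entries(FIELD_TYPES)) {
    if (!Object.prototype.hasOwnProperty.call(input, field)) continue;
    const value = input[field];
    // typeof null === 'object', so null is rejected explicitly.
    if (typeof value !== expected || value === null) {
      throw new Error(`Field "${field}" must be a ${expected}`);
    }
    out[field] = value;
  }
  return out; // hallucinated extras (e.g. ownerId) are silently dropped
}
```

Because the output object is built from the schema rather than from the input, an AI-suggested `ownerId` or `role` field never reaches `docRef.set()`, regardless of how plausible it looks.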

Another vector involves AI-generated queries that include non-existent collection group names or invalid composite keys. AdonisJS query builders that dynamically construct Firestore queries based on AI suggestions can produce paths that resolve to different root collections, exposing data from unrelated domains. Because Firestore does not enforce collection-level existence checks at rule-evaluation time, such hallucinated paths may still match broad rules, returning unintended documents or allowing writes to unauthorized collections.
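The same fail-closed principle applies to collection names. As a sketch (the set contents are hypothetical), any dynamically constructed query should resolve its collection through a fixed enumeration rather than a raw AI-suggested string:

```javascript
// Only collections the application actually owns are resolvable;
// a hallucinated collection group name fails instead of matching
// a broad security rule.
const KNOWN_COLLECTIONS = new Set(['projects', 'tasks', 'comments']);

function resolveCollection(name) {
  if (!KNOWN_COLLECTIONS.has(name)) {
    throw new Error(`Unknown collection: ${name}`);
  }
  return name;
}
```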

These risks are compounded when the application uses Firestore authentication helpers in AdonisJS to conditionally scope rules. If an AI hallucinates a user-specific path (e.g., substituting a different UID), the rule evaluation may still permit access due to wildcard patterns, resulting in horizontal privilege escalation. The combination of permissive Firestore security rules, dynamic route model resolution in AdonisJS, and AI-generated inputs creates a scenario where the attack surface extends beyond traditional injection to include semantic and structural hallucinations.
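Horizontal privilege escalation via a substituted UID is cheapest to prevent at path-construction time: derive the owner scope from the verified session and ignore any UID the request carries. A sketch (the `auth` object shape is hypothetical, standing in for AdonisJS's authenticated user):

```javascript
// The path is always built from the verified session; body.uid is
// deliberately ignored even when present, so a hallucinated or
// attacker-chosen UID can never select the document path.
function userDocPath(auth, body) {
  void body; // received but intentionally unused for path construction
  return `users/${auth.user.id}/profile`;
}
```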

Firestore-Specific Remediation in AdonisJS: concrete code fixes

Remediation focuses on strict schema enforcement, canonical path construction, and explicit rule scoping in Firestore combined with disciplined model handling in AdonisJS. Avoid dynamic concatenation of collection or document names based on AI or untrusted input. Instead, use a fixed set of validated collections and enforce ownership via user identifiers stored in document metadata, not in path components suggested by an AI.

Use Firestore transactions or batched writes with explicit field allowlists to prevent hallucinated fields from being persisted. In AdonisJS, create service methods that validate incoming data against a defined schema before constructing a Firestore reference. Below is an example using the @google-cloud/firestore Node.js client within an AdonisJS service to write a project document with strict field control:

const { Firestore, FieldValue } = require('@google-cloud/firestore');

const firestore = new Firestore();
const ID_PATTERN = /^[a-zA-Z0-9_-]{1,64}$/;

class ProjectService {
  async createProject(userId, input) {
    // Only explicitly allowlisted fields are persisted; hallucinated
    // extras in `input` are discarded.
    const allowedFields = ['name', 'description', 'settings'];
    const projectData = {};
    for (const field of allowedFields) {
      if (Object.prototype.hasOwnProperty.call(input, field)) {
        projectData[field] = input[field];
      }
    }
    // Reject identifiers that could widen the resolved path.
    if (typeof input.projectId !== 'string' || !ID_PATTERN.test(input.projectId)) {
      throw new Error('Invalid project id');
    }
    const docRef = firestore.collection('projects').doc(input.projectId);
    await docRef.set({
      ...projectData,
      ownerId: userId,
      updatedAt: FieldValue.serverTimestamp(),
    });
    return docRef.id;
  }
}

module.exports = ProjectService;

For reads, resolve paths using a fixed prefix and the authenticated user’s UID rather than trusting a client-supplied path. The following AdonisJS route demonstrates constructing a canonical document reference and validating the resolved document belongs to the requesting user:

const { Firestore } = require('@google-cloud/firestore');

const firestore = new Firestore();

Route.get('/projects/:projectId', async ({ params, auth, response }) => {
  const user = await auth.authenticate();
  // Canonical reference: fixed 'projects' collection plus the route
  // param, never a client-assembled path.
  const projectRef = firestore.collection('projects').doc(params.projectId);
  const doc = await projectRef.get();
  if (!doc.exists) {
    return response.notFound({ error: 'Project not found' });
  }
  const data = doc.data();
  // Compare against the authenticated user, not any UID in the request.
  if (data.ownerId !== user.id) {
    return response.forbidden({ error: 'Unauthorized' });
  }
  return data;
});

In Firestore security rules, prefer specific collections and explicit document ID checks over recursive wildcards. Define rules that scope access by owner ID and validate that requested paths match an allowed pattern. For example:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /projects/{projectId} {
      // request.resource only exists on writes; reads and deletes must
      // check the stored document via resource.data.
      allow read: if request.auth != null
        && request.auth.uid == resource.data.ownerId;
      allow create, update: if request.auth != null
        && request.auth.uid == request.resource.data.ownerId
        && projectId.matches('^[a-zA-Z0-9_-]{1,64}$');
      allow delete: if request.auth != null
        && request.auth.uid == resource.data.ownerId;
    }
  }
}

To mitigate hallucinated subcollection access, avoid broad collection group rules for sensitive data. Instead, explicitly enumerate allowed subcollections and validate document references server-side in AdonisJS before traversal. Combine this with input validation libraries in AdonisJS to reject unexpected fields and types, ensuring that even if an AI suggests additional properties, they are discarded before persistence.
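Enumerating subcollections in the rules themselves can look like the sketch below, assuming a hypothetical tasks subcollection under each project. Ownership is resolved by reading the parent project document with get(), so only this named subcollection is reachable, and only by the project's owner:

```
match /projects/{projectId}/tasks/{taskId} {
  // Only this enumerated subcollection exists in the rules; a
  // hallucinated sibling subcollection matches nothing and is denied.
  allow read, write: if request.auth != null
    && get(/databases/$(database)/documents/projects/$(projectId)).data.ownerId == request.auth.uid;
}
```

Note that each get() call counts against Firestore's per-request document access limits, so this pattern is best kept to a small number of enumerated subcollections.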

Related CWEs

CWE ID    Name                                                   Severity
CWE-754   Improper Check for Unusual or Exceptional Conditions   MEDIUM

Frequently Asked Questions

Can hallucination attacks modify Firestore documents in AdonisJS?
Yes, if the application uses permissive Firestore rules and dynamically constructs document paths from AI-influenced input, hallucinated document IDs or subcollection names can lead to unintended writes. Mitigate by using fixed collections, server-side path validation, and strict field allowlists before any write.
How does middleBrick help detect hallucination risks in an AdonisJS Firestore setup?
middleBrick scans your API endpoints in black-box mode, testing unauthenticated attack surfaces and validating rule configurations. It checks for issues like excessive wildcard rules, missing ownership validation, and unsafe data handling patterns that can amplify hallucination risks, providing findings with severity and remediation guidance.