
Insecure Design in Hanami with MongoDB

Insecure Design in Hanami with MongoDB — how this specific combination creates or exposes the vulnerability

Insecure design in a Hanami application using MongoDB often stems from mismatches between object-relational mapping patterns and MongoDB’s document-oriented model, combined with weak authorization logic at the service or repository layer. Hanami encourages explicit, use-case-centric architecture, but when developers map MongoDB documents to domain entities without enforcing strict input constraints or ownership checks, they risk introducing Insecure Direct Object References (IDOR) and Broken Function Level Authorization (BFLA).

For example, consider a multi-tenant SaaS where each organization’s data is stored in MongoDB under an org_id field. If the repository backing a Hanami entity like Account exposes an instance method such as AccountRepository#find_by_id(id) that queries MongoDB by the provided id alone, without verifying that the document’s org_id matches the requesting user’s organization, the design fails to enforce tenant boundaries. An attacker can manipulate the id parameter to read documents belonging to other organizations, because the query never includes the tenant context. This is a classic IDOR (BOLA) pattern.
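A minimal sketch of that vulnerable pattern next to its fix. The names (find_by_id, find_by_id_and_org, DOCS) are illustrative, and an in-memory array stands in for the MongoDB collection:

```ruby
# Two tenants' documents in one collection, distinguished only by org_id.
DOCS = [
  { _id: 'a1', org_id: 'org-1', name: 'Acme' },
  { _id: 'b2', org_id: 'org-2', name: 'Globex' }
].freeze

# Vulnerable: the query is keyed on _id alone, so any authenticated
# user can fetch any organization's document.
def find_by_id(id)
  DOCS.find { |d| d[:_id] == id }
end

# Fixed: the caller's org_id is part of the query itself, so a
# cross-tenant id simply matches nothing.
def find_by_id_and_org(id, org_id)
  DOCS.find { |d| d[:_id] == id && d[:org_id] == org_id }
end

# A user from org-1 probing another tenant's id:
find_by_id('b2')                   # leaks org-2's document
find_by_id_and_org('b2', 'org-1')  # => nil, tenant boundary enforced
```

The point is that the tenant check lives inside the query, not in a separate step a caller can forget.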

Another insecure design pattern arises when authorization decisions are made client-side or in the UI layer rather than in the domain or repository layer. Hanami’s view components may conditionally render edit buttons based on a flag like can_edit?, but if the corresponding update endpoint does not re-evaluate permissions against the actual MongoDB document’s ownership or role-based access control (RBAC) attributes, the design is fundamentally insecure. Similarly, embedding sensitive fields such as api_key or password_hash directly in MongoDB documents without encryption at rest or field-level protection can lead to Data Exposure, especially if the deployment does not enforce TLS between Hanami and MongoDB.
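The server-side re-check described above can be sketched as follows; can_edit? and update_account are hypothetical names, and plain hashes stand in for the MongoDB document and the current user:

```ruby
# The UI may hide the edit button, but the update endpoint must
# re-derive the permission from the document's own ownership and
# role attributes on every request.
def can_edit?(user, document)
  document[:org_id] == user[:org_id] &&
    (user[:role] == 'admin' || document[:owner_id] == user[:id])
end

def update_account(user, document, attrs)
  raise 'forbidden' unless can_edit?(user, document)
  document.merge(attrs) # in a real app: a tenant-scoped update_one against MongoDB
end
```

Because the check runs against the stored document, a forged request that bypasses the UI still fails at the endpoint.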

The interaction with MongoDB’s schema flexibility exacerbates these risks. Without a strongly enforced document schema, it is easy to introduce optional fields that change behavior at runtime, such as an is_admin flag that is accidentally trusted by the Hanami service layer. If the application deserializes raw MongoDB results into domain objects without validating and sanitizing fields, attackers may supply crafted payloads that set privileged attributes. This intersects with Input Validation failures, where untrusted data shapes document structure or access control logic. The absence of an OpenAPI contract that aligns with MongoDB’s actual response shape can also lead to unsafe consumption patterns in downstream services, a concern addressed by middleBrick’s Unsafe Consumption checks.
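The schema-flexibility risk above can also be narrowed on the server side with a $jsonSchema validator, so a stray is_admin field is rejected at write time rather than silently trusted later. A sketch of such a collMod command document (field names are illustrative; in a real deployment it would be applied once via client.database.command):

```ruby
# With additionalProperties: false, any field not listed here, including
# an injected is_admin flag, causes the write to be rejected.
ACCOUNTS_VALIDATOR = {
  collMod: 'accounts',
  validator: {
    '$jsonSchema' => {
      bsonType: 'object',
      required: %w[org_id role],
      additionalProperties: false,
      properties: {
        _id:    { bsonType: 'objectId' },
        org_id: { bsonType: 'string' },
        role:   { enum: %w[user moderator admin] },
        name:   { bsonType: 'string' }
      }
    }
  },
  validationAction: 'error'
}.freeze
# In an initializer: client.database.command(ACCOUNTS_VALIDATOR)
```

This complements application-level validation: even if a Hanami service is bypassed or buggy, the database refuses documents that would change authorization behavior.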

Finally, rate limiting and monitoring may be designed at the HTTP gateway level only, without considering MongoDB-specific operations. A Hanami endpoint that performs unbounded queries or batch operations on large datasets can lead to denial-of-service or excessive data exposure. middleBrick’s Rate Limiting and Data Exposure checks highlight such weaknesses by correlating runtime behavior with design assumptions, ensuring that insecure design decisions are surfaced alongside implementation findings.
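The unbounded-query risk can be addressed in the repository itself by capping result size and projecting only the fields an endpoint needs. A sketch using hypothetical names (list_accounts, MAX_PAGE), with an array standing in for the collection; against the real driver the same shape maps to find(filter).projection(...).limit(n):

```ruby
MAX_PAGE = 100

def list_accounts(docs, org_id, limit: 25)
  limit = [limit, MAX_PAGE].min          # never trust a client-supplied page size
  docs.select { |d| d[:org_id] == org_id }
      .first(limit)                      # hard cap on result size
      .map { |d| d.slice(:_id, :name) }  # projection: drop sensitive fields
end
```

Bounding queries at the data-access layer means a single forgotten limit in a controller cannot turn into a full-collection dump.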

MongoDB-Specific Remediation in Hanami — concrete code fixes

To secure Hanami applications using MongoDB, apply tenant-aware query constraints, strict input validation, and explicit authorization checks within domain services and repositories. Below are concrete patterns and code examples.

1. Enforce Tenant Context in Queries

Always include the organization identifier in MongoDB queries. Define a repository method that requires the tenant context and use it in all data access points.

module Web::Controllers::Accounts
  class Show
    include Web::Action

    def initialize(account_repo: AccountRepository.new)
      @account_repo = account_repo
    end

    def call(params)
      account = @account_repo.find_by_id_and_org(params[:id], current_user.org_id)
      # ... render or redirect
    end
  end
end

# In lib/account_repository.rb
class AccountRepository
  def collection
    # `database` is assumed to be the application-wide Mongo::Database instance
    database[:accounts]
  end

  def find_by_id_and_org(id, org_id)
    # The tenant filter is part of the query itself, so a cross-tenant id
    # matches nothing. Note that BSON::ObjectId.from_string raises on a
    # malformed id, so validate the parameter first.
    document = collection.find(_id: BSON::ObjectId.from_string(id), org_id: org_id).first
    document && AccountMapper.from_document(document)
  end
end

2. Validate and Authorize Input Fields

Validate incoming parameters against a strict schema and ensure that mutable fields like roles or permissions cannot be set by the client. Use Hanami’s validators and apply them before constructing MongoDB update operations.

# lib/validators/account_validator.rb
# Built on hanami-validations 2.x (dry-validation under the hood). The params
# schema whitelists fields: unknown keys such as is_admin are stripped from
# the validated output, which blocks client-controlled privilege escalation.
class AccountValidator < Hanami::Validator
  params do
    required(:id).filled(:string, format?: /\A[a-f0-9]{24}\z/)
    optional(:role).maybe(:string, included_in?: %w[user moderator])
  end
end

# In a service object: act only on the validated output, never raw params
result = AccountValidator.new.call(params)
if result.success?
  attrs = result.to_h
  collection.update_one(
    { _id: BSON::ObjectId.from_string(attrs[:id]), org_id: user.org_id },
    { '$set' => { role: attrs[:role] } }
  )
end

3. Encrypt Sensitive Fields and Enforce TLS

For fields such as API keys or personal data, use field-level encryption before storing in MongoDB, and ensure the MongoDB driver is configured to require TLS.

require 'openssl'
require 'base64'

class AccountRepository
  def encrypt_field(plaintext)
    key = Base64.strict_decode64(ENV['MONGO_ENC_KEY'])
    cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
    cipher.key = key
    iv = cipher.random_iv
    encrypted = cipher.update(plaintext) + cipher.final
    {
      iv:   Base64.strict_encode64(iv),
      data: Base64.strict_encode64(encrypted),
      tag:  Base64.strict_encode64(cipher.auth_tag) # auth_tag is only valid after #final
    }
  end

  def store_with_encryption(attrs)
    collection.insert_one(
      # Drop the plaintext api_key so only the ciphertext is persisted
      attrs.except(:api_key).merge(
        api_key_enc: encrypt_field(attrs[:api_key]),
        created_at: Time.now.utc
      )
    )
  end
end
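For completeness, the matching decryption step, a sketch assuming the hash shape produced by encrypt_field above (decrypt_field and its key parameter are illustrative). Because GCM is authenticated, tampered ciphertext raises OpenSSL::Cipher::CipherError instead of returning garbage:

```ruby
require 'openssl'
require 'base64'

# Reverses encrypt_field: decode the stored iv/data/tag and verify the
# auth tag during #final.
def decrypt_field(enc, key)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
  cipher.key = key
  cipher.iv = Base64.strict_decode64(enc[:iv])
  cipher.auth_tag = Base64.strict_decode64(enc[:tag])
  cipher.update(Base64.strict_decode64(enc[:data])) + cipher.final
end
```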

Configure the MongoDB client with TLS in the Hanami initializer:

# config/initializers/mongo.rb
# The client takes the connection string directly (the array form is only
# for host:port address lists). With tls: true the driver verifies the
# server certificate by default.
client = Mongo::Client.new(
  ENV['MONGO_URI'],
  tls: true
)

4. Apply Least Privilege and RBAC in Repository Methods

Design repository methods that accept user roles and enforce permissions at the query level, avoiding reliance on downstream checks.

class ReportRepository
  def collection
    database[:reports]
  end

  # Admins see every report in their organization; everyone else only
  # their own. Note that even the admin path stays scoped to the tenant.
  def reports_for(user)
    filter = { org_id: user.org_id }
    filter[:user_id] = user.id unless user.role == 'admin'
    collection.find(filter)
  end
end

5. Complement with middleBrick Checks

Use middleBrick’s CLI to validate that your remediation aligns with security expectations. Scan your endpoints to detect remaining IDOR, BFLA, and Data Exposure risks:

$ middlebrick scan https://api.example.com/openapi.json

Review findings in the Web Dashboard or via the GitHub Action to ensure continuous monitoring. The MCP Server integration can also help you assess API security directly from your AI coding assistant during development.

Frequently Asked Questions

How does middleBrick detect insecure design patterns involving MongoDB in Hanami?
middleBrick runs 12 security checks in parallel, including BOLA/IDOR, BFLA/Privilege Escalation, Data Exposure, and Input Validation. By correlating OpenAPI/Swagger specs with runtime behavior, it identifies missing tenant constraints, unsafe parameter usage, and unencrypted sensitive fields specific to MongoDB documents in Hanami services.
Can I use middleBrick in CI/CD to prevent insecure MongoDB designs in Hanami before deployment?
Yes. With the Pro plan, the GitHub Action can gate merges and fail builds if the security score drops below your configured threshold. It scans staging APIs on a configurable schedule and provides compliance mappings to help enforce secure design practices early in the development lifecycle.