Injection Flaws in AdonisJS with Basic Auth
Injection Flaws in AdonisJS with Basic Auth — how this specific combination creates or exposes the vulnerability
When Basic Authentication is used in an AdonisJS application, credentials are typically transmitted in an Authorization header as base64-encoded username:password. While Basic Auth itself does not introduce injection, it can shape the request surface that an injection flaw targets. Injection flaws arise when untrusted input is concatenated into commands or queries without proper validation or parameterization. In AdonisJS, common injection vectors include SQL injection through query builder or raw Knex calls, command injection via system process invocations, and injection within file paths or shell-like operations.
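For concreteness, the Authorization header is a reversible encoding, not encryption; a minimal Node.js sketch (the credentials are invented for illustration):

```javascript
// Basic Auth transmits base64('username:password') -- encoding, not encryption.
const credentials = 'alice:s3cret'; // hypothetical credentials
const header = 'Basic ' + Buffer.from(credentials).toString('base64');
// header is now 'Basic YWxpY2U6czNjcmV0'

// Any party that sees the header can trivially recover the credentials,
// which is why the decoded values must still be treated as untrusted input:
const decoded = Buffer.from(header.slice(6), 'base64').toString('utf-8');
// decoded === 'alice:s3cret'
```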
Consider a scenario where a route protected by Basic Auth uses the decoded username to build dynamic queries or filesystem operations. If the username (or any user-controlled value derived from the request) is interpolated into SQL strings or shell commands, an attacker who can observe or guess valid credentials might craft malicious payloads that execute unintended commands or leak data. For example, using raw Knex with string concatenation like User.query().whereRaw('username = ' + username) is vulnerable to SQL injection. Similarly, spawning a child process with interpolated user input can lead to command injection (e.g., exec('tar -czf ' + username + '.tar.gz /data')).
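To make the risk concrete, here is a minimal sketch (buildQueryUnsafe is a hypothetical stand-in for the concatenation pattern described above) showing how a crafted username rewrites the SQL text itself:

```javascript
// Hypothetical helper mimicking the vulnerable concatenation pattern.
function buildQueryUnsafe(username) {
  // The input becomes part of the SQL text itself -- never do this.
  return "SELECT * FROM users WHERE username = '" + username + "'";
}

const benign = buildQueryUnsafe('alice');
// SELECT * FROM users WHERE username = 'alice'

const malicious = buildQueryUnsafe("' OR '1'='1");
// SELECT * FROM users WHERE username = '' OR '1'='1'
// The injected quote closes the string literal, and the OR clause
// now matches every row in the table.
```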
AdonisJS applications often rely on middleware for authentication. If Basic Auth middleware extracts credentials and attaches the username to the request context for downstream use without strict validation, downstream handlers may treat that value as safe. This becomes problematic when the value is reused in database lookups, dynamic query building, or logging operations that interact with external systems. Attackers may leverage injection flaws to bypass intended access controls, escalate privileges, or extract sensitive information, especially if the application assumes authenticated requests are inherently trustworthy.
LLM/AI Security checks are particularly relevant when endpoints exposed under Basic Auth also expose language models or AI tooling. If user-controlled data derived from Basic Auth (such as username or roles) is passed into prompts or tool-calling logic without sanitization, it can contribute to prompt injection or unauthorized tool usage. Although Basic Auth does not directly expose AI endpoints, combining it with unchecked user input in AI-related routes increases the attack surface. For instance, failing to validate or escape the username before including it in a system prompt could enable prompt injection attempts against an LLM endpoint.
To detect such risks, middleBrick scans unauthenticated attack surfaces and includes checks for Input Validation, Authentication, and LLM/AI Security. The scanner correlates Basic Auth usage patterns with injection-prone code paths and tests whether authenticated context values are safely handled. Findings typically highlight where user-influenced data (e.g., from decoded credentials) flows into queries, commands, or AI prompts without adequate sanitization or parameterization, and provide remediation guidance aligned with OWASP API Top 10 and common secure coding practices.
Basic Auth-Specific Remediation in AdonisJS — concrete code fixes
Remediation focuses on preventing injection by avoiding string interpolation for queries and commands, validating and sanitizing all user-influenced data, and ensuring that authenticated context values are treated as untrusted. Below are concrete, safe patterns for AdonisJS when using Basic Auth.
1. SQL Injection Prevention
Always use parameterized queries or the query builder’s binding mechanisms. Never concatenate user input into raw SQL strings.
// Unsafe: string concatenation
// const user = request.input('username');
// const results = await db.rawQuery(`SELECT * FROM users WHERE username = '${user}'`);

// Safe: let the query builder bind the value
const user = request.input('username');
const byBuilder = await db.from('users').where('username', user).limit(1);

// Safe: parameterized raw query with a placeholder
const byRaw = await db.rawQuery('SELECT * FROM users WHERE username = ?', [user]);
2. Command Injection Prevention
Avoid direct shell command construction with user input. Use built-in APIs or strict allowlists. If child processes are necessary, pass arguments as an array and avoid shell interpolation.
// Unsafe: direct concatenation
// const username = request.input('username');
// exec(`tar -czf ${username}.tar.gz /data`);
// Safe: use child process with array arguments and no shell interpolation
const { execFile } = require('child_process');
const username = request.input('username');
// Validate username against an allowlist or strict pattern before use
if (!/^[a-z0-9_-]{3,30}$/.test(username)) {
  throw new Error('Invalid username');
}
execFile('tar', ['-czf', `${username}.tar.gz`, '/data']);
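The inline pattern check above can be factored into a reusable guard so that every call site applies the same allowlist; a minimal sketch (isSafeArchiveName is a hypothetical helper):

```javascript
// Allowlist guard: lowercase alphanumerics, underscore, dash, 3-30 chars.
// Anything outside this set -- shell metacharacters, spaces, path
// separators -- is rejected outright rather than escaped.
function isSafeArchiveName(name) {
  return typeof name === 'string' && /^[a-z0-9_-]{3,30}$/.test(name);
}

isSafeArchiveName('alice_01');         // true
isSafeArchiveName('a; rm -rf /');      // false: shell metacharacters rejected
isSafeArchiveName('../../etc/passwd'); // false: path traversal rejected
```

Rejecting invalid input is safer than trying to escape it, because the allowlist does not need to anticipate every dangerous character.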
3. Validate and Sanitize Basic Auth Credentials
Treat decoded credentials as untrusted. Validate format, length, and character set. Do not use raw credentials in sensitive contexts without normalization.
// Example: validate username from Basic Auth before use
const authHeader = request.headers().authorization; // 'Basic base64string'
let username = null;
if (authHeader && authHeader.startsWith('Basic ')) {
  const decoded = Buffer.from(authHeader.slice(6), 'base64').toString('utf-8');
  const [user] = decoded.split(':');
  // Strict validation: alphanumeric + underscore/dash, 3–30 chars
  if (user && /^[a-zA-Z0-9_-]{3,30}$/.test(user)) {
    username = user;
  }
}
if (!username) {
  throw new Error('Unauthorized');
}
// Use username safely in queries with bindings as shown above
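The parsing-plus-validation steps above can be wrapped into a single helper that returns either a vetted username or null; note the use of indexOf(':') rather than split(':'), since Basic Auth passwords may themselves contain colons (parseBasicUsername is a hypothetical helper, not an AdonisJS API):

```javascript
// Parse a Basic Auth header and return a validated username, or null.
function parseBasicUsername(authHeader) {
  if (!authHeader || !authHeader.startsWith('Basic ')) return null;
  const decoded = Buffer.from(authHeader.slice(6), 'base64').toString('utf-8');
  const sep = decoded.indexOf(':'); // split only on the first colon
  if (sep <= 0) return null;
  const user = decoded.slice(0, sep);
  // Same strict allowlist as above: alphanumerics, underscore, dash.
  return /^[a-zA-Z0-9_-]{3,30}$/.test(user) ? user : null;
}
```

Centralizing the check in one function prevents downstream handlers from accidentally reading the raw, unvalidated credential.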
4. Secure LLM/AI Handling
If your endpoints interact with LLMs, ensure that any value derived from Basic Auth is sanitized before inclusion in prompts or tool calls. Escape or remove control characters and avoid using raw credentials in system prompts.
// Unsafe: direct inclusion in prompt
// const username = request.input('username');
// const prompt = `User ${username} requests data.`;
// Safer: allowlist characters rather than blocklisting a few
const username = 'safe_user123';
const safeName = username.replace(/[^a-zA-Z0-9_-]/g, '');
const prompt = `User ${safeName} requests data.`;
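One way to centralize this is an allowlist-based sanitizer applied to every value before it reaches a prompt; a minimal sketch (sanitizeForPrompt is a hypothetical helper):

```javascript
// Keep only characters that cannot alter prompt structure; cap the length
// so an attacker cannot smuggle in a long block of instructions.
function sanitizeForPrompt(value) {
  return String(value).replace(/[^a-zA-Z0-9_\- ]/g, '').slice(0, 64);
}

sanitizeForPrompt('safe_user123');             // 'safe_user123'
sanitizeForPrompt('eve`$\n Ignore all rules'); // 'eve Ignore all rules'
```

Character filtering only limits structural injection; as the second example shows, instruction-like words survive, so user-derived values should also be confined to clearly delimited, data-only positions in the prompt.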
5. Middleware and Route Guards
Use AdonisJS middleware to enforce authentication and normalize the request context. Attach only validated, normalized values to the request object for downstream use.
// Example middleware (app/Middleware/BasicAuth.ts in AdonisJS v5)
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class BasicAuthMiddleware {
  public async handle({ request, response }: HttpContextContract, next: () => Promise<void>) {
    const authHeader = request.header('authorization')
    if (!authHeader || !authHeader.startsWith('Basic ')) {
      return response.unauthorized()
    }
    const decoded = Buffer.from(authHeader.slice(6), 'base64').toString('utf-8')
    // Split only on the first colon: Basic Auth passwords may contain ':'
    const sep = decoded.indexOf(':')
    const user = decoded.slice(0, sep)
    const pass = decoded.slice(sep + 1)
    // Validate and authenticate against your user store (app-defined helper)
    const isValid = sep > 0 && await validateUserCredentials(user, pass)
    if (!isValid) {
      return response.unauthorized()
    }
    // Attach only safe, validated data (in TypeScript, extend the Request
    // contract via module augmentation so this property is typed)
    request.authUser = { username: user }
    await next()
  }
}
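For the middleware above to run, it must be registered. In AdonisJS v5 this is typically done as a named middleware in start/kernel.ts and then attached to routes; the names below are assumptions matching the example middleware:

```typescript
// start/kernel.ts -- register the middleware under a name
Server.middleware.registerNamed({
  basicAuth: () => import('App/Middleware/BasicAuth'),
})

// start/routes.ts -- guard a route group with it
Route.group(() => {
  Route.get('/reports', 'ReportsController.index')
}).middleware('basicAuth')
```

Registering it as named (rather than global) middleware keeps the Basic Auth requirement explicit at each route group that needs it.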