middleBrick for AI / ML engineers
What middleBrick covers
- Black-box scanning with no agents or SDK dependencies
- Detection of authentication and JWT misconfigurations
- LLM adversarial probe coverage across multiple depth tiers
- OpenAPI 3.x and Swagger 2.0 spec analysis with $ref resolution
- Authenticated scans with header allowlist controls
- Continuous monitoring and diff tracking via Pro tier
Overview for AI and ML engineering workflows
For AI and ML engineers, APIs move models, datasets, and inference results between services. The scanner evaluates these endpoints without access to model weights or training data, focusing on transport security, input handling, and exposure of sensitive artifacts. It maps findings to the OWASP API Security Top 10 (2023) and produces audit evidence for SOC 2 Type II and PCI-DSS 4.0 controls relevant to data in motion.
Scan coverage for model-serving endpoints
Black-box scanning supports any language and framework, including common model-serving stacks such as TensorFlow Serving, TorchServe, and Triton. It probes authentication schemes, resource consumption patterns, and data exposure risks like PII in inference responses or API key leakage in logs. Detection includes JWT misconfigurations, sensitive data in claims, and unsafe consumption surfaces such as webhook endpoints used for callback-driven training pipelines.
- Authentication bypass and JWT configuration issues
- Data exposure via inference responses
- Unsafe webhook and callback surfaces
- Over-exposure of internal fields in model metadata
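One of the JWT misconfigurations described above can be illustrated with a short sketch. This is not middleBrick's detection code, just a minimal example of flagging an unsigned (`alg: none`) token by inspecting its unverified header; the token built here is hypothetical.

```python
import base64
import json

def decode_jwt_header(token: str) -> dict:
    """Decode the (unverified) header segment of a JWT."""
    header_b64 = token.split(".")[0]
    # Restore stripped base64 padding before decoding.
    header_b64 += "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(header_b64))

def flag_jwt_issues(token: str) -> list[str]:
    """Return findings for two common JWT misconfigurations."""
    findings = []
    alg = decode_jwt_header(token).get("alg", "")
    if alg.lower() == "none":
        findings.append("alg=none: signature verification disabled")
    elif alg.lower().startswith("hs"):
        findings.append(f"{alg}: symmetric key; risk of key confusion with RS/ES keys")
    return findings

# Hypothetical unsigned token with an {"alg": "none"} header and empty signature.
header = base64.urlsafe_b64encode(b'{"alg":"none","typ":"JWT"}').decode().rstrip("=")
payload = base64.urlsafe_b64encode(b'{"sub":"model-svc"}').decode().rstrip("=")
token = f"{header}.{payload}."
print(flag_jwt_issues(token))  # ['alg=none: signature verification disabled']
```

A real scanner would also check claim contents for sensitive data, as noted above; this sketch covers only the algorithm field.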
Authenticated scanning for gated model APIs
With the Starter tier and above, authenticated scans validate protections behind sign-in or token-based access. Supported methods include Bearer tokens, API keys, Basic auth, and cookies. Domain verification, via a DNS TXT record or an HTTP well-known file, ensures only the domain owner can scan credentialed endpoints. The scanner forwards only a strict allowlist of headers, preventing accidental leakage of internal routing or tracing tokens.
middlebrick scan https://api.example.com/v1/models
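The header-allowlist behavior can be sketched as a simple filter. The allowlist contents and header names below are assumptions for illustration, not middleBrick's actual configuration.

```python
# Assumed allowlist: only credential and content headers are forwarded.
ALLOWED_HEADERS = {"authorization", "x-api-key", "cookie", "content-type"}

def filter_headers(headers: dict[str, str]) -> dict[str, str]:
    """Drop any header not explicitly allowlisted (case-insensitive match)."""
    return {k: v for k, v in headers.items() if k.lower() in ALLOWED_HEADERS}

request_headers = {
    "Authorization": "Bearer sk-test",
    "X-Internal-Trace": "trace-123",   # internal tracing token; must not be forwarded
    "Content-Type": "application/json",
}
print(filter_headers(request_headers))
# {'Authorization': 'Bearer sk-test', 'Content-Type': 'application/json'}
```

The point of the allowlist (rather than a blocklist) is that unknown headers are dropped by default, so an internal tracing token never reaches the target.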
LLM and AI security probing
The scanner includes 18 adversarial probes across Quick, Standard, and Deep tiers targeting LLM interfaces. These probes assess:
- System prompt extraction and instruction override attempts
- DAN and roleplay jailbreaks
- Data exfiltration patterns
- Base64 and ROT13 encoding bypasses
- Translation-embedded injection and few-shot poisoning
- Markdown injection and multi-turn manipulation
- Indirect prompt injection and token smuggling
- Tool-abuse patterns and nested instruction injection
Findings highlight prompt-injection surfaces and model-assist features that may expose sensitive prompts or training context.
OpenAPI spec analysis and reporting
The tool parses OpenAPI 3.0, 3.1, and Swagger 2.0 documents with recursive $ref resolution, cross-referencing spec definitions against runtime behavior. It flags undefined security schemes, deprecated operations, and missing pagination that can lead to oversized responses. Reports provide prioritized remediation guidance aligned with OWASP API Top 10 (2023), helping teams integrate findings into CI/CD and internal review processes.
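Recursive `$ref` resolution can be sketched as follows. This is a simplified illustration, not the tool's implementation: it handles only local (`#/...`) pointers, omits cycle detection, and uses a made-up two-schema spec.

```python
from copy import deepcopy

def resolve_refs(node, root):
    """Recursively inline local $ref pointers (#/...) within an OpenAPI document."""
    if isinstance(node, dict):
        if "$ref" in node and node["$ref"].startswith("#/"):
            # Walk the JSON pointer path from the document root.
            target = root
            for part in node["$ref"][2:].split("/"):
                target = target[part]
            # Copy before recursing so the shared definition is not mutated.
            return resolve_refs(deepcopy(target), root)
        return {k: resolve_refs(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, root) for item in node]
    return node

# Hypothetical minimal spec fragment with one local reference.
spec = {
    "components": {"schemas": {"Model": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
    }}},
    "paths": {"/models": {"get": {"responses": {"200": {
        "schema": {"$ref": "#/components/schemas/Model"},
    }}}}},
}
resolved = resolve_refs(spec, spec)
print(resolved["paths"]["/models"]["get"]["responses"]["200"]["schema"]["type"])  # object
```

With references inlined, each operation's full request and response shape is visible in one place, which is what allows the spec to be cross-checked against observed runtime behavior.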