
Zone Transfer on Docker

How Zone Transfer Manifests in Docker

Zone transfer attacks in Docker environments typically occur when misconfigured DNS services within containers expose internal network information to external actors. In containerized applications, zone transfers can reveal the complete DNS namespace of a Docker network, exposing service discovery information that should remain internal.

The most common Docker-specific manifestation involves BIND DNS servers running inside containers with permissive configurations. When a container publishes port 53 to the host and BIND's transfer ACL is left open (allow-transfer { any; }, historically the upstream default when unset, common in older releases such as 9.10.x), attackers can issue AXFR (zone transfer) requests to enumerate every service and IP address in the Docker network.

Consider this vulnerable Docker Compose configuration:

version: '3.8'
services:
  bind-dns:
    image: internetsystemsconsortium/bind9:9.16  # illustrative; any BIND image with a permissive config
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - ./named.conf:/etc/bind/named.conf
      - ./zones:/etc/bind/zones
    cap_add:
      - NET_ADMIN

This setup allows external zone transfers because a permissive named.conf often includes:

allow-transfer { any; };

Attackers can exploit this by requesting a transfer of a known or guessed zone from the container's published port. Using dig:

dig AXFR example.com @<container_ip>
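The request dig sends is a tiny DNS message with query type 252 (AXFR). A minimal stdlib sketch of that wire format (function name and transaction ID are illustrative) is useful when writing detection or testing tooling:

```python
import struct

def build_axfr_query(zone: str, txid: int = 0x1234) -> bytes:
    """Build a raw DNS AXFR query (QTYPE 252, QCLASS IN) in wire format.

    Over TCP (which AXFR requires), the message is additionally
    prefixed with a two-byte length; this returns the message only.
    """
    # Header: ID, flags=0 (standard query), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in zone.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 252, 1)  # QTYPE=AXFR, QCLASS=IN
```

A server with an open transfer ACL answers this message with the full record set of the zone.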

Another scenario involves Kubernetes clusters where CoreDNS serves zones via the file or secondary plugin with the transfer plugin configured as `to *`, which permits AXFR from any client that can reach the service, including pods outside the intended namespace.

Container orchestration platforms exacerbate this issue because they often use predictable service names and IP ranges. An attacker who successfully performs a zone transfer on one service can map the entire application topology, identifying potential targets for further attacks.
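To illustrate why this matters, a few lines of stdlib Python (hostnames and addresses invented) turn a transfer dump into exactly the topology map an attacker wants:

```python
def map_topology(axfr_output: str) -> dict:
    """Turn dig-style AXFR output into a hostname -> IPv4 map.

    Record lines look like: 'api.example.com. 300 IN A 172.18.0.5'
    """
    topology = {}
    for line in axfr_output.splitlines():
        fields = line.split()
        if len(fields) >= 5 and fields[2] == "IN" and fields[3] == "A":
            topology[fields[0].rstrip(".")] = fields[4]
    return topology

# Sample transfer output; a real dump is bracketed by the zone's SOA record
sample = """\
example.com.     300 IN SOA ns1.example.com. admin.example.com. 1 7200 900 1209600 300
api.example.com. 300 IN A   172.18.0.5
db.example.com.  300 IN A   172.18.0.6
example.com.     300 IN SOA ns1.example.com. admin.example.com. 1 7200 900 1209600 300
"""
```

Each A record maps a service name to a container address, so one successful transfer yields the whole service inventory.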

Multi-stage Docker builds can also leak zone data when transfer output or DNS configuration files are copied into the final image without review. For example:

FROM alpine:latest AS builder
RUN apk add --no-cache bind-tools
RUN dig AXFR example.com @ns1.example.com > /zone.txt

FROM nginx:alpine
COPY --from=builder /zone.txt /usr/share/nginx/html/zone.txt
EXPOSE 80

This build process could inadvertently expose zone transfer data in the final image if the source zone file contains sensitive information.
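A CI guard can catch such leaks before the image ships. This sketch (hypothetical helper, sample records invented) exploits the fact that a complete transfer is bracketed by the zone's SOA record, per RFC 5936:

```python
def looks_like_axfr_dump(text: str) -> bool:
    """Heuristic check for baked-in zone dumps: a complete AXFR
    begins and ends with the zone's SOA record (RFC 5936)."""
    def is_soa(line: str) -> bool:
        fields = line.split()
        return len(fields) >= 4 and fields[3] == "SOA"

    records = [l for l in text.splitlines() if l.strip() and not l.startswith(";")]
    return len(records) >= 2 and is_soa(records[0]) and is_soa(records[-1])

dump = (
    "example.com. 300 IN SOA ns1.example.com. admin.example.com. 1 7200 900 1209600 300\n"
    "api.example.com. 300 IN A 172.18.0.5\n"
    "example.com. 300 IN SOA ns1.example.com. admin.example.com. 1 7200 900 1209600 300\n"
)
```

Running a check like this over the build context and final image filesystem flags zone dumps before they reach a registry.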

Docker-Specific Detection

Detecting zone transfer vulnerabilities in Docker environments requires both runtime scanning and configuration analysis. The most effective approach combines network-level detection with container inspection.

Network-level detection involves monitoring DNS traffic on port 53 for AXFR activity. Zone transfers always run over TCP, while ordinary lookups mostly use UDP, so TCP traffic to port 53 is itself a strong signal. Using tcpdump on the Docker host:

tcpdump -i docker0 -nn 'tcp port 53'

This captures DNS-over-TCP traffic traversing the default Docker bridge network. For broader coverage, tools like masscan can sweep every Docker subnet for listening DNS servers:

masscan -p53 --rate=1000 -oG results.txt $(docker network ls --format "{{.Name}}" | xargs docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}')

middleBrick's Docker-specific scanning identifies zone transfer vulnerabilities by testing each exposed port 53 endpoint with AXFR requests. The scanner automatically detects Docker containers running DNS services and attempts zone transfers against them, reporting findings with severity levels based on the amount of data exposed.

Configuration analysis using docker inspect reveals potential zone transfer risks:

docker inspect $(docker ps -q) | jq '.[] | select((.Config.ExposedPorts // {}) | has("53/udp") or has("53/tcp")) | .Name'

This identifies containers exposing DNS ports. Further analysis of mounted volumes and environment variables can reveal Bind configurations that permit zone transfers.
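The same filter is easy to script where jq is unavailable. A stdlib sketch against a trimmed-down `docker inspect` document (container names invented):

```python
import json

def containers_exposing_dns(inspect_output: str) -> list:
    """List container names from `docker inspect` JSON whose image exposes port 53."""
    flagged = []
    for c in json.loads(inspect_output):
        # ExposedPorts can be absent or null, so default defensively
        exposed = (c.get("Config") or {}).get("ExposedPorts") or {}
        if "53/tcp" in exposed or "53/udp" in exposed:
            flagged.append(c.get("Name", "").lstrip("/"))
    return flagged

# Trimmed-down sample of what `docker inspect` emits
sample = json.dumps([
    {"Name": "/bind-dns", "Config": {"ExposedPorts": {"53/udp": {}, "53/tcp": {}}}},
    {"Name": "/web-app", "Config": {"ExposedPorts": {"80/tcp": {}}}},
    {"Name": "/worker", "Config": {"ExposedPorts": None}},
])
```

Feed it the output of `docker inspect $(docker ps -q)` to get a list of candidates for AXFR testing.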

For Kubernetes environments, kubectl can surface CoreDNS zone-transfer settings (the transfer plugin is what controls AXFR):

kubectl get configmap coredns -n kube-system -o yaml | grep -B2 -A2 'transfer'

Container runtime security tools like Falco can flag suspicious DNS behavior. Stock Falco does not decode DNS payloads, but since AXFR always runs over TCP, alerting on containers accepting TCP connections on port 53 is a practical proxy:

- rule: Container Accepts DNS Over TCP
  desc: A container accepted a TCP connection on port 53 (possible zone transfer)
  condition: evt.type = accept and fd.l4proto = tcp and fd.sport = 53 and container
  output: TCP DNS connection in container %container.name (%container.id)
  priority: WARNING

Image scanning tools should check for vulnerable BIND versions. Trivy scans one image at a time, so iterate over local images:

for img in $(docker images --format "{{.Repository}}:{{.Tag}}"); do trivy image "$img"; done

This surfaces images running DNS software with known CVEs; trivy config can additionally flag risky Dockerfiles and compose files, though it does not parse named.conf itself.

Docker-Specific Remediation

Remediating zone transfer vulnerabilities in Docker requires both configuration hardening and network segmentation. The most effective approach combines proper Bind configuration with Docker network isolation.

Start with a secure Bind configuration that explicitly denies zone transfers:

acl internal-network { 172.18.0.0/16; };  # match your Docker subnets

options {
    directory "/var/cache/bind";
    allow-query { localhost; internal-network; };
    allow-transfer { none; };
    notify no;
    recursion no;
};

zone "example.com" {
    type master;
    file "/etc/bind/zones/db.example.com";
    allow-query { internal-network; };
};
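The configuration above references /etc/bind/zones/db.example.com; for completeness, a minimal zone file (all records illustrative) looks like:

```
$TTL 300
@       IN  SOA ns1.example.com. admin.example.com. (
            2024010101 ; serial
            7200       ; refresh
            900        ; retry
            1209600    ; expire
            300 )      ; negative-caching TTL
        IN  NS  ns1.example.com.
ns1     IN  A   172.18.0.2
api     IN  A   172.18.0.5
```

Validate both files before deploying with named-checkconf /etc/bind/named.conf and named-checkzone example.com /etc/bind/zones/db.example.com.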

Integrate this configuration into a Docker Compose file with proper network isolation:

version: '3.8'
services:
  bind-dns:
    image: internetsystemsconsortium/bind9:9.18
    volumes:
      - ./secure-named.conf:/etc/bind/named.conf
      - ./zones:/etc/bind/zones
    networks:
      - internal
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - SETUID  # named binds port 53 as root, then drops to the bind user
      - SETGID

  web-app:
    image: nginx:alpine
    networks:
      - internal

networks:
  internal:
    driver: bridge
    internal: true  # no routing to or from outside this network

This configuration ensures the DNS service is only accessible to containers within the internal network, not from external hosts.
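A quick audit for the original mistake (publishing port 53 to the host) can be scripted. A sketch assuming the compose file's services: mapping is already parsed into a dict (e.g. with PyYAML), handling the short port syntax only:

```python
def services_publishing_dns(services: dict) -> list:
    """Flag compose services that publish port 53 to the host.

    Short-syntax port entries look like "53:53/udp" (host:container/proto).
    """
    flagged = []
    for name, svc in services.items():
        for port in svc.get("ports", []):
            if str(port).split(":")[0] == "53":
                flagged.append(name)
                break
    return flagged

vulnerable = {"bind-dns": {"image": "bind:9.10", "ports": ["53:53/udp", "53:53/tcp"]}}
hardened = {"bind-dns": {"image": "bind:9.18", "networks": ["internal"]}}
```

A hardened compose file should produce an empty list: DNS stays reachable for containers on the internal network but never appears on a host interface.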

For Kubernetes environments, use NetworkPolicies to restrict DNS access:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dns-restriction
spec:
  podSelector:
    matchLabels:
      app: bind-dns
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: internal-services
    ports:
    - protocol: TCP
      port: 53

Container runtime hardening means dropping capabilities where the container runs, not in the Dockerfile: a RUN capsh line executes only during that build step and has no effect at runtime. Keep the image minimal:

FROM internetsystemsconsortium/bind9:9.18

EXPOSE 53/udp 53/tcp
# named starts as root to bind port 53, then drops privileges itself
CMD ["/usr/sbin/named", "-g", "-u", "bind"]

Then constrain the container at launch:

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE --cap-add=SETUID --cap-add=SETGID <image>

Runtime detection using Falco rules can alert on zone transfer attempts:

falco -r /etc/falco/falco_rules.local.yaml

Complement these measures with image scanning in CI/CD pipelines using tools like Anchore or Trivy to ensure only hardened DNS images are deployed.

Frequently Asked Questions

How can I test if my Docker DNS service is vulnerable to zone transfers?
Use dig AXFR <zone> @<container_ip> from a host outside the Docker network. If you receive the zone's full record set, your service is vulnerable; a hardened server refuses the transfer. For comprehensive testing, use middleBrick's automated scanning which specifically tests for zone transfer vulnerabilities across all exposed DNS endpoints in your Docker environment.
Does Docker's default bridge network provide protection against zone transfers?
No, Docker's default bridge network does not provide inherent protection. If a container publishes port 53 to the host, it remains reachable from external networks. You need explicit network isolation, firewall rules, or a BIND configuration with allow-transfer { none; } to prevent zone transfers.