Bind DNS Sinkhole, Elasticsearch and Logstash

Sinkhole DNS

I wanted to track DNS queries that get sent to nameservers that do not serve a particular domain or network. I used a Bind DNS server that logs each query and returns a fixed response. The logs are parsed by Logstash and stored in Elasticsearch for analysis.

Install bind

Installing bind is easy via the bind9 package:

sudo apt-get install bind9

This will add a new user ‘bind’ and store the configuration files in /etc/bind.
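
A quick way to verify the install is to check the named version, the new bind user and the configuration directory (just a sanity check, paths can differ slightly between Ubuntu releases):

named -v
id bind
ls /etc/bind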

For this setup I want bind to behave as an authoritative nameserver for every possible domain and always reply with the same result.

The core bind configuration file is /etc/bind/named.conf. I commented out the default zones and added a custom ‘catch-all’ DNS zone.

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
#include "/etc/bind/named.conf.default-zones";

zone "." {
    type master;
    //type hint;
    file "/etc/bind/db.root.honeypot";
};

The zone file, /etc/bind/db.root.honeypot, contains the minimal configuration needed to reply with 127.0.0.1 to every query (change this to another IP if you want to track what happens after the DNS query).

$TTL    10
@       IN      SOA     localhost. root.localhost. (
                              1         ; Serial
                             10         ; Refresh
                             10         ; Retry
                             10         ; Expire
                             10 )       ; Negative Cache TTL
;

        IN  NS  localhost   
*       IN  A   127.0.0.1
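
You can verify the zone file before loading it with named-checkzone (usually shipped in the bind9utils package on Ubuntu, which may need to be installed separately):

named-checkzone . /etc/bind/db.root.honeypot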

You also have to configure some bind options in /etc/bind/named.conf.options.

options {
    directory "/var/cache/bind";

    // forwarders {
    //  8.8.8.8;    
    // };

    dnssec-validation auto;
    recursion no;
    allow-transfer { none; };

    auth-nxdomain no;    # conform to RFC1035
    // listen-on-v6 { any; };
    statistics-file         "/var/log/named/named_stats.txt";
    memstatistics-file      "/var/log/named/named_mem_stats.txt";
    version "9.9.1-P2";
};

logging {

  channel query_log {
    file "/var/log/named/query.log";
    severity info;
    print-time yes;
    print-severity yes;
    print-category yes;
  };

  category queries {
    query_log;
  };
};

The options above disable recursion, return a custom version number and enable logging.

  • recursion no : disable recursive lookups;
  • allow-transfer { none; } : no zone transfers allowed;
  • statistics-file and memstatistics-file : DNS stats (via rndc, see the example below);
  • version “9.9.1-P2” : return a custom server version;
  • // listen-on-v6 { any; }; : do not listen on IPv6;
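
The statistics files are only written when you ask bind to dump them. Assuming rndc is configured (the Ubuntu package sets this up by default), you can trigger a dump with:

sudo rndc stats

This appends the current counters to /var/log/named/named_stats.txt.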

If the logging directory /var/log/named doesn’t exist already, you have to create it and make sure it is owned by the bind user.

sudo mkdir /var/log/named
sudo chown bind /var/log/named

Then restart bind, check your syslog messages for errors and try some lookups.

sudo /etc/init.d/bind9 restart
host www.google.com 127.0.0.1
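
With the catch-all zone in place, every lookup should return the sinkhole address. The host output should look roughly like this (output shortened):

Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

www.google.com has address 127.0.0.1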

AppArmor and bind

It’s possible that you get a permission denied error on the log directory when restarting bind on Ubuntu.

named[11625]: isc_stdio_open '/var/log/named/query.log' failed: permission denied

This is caused by AppArmor. You can allow write access to these files by editing the AppArmor profile /etc/apparmor.d/usr.sbin.named and making sure it contains

/var/log/named/** rw,
/var/log/named/ rw,
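
After editing the profile, reload it and restart bind so the change takes effect:

sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
sudo /etc/init.d/bind9 restart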

Logstash configuration for Bind

Now that bind is logging properly to a text file, we can configure Logstash to parse the Bind log files. The Logstash configuration file is the one that I previously used for “Using ELK as a dashboard for honeypots”; I only list the relevant changes below. You can get the full configuration from GitHub.

############################################################
# DNS honeypot
#
  if [type] == "dnshpot" {
    grok {
       match => [ "message", "%{MONTHDAY:day}-%{MONTH:month}-%{YEAR:year} %{TIME:time} queries: info: client %{IP:srcip}#%{DATA:srcport}%{SPACE}\(%{DATA:hostname}\): query: %{DATA:hostname2} %{DATA:querytpe3} %{DATA:querytype} %{DATA:querytype2} \(%{IP:dstip}\)" ]
    }
    mutate {
      add_field => [ "dstport", "53" ]
    }
    mutate {
      strip => [ "srcip", "dstip", "hostname", "srcport" , "hostname2", "querytype", "querytype2" ]
    }
    mutate {
      add_field => [ "timestamp", "%{day}-%{month}-%{year} %{time}" ]
    }
    date {
      match => [ "timestamp", "dd-MMM-YYYY HH:mm:ss.SSS" ]
    }
  }
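
For reference, a query log line produced by the logging channel above looks roughly like this (client IP and hostname are made-up examples); the grok pattern pulls out the client IP and port, the queried name, the query class and type, and the destination IP:

06-Jan-2015 10:12:34.567 queries: info: client 192.168.1.50#51234 (www.example.com): query: www.example.com IN A + (192.168.1.10)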

Logrotating

Do not forget to rotate the query log file.

/var/log/named/query.log {
        monthly
        rotate 12
        compress
        delaycompress
        missingok
        notifempty
        create 644 bind bind
}
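
Note that named keeps its log file open, so with a plain create directive the freshly created file may stay empty until bind reopens it. One option (a sketch, adjust to your setup) is to restart bind from a postrotate script added to the block above:

        postrotate
                /etc/init.d/bind9 restart > /dev/null
        endscript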
