DNS CAA Record Adoption – Scanner and Results

The CA/Browser Forum (CAB Forum) members have voted in favor of making CAA checking mandatory. All CAs (Certificate Authorities) will need to comply with CAA (Certification Authority Authorization) verification by September 2017.

The details are described in RFC 6844. From the abstract:

The Certification Authority Authorization (CAA) DNS Resource Record allows a DNS domain name holder to specify one or more Certification Authorities (CAs) authorized to issue certificates for that domain. CAA Resource Records allow a public Certification Authority to implement additional controls to reduce the risk of unintended certificate mis-issue. This document defines the syntax of the CAA record and rules for processing CAA records by certificate issuers.

This announcement made me curious about the degree to which top HTTPS sites have already opted in by publishing CAA resource records for their properties…

I put together a simple scanner that queries the DNS records of each of the HTTPS-ready Alexa Top 1 million sites. [Update] Code is now available on GitHub
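As a rough sketch of the scan logic (not the actual code on GitHub), the per-domain check boils down to "does a CAA lookup return any records". The lookup function below is injected so the tallying can be shown without live DNS; in a real scanner it could be backed by something like dnspython's `dns.resolver.resolve(domain, "CAA")`. All names here are illustrative, not from the published scanner.

```python
# Hypothetical sketch of the CAA scan tally. The `lookup` callable stands in
# for a real DNS client and must return the domain's CAA records ([] if none).

def caa_stats(domains, lookup):
    """Return (scanned, with_caa, percent) for the given domain list."""
    with_caa = 0
    for domain in domains:
        records = lookup(domain)  # list of CAA records, empty if none exist
        if records:
            with_caa += 1
    scanned = len(domains)
    percent = 100.0 * with_caa / scanned if scanned else 0.0
    return scanned, with_caa, percent

# Example with canned results instead of live DNS:
fake_dns = {
    "example.com": ['0 issue "letsencrypt.org"'],
    "no-caa.example": [],
    "other.example": [],
}
print(caa_stats(list(fake_dns), lambda d: fake_dns[d]))  # → (3, 1, 33.33…)
```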

I did an initial scan totaling ~670k DNS records; only ~0.05% of them had CAA resource records.

Given the low adoption rate, I am curious to observe how this changes over time. To that end, I've set up the scan to run periodically and post the results to @CAA_bot on Twitter. Follow the account if you're interested in being updated on progress.


ELK + Netflow

This guide assumes you've got a running ELK stack and is tailored for a Docker installation based on docker-elk. See my ELK on Docker guide here

Also, the NetFlow source configuration specifics are for a Ubiquiti EdgeRouter – you'll need to get the specifics for your device if it differs.

Ubiquiti EdgeRouter Config

set system flow-accounting interface eth0
set system flow-accounting netflow version 9
set system flow-accounting netflow server <ip> port 2055
set system flow-accounting netflow enable-egress
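These are configure-mode commands, so (assuming the standard EdgeOS workflow, and substituting your collector's address for `<ip>`) the full session would look something like:

```
configure
set system flow-accounting interface eth0
set system flow-accounting netflow version 9
set system flow-accounting netflow server <ip> port 2055
set system flow-accounting netflow enable-egress
commit
save
exit
```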

Logstash configuration

cd ~/docker-elk

Open ./logstash/pipeline/logstash.conf in your editor

input {
  udp {
    port => 2055
    codec => netflow {
      versions => [5, 9]
    }
    type => "netflow"
  }
}

output {
  if [type] == "netflow" {
    elasticsearch {
      index => "logstash_nf-%{+YYYY.MM.dd}"
      hosts => "elasticsearch:9200"
    }
  } else {
    elasticsearch {
      hosts => "elasticsearch:9200"
    }
  }
}

Open docker-compose.yml in your editor and add the following under the logstash service's ports section, to ensure NetFlow traffic on UDP port 2055 is routed to the Logstash container

      - "2055:2055/udp"
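For context, the logstash service entry would then look roughly like this (the other port shown is a placeholder from a typical docker-elk setup – keep whatever your file already lists):

```yaml
  logstash:
    ports:
      - "5000:5000"
      - "2055:2055/udp"
```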

Restart ELK

sudo docker-compose up -d

sudo docker-compose restart

Kibana configuration

  • Browse to Kibana
  • Go to Management, Index Patterns, and create an index pattern based on
    `logstash_nf-*`, selecting `netflow.last_switched` as the time field


If it's working, you should see the flows in the Kibana Discover tab (make sure to select the logstash_nf-* index pattern)

Now you can go ahead and do some data mining, visualizations, and dashboards

Some filter and visualization ideas

Purpose: See what outsiders (bots, worms, etc.) are attempting to
connect to via your WAN interface on low ports (< 1024)

Chart Style: Pie
Search: netflow.l4_dst_port:<1024 AND netflow.direction:0
Buckets: Split by netflow.l4_dst_port
Add Sub-Bucket: Split by netflow.ipv4_src_addr.keyword

Purpose: Look at 23/TCP Telnet based Botnet activity over time

Chart Style: Area
Search: netflow.l4_dst_port:23 AND netflow.direction:0
Y-Axis: Count
X-Axis: Date Histogram with netflow.last_switched

Purpose: Look into what a specific LAN-side IoT device is connecting to

Chart Style: Pie
Search: netflow.ipv4_src_addr:<device ip> AND netflow.direction:1
Buckets: Split by netflow.l4_dst_port
Add Sub-Bucket: Split by netflow.ipv4_dst_addr.keyword
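The bucket splits above are just terms aggregations over the flow documents. As a rough illustration (with made-up flow records, not Kibana's actual implementation), the first recipe's grouping amounts to:

```python
from collections import Counter

# Hypothetical flow documents mimicking the netflow.* fields used above.
flows = [
    {"l4_dst_port": 23, "ipv4_src_addr": "203.0.113.7", "direction": 0},
    {"l4_dst_port": 23, "ipv4_src_addr": "198.51.100.9", "direction": 0},
    {"l4_dst_port": 443, "ipv4_src_addr": "192.0.2.10", "direction": 1},
]

# Filter: inbound flows (direction 0) to low ports, then bucket by dst port,
# like the pie chart's "split by netflow.l4_dst_port".
low_port_hits = Counter(
    f["l4_dst_port"]
    for f in flows
    if f["direction"] == 0 and f["l4_dst_port"] < 1024
)
print(low_port_hits.most_common())  # → [(23, 2)]
```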

What’s next?

I’ll look to enrich the NetFlow data with: