ELK – Auto-delete older Logstash indices

Here's an approach to auto-delete Logstash indices in Elasticsearch after X days. The following steps are run on your ELK host.

Get curator-cli

sudo pip install elasticsearch-curator -U

Create script

cd ~/
vim elasticsearch_del.sh

My preference is to delete indices older than 30 days; change the 30 to suit. Then save the file.

#!/bin/bash                                                                                                                                           
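# Delete 'logstash'-prefixed indices whose creation date is older than 30 days.
# Any extra arguments (e.g. --dry-run) are passed through to curator_cli via "$@".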
/usr/local/bin/curator_cli "$@" delete_indices --filter_list '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"days","unit_count":30},{"filtertype":"pattern","kind":"prefix","value":"logstash"}]'

Now make the script executable:
chmod +x elasticsearch_del.sh

Then run the script to make sure it works – use the --dry-run argument to test (i.e. it won't actually take any action):

./elasticsearch_del.sh --dry-run

If you’re happy with the output and want to run it for real:

./elasticsearch_del.sh

Set up a cron job

crontab -e

Add the following line – changing the schedule to your preference. This runs it every Saturday at 5pm:

0 17 * * SAT /home/db/elasticsearch_del.sh
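If you want a record of what each run deleted, redirect the output to a log file (the path here is just an example):

0 17 * * SAT /home/db/elasticsearch_del.sh >> /home/db/elasticsearch_del.log 2>&1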

DNS CAA Record Adoption – Scanner and Results

The CAB Forum members have voted in favor of making CAA checking mandatory. All CAs (Certificate Authorities) will need to comply with CAA (Certification Authority Authorization) verification by September 2017.

The details are described in RFC 6844 – Abstract:

The Certification Authority Authorization (CAA) DNS Resource Record allows a DNS domain name holder to specify one or more Certification Authorities (CAs) authorized to issue certificates for that domain. CAA Resource Records allow a public Certification Authority to implement additional controls to reduce the risk of unintended certificate mis-issue. This document defines the syntax of the CAA record and rules for processing CAA records by certificate issuers.

This announcement made me curious to what degree top HTTPS sites have opted into this by publishing CAA resource records for their properties…

I put together a simple scanner that queries the DNS records for each of the HTTPS-ready Alexa Top 1 million sites. [Update] Code is now available on GitHub
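You can spot-check a single domain by hand with dig; on older dig builds without native CAA support, query the raw record type number (257) instead:

dig +short CAA google.com

# Older dig builds that don't know the CAA type:
dig +short TYPE257 google.com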

I did an initial scan totaling ~670k DNS records, resulting in ~0.05% having CAA resource records.

Given the low percentage of adoption, I am curious to observe how this changes over time. To that end, I’ve set up the scan to run periodically and post the results to @CAA_bot on Twitter. Follow the account if you’re interested in updates on progress.

ELK + Netflow

This guide assumes you’ve got a running ELK stack, and is tailored for a docker installation based on docker-elk. See my ELK on docker guide here

Also, the NetFlow source configuration specifics are for a Ubiquiti EdgeRouter – you’ll need to get the specifics for your device if different.

Ubiquiti EdgeRouter Config

configure
set system flow-accounting interface eth0
set system flow-accounting netflow version 9
set system flow-accounting netflow server <ip> port 2055
set system flow-accounting netflow enable-egress
commit
save
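Once committed, you can sanity-check flow accounting from EdgeOS operational mode (command as per Ubiquiti's docs – confirm against your firmware version):

show flow-accounting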

Logstash configuration

cd ~/docker-elk

Open ./logstash/pipeline/logstash.conf in your editor

input {                                                                                                                                                                                                                                                                                               
  udp {                                                                                                                                               
    port => 2055                                                                                                                                      
    codec => netflow {                                                                                                                                
      versions => [5, 9]                                                                                                                              
    }                                                                                                                                                 
    type => netflow                                                                                                                                   
  }                                                                                                                                                   
}                                                                                                                                                                                                                                                                                                   

output {
  if [type] == "netflow" {
    elasticsearch {
      index => "logstash-nf-%{+YYYY.MM.dd}"
      hosts => "elasticsearch:9200"
    }
  } else {
    elasticsearch {
      hosts => "elasticsearch:9200"
    }
  }
}

Open docker-compose.yml in your editor, and add the following to the logstash service to ensure the NetFlow port 2055 is routed to the logstash container

  ports:                                                                                                                            
      - "2055:2055/udp"

Restart ELK

sudo docker-compose up -d

sudo docker-compose restart
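Logstash should now be bound to UDP 2055 on the host; a quick way to confirm (netstat comes from the net-tools package):

sudo netstat -ulnp | grep 2055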

Kibana configuration

  • Browse to Kibana
  • Go to Management, Index Patterns, and create an index pattern based on
    logstash-nf*, selecting netflow.last_switched as the time-filter field

Test

If it’s working you should see the flows in the Kibana Discover tab (make sure to select the logstash-nf* index pattern)

Now you can go ahead and do some data mining, visualizations and dashboards

Some filter and visualization ideas

Purpose: See what outsiders (bots/worms etc.) are attempting to
connect to via your WAN interface on low ports (< 1024)

Chart Style: Pie
Search: netflow.l4_dst_port:<1024 AND netflow.direction:0
Buckets: Split by netflow.l4_dst_port
Add Sub-Bucket: Split by netflow.ipv4_src_addr.keyword

Purpose: Look at 23/TCP Telnet based Botnet activity over time

Chart Style: Area
Search: netflow.l4_dst_port:23 AND netflow.direction:0
Y-Axis: Count
X-Axis: Date Histogram with netflow.last_switched

Purpose: Look into what a specific LAN side IoT device is connecting to

Chart Style: Pie
Search: netflow.ipv4_src_addr:<device ip> AND netflow.direction:1
Buckets: Split by netflow.l4_dst_port
Add Sub-Bucket: Split by netflow.ipv4_dst_addr.keyword

What’s next?

I’ll look to enrich the NetFlow data with:


ELK on docker + Syslog

Versions used:

  • Ubuntu 16.04
  • Docker version 17.03.1-ce, build c6d412e
  • Docker-Compose version 1.12.0-rc2, build 08dc2a4

Install Docker CE

Install a few basics that we need:

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

Add docker.com’s GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the apt-get repo:

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

Update package list

sudo apt-get update

Install ‘docker-ce’ package

sudo apt-get install docker-ce

Test to verify all is well

sudo docker run hello-world                                                                                                            

Install Docker-Compose

Get the docker-compose release and pipe to local file

curl -L https://github.com/docker/compose/releases/download/1.12.0-rc2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose

Now move the file to a location within your path

sudo mv ~/docker-compose /usr/local/bin/docker-compose

And make it executable

sudo chmod +x /usr/local/bin/docker-compose

Verify it runs by checking the version

docker-compose --version

Install ELK (Elasticsearch, Logstash and Kibana)

Fortunately this has been made easy for us thanks to a project on GitHub – to get the basics running we just need to clone one repo:

cd ~/
git clone https://github.com/deviantony/docker-elk

Configure ELK stack

Elasticsearch

Move into the directory:

cd docker-elk

Elasticsearch uses a hybrid mmapfs / niofs directory by default to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions. We’ll need to increase the vm.max_map_count value:

Elevate to root:

sudo su -

Increase the value now:

sysctl -w vm.max_map_count=262144

Make it persist across reboots:

Add the vm.max_map_count=262144 setting to /etc/sysctl.conf. You can verify after rebooting by running sysctl vm.max_map_count
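For example, while still root:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf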

Drop back to previous user:

exit

Now let’s configure things so the elasticsearch data (indices) is held outside of the container – for example:

mkdir -p ~/docker-elk/elasticsearch/data

Now we need to configure the binding in the docker-compose.yml file for the elasticsearch service. Use your favorite editor to open ~/docker-elk/docker-compose.yml and add the following between the build and ports sections of the elasticsearch service – while we’re here, let’s set the auto-restart policy too:

  restart: always
  volumes:
    - ./elasticsearch/data:/usr/share/elasticsearch/data
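For reference, the elasticsearch service should end up looking roughly like this – exact keys depend on the docker-elk revision you cloned, so treat it as a sketch:

elasticsearch:
  build: elasticsearch/
  restart: always
  volumes:
    - ./elasticsearch/data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"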

Logstash + ‘Syslog’ input

The default logstash configuration has a TCP listener on 5000; however, for my (our?) purposes we want to listen for syslog messages. There are syslog logstash inputs around, but “syslog” is not exactly standard and there are a bunch of varying RFCs – so instead of using a rigid input, let’s use standard TCP and UDP listeners and have a grok expression do the parsing for us. That way we can tweak it if necessary. Most of my syslog sources are rsyslog, so the grok expression I have seems to play nicely with that.

Use your editor and open ./logstash/pipeline/logstash.conf and make the contents look like this:
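A sketch of such a config – the grok pattern below targets rsyslog’s default RFC3164-style output, and the dated failure-file path is an assumption; both are meant to be tuned:

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      # Classic BSD/rsyslog syslog line: timestamp, host, program[pid]: message
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  if "_grokparsefailure" in [tags] {
    # Keep anything grok couldn't parse so the expression can be adjusted
    file {
      path => "/var/log/failed_syslog_events-%{+YYYY-MM-dd}"
    }
  } else {
    elasticsearch {
      hosts => "elasticsearch:9200"
    }
  }
}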

You’ll notice there is provision for when grok cannot parse a message: it gets logged to the file system at /var/log/failed_syslog_events-. This is useful to look at to see what is failing and adjust the grok expression as appropriate. You’ll also see that port 5000 was selected for syslog – however the syslog standard port is 514. This is because ports below 1024 require root privilege and I don’t want to run logstash as root. Instead we will modify the docker-compose config such that the host will port forward 514 -> 5000.

Let’s do that now: open docker-compose.yml in your editor, then configure the ports section of the logstash service as follows – again, we’ll set the restart policy at the same time:

  restart: always
  ports:                                                                                                                                            
      - "514:5000"                                                                                                                                    
      - "514:5000/udp"

While you’re in there, add the same restart policy to the kibana service.

Start-up ELK stack

This assumes you’re in the ~/docker-elk directory.

In case you’ve started stuff already, stop all:

sudo docker-compose down

Get the docker events to print to the terminal, useful for the first time run/troubleshooting issues:

sudo docker events &

Start ELK in the background:

sudo docker-compose up -d

Verify all 3 ELK containers are running with sudo docker ps – you should see the port forwarding in place for 514 -> 5000 (both TCP and UDP)

At this point it’s good to wait a few minutes – or until the docker events die down – as it can take quite a few minutes for the full stack to be up and ready.

Kibana needs some data to populate its indices – so let’s pump the contents of /var/log/syslog on the host in via TCP on port 514:

nc localhost 514 < /var/log/syslog

Now open a browser and go to your Kibana instance, which listens on port 5601 by default:

http://<your-elk-host>:5601

You’ll be taken to the Index Patterns page, and so long as the data you pumped in was correct (and all prior steps worked) you should be able to accept the defaults and it’ll create an index pattern for you.

Go to the ‘Discover’ page and you should hopefully see the logs you pumped in earlier. If you do, your basic ELK stack with syslog is now functional, and you can move on to getting clients sending their logs to it, and start creating visualizations and dashboards in Kibana.

If the docker events output is annoying, kill the background job:

sudo pkill -f "docker events"

Drop a question/comment and I’ll try and assist.


timhaak/plex docker upgrade

I’m using the timhaak/plex docker image

Here’s how to upgrade:

sudo docker pull timhaak/plex
sudo docker stop plex
sudo docker rm plex

Get it running again:

sudo docker run --restart=always -d --name=<shortname> -h <hostname> -v <config-location>:/config -v <media-location>:/data -p 32400:32400 timhaak/plex

Replace:
<shortname> with what you want the container to be called
<hostname> with what you want the PMS to be called
<config-location> with the location of your Plex config (note to self, mine is: /opt/plex-data/)
<media-location> with the location of your videos/media

Verify upgrade

Show running containers:
sudo docker ps

Take note of the container ID for plex

Get a bash shell to the running Plex container:

sudo docker exec -it <containerid> /bin/bash

Verify the version installed

dpkg-query -s plexmediaserver | grep "Version"
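Or skip the interactive shell and run the query in one go (dpkg-query’s -W/-f flags print just the version field):

sudo docker exec <containerid> dpkg-query -W -f='${Version}\n' plexmediaserver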

ChromeOS + OpenVPN (+ TLSAuth)

This is a guide to get OpenVPN (with TLS Auth) working for a ChromeOS client. Note this guide assumes you have control of the OpenVPN server and its configuration. This guide doesn’t explain the specifics of port forwarding on your router, or use of Dynamic DNS – if you’re doing all the below I’ll assume you know about doing those things – if not there are plenty of tutorials around.

Versions used:

  • ChromeOS 57.0.2987.115 beta – on Samsung Chromebook Plus
  • Ubuntu 14.04 LTS (Bit old I know, but systemd 😦 )
  • OpenVPN 2.3.2 (openvpn 2.3.2-7ubuntu3.1)

Install OpenVPN server and easy-rsa

sudo apt-get install openvpn easy-rsa
sudo mkdir /etc/openvpn/easy-rsa/
sudo cp -r /usr/share/easy-rsa/* /etc/openvpn/easy-rsa/

Create certificates

cd /etc/openvpn/easy-rsa

Edit the vars file to update the values:

  • Set KEY_SIZE to 2048
  • Also set KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG, KEY_EMAIL parameters. Don’t leave any of these parameters blank.

Run source ./vars to load the parameters
Run ./clean-all to clear keys and previous files

Now let’s create our CA cert and key:

Run ./build-ca. The majority of the defaults will be loaded from the values specified in vars, but you must enter the Common Name (CN) – enter a name that identifies your CA, MyVPN-CA for example. This will create two files: 1) ca.crt, your CA cert (public) and 2) ca.key, your CA private key (secret!)

Now to create the server cert and key:

Run ./build-key-server server. Like the previous command, most values can be defaulted. When prompted for the CN, enter server. Then answer yes to both Sign Certificate and Commit. This will create two files: 1) server.crt, your server’s cert (public) and 2) server.key, your server’s private key (secret!)

Time for the client(s) cert and key(s):

Run ./build-key client1. When prompted for the CN, enter a name unique to each client – e.g. client1. Then answer yes to both Sign Certificate and Commit. This will create two files: 1) client1.crt, your client’s cert (public) and 2) client1.key, your client’s private key (secret!)

Now we need to put the client cert and key into a format understood by ChromeOS, namely pkcs12. Run openssl pkcs12 -export -in client1.crt -inkey client1.key -certfile ca.crt -name MyClient -out client1.p12. Enter an export passphrase. This will create a file called client1.p12.

You can repeat the above for each client, incrementing the client number: client2, client3, etc.

Now to generate the Diffie-Hellman parameters. Run ./build-dh – this may take several minutes. It will create a file called dh2048.pem – this is not secret.

Finally, we should create an OpenVPN static key. Run openvpn --genkey --secret ./keys/ta.key. This will create a file called ta.key – another secret. ChromeOS needs this in a strange and specific format where it’s all on one line with inline line-break escape characters (‘\n’). So let’s do that with a bit of Perl: grep -v '#' ./keys/ta.key | perl -p -e 's/\n/\\n/' > ./keys/ta-oneliner.key.

Now we need to copy the files required by the server into the appropriate directory for your OpenVPN server, like this: cp ./keys/ca.crt ./keys/server.crt ./keys/server.key ./keys/ta.key ./keys/dh2048.pem /etc/openvpn/

While we are here, there are a number of files that you need to get to your client (e.g. ChromeOS). There are many ways to do this – for example, copy them somewhere using scp, then copy them into Google Drive. The files your client needs are client1.p12, ca.crt and ta-oneliner.key.

Configure server

sudo nano /etc/openvpn/server.conf

Here is the general shape of mine, with comments – known to work with ChromeOS clients (see versions above).
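A sketch along those lines – the file paths match what we copied earlier, while the tunnel subnet and the PAM-based username/password auth are assumptions to adapt:

# Listen on 443/UDP - match the Port/Proto values in the ONC file later
port 443
proto udp
dev tun
# Certificates and keys copied into /etc/openvpn earlier
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem
# TLS auth - server side uses key direction 0 (the ChromeOS client uses 1)
tls-auth /etc/openvpn/ta.key 0
# Username/password auth against local accounts (assumption - pairs with the ONC <USERNAME> field)
plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so login
# Tunnel subnet (assumption - pick a range that doesn't clash with your LAN)
server 10.8.0.0 255.255.255.0
# Optionally route all client traffic through the VPN
push "redirect-gateway def1 bypass-dhcp"
keepalive 10 120
persist-key
persist-tun
status /var/log/openvpn-status.log
verb 3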

Enable IPv4 forwarding:

Edit /etc/sysctl.conf and uncomment net.ipv4.ip_forward=1 to enable IP forwarding. Then make it take effect by running sudo sysctl -p /etc/sysctl.conf

Restart Openvpn server:

sudo service openvpn restart. And verify it’s actually running – sudo service openvpn status. If it’s not, look in /var/log/syslog for any errors/hints.

Client Configuration (ChromeOS)

Open Chrome – and go to chrome://settings/certificates

Select ‘Authorities’, then ‘Import’, and load the ca.crt file. When prompted, tick ‘Trust this certificate for identifying websites.’ You should see your certificate in the list under the ‘Private’ parent.

In the same certificates window, select the ‘Your Certificates’ tab, then ‘Import and Bind to device…’, and load the client1.p12 file, entering the passphrase you specified when creating it. You should now see your client certificate listed.

Now we need to create a ONC file for ChromeOS:

  1. Generate two random GUIDs via https://www.uuidgenerator.net/ or similar. Refresh the page to get your second one. Take note of both; I will refer to them as GUID#1 and GUID#2
  2. Copy the following into a text editor on your Chromebook
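A sketch of the ONC structure, based on the Chromium ONC format documentation – I’ve assumed GUID#1 tags the CA certificate entry, GUID#2 tags the network configuration, and the server runs on 443/UDP (all assumptions; cross-check against your ChromeOS version):

{
  "Type": "UnencryptedConfiguration",
  "Certificates": [
    {
      "GUID": "<GUID#1>",
      "Type": "Authority",
      "X509": "<CA-CERT>"
    }
  ],
  "NetworkConfigurations": [
    {
      "GUID": "<GUID#2>",
      "Name": "<VPN_NAME>",
      "Type": "VPN",
      "VPN": {
        "Type": "OpenVPN",
        "Host": "<HOSTNAME>",
        "OpenVPN": {
          "ServerCARef": "<GUID#1>",
          "ClientCertType": "Pattern",
          "ClientCertPattern": {
            "IssuerCARef": [ "<GUID#1>" ]
          },
          "Port": 443,
          "Proto": "udp",
          "Username": "<USERNAME>",
          "RemoteCertTLS": "server",
          "KeyDirection": "1",
          "TLSAuthContents": "<TLS_AUTH_KEY>",
          "SaveCredentials": false,
          "ServerPollTimeout": 10
        }
      }
    }
  ]
}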

  3. Replace the following values in the above file:

  • <GUID#1> – paste the value from earlier
  • <GUID#2> – paste the value from earlier
  • <VPN_NAME>: Enter a name for your connection. This is what you’ll see in the ChromeOS VPN UI.
  • <CA-CERT>: The contents of ca.crt without the header line, as one long string of base64-encoded ASCII (typically beginning with “MII”) – remove the newlines in the cert. The footer line “-----END CERTIFICATE-----” is also not included.
  • <HOSTNAME>: The hostname of your VPN server. Do not include the port – that is specified by the ‘Port’ parameter; change it if you’re not using 443.
  • <USERNAME>: Your username on the VPN server.
  • <TLS_AUTH_KEY>: The TLS auth key – open ta-oneliner.key and paste in its contents.

Save your ONC file. Note it contains secret information, so treat it accordingly. Any filename will do, but keep the .onc extension.

Now we need to install the ONC file:

  • In Chrome, go to chrome://net-internals#chromeos
  • Click ‘Choose File’ under ‘Import ONC file’
  • Select your ONC file. Note you may get no positive or negative response from the import attempt. Just go to the VPN UI in the ChromeOS launcher – if the import succeeded, you’ll see your VPN connection listed.
Test!

Drop comments/queries below and I’ll assist if I can.
