Versions used:
- Ubuntu 16.04
- Docker version 17.03.1-ce, build c6d412e
- Docker-Compose version 1.12.0-rc2, build 08dc2a4
Install Docker CE
Install a few basics that we need:
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
Add docker.com’s GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add the apt-get repo:
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
Update package list
sudo apt-get update
Install ‘docker-ce’ package
sudo apt-get install docker-ce
Test to verify all is well
sudo docker run hello-world
Install Docker-Compose
Get the docker-compose release and pipe to local file
curl -L https://github.com/docker/compose/releases/download/1.12.0-rc2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
Now move the file to a location within your path
sudo mv ~/docker-compose /usr/local/bin/docker-compose
And make it executable
sudo chmod +x /usr/local/bin/docker-compose
Verify it’ll run by verifying the version
docker-compose --version
Install ELK (Elasticsearch, Logstash and Kibana)
Fortunately this has been made easy for us thanks to a project on GitHub – so to get the basics running we just need to clone one repo:
cd ~
git clone https://github.com/deviantony/docker-elk
Configure ELK stack
Elasticsearch
Move into the directory:
cd docker-elk
Elasticsearch uses a hybrid mmapfs / niofs directory by default to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions. We’ll need to increase the vm.max_map_count value:
Elevate to root:
sudo su -
Increase the value now:
sysctl -w vm.max_map_count=262144
Make it persist across reboots by adding vm.max_map_count=262144 to /etc/sysctl.conf. You can verify after rebooting by running sysctl vm.max_map_count
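A sketch of doing that from the shell – run it as root (for example inside the sudo su - shell from the step above); the grep guard just keeps the line from being appended twice:

```shell
# Append the setting only if it isn't already there, then apply it immediately.
LINE='vm.max_map_count=262144'
grep -qxF "$LINE" /etc/sysctl.conf 2>/dev/null || printf '%s\n' "$LINE" >> /etc/sysctl.conf
# Load the new value without a reboot (may be restricted inside containers)
sysctl -p >/dev/null 2>&1 || true
sysctl vm.max_map_count 2>/dev/null || true
```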
Drop back to previous user:
exit
Now let’s configure it so that the Elasticsearch data (indices) is held outside of the container – for example:
mkdir -p ~/docker-elk/elasticsearch/data
Now we need to configure the binding in the docker-compose.yml file for the elasticsearch service. Use your favorite editor to open ~/docker-elk/docker-compose.yml
and add the following between the build and ports sections of the elasticsearch service – while we’re here, let’s set the auto-restart policy too:
restart: always
volumes:
- ./elasticsearch/data:/usr/share/elasticsearch/data
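With those lines in place, the whole elasticsearch service block ends up looking something like this (the other keys shown come from the docker-elk compose file of that vintage and may differ in newer versions of the repo):

```yaml
elasticsearch:
  build: elasticsearch/
  restart: always
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    - ./elasticsearch/data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
```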
Logstash + ‘Syslog’ input
The default logstash configuration has a TCP listener on 5000; however, for my (our?) purposes we want to listen for syslog messages. Now there are syslog logstash inputs around – but “syslog” is not exactly standard, and there are a number of varying RFCs – so instead of using a rigid input, let’s use standard TCP and UDP listeners and have a grok expression do the parsing for us; that way we can tweak it if necessary. Most of my syslog sources are rsyslog – so the grok expression I have seems to play nicely with that.
Use your editor to open ./logstash/pipeline/logstash.conf
and make the contents look like this:
input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}
## Add your filters / logstash plugins configuration here
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
  }
}
output {
  if [type] == "syslog" and "_grokparsefailure" in [tags] {
    file { path => "/var/log/failed_syslog_events-%{+YYYY-MM-dd}" }
  }
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
You’ll notice there is provision for if/when grok cannot parse a message: it will be logged to /var/log/failed_syslog_events-%{+YYYY-MM-dd} (note that path is inside the logstash container unless you add a volume mapping for it). This is useful for seeing what is failing and adjusting the grok expression as appropriate. You’ll also see that port 5000 was selected for syslog – however, the standard syslog port is 514. This is because ports below 1024 require root privilege and I don’t want to run logstash as root. Instead we will modify the docker-compose config so that the host port-forwards 514 -> 5000.
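To make the grok pattern concrete: against a hypothetical rsyslog-style line like the one below, the fields would come out as syslog_pri=34, syslog_hostname=myhost, syslog_program=su, syslog_pid=230. The regex here is only a rough ERE approximation of the pattern’s shape for a quick local sanity check – it is not grok itself:

```shell
# A made-up sample line of the form: <PRI>TIMESTAMP HOST PROGRAM[PID]: MESSAGE
line='<34>Oct 11 22:14:15 myhost su[230]: password check failed for user root'
# Rough ERE approximation of the shape the grok expression expects
echo "$line" | grep -Eq '^<[0-9]+>[A-Z][a-z]{2} +[0-9]+ [0-9:]+ [^ ]+ [^ ]+(\[[0-9]+\])?: ' \
  && echo 'shape OK'
```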
Let’s do that now: use your editor to open docker-compose.yml, then configure the ports section of the logstash service as follows – again, we’ll set the restart policy at the same time:
restart: always
ports:
- "514:5000"
- "514:5000/udp"
While you’re in there, add the same restart policy to the kibana service.
Start-up ELK stack
Assumption is you’re in the ~/docker-elk directory.
In case you’ve started stuff already, stop all:
sudo docker-compose down
Get the docker events to print to the terminal, useful for the first time run/troubleshooting issues:
sudo docker events &
Start up ELK in the background:
sudo docker-compose up -d
Verify each of the 3 containers for ELK is running:
sudo docker ps
You should see the port forwarding in place for 514 -> 5000 (both TCP and UDP).
At this point it’s good to wait a few minutes – or until the docker events die down – as it can take quite a few minutes for the full stack to be up and ready.
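Rather than watching the event stream, you can poll Elasticsearch until it answers. A rough sketch – the URL and retry limits here are assumptions, adjust to taste:

```shell
# Poll until Elasticsearch answers on its HTTP port, or give up after N tries.
wait_for_es() {
  url="${1:-http://localhost:9200}"
  tries="${2:-24}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -s -m 2 -o /dev/null "$url"; then
      echo "elasticsearch is up at $url"
      return 0
    fi
    sleep 5
    i=$((i + 1))
  done
  echo "gave up waiting for $url" >&2
  return 1
}
```

For example, `wait_for_es http://localhost:9200 24` waits up to roughly two minutes.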
Kibana needs some data to populate its indices – so let’s pump the contents of /var/log/syslog on the host in via TCP on port 514:
nc localhost 514 < /var/log/syslog
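If you’d rather send a single well-formed test line than a whole file, something like this works too (the priority value 14 and the program name are arbitrary choices; nc flags vary between netcat variants):

```shell
# Build one syslog-style line in the format the grok pattern expects
MSG="<14>$(date '+%b %e %H:%M:%S') $(hostname) elk-test[$$]: hello from the ELK tutorial"
echo "$MSG"
# To actually send it to the stack, pipe it into nc (uncomment):
# printf '%s\n' "$MSG" | nc -q1 localhost 514
```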
Now go to a browser and visit your Kibana instance, which listens on port 5601 by default:
http://<your-docker-host>:5601
You’ll be taken to the Index Patterns page, and so long as the data you pumped in was correct (and all prior steps succeeded) you should be able to accept the defaults and it’ll create an index pattern for you.
Go to the ‘Discover’ page and you should hopefully see the logs you pumped in earlier. If you do, your basic ELK stack with syslog is now functional, and you can move on to getting clients sending their logs to it, and start creating visualizations and dashboards in Kibana.
If the docker events output is annoying, bring the background job to the foreground with fg and press Ctrl-C, or kill it with:
kill %1
Drop a question/comment and I’ll try and assist.
References:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
- https://www.elastic.co/guide/en/logstash/current/config-examples.html
- https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode
- https://github.com/deviantony/docker-elk/blob/master/docker-compose.yml
- https://docs.docker.com/engine/installation/linux/ubuntu/
Hi,
I tried to install ELK, but when I try to run this command (sudo docker-compose down) I get an error message:
ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
in “./docker-compose.yml”, line 7, column 13
Could you help me figure out what is wrong? Here is my docker-compose.yml:
version: ‘2’
services:
elasticsearch:
build: elasticsearch/
volumes:
– ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
– ./elasticsearch/data:/usr/share/elasticsearch/data
ports:
– “9200:9200”
– “9300:9300”
environment:
ES_JAVA_OPTS: “-Xmx256m -Xms256m”
networks:
– elk
logstash:
build: logstash/
volumes:
– ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
– ./logstash/pipeline:/usr/share/logstash/pipeline
restart: always
ports:
– “514:5000”
– “514:5000/udp”
environment:
LS_JAVA_OPTS: “-Xmx256m -Xms256m”
networks:
– elk
depends_on:
– elasticsearch
kibana:
build: kibana/
volumes:
– ./kibana/config/:/usr/share/kibana/config
ports:
– “5601:5601”
networks:
– elk
depends_on:
– elasticsearch
networks:
elk:
driver: bridge
thanks a lot, best regards: Peter
It appears to be missing the required indenting to nest each row under its parent. Note also that the comment form has converted the quotes and hyphens to typographic ones, so don’t copy the version above verbatim.
You can validate YAML by using something like http://www.yamllint.com/
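For illustration, here’s the start of that file with the nesting restored (each key indented two spaces under its parent, and plain ASCII quotes and hyphens):

```yaml
version: '2'

services:
  elasticsearch:
    build: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
```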
I took a stab at it – yamllint reports this one as valid: https://gist.github.com/dboyd13/cfd08a65aba77a285290989ddb7813ec
Look here for the YAML indentation spec – http://yaml.org/spec/1.2/2009-07-21/spec.html#id2576668