I want to run each service in its own container and link the containers together to build the ELK stack.
Installation is straightforward: we pull the Docker images and run the containers.
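If you want to fetch the images ahead of time, you can pull them explicitly (docker run will also pull them automatically on first use); these unversioned names assume the official images and their default tags on Docker Hub:
$sudo docker pull elasticsearch
$sudo docker pull kibana
$sudo docker pull logstash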
Elasticsearch:
$sudo docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 elasticsearch
9c7d52445691015e21a7007e35aca935b9a0dcbbd9560f170cc07c8adc08ae63
$ sudo docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c7d52445691 elasticsearch "/docker-entrypoint.s" 30 seconds ago Up 30 seconds 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp elasticsearch
$
Test your configuration:
$ curl http://localhost:9200
{
"name" : "PIvLNU_",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Rx7oxSxESvqo-8GNXkDzCA",
"version" : {
"number" : "5.3.1",
"build_hash" : "5f9cf58",
"build_date" : "2017-04-17T15:52:53.846Z",
"build_snapshot" : false,
"lucene_version" : "6.4.2"
},
"tagline" : "You Know, for Search"
}
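If the curl check fails, the container log is the first place to look; a quick check using the standard docker CLI:
$sudo docker logs elasticsearch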
Kibana: a visualization tool that connects to Elasticsearch
Since Elasticsearch is already running, we need to point the Kibana container at the Elasticsearch container.
$sudo docker run --name kibana -d -p 5601:5601 --link elasticsearch:elasticsearch kibana
2090b8df39a44016e401e8b2fb4e2d79f4d674e9f02ae5794f1ed484fb28913e
$ sudo docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2090b8df39a4 kibana "/docker-entrypoint.s" 7 seconds ago Up 4 seconds 0.0.0.0:5601->5601/tcp kibana
$ curl http://localhost:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;
}</script>[sunlnx@fedora ~]$
Point your browser to http://localhost:5601, which redirects to Kibana's default index page.
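You can also check Kibana's health from the command line; this assumes Kibana's /api/status endpoint, which returns a JSON status document:
$curl http://localhost:5601/api/status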
Logstash:
We will take input from stdin and have it reflected on the Kibana dashboard.
Create a configuration file and start the container.
$cd $PWD/logstash/
$cat logstash.conf
# input from keyboard
input {
  stdin {}
}
# output to the elasticsearch container
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
The config references elasticsearch by its container name, so we link the two containers together; we also mount the current directory into the container so Logstash can read the config file.
$sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -v $PWD:/config logstash -f /config/logstash.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
10:41:27.928 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
10:41:28.003 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"b359fa87-aef2-4cb1-a533-84767399a0f7", :path=>"/var/lib/logstash/uuid"}
10:41:29.340 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs u
.
.
<snip>
this is test1
this is test2
this is test3
Logstash pushes each message to Elasticsearch, which creates an index for them. Refresh the Kibana dashboard and you should see the new index pattern; click 'Create' and proceed.
Click on 'Discover', and you should see all the messages typed at the keyboard.
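To confirm the index from the Elasticsearch side, you can list the indices directly; logstash-* is the default index name pattern used by Logstash's elasticsearch output:
$curl 'http://localhost:9200/_cat/indices?v'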
Example 2:
You can also create your own configuration that listens on a TCP port, map it to your localhost, and send messages over telnet; you should see the logs in the Kibana dashboard.
$cat portfw.conf
input {
  tcp {
    port => 9500
  }
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
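To run this, start Logstash as before, but also publish port 9500 so telnet can reach it from the host (a sketch assuming portfw.conf sits in the same $PWD directory as above):
$sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -p 9500:9500 -v $PWD:/config logstash -f /config/portfw.conf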
$telnet localhost 9500
<type your message>
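To verify the message reached Elasticsearch without going through Kibana, you can run a quick search against the default logstash-* index pattern:
$curl 'http://localhost:9200/logstash-*/_search?q=message:*&pretty'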
I hope this makes configuring the ELK stack easy for you. You can find more Logstash configuration examples at https://www.elastic.co/guide/en/logstash/current/config-examples.html
Thanks