Sunday, 12 November 2017

​Docker Swarm Basic Tutorial

I have tried my best to provide a basic tutorial on Docker Swarm: how to configure worker nodes, join them to the manager (master) node, and run and orchestrate services through Docker Swarm.
After reading this blog, you should have a good basic idea of container orchestration.

What is container Orchestration ?

Container orchestration systems are where the next wave of action is likely to be in the movement towards building, shipping and running containers at scale. The most popular software currently providing solutions in this space includes Kubernetes, Docker Swarm and others.

Why do we need container orchestration ?

Imagine you had to run hundreds of containers: you would want to easily see whether they are running, run them in a distributed mode, ensure your cluster stays up and running, and so on.

A few of these features are:

-  Health checks on the containers
-  Launching an exact count of containers for a particular Docker image
-  Scaling the number of containers up and down depending on the load
-  Performing rolling updates of software across containers
-  ... and more

I assume the reader is already familiar with basic Docker commands.

The first step is to create a set of Docker machines that will act as nodes in our Docker Swarm. I am going to create 5 Docker machines, where one of them will act as the Manager (Leader) and the others will be worker nodes (a provisioning sketch follows the node list below).

I will be using the following hostnames for the Docker machines, with their roles:

node1(192.168.0.28) <- Manager
 
node2(192.168.0.27) <- Worker
node3(192.168.0.26) <- Worker
node4(192.168.0.25) <- Worker
node5(192.168.0.24) <- Worker
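If you do not already have these machines, Docker Machine can provision them. Below is a minimal sketch using the generic driver; the driver, SSH user and the loop itself are my assumptions, so adjust them to however you actually build your hosts (VirtualBox, a cloud driver, plain VMs, etc.):

for i in 1 2 3 4 5; do
    docker-machine create --driver generic \
        --generic-ip-address=192.168.0.$((29 - i)) \
        --generic-ssh-user root \
        node$i
done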

Creating Swarm Cluster

Once your machines are set up, you can proceed with setting up the swarm. The first thing to do is to initialize the swarm. Log in to node1 and initialize it:

root@node1# docker swarm init --advertise-addr 192.168.0.28
Swarm initialized: current node (wga1nmopjbir2ks92rxntj9dz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
root@node1#

You will also notice that the output prints the docker swarm join command to use if you want another node to join as a worker. Keep in mind that a node can join either as a worker or as a manager. At any point in time there is only one LEADER; the other manager nodes act as backups in case the current LEADER goes down.

root@node1# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wga1nmopjbir2ks92rxntj9dz *   node1               Ready               Active            Leader
root@node1#

You can use the tokens to join other nodes as either a manager or a worker. See below for how to get the tokens.

Joining as Worker Node

To find out the join command for a worker, run the command below:

root@node1# docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
root@node1#

Joining as Manager Node

root@node1# docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-a9j3npou6i5pb90fs9h92ez3u 192.168.0.28:2377
root@node1#

Adding worker nodes to Swarm

You need to SSH to node{2..5} and join them as workers:

root@node2# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node2#

root@node3# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node3#

root@node4# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node4#

root@node5# docker swarm join --token SWMTKN-1-3j0boiejfa0dhp82ar51fl1yvsscuwk3n1jppvzafvbv6mycyo-13fkd65lfhh2d1yb31fd59tuk 192.168.0.28:2377
This node joined a swarm as a worker.
root@node5#

Log in to the manager node (node1) and check the status:

root@node1# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wga1nmopjbir2ks92rxntj9dz *   node1               Ready               Active         Leader
4wzc1nesi1hxa3ekq753w0koq     node2               Ready               Active
4nrsuwslw149mg4nwlipmxxlo     node3               Ready               Active
jbdizz7gvfdz03shtgu23h8kv     node4               Ready               Active
afpi9osujgcsqkdiqg4tnz6nb     node5               Ready               Active
root@node1#

You can also execute 'docker info' and check out the Swarm section:

Swarm: active
 NodeID: wga1nmopjbir2ks92rxntj9dz
 Is Manager: true
 ClusterID: 7xpyowph2y1q2q12rhrkzzfei
 Managers: 1  ==> 1 manager among the 5 active nodes in the swarm
 Nodes: 5
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.0.28
 Manager Addresses:
  
Testing Swarm orchestration
 
We have the swarm up and running, so it is time to schedule our containers on it. We will not focus on the application itself, and we do not need to worry about where the application is going to run.
We simply tell the manager to run the containers for us, and it takes care of scheduling them, sending the commands to the nodes and distributing the work.

We will run an 'nginx' container and expose port 80. We can specify the number of instances to launch via '--replicas'.

Everything below is run from node1, the manager node, which we also use for administration.

root@node1# docker service create --replicas 5 -p 80:80 --name web nginx
lwvjgxcxrg5wqw2fwrgzosnnw 
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
root@node1#

You can now see the status of the service being orchestrated across the different nodes:

root@node1# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
lwvjgxcxrg5w        web                 replicated          8/8                 nginx:latest        *:80->80/tcp
root@node1#

root@node1# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
l00c9u3kmfbn        web.1               nginx:latest        node1               Running             Running 5 minutes ago
nzzi7z3wvikl        web.2               nginx:latest        node2               Running             Running 5 minutes ago
91uizczo4dp3        web.3               nginx:latest        node3               Running             Running 5 minutes ago
nor39f9gu3j2        web.4               nginx:latest        node4               Running             Running 5 minutes ago
mtn3et6gu5on        web.5               nginx:latest        node5               Running             Running 5 minutes ago
root@node1#

Accessing service

You can access the service by hitting any of the manager or worker nodes. It does not matter if a particular node has no container scheduled on it; the swarm routing mesh forwards the request to a node that does.


root@node1# curl http://192.168.0.28:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@node1#
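The routing mesh also answers on the published port of every other node, even one that happens to have no web task at that moment. A quick spot check against node5's address:

root@node1# curl -s http://192.168.0.24:80 | grep title
<title>Welcome to nginx!</title>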

Scaling up and Scaling down

Currently we are running 5 containers. Let's assume we require 10; fire the command below to scale up.

root@node1# docker service scale web=10

root@node1# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
l00c9u3kmfbn        web.1               nginx:latest        node1               Running             Running 18 minutes ago
nzzi7z3wvikl        web.2               nginx:latest        node2               Running             Running 18 minutes ago
91uizczo4dp3        web.3               nginx:latest        node3               Running             Running 18 minutes ago
nor39f9gu3j2        web.4               nginx:latest        node4               Running             Running 18 minutes ago
mtn3et6gu5on        web.5               nginx:latest        node5               Running             Running 18 minutes ago
t1reh7fnzfg9        web.6               nginx:latest        node3               Running             Running 10 minutes ago
qiyegknsz90w        web.7               nginx:latest        node1               Running             Running 10 minutes ago
zqsafvln29ft        web.8               nginx:latest        node2               Running             Running 10 minutes ago
mx6ezdng0n1i        web.9               nginx:latest        node4               Running             Running 16 seconds ago
xrmugsonlbvl        web.10              nginx:latest        node5               Running             Running 16 seconds ago
root@node1# 

You can scale down with the same command; it will reduce the number of containers running on the nodes.
root@node1# docker service scale web=5 

Draining node

Whenever a node is ACTIVE, it is always ready to accept tasks from the manager.

root@node1# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wga1nmopjbir2ks92rxntj9dz *   node1               Ready               Active              Leader
4wzc1nesi1hxa3ekq753w0koq     node2               Ready               Active
4nrsuwslw149mg4nwlipmxxlo     node3               Ready               Active
jbdizz7gvfdz03shtgu23h8kv     node4               Ready               Active
afpi9osujgcsqkdiqg4tnz6nb     node5               Ready               Active
root@node1#

When the node is active, it can receive new tasks:
- during a service update to scale up
- during a rolling update
- when you set another node to Drain availability
- when a task fails on another active node

But sometimes we have to bring a node down for maintenance. This is done by setting its availability to Drain mode. Let us try that with one of our nodes.

Since there are 10 containers running, each node has 2 containers on it. Let's drain one of the nodes and see how its containers are re-orchestrated without disrupting the service.

root@node1# docker inspect node5
.
.
<snip>
        "Spec": {
            "Labels": {},
            "Role": "worker",
            "Availability": "active"  ==> status set to active
.
.
<snip>
            }
        },
        "Status": {
            "State": "ready",
            "Addr": "192.168.0.24"
        }
<snip>


root@node1# docker node update --availability drain node5
node5
root@node1#

root@node1# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                     ERROR               PORTS
l00c9u3kmfbn        web.1               nginx:latest        node1               Running             Running 30 minutes ago
nzzi7z3wvikl        web.2               nginx:latest        node2               Running             Running 30 minutes ago
91uizczo4dp3        web.3               nginx:latest        node3               Running             Running 30 minutes ago
nor39f9gu3j2        web.4               nginx:latest        node4               Running             Running 30 minutes ago
e5pdetk2lghs        web.5               nginx:latest        node3               Running             Running less than a second ago
mtn3et6gu5on         \_ web.5           nginx:latest        node5               Shutdown            Shutdown 1 second ago
t1reh7fnzfg9        web.6               nginx:latest        node3               Running             Running 21 minutes ago
qiyegknsz90w        web.7               nginx:latest        node1               Running             Running 21 minutes ago
zqsafvln29ft        web.8               nginx:latest        node2               Running             Running 21 minutes ago
mx6ezdng0n1i        web.9               nginx:latest        node4               Running             Running 11 minutes ago
m23nbrdvb635        web.10              nginx:latest        node4               Running             Running less than a second ago
xrmugsonlbvl         \_ web.10          nginx:latest        node5               Shutdown            Shutdown less than a second ago
root@node1#

root@node1# docker inspect node5
<snip>
"Spec": {
            "Labels": {},
            "Role": "worker",
            "Availability": "drain"
<snip>

You can now see that the containers on node5 have been rescheduled onto node3 and node4.
You can now get on with the maintenance on node5, and once it is back up you need to set it to the Active state again. Once Active, the node can receive new tasks, although existing tasks are not rebalanced onto it automatically.
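For example, once the maintenance window on node5 is over (the second command is only a suggestion for forcing the existing tasks to be reshuffled; Swarm will not rebalance them on its own):

root@node1# docker node update --availability active node5
node5
root@node1# docker service update --force web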

Remove service

You can remove the service with the 'rm' subcommand:
root@node1# docker service rm web

Rolling upgrade

root@node1# docker service update --image <imagename>:<version> web
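As a hedged example, the flags below control how a staged rollout proceeds; the image tag and the parallelism/delay values are arbitrary choices for illustration, not part of the original setup:

root@node1# docker service update --image nginx:1.13 --update-parallelism 2 --update-delay 10s web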

Thanks for re-sharing...

Saturday, 9 September 2017

Configure Mail Server & Setting up Mail Client: Fedora Linux 25

Here, I will show you how to configure a mail server (MTA) and a mail client (MUA) using Evolution, which is the default client on Fedora.

Environment: Fedora 25

We will go over a few components before we set up the email configuration.

MUA (Mail User Agent) or mail client: the application used to write/send/read email messages,
e.g. Evolution, KMail, Outlook, or text-based mail clients like pine, mail, etc.

MTA (Mail Transfer Agent): transfers email messages from one computer to another (intranet or Internet). We will be configuring Postfix in this section.

MDA (Mail Delivery Agent): receives emails from the MTA and delivers them to the relevant mailbox for the MUA (e.g. Dovecot). Some popular MDAs can also remove unwanted or spam messages before they reach the MUA inbox (e.g. Procmail, etc.).

SMTP (Simple Mail Transfer Protocol): the language MTAs use to talk to each other to transfer messages back and forth.

Architecture


Configuring MTA

Log in as 'root' to perform the steps below.
Note: SELinux was disabled.

- Install Postfix:
#dnf install postfix -y

- Take a backup of the original file, then recreate main.cf with the contents below, adjusting the values (hostname, domain, networks) to your own infrastructure.

#mv /etc/postfix/main.cf /etc/postfix/main.cf.original

# cat /etc/postfix/main.cf
compatibility_level = 2
queue_directory = /var/spool/postfix
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
mail_owner = postfix
myhostname = fedora.localhost.com
mydomain = localhost.com
myorigin = $mydomain
inet_interfaces = $myhostname
inet_protocols = ipv4
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
unknown_local_recipient_reject_code = 550
mynetworks = 192.168.122.0/24, 127.0.0.0/8, 10.0.0.0/24
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
smtpd_banner = $myhostname ESMTP
debug_peer_level = 2
debugger_command =
         PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
         ddd $daemon_directory/$process_name $process_id & sleep 5
sendmail_path = /usr/sbin/sendmail.postfix
newaliases_path = /usr/bin/newaliases.postfix
mailq_path = /usr/bin/mailq.postfix
setgid_group = postdrop
html_directory = no
manpage_directory = /usr/share/man
sample_directory = /usr/share/doc/postfix/samples
readme_directory = /usr/share/doc/postfix/README_FILES
meta_directory = /etc/postfix
shlib_directory = /usr/lib64/postfix
message_size_limit = 10485760
mailbox_size_limit = 1073741824
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
smtpd_recipient_restrictions = permit_mynetworks,permit_auth_destination,permit_sasl_authenticated,reject

- Start the service and make it persistent across reboots:
# systemctl start postfix
# systemctl enable postfix

- If your firewall is running, allow the 'smtp' service:
#firewall-cmd --add-service=smtp --permanent
#firewall-cmd --reload
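Before moving on, it is worth a quick sanity check that Postfix is listening and can deliver a local message. The 'mail' command below assumes the mailx package is installed; install it first if it is not:

#ss -tlnp | grep ':25'
#dnf install mailx -y
#echo "test body" | mail -s "postfix test" root
#journalctl -u postfix -n 20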

Configuring MDA

- Install Dovecot:
#dnf install dovecot -y

- Take a backup of each original file, then recreate the files with the contents below, adjusting the values to your own setup.

#mv /etc/dovecot/dovecot.conf /etc/dovecot/dovecot.conf.original
#mv /etc/dovecot/conf.d/10-auth.conf /etc/dovecot/conf.d/10-auth.conf.original
#mv /etc/dovecot/conf.d/10-mail.conf /etc/dovecot/conf.d/10-mail.conf.original
#mv /etc/dovecot/conf.d/10-master.conf /etc/dovecot/conf.d/10-master.conf.original
#mv /etc/dovecot/conf.d/10-ssl.conf /etc/dovecot/conf.d/10-ssl.conf.original

# cat /etc/dovecot/dovecot.conf 
protocols = imap pop3 lmtp
listen = *,::
dict {
}
!include conf.d/*.conf
!include_try local.conf
#

# cat /etc/dovecot/conf.d/10-auth.conf
disable_plaintext_auth = no
auth_mechanisms = plain login
!include auth-system.conf.ext
#

# cat /etc/dovecot/conf.d/10-mail.conf

mail_location = maildir:~/Maildir
namespace inbox {
  inbox = yes
}
protocol !indexer-worker {
}
mbox_write_locks = fcntl
#

# cat /etc/dovecot/conf.d/10-master.conf
service imap-login {
  inet_listener imap {
  }
  inet_listener imaps {
  }
}
service pop3-login {
  inet_listener pop3 {
  }
  inet_listener pop3s {
  }
}
service lmtp {
  unix_listener lmtp {
  }
}
service imap {
}
service pop3 {
}
service auth {
  unix_listener auth-userdb {
  }
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
}
service auth-worker {
}
service dict {
  unix_listener dict {
  }
}
#

# cat /etc/dovecot/conf.d/10-ssl.conf
ssl = required
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
ssl_cipher_list = PROFILE=SYSTEM
#

- Start the service and make it persistent across reboots:
# systemctl start dovecot
# systemctl enable dovecot

- If your firewall is running, allow the 'pop3' and 'imap' services:
#firewall-cmd --add-service={pop3,imap} --permanent
#firewall-cmd --reload
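To verify Dovecot, check that the POP3/IMAP listeners are up and that authentication works for a local user (replace 'testuser' with a real account; doveadm prompts for the password):

#ss -tlnp | egrep ':110|:143|:993|:995'
#doveadm auth test testuser

Note: since mail_location points at maildir:~/Maildir, you may also want to add 'home_mailbox = Maildir/' to Postfix's main.cf so that local delivery lands in the same mailbox Dovecot reads; otherwise Postfix keeps its default mbox spool.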

Configure MUA

Click on 'Evolution' and configure it as shown below ...

Edit -> Preferences -> Add -> Next





Leave the rest at their defaults, continue, and click [OK].

Testing

Compose an email and send it to yourself to read :)




Thanks

Saturday, 26 August 2017

Installation of Jenkins in Linux

In this blog, I will summarize the installation of Jenkins in a few steps:

1. Jenkins is a Java-based application, hence install Java 8
2. Tomcat is required to deploy the Jenkins WAR file, hence install Apache Tomcat 9
3. Download the Jenkins WAR file
4. Deploy the Jenkins WAR file using Tomcat
5. Install the suggested plugins

Environment: CentOS / Redhat

- Install latest Java version

#yum install java-1.8.0-openjdk

- Download Apache Tomcat 


#tar -xzvf apache-tomcat-9.0.0.M10.tar.gz

#mv apache-tomcat-9.0.0.M10 apache9
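Step 3 is to fetch the Jenkins WAR file itself. The mirror URL below is the usual stable location, but treat it as an assumption and confirm the current link on jenkins.io:

#wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war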

- Provide username and password for Apache Tomcat

#mv apache9/conf/tomcat-users.xml apache9/conf/tomcat-users.xml.original
#vim apache9/conf/tomcat-users.xml


<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<role rolename="manager-jmx"/>
<role rolename="manager-status"/>
<role rolename="admin-gui"/>
<role rolename="admin-script"/>
<user username="jenkins" password="jenkins" roles="manager-gui,manager-script,manager-jmx,manager-status,admin-gui,admin-script"/>
</tomcat-users>


#cd apache9/bin
#./startup.sh

[user@localhost bin]$ ./startup.sh
Using CATALINA_BASE:   /home/vagrant/tomcat9
Using CATALINA_HOME:   /home/vagrant/tomcat9
Using CATALINA_TMPDIR: /home/vagrant/tomcat9/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /home/vagrant/tomcat9/bin/bootstrap.jar:/home/vagrant/tomcat9/bin/tomcat-juli.jar
Tomcat started.
[user@localhost bin]$

- Open a local browser and point it to http://localhost:8080 to display the Apache Tomcat page.


In order to deploy the Jenkins WAR file, click on "Manager App" and provide the credentials written in tomcat-users.xml.
In the Deploy section, enter your context path (e.g. /jenkins) and the WAR file to be deployed.


Once the above is completed, the WAR is deployed; in the Applications list, click on Jenkins. You then need to unlock Jenkins by providing the initial password located under the Jenkins home in your home directory.
#cat /home/user/.jenkins/secrets/initialAdminPassword


Once it is unlocked, install the suggested plugins and get started. Create your first admin user and Jenkins is ready.



Saturday, 10 June 2017

​Configure ELK stack for centralized log management

I have a two-node CentOS 7 setup: Elasticsearch, Logstash and Kibana (ELK) are configured on the first node, and Filebeat is configured on the second node to send all its logs to Logstash.

Hostnames : node1 & node2
Environment : CentOS 7.3
RPM versions : elasticsearch/kibana/logstash/filebeat - 5.4.1

Minimum requirements: ensure the Java package is installed and that the Elasticsearch server has sufficient memory for the number of clients you are trying to configure.

We will install Elasticsearch, Logstash and Kibana together on a single node (node1), and the client (node2) will forward all the logs under /var/log/* to Logstash on node1.

Let's start ...

node1:

Download the RPMs and install them using yum; Kibana ships as a tarball, which we will extract to a directory.
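For reference, the 5.4.1 artifacts can be downloaded roughly as follows; the URLs assume the elastic.co artifact layout for the 5.x series, so verify them against the current download pages:

[root@node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.rpm
[root@node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.4.1.rpm
[root@node1 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.1-linux-x86_64.tar.gz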


Install Elasticsearch:
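Assuming the RPM downloaded above is in the current directory, install it first:

[root@node1 ~]# yum localinstall elasticsearch-5.4.1.rpm -y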

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@node1 ~]# systemctl start elasticsearch.service
[root@node1 ~]# systemctl status elasticsearch.service

Install kibana:

[root@node1 ~]# tar -xzvf kibana-5.4.1-linux-x86_64.tar.gz -C /usr/local
[root@node1 ~]# cd /usr/local/
[root@node1 ~]# mv kiba* kibana

Create a systemd unit so Kibana starts when the system boots (run 'systemctl daemon-reload' after creating it):

[root@node1 ~]# vim /etc/systemd/system/kibana.service
[Unit]
Description=Kibana

[Service]
ExecStart=/usr/local/kibana/bin/kibana

[Install]
WantedBy=multi-user.target

[root@node1 system]# systemctl enable kibana.service
[root@node1 system]# systemctl start kibana.service
[root@node1 system]# systemctl status kibana.service

Now you can point your browser at http://<node1 IP>:5601 to get to the Kibana dashboard.

Install Logstash:
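Install the Logstash RPM the same way (filename assumed from the download step above), then create the pipeline configuration:

[root@node1 ~]# yum localinstall logstash-5.4.1.rpm -y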

[root@node1 ~]# cat /etc/logstash/conf.d/logstash.conf
input {
   beats {
    port => 5044
    type => "logs"
  }
}

filter {
  if [type] == "syslog" {
   grok {
     match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"}
     add_field => [ "received_at", "%{@timestamp}" ]
     add_field => [ "received_from", "%{hosts}" ]
    }
    syslog_pri { }
    date {
     match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
   }
}

output {
  elasticsearch {hosts => localhost}
  stdout { codec => rubydebug }
}
[root@node1 ~]#

[root@node1 ~]# systemctl enable logstash
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@node1 ~]# systemctl start logstash.service
[root@node1 ~]# systemctl status logstash.service

node2:

Install filebeat:

Download and install filebeat RPM
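The RPM can be fetched from the same artifact repository (URL assumed from the elastic.co 5.x layout):

[root@node2 ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.1-x86_64.rpm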


[root@node2 ~]# yum localinstall filebeat-5.4.1-x86_64.rpm
[root@node2 ~]# systemctl enable filebeat.service
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.
[root@node2 ~]# systemctl start filebeat.service
[root@node2 ~]# systemctl status filebeat.service

Edit the Filebeat configuration so that it sends its output to your Logstash server.

[root@node2 ~]# vim /etc/filebeat/filebeat.yml
 91 output.logstash:
 92   # The Logstash hosts
 93   hosts: ["192.168.122.100:5044"]
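Filebeat also needs a prospector telling it which files to ship. The stock filebeat.yml in the 5.x series already contains a log prospector for /var/log/*.log, so it is enough to confirm it is present and to comment out the default output.elasticsearch block; a sketch of the relevant parts, based on the 5.x defaults:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log

#output.elasticsearch:
#  hosts: ["localhost:9200"]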

Errors and info: logs you need to check if something goes wrong

node1:

elasticsearch:

[root@node1 ~]# cat /var/log/elasticsearch/elasticsearch.log

[2017-06-09T18:11:34,760][INFO ][o.e.n.Node               ] [] initializing ...
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.4gb], net total_space [6.1gb], spins? [unknown], types [rootfs]
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-06-09T18:11:34,976][INFO ][o.e.n.Node               ] node name [sfAWP7D] derived from node ID [sfAWP7DYQpiQTVtuRd0DFw]; set [node.name] to override
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] version[5.4.1], pid[2966], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b12]
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-06-09T18:11:37,704][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [aggs-matrix-stats]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [ingest-common]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-expression]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-groovy]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-mustache]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-painless]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [percolator]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [reindex]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty3]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty4]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] no plugins loaded
[2017-06-09T18:11:42,393][INFO ][o.e.d.DiscoveryModule    ] [sfAWP7D] using discovery type [zen]
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] initialized
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] [sfAWP7D] starting ...
[2017-06-09T18:11:44,190][INFO ][o.e.t.TransportService   ] [sfAWP7D] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-06-09T18:11:47,411][INFO ][o.e.c.s.ClusterService   ] [sfAWP7D] new_master {sfAWP7D}{sfAWP7DYQpiQTVtuRd0DFw}{CNOzn0gIRBqP_pVHl-xusQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)

logstash:


[root@node1 ~]# cat /var/log/logstash/logstash-plain.log
[2017-06-09T20:44:22,105][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}

[2017-06-09T20:44:22,135][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-06-09T20:44:22,395][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x554cf8d1 URL:http://localhost:9200/>}
[2017-06-09T20:44:22,422][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-06-09T20:44:22,567][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-06-09T20:44:22,599][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x74f64db6 URL://localhost>]}
[2017-06-09T20:44:22,813][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-06-09T20:44:23,857][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-06-09T20:44:23,978][INFO ][logstash.pipeline        ] Pipeline main started
[2017-06-09T20:44:24,112][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

node2:

filebeat:

[root@node2 ~]# head -100 /var/log/filebeat/filebeat
2017-06-09T20:37:53+05:30 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-06-09T20:37:53+05:30 INFO Setup Beat: filebeat; Version: 5.4.1
2017-06-09T20:37:53+05:30 INFO Max Retries set to: 3
2017-06-09T20:37:53+05:30 INFO Activated logstash as output plugin.
2017-06-09T20:37:53+05:30 INFO Publisher name: cen02.elktest.com
2017-06-09T20:37:53+05:30 INFO Flush Interval set to: 1s
2017-06-09T20:37:53+05:30 INFO Max Bulk Size set to: 2048
2017-06-09T20:37:53+05:30 INFO filebeat start running.
2017-06-09T20:37:53+05:30 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-06-09T20:37:53+05:30 INFO Metrics logging every 30s
2017-06-09T20:37:53+05:30 INFO Loading registrar data from /var/lib/filebeat/registry
2017-06-09T20:37:53+05:30 INFO States Loaded from registrar: 0
2017-06-09T20:37:53+05:30 INFO Loading Prospectors: 1
2017-06-09T20:37:53+05:30 INFO Prospector with previous states loaded: 0
2017-06-09T20:37:53+05:30 INFO Starting prospector of type: log; id: 17005676086519951868
2017-06-09T20:37:53+05:30 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-06-09T20:37:53+05:30 INFO Starting Registrar
2017-06-09T20:37:53+05:30 INFO Start sending events to output
2017-06-09T20:37:53+05:30 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/wpa_supplicant.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/yum.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions-uninstall.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/Xorg.0.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/boot.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/vboxadd-install.log



Thanks