Saturday, 10 June 2017

Configure ELK stack for centralized log management

I had two CentOS 7 nodes: Elasticsearch, Logstash, and Kibana (ELK) configured on the first node, and Filebeat configured on the second node to ship all logs to Logstash.

Hostnames : node1 & node2
Environment : CentOS 7.3
RPM versions : elasticsearch/kibana/logstash/filebeat - 5.4.1

Minimum requirements: ensure the Java package is installed and that the Elasticsearch server has sufficient memory for the number of clients you plan to configure.

We will install Elasticsearch, Logstash, and Kibana together on a single node (node1), and the client (node2) will forward all logs under /var/log/* to Logstash on node1.

Let's start...

node1:

Download the RPMs and install them with yum; Kibana ships as a tarball, which we will extract into a directory.


Install Elasticsearch:
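
The Elasticsearch RPM is installed first; the filename below is assumed from the 5.4.1 downloads used throughout this post:

[root@node1 ~]# yum localinstall elasticsearch-5.4.1.rpm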

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@node1 ~]# systemctl start elasticsearch.service
[root@node1 ~]# systemctl status elasticsearch.service
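
Once the service is up, a quick query of the REST port confirms Elasticsearch is answering:

[root@node1 ~]# curl http://localhost:9200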

Install kibana:

[root@node1 ~]# tar -xzvf kibana-5.4.1-linux-x86_64.tar.gz -C /usr/local
[root@node1 ~]# cd /usr/local/
[root@node1 ~]# mv kiba* kibana
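
Kibana 5.x reads its Elasticsearch endpoint from config/kibana.yml; the default already points at the local node, so nothing needs to change here:

[root@node1 ~]# grep elasticsearch.url /usr/local/kibana/config/kibana.yml
#elasticsearch.url: "http://localhost:9200"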

Create a systemd unit file so Kibana starts when the system boots:

[root@node1 ~]# vim /etc/systemd/system/kibana.service
[Unit]
Description=Kibana

[Service]
ExecStart=/usr/local/kibana/bin/kibana

[Install]
WantedBy=multi-user.target

[root@node1 system]# systemctl daemon-reload
[root@node1 system]# systemctl enable kibana.service
[root@node1 system]# systemctl start kibana.service
[root@node1 system]# systemctl status kibana.service

Now point your browser at http://<node1-ip>:5601 to reach the Kibana dashboard.

Install Logstash:
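
Install the Logstash RPM the same way (filename again assumed from the 5.4.1 set), then create the pipeline configuration:

[root@node1 ~]# yum localinstall logstash-5.4.1.rpm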

[root@node1 ~]# cat /etc/logstash/conf.d/logstash.conf
input {
  beats {
    port => 5044
    type => "logs"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
[root@node1 ~]#
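
Before starting the service, it is worth validating the pipeline syntax; with the 5.x CLI (paths assume the default RPM layout) a quick check looks like this:

[root@node1 ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d/logstash.conf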

[root@node1 ~]# systemctl enable logstash
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@node1 ~]# systemctl start logstash.service
[root@node1 ~]# systemctl status logstash.service

node2:

Install filebeat:

Download and install filebeat RPM


[root@node2 ~]# yum localinstall filebeat-5.4.1-x86_64.rpm
[root@node2 ~]# systemctl enable filebeat.service
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.
[root@node2 ~]# systemctl start filebeat.service
[root@node2 ~]# systemctl status filebeat.service

Edit the Filebeat configuration so that it sends logs to your Logstash server.

[root@node2 ~]# vim /etc/filebeat/filebeat.yml
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.122.100:5044"]

Make sure the default output.elasticsearch section is commented out, so events go only to Logstash.
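
Note that the Logstash filter above only fires for events whose type is "syslog", while Filebeat 5.x tags events with type "log" by default. A sketch of setting the type per prospector in the same file (paths are whatever you ship):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*
  document_type: syslog

Restart Filebeat so the changes take effect:

[root@node2 ~]# systemctl restart filebeat.service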

Troubleshooting: logs to check if something goes wrong

node1:

elasticsearch:

[root@node1 ~]# cat /var/log/elasticsearch/elasticsearch.log

[2017-06-09T18:11:34,760][INFO ][o.e.n.Node               ] [] initializing ...
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.4gb], net total_space [6.1gb], spins? [unknown], types [rootfs]
[2017-06-09T18:11:34,975][INFO ][o.e.e.NodeEnvironment    ] [sfAWP7D] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-06-09T18:11:34,976][INFO ][o.e.n.Node               ] node name [sfAWP7D] derived from node ID [sfAWP7DYQpiQTVtuRd0DFw]; set [node.name] to override
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] version[5.4.1], pid[2966], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b12]
[2017-06-09T18:11:34,977][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2017-06-09T18:11:37,704][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [aggs-matrix-stats]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [ingest-common]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-expression]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-groovy]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-mustache]
[2017-06-09T18:11:37,716][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [lang-painless]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [percolator]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [reindex]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty3]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] loaded module [transport-netty4]
[2017-06-09T18:11:37,717][INFO ][o.e.p.PluginsService     ] [sfAWP7D] no plugins loaded
[2017-06-09T18:11:42,393][INFO ][o.e.d.DiscoveryModule    ] [sfAWP7D] using discovery type [zen]
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] initialized
[2017-06-09T18:11:43,857][INFO ][o.e.n.Node               ] [sfAWP7D] starting ...
[2017-06-09T18:11:44,190][INFO ][o.e.t.TransportService   ] [sfAWP7D] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-06-09T18:11:47,411][INFO ][o.e.c.s.ClusterService   ] [sfAWP7D] new_master {sfAWP7D}{sfAWP7DYQpiQTVtuRd0DFw}{CNOzn0gIRBqP_pVHl-xusQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)

logstash:


[root@node1 ~]# cat /var/log/logstash/logstash-plain.log
[2017-06-09T20:44:22,105][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}

[2017-06-09T20:44:22,135][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-06-09T20:44:22,395][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x554cf8d1 URL:http://localhost:9200/>}
[2017-06-09T20:44:22,422][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-06-09T20:44:22,567][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-06-09T20:44:22,599][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x74f64db6 URL://localhost>]}
[2017-06-09T20:44:22,813][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-06-09T20:44:23,857][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-06-09T20:44:23,978][INFO ][logstash.pipeline        ] Pipeline main started
[2017-06-09T20:44:24,112][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

node2:

filebeat:

[root@node2 ~]# head -100 /var/log/filebeat/filebeat
2017-06-09T20:37:53+05:30 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-06-09T20:37:53+05:30 INFO Setup Beat: filebeat; Version: 5.4.1
2017-06-09T20:37:53+05:30 INFO Max Retries set to: 3
2017-06-09T20:37:53+05:30 INFO Activated logstash as output plugin.
2017-06-09T20:37:53+05:30 INFO Publisher name: cen02.elktest.com
2017-06-09T20:37:53+05:30 INFO Flush Interval set to: 1s
2017-06-09T20:37:53+05:30 INFO Max Bulk Size set to: 2048
2017-06-09T20:37:53+05:30 INFO filebeat start running.
2017-06-09T20:37:53+05:30 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-06-09T20:37:53+05:30 INFO Metrics logging every 30s
2017-06-09T20:37:53+05:30 INFO Loading registrar data from /var/lib/filebeat/registry
2017-06-09T20:37:53+05:30 INFO States Loaded from registrar: 0
2017-06-09T20:37:53+05:30 INFO Loading Prospectors: 1
2017-06-09T20:37:53+05:30 INFO Prospector with previous states loaded: 0
2017-06-09T20:37:53+05:30 INFO Starting prospector of type: log; id: 17005676086519951868
2017-06-09T20:37:53+05:30 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-06-09T20:37:53+05:30 INFO Starting Registrar
2017-06-09T20:37:53+05:30 INFO Start sending events to output
2017-06-09T20:37:53+05:30 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/wpa_supplicant.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/yum.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions-uninstall.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/VBoxGuestAdditions.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/Xorg.0.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/boot.log
2017-06-09T20:37:53+05:30 INFO Harvester started for file: /var/log/vboxadd-install.log
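
Once Filebeat starts shipping events, verify on node1 that Logstash has created an index in Elasticsearch (default index names follow the logstash-* pattern), then create the matching logstash-* index pattern in Kibana:

[root@node1 ~]# curl 'localhost:9200/_cat/indices?v'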



Thanks

Monday, 8 May 2017

Configure ELK using Docker

I want to keep the services in three separate containers and link the containers together to form the ELK stack.
Installation is very easy: we will pull the Docker images and run the containers.

Elasticsearch: 

$sudo docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 elasticsearch
9c7d52445691015e21a7007e35aca935b9a0dcbbd9560f170cc07c8adc08ae63

$ sudo docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
9c7d52445691        elasticsearch       "/docker-entrypoint.s"   30 seconds ago      Up 30 seconds       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch
$

Test your configuration by querying Elasticsearch:

$ curl http://localhost:9200

{
  "name" : "PIvLNU_",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Rx7oxSxESvqo-8GNXkDzCA",
  "version" : {
    "number" : "5.3.1",
    "build_hash" : "5f9cf58",
    "build_date" : "2017-04-17T15:52:53.846Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.2"
  },
  "tagline" : "You Know, for Search"
}

Kibana: a visualization tool that connects to Elasticsearch

Since Elasticsearch is already running, we need to point the Kibana container at the Elasticsearch container.

$sudo docker run --name kibana -d -p 5601:5601 --link elasticsearch:elasticsearch kibana
2090b8df39a44016e401e8b2fb4e2d79f4d674e9f02ae5794f1ed484fb28913e

$ sudo docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2090b8df39a4        kibana              "/docker-entrypoint.s"   7 seconds ago       Up 4 seconds        0.0.0.0:5601->5601/tcp   kibana

A quick curl against port 5601 shows Kibana answering with its redirect script:

$ curl http://localhost:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}</script>[sunlnx@fedora ~]$

Point your browser to http://localhost:5601, which will redirect to the Kibana default index page.



Logstash:

We will take input from the keyboard (stdin) and have it reflected on the Kibana dashboard.
Create a configuration file and start the container.

$cd $PWD/logstash/
$cat logstash.conf

# input from keyboard
input {
  stdin {}
}

# output to the elasticsearch container
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}

We reference Elasticsearch by its container hostname, and link the two containers together.

$sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -v $PWD:/config logstash -f /config/logstash.conf

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
10:41:27.928 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
10:41:28.003 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"b359fa87-aef2-4cb1-a533-84767399a0f7", :path=>"/var/lib/logstash/uuid"}
10:41:29.340 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs u
.
.
<snip>
this is test1
this is test2
this is test3


Logstash pushes each message to Elasticsearch, which creates an index. Refresh the Kibana dashboard and you will see the new index pattern; click 'Create' and proceed.

Click on 'Discover', and you will see all the messages typed at the keyboard.





Example 2:

You can also create a configuration that listens on a TCP port, map it to your localhost, and test it with telnet; the messages should appear in the Kibana dashboard.

$cat portfw.conf

input {
  tcp {
    port => 9500
  }
}

output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
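
Run a Logstash container against this file with the TCP port published; this sketch mirrors the earlier run command, adding only the port mapping:

$ sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -v $PWD:/config -p 9500:9500 logstash -f /config/portfw.conf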

$ telnet localhost 9500
<type your message>

I hope this makes it easy for you to configure the ELK stack. You can find more Logstash configuration examples at https://www.elastic.co/guide/en/logstash/current/config-examples.html

Thanks 

Tuesday, 2 May 2017

Modify Docker Installed Directory

If your / partition is filling up with Docker images and containers and you wish to move the Docker directory to another location, the steps below can help.

OS: CentOS

Change the default storage base directory for Docker (containers and images) in /etc/sysconfig/docker:

# grep other_args /etc/sysconfig/docker
other_args="-g /your_desired_directory"
#

Steps to take when moving from one location to another:

1. Stop your running containers
# docker ps
# docker stop <container_names>

2. Stop the docker service
# systemctl stop docker.service (or: service docker stop)

Double-check and confirm the docker service has stopped.

3. Make sure you back up your current /var/lib/docker before making any changes.

# tar -czvf var_lib_docker-backup-$(date +%s).tar.gz /var/lib/docker/

4. Move the directory to your desired location
# mv /var/lib/docker /your_desired_directory

5. Create a symlink to the new directory
# ln -s /your_desired_directory /var/lib/docker

6. Start the docker service
# systemctl start docker.service (or: service docker start)

7. Start your containers
# docker ps -a
# docker start <container_names>
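
To confirm Docker is using the new location, recent versions report the storage root in docker info:

# docker info | grep -i 'root dir'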

Thank you

Sunday, 9 April 2017

Configure your printer on Raspberry Pi

Users were getting printouts from a locally configured desktop machine running Ubuntu; it was old, unfortunately developed hardware issues, and we finally let it rest in peace. :)
Later, given the low cost, I tried using a Raspberry Pi as a print server; even after installing drivers it still could not detect the printer, so I have prepared this tutorial on how to configure print services on a Raspberry Pi.

What's needed ?

- 1 Raspberry Pi 3 Model B Board installed with Raspbian OS
- 1 USB based printer connected to Pi board.

This is how it looks:



Let's start :

Update your repository
# sudo apt-get update

First, in order to link your printer with the Raspberry Pi, install CUPS.
# sudo apt-get install cups 

Once the installation is complete, add your user to the group that has access to the printer queue; by default this is 'lpadmin'. Since the default Raspbian user is 'pi', add it:
#sudo usermod -a -G lpadmin pi

Configure CUPS to allow remote administration; everything else can then be completed via a web browser pointed at http://localhost:631.
In the snippet below, the original Listen line is commented out and the Port and Listen lines are added in its place.

#sudo vim /etc/cups/cupsd.conf
# Only listen for connections from the local machine.
#Listen localhost:631 { Comment this line and add below line }
Port 631
Listen /var/run/cups/cups.sock

# Restrict access to the server...
<Location />
  Order allow,deny
  Allow @local
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
  Allow @local
</Location>

# Restrict access to configuration files...
<Location /admin/conf>
  AuthType Default
  Require user @SYSTEM
  Order allow,deny
  Allow @local
</Location>

Restart your CUPS service
#sudo /etc/init.d/cups restart
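
You can confirm CUPS is now listening on port 631 (netstat comes from net-tools, which is assumed to be installed):

# sudo netstat -plnt | grep 631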

In your Raspberry Pi's browser point to http://localhost:631 and click on "Adding Printers and Classes"


Click on "Add Printer" in Administration Panel, select your detected printer and continue and if you get warnings about site certificate ignore it, on which it prompts for username and password of the account you added to 'lpadmin' group earlier in this post.

After logging in, you'll be presented with a list of discovered printers (local or networked). Select the printer you wish to add to your Pi.



After continuing, you will be prompted to select the specific driver for your printer; scroll until you see a model number that matches yours.


The last configuration step is to set the default options, and congratulations, you have added your printer to CUPS.

List the installed printers:
# lpstat -a

Once everything looks fine, you can issue a print command.
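
For example, to send a small test job to the queue (the printer name is a placeholder for whatever lpstat reported):

# lp -d <printer_name> /etc/hostname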

Thank you. 

Sunday, 26 March 2017

Kerberos Authentication

Since I have already explained the mechanics of Kerberos in the past, I will try to keep this as simple as I can.


Go through the scanned picture below; in short, I have written a few notes.


The principal name and key are specified to the client, so the client sends its principal name and a request for a TGT to the KDC.

The KDC generates a session key (SK) and a TGT containing a copy of the session key, and uses the TGS key to encrypt the TGT. The principal key is used to encrypt [encrypted TGT + copy of session key]; the client decrypts with its principal key to extract the session key and the encrypted TGT.

When the client wants to use a service (SSH, NFS, etc.) to obtain access to a local or remote system (hereafter referred to as the service provider), it uses the session key to encrypt the TGT, the client's IP address, a timestamp, and the service request (SR), and sends them to the KDC.

The KDC uses its session key and TGS key to extract the IP address and timestamp, allowing it to validate the client. On success it generates a service session key (SSK) and a service response (SR) containing the IP address, timestamp, and a copy of the SSK, and encrypts the SR with the service key.
The SK is then used to encrypt both E(SR) and the copy of the SSK.

The client uses its copy of the SK to decrypt E(E(SR) + SSK), recovering E(SR) and the SSK.

The client sends E(SR) to the service provider, along with E[principal name + timestamp] encrypted with the SSK.

The service provider uses its service key to extract the SR, from which it retrieves the SSK to decrypt the client's E(timestamp + principal name).
Once this succeeds, the service provider grants access to its host system.

How to implement kerberos: 
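
As a minimal client-side illustration (assuming a kerberized host, a realm EXAMPLE.COM, and a principal named user), obtaining and inspecting a TGT looks like this:

$ kinit user@EXAMPLE.COM
Password for user@EXAMPLE.COM:
$ klist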

Thanks.