Monday, 8 May 2017

Configure ELK using Docker

I want to keep the three services in separate containers and link them together to build the ELK stack.
Installation is straightforward: we pull the Docker images and run the containers.

Elasticsearch: 

$sudo docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 elasticsearch
9c7d52445691015e21a7007e35aca935b9a0dcbbd9560f170cc07c8adc08ae63

$ sudo docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
9c7d52445691        elasticsearch       "/docker-entrypoint.s"   30 seconds ago      Up 30 seconds       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch
$

Test your configuration.
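For example, querying the published port with curl (assuming curl is installed on the host) should return the version banner shown below:

$ curl http://localhost:9200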

{
  "name" : "PIvLNU_",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Rx7oxSxESvqo-8GNXkDzCA",
  "version" : {
    "number" : "5.3.1",
    "build_hash" : "5f9cf58",
    "build_date" : "2017-04-17T15:52:53.846Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.2"
  },
  "tagline" : "You Know, for Search"
}

Kibana: a visualization tool that connects to Elasticsearch

Since Elasticsearch is already running, we need to point the Kibana container at the Elasticsearch container.

$sudo docker run --name kibana -d -p 5601:5601 --link elasticsearch:elasticsearch kibana
2090b8df39a44016e401e8b2fb4e2d79f4d674e9f02ae5794f1ed484fb28913e

$ sudo docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2090b8df39a4        kibana              "/docker-entrypoint.s"   7 seconds ago       Up 4 seconds        0.0.0.0:5601->5601/tcp   kibana

A quick check of port 5601 (for example with curl http://localhost:5601) returns Kibana's redirect script, which confirms the container is responding:

<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}</script>

Point your browser to http://localhost:5601, which redirects to the default Kibana index page.



Logstash:

We will take messages from standard input and have them show up on the Kibana dashboard.
Create a configuration file that reads from stdin and start the container.

$cd $PWD/logstash/
$cat logstash.conf

# input from the keyboard
input {
  stdin {}
}

# output to the elasticsearch container
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
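
Optionally, you can have Logstash just validate this file before using it. This is a sketch using the stock logstash image and its -t (--config.test_and_exit) flag; adjust the volume path to wherever your config lives:

$ sudo docker run -it --rm -v $PWD:/config logstash -t -f /config/logstash.conf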

We reference Elasticsearch by its container name, so we link the two containers together.

$sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -v $PWD:/config logstash -f /config/logstash.conf

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
10:41:27.928 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
10:41:28.003 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"b359fa87-aef2-4cb1-a533-84767399a0f7", :path=>"/var/lib/logstash/uuid"}
10:41:29.340 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs u
.
.
<snip>
this is test1
this is test2
this is test3


Logstash pushes each message to Elasticsearch, which creates an index for them. Refresh the Kibana dashboard; you will now see the index pattern page, click 'Create' and proceed.
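You can also confirm the new index from the Elasticsearch side using its standard _cat API (run on the Docker host):

$ curl http://localhost:9200/_cat/indices?v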

Click on 'Discover'; you will now see all the messages typed at the keyboard.





Example 2: 

You can also create your own configuration that listens on a TCP port, map that port to your localhost, and send messages with telnet; you should then see them in the Kibana dashboard. The run command for this config is sketched after the file below.

$cat portfw.conf

input {
  tcp {
    port => 9500
  }
}

output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
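
To actually reach port 9500 from the host with telnet, the port also has to be published when the container is started. This is a sketch along the lines of the earlier run command; the -p 9500:9500 mapping is the only addition:

$ sudo docker run -it --rm --name logstash --link elasticsearch:elasticsearch -v $PWD:/config -p 9500:9500 logstash -f /config/portfw.conf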

$telnet localhost 9500
<type your message> 

I hope you find it easy to configure the ELK stack this way. You can find more Logstash configuration examples at https://www.elastic.co/guide/en/logstash/current/config-examples.html

Thanks 

Tuesday, 2 May 2017

Change docker installation directory

If you have only a / partition and it is getting filled up with Docker images or containers, and you wish to move the Docker directory to another location, the steps below can help.

OS: CentOS

Change the default storage base directory for Docker (containers and images) in the file /etc/sysconfig/docker:

# grep other_args /etc/sysconfig/docker
other_args="-g /your_desired_directory"
#
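
Note: newer Docker releases may not read /etc/sysconfig/docker at all; there the equivalent setting is "data-root" in /etc/docker/daemon.json. This is only a hint for later versions, adjust to whatever your Docker version supports:

# cat /etc/docker/daemon.json
{
  "data-root": "/your_desired_directory"
}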

Steps to take note of while moving from one location to the other:

1. Stop your running containers
#docker ps 
#docker stop <container_names>

2. Stop the docker service
#service docker stop    (or: systemctl stop docker.service)

Double-check and confirm that the docker service has stopped.

3. Make sure you back up your current /var/lib/docker before making any changes.

#tar -czvf var_lib_docker-backup-$(date +%s).tar.gz /var/lib/docker/

4. Move your directory to your desired location 
#mv /var/lib/docker /your_desired_directory

5. Create a symlink to your new directory
# ln -s /your_desired_directory /var/lib/docker

6. Start the docker service
#service docker start    (or: systemctl start docker.service)

7. Start your containers
# docker ps -a
# docker start <container_names>
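
To confirm Docker is now using the new location, you can check the root directory it reports; recent versions of docker info print a "Docker Root Dir" line:

# docker info | grep -i 'root dir'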

Thank you

Sunday, 9 April 2017

Configure your printer on Raspberry Pi

Users used to get printouts from a locally configured desktop machine running Ubuntu; it was old, unfortunately developed hardware issues, and was finally allowed to rest in peace. :)
Later, given the low cost, I tried using a Raspberry Pi as a print server; after installing the drivers it still could not detect the printer. I have prepared this tutorial on how to configure print services on a Raspberry Pi.

What's needed ?

- 1 Raspberry Pi 3 Model B board with Raspbian OS installed
- 1 USB-based printer connected to the Pi board.

This is how it looks:



Let's start :

Update your repository
# sudo apt-get update

First, to link your printer with the Raspberry Pi, install CUPS.
# sudo apt-get install cups 

Once the installation is complete, add your user to the group that has access to the printer queue. The group is 'lpadmin' by default, and the default Raspbian user is 'pi', so add it:
#sudo usermod -a -G lpadmin pi
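
You can confirm the group membership with the standard groups command; 'lpadmin' should appear in the output:

$ groups pi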

Configure CUPS to allow remote administration; everything else can then be completed via a web browser pointed at http://localhost:631.
In the snippet below, the existing 'Listen localhost:631' line is commented out and the 'Port 631' line is added.

#sudo vim /etc/cups/cupsd.conf
# Only listen for connections from the local machine.
#Listen localhost:631    (comment this line out and add the line below)
Port 631
Listen /var/run/cups/cups.sock

# Restrict access to the server...
<Location />
  Order allow,deny
  Allow @local
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
  Allow @local
</Location>

# Restrict access to configuration files...
<Location /admin/conf>
  AuthType Default
  Require user @SYSTEM
  Order allow,deny
  Allow @local
</Location>
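
As a side note, recent CUPS versions also ship a cupsctl helper that can apply roughly the same remote-administration settings without editing cupsd.conf by hand; a sketch (check the options against your CUPS version):

$ sudo cupsctl --remote-admin --remote-any --share-printers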

Restart your CUPS service
#sudo /etc/init.d/cups restart

In your Raspberry Pi's browser, point to http://localhost:631 and click on "Adding Printers and Classes".


Click on "Add Printer" in Administration Panel, select your detected printer and continue and if you get warnings about site certificate ignore it, on which it prompts for username and password of the account you added to 'lpadmin' group earlier in this post.

After logging in, you'll be presented with a list of discovered printers (local or networked). Select the printer you wish to add to your Pi.



After continuing, you will be prompted to select the specific driver for your printer. Scroll until you see a model number that matches yours.


The last configuration step is to set the default options. Congratulations, you have added your printer to CUPS.

Display the installed printers
#lpstat -a 

Once everything looks fine, you can issue a print command, for example as sketched below.
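
A minimal test print, assuming <printer_name> is one of the queue names reported by lpstat -a:

$ echo "Test page from the Raspberry Pi" | lp -d <printer_name>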

Thank you. 

Sunday, 26 March 2017

Kerberos Authentication

Since I have already explained the Kerberos mechanism in the past, I will try to keep this as simple as I can.


Go through the scanned picture below; in short, I have written a few notes.


The principal name and key are configured on the client, so the client sends its principal name to the KDC along with a request for a TGT.

The KDC generates a session key (SK) and a TGT containing a copy of the session key, and encrypts the TGT with the TGS key. The principal key is then used to encrypt the encrypted TGT together with the copy of the session key. The client decrypts this with its principal key to extract the session key and the encrypted TGT.

When the client wants to use a service (SSH, NFS, etc.) on a local or remote system (hereafter referred to as the service provider), it uses the session key to encrypt the TGT, the client's IP address, a timestamp and the service request (SR), and sends them to the KDC.

The KDC uses its session key and TGS key to extract the IP address and timestamp, allowing it to validate the client. On success it generates a service session key (SSK) and an SR containing the IP address, timestamp and a copy of the SSK, and encrypts the SR with the service key.
The SK is then used to encrypt both E(SR) and the copy of the SSK.

The client uses its copy of the SK to extract E(SR) and the SSK.

The client sends E(SR) to the service provider, along with the principal name and timestamp encrypted with the SSK.

The service provider uses its service key to extract the SR, retrieves the SSK from it, and uses the SSK to decrypt the client's encrypted timestamp and principal name.
Once that succeeds, the service provider grants access to its host system.
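
From the user's point of view the whole exchange is hidden behind a couple of commands. A minimal client-side sketch, assuming the MIT krb5 client tools are installed and EXAMPLE.COM / alice are placeholder realm and principal names:

$ kinit alice@EXAMPLE.COM      # request a TGT from the KDC
$ klist                        # show the cached TGT and any service tickets
$ ssh -K host.example.com      # GSSAPI login; the service ticket is obtained transparently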

How to implement Kerberos:

Thanks.