Monday, 27 April 2015

High-Availability (HA) Apache Cluster - CentOS 7

In this article, I will walk through the steps required to build a high-availability Apache cluster on CentOS 7. In CentOS 7 (as in RHEL 7), the cluster stack has moved to pacemaker/corosync, with a new command-line tool (pcs) to manage the cluster.

Environment Description:
The cluster consists of two nodes (centos71 & centos72). A shared iSCSI disk of around 1GB is presented from the node iscsitarget.
If you want to know how to share a disk over iSCSI, see the article below.

Objective:
A high-availability cluster serving a website whose document root lives on a simple failover filesystem.
In a stable situation, the cluster should look something like this:


There is one owner of the virtual IP, in this case centos71. The owner of the virtual IP also provides the service for the cluster at that moment. A client trying to reach our website via 192.168.229.134 will be served the web pages from the webserver running on centos71. In this situation, the second node does nothing besides waiting for centos71 to fail so it can take over. This scenario is called active-passive.

If something happens to centos71 (the system crashes, the node is no longer reachable, or the webserver isn't responding anymore), centos72 will become the owner of the virtual IP and start its webserver to provide the same services that were running on centos71.


Pre-requisites:
Configure both cluster nodes with a static IP and hostname; make sure they are in the same subnet and can reach each other by node name (either add entries to /etc/hosts or configure your DNS server).
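For example, a minimal /etc/hosts on both nodes could look like this (the IP addresses here are assumptions, matching the initiator addresses seen later in this article):

192.168.229.132   centos71
192.168.229.133   centos72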

If you have configured a firewall, make sure you allow cluster traffic (in case it is active on any node); in this example, I have shut down the firewall.
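If you would rather keep firewalld running, it ships with a predefined high-availability service that opens the cluster ports; run on both nodes:
[root@centos71 ~]# firewall-cmd --permanent --add-service=high-availability
[root@centos71 ~]# firewall-cmd --reload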

Installation:
After your basic setup, install the required packages on both nodes:
[root@centos71 ~]# yum install corosync pcs pacemaker
[root@centos72 ~]# yum install corosync pcs pacemaker

To manage the cluster nodes, we will use pcs. This gives us a single interface to manage all cluster nodes. Installing the packages also created a user, hacluster, which is used together with pcs to configure the cluster nodes. Set its password on both nodes:

[root@centos71 ~]# passwd hacluster
[root@centos72 ~]# passwd hacluster

Next, start the pcsd service on both nodes:
[root@centos71 ~]# systemctl start pcsd
[root@centos72 ~]# systemctl start pcsd
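To have pcsd come back automatically after a reboot, you can also enable it:
[root@centos71 ~]# systemctl enable pcsd
[root@centos72 ~]# systemctl enable pcsd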

Since we will configure all nodes from one point, we need to authenticate on all nodes before we are allowed to change the configuration. Use the previously configured hacluster user and password to do this.

[root@centos71 ~]# pcs cluster auth centos71 centos72
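When prompted, enter the hacluster user and the password set earlier. Each node should report back as authorized, roughly like this (exact output varies by pcs version):
Username: hacluster
Password:
centos71: Authorized
centos72: Authorized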
From here, we can control the cluster by using pcs from centos71; it's no longer necessary to repeat all commands on both nodes.

Create the cluster and add nodes
[root@centos71 ~]# pcs cluster setup --name webcluster centos71 centos72

The above command creates the cluster node configuration in /etc/corosync/corosync.conf.
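A trimmed sketch of what the generated configuration looks like (exact contents depend on the pcs/corosync version):
totem {
    version: 2
    cluster_name: webcluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: centos71
        nodeid: 1
    }
    node {
        ring0_addr: centos72
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}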
After creating the cluster and adding nodes to it, we can start it
[root@centos71 ~]# pcs cluster start --all
centos72: Starting Cluster...
centos71: Starting Cluster...
[root@centos71 ~]#

Check the status of the cluster after starting it:
[root@centos71 ~]# pcs status cluster
Cluster Status:
 Last updated: Sun Apr 26 18:17:32 2015
 Last change: Sun Apr 26 18:16:26 2015 via cibadmin on centos71
 Stack: corosync
 Current DC: centos71 (1) - partition with quorum
 Version: 1.1.10-29.el7-368c726
 2 Nodes configured
 0 Resources configured
[root@centos71 ~]#

Since this is a simple two-node cluster, we'll disable STONITH (fencing) and set the no-quorum policy to ignore, because a two-node cluster loses quorum as soon as one node fails.
[root@centos71 ~]# pcs property set stonith-enabled=false
[root@centos71 ~]# pcs property set no-quorum-policy=ignore
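You can verify the change; pcs property list shows the cluster properties that have been set:
[root@centos71 ~]# pcs property list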

Next, create a partition on the 1GB LUN; this will house a filesystem to be used as the DocumentRoot for our Apache installation.
I configured multipath for the device, so install the device-mapper multipath packages on both nodes, then enable and start multipathd.
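For example (device-mapper-multipath is the multipath package name in CentOS 7):
[root@centos71 ~]# yum install device-mapper-multipath
[root@centos72 ~]# yum install device-mapper-multipath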
[root@centos71 ~]# mpathconf --enable
[root@centos72 ~]# mpathconf --enable
[root@centos71 ~]# systemctl start multipathd
[root@centos72 ~]# systemctl start multipathd

Create the partition from one node only, then run 'partprobe' on the second node; by default, both nodes should see the same multipath mapper device for the newly added iSCSI disk.
[root@centos71 ~]# lsblk -i
sdb                       8:16   0 1016M  0 disk
`-mpatha                253:6    0 1016M  0 mpath
  `-mpatha1             253:7    0 1015M  0 part
[root@centos71 ~]#

[root@centos72 ~]# lsblk -i
sdb                       8:16   0 1016M  0 disk
`-mpatha                253:5    0 1016M  0 mpath
  `-mpatha1             253:6    0 1015M  0 part
[root@centos72 ~]#

[root@centos71 ~]# fdisk /dev/mapper/mpatha
[root@centos71 ~]# partprobe
[root@centos71 ~]# mkfs.ext4 /dev/mapper/mpatha1
[root@centos71 ~]# mount /dev/mapper/mpatha1 /var/www
[root@centos71 ~]# mkdir /var/www/html;mkdir /var/www/error; 
[root@centos71 ~]# echo "apache test page" >/var/www/html/index.html 
[root@centos71 ~]# umount /dev/mapper/mpatha1

Create the filesystem cluster resource fs_apache_shared and place it in "apachegroup", which groups the related resources together so they fail over as one unit.
[root@centos71 ~]#  pcs resource create fs_apache_shared ocf:heartbeat:Filesystem device=/dev/mapper/mpatha1 fstype=ext4 directory="/var/www" --group=apachegroup
[root@centos71 ~]#

Next, we will add a virtual IP to our cluster. This virtual IP is the address that clients contact to reach the services (the webserver in our case). A virtual IP is a resource, too:
[root@centos71 ~]# pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.229.134 cidr_netmask=24 --group=apachegroup

Create a file /etc/httpd/conf.d/serverstatus.conf with the following contents on both nodes:
[root@centos71 ~]# cat /etc/httpd/conf.d/serverstatus.conf
Listen 127.0.0.1:80
 <Location /server-status>
 SetHandler server-status
 Order deny,allow
 Deny from all
 Allow from 127.0.0.1
 </Location>
[root@centos71 ~]#

Disable the existing Listen statement in the Apache configuration on both nodes, to avoid Apache trying to listen twice on the same port.
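One way to comment it out on both nodes (a sketch using sed), confirming with grep afterwards:
[root@centos71 ~]# sed -i 's/^Listen 80$/#Listen 80/' /etc/httpd/conf/httpd.conf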
[root@centos71 ~]#  grep -w "Listen 80" /etc/httpd/conf/httpd.conf
#Listen 80
[root@centos72 ~]#  grep -w "Listen 80" /etc/httpd/conf/httpd.conf
#Listen 80

Now that Apache is ready to be controlled by the cluster, we'll add a resource for the webserver. Note that httpd should not be enabled in systemd on either node; the cluster starts and stops it.
[root@centos71 ~]# pcs resource create webserver ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" --group=apachegroup

Testing:
Browse to http://192.168.229.134; it should display the test page. Then move the resource group to the partner node; you shouldn't see any downtime in your Apache service.

[root@centos71 ~]# pcs status
Cluster name: webcluster
Last updated: Sun Apr 26 21:20:14 2015
Last change: Sun Apr 26 21:19:09 2015 via cibadmin on centos71
Stack: corosync
Current DC: centos71 (1) - partition with quorum
Version: 1.1.10-29.el7-368c726
2 Nodes configured
3 Resources configured

Online: [ centos71 centos72 ]

Full list of resources:

 Resource Group: apachegroup
     fs_apache_shared   (ocf::heartbeat:Filesystem):    Started centos71
     virtual_ip (ocf::heartbeat:IPaddr2):       Started centos71
     webserver  (ocf::heartbeat:apache):        Started centos71

PCSD Status:
  centos71: Online
  centos72: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
[root@centos71 ~]#

[root@centos71 ~]# pcs resource move webserver
[root@centos71 ~]# pcs status
Cluster name: webcluster
Last updated: Sun Apr 26 21:21:00 2015
Last change: Sun Apr 26 21:20:57 2015 via crm_resource on centos71
Stack: corosync
Current DC: centos71 (1) - partition with quorum
Version: 1.1.10-29.el7-368c726
2 Nodes configured
3 Resources configured

Online: [ centos71 centos72 ]

Full list of resources:

 Resource Group: apachegroup
     fs_apache_shared   (ocf::heartbeat:Filesystem):    Started centos72
     virtual_ip (ocf::heartbeat:IPaddr2):       Started centos72
     webserver  (ocf::heartbeat:apache):        Started centos72

PCSD Status:
  centos71: Online
  centos72: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
[root@centos71 ~]#

You can use df -h to confirm the filesystem failover and ip addr show to confirm the IP address failover.
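For example, after the move above, on centos72 (a sketch; the mount point and virtual IP come from the resources defined earlier):
[root@centos72 ~]# df -h /var/www
[root@centos72 ~]# ip addr show | grep 192.168.229.134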

Saturday, 25 April 2015

Install/Configure iSCSI targets, LUN's, initiators - CentOS

iSCSI is an acronym for Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet. The iSCSI protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts with the illusion of locally attached disks. However, the performance of an iSCSI SAN deployment can be severely degraded if not operated on a dedicated network or subnet (LAN or VLAN), due to competition for a fixed amount of bandwidth.


Environment: CentOS 6.5 - iSCSI target
             CentOS 7   - iSCSI initiators

The diagram is self-explanatory for the environment; I'll explain the setup in the three sections below.
I have attached one extra HDD to the target server.

Section 1: Install iscsi target 
Section 2: Create LUN using iscsi
Section 3: Install initiators and verify LUN

Section 1:

- Install the iscsi target package
iscsitarget# yum install scsi-target-utils -y

- Start the iSCSI target service and enable it to autostart at boot.
iscsitarget# /etc/init.d/tgtd start
iscsitarget# chkconfig tgtd on

- In case the firewall is enabled, add iptables rules so that initiators can reach the iSCSI target (ports 860 and 3260), then save the rules.
iscsitarget# iptables -A INPUT -i eth0 -p tcp --dport 860 -m state --state NEW,ESTABLISHED -j ACCEPT
iscsitarget# iptables -A INPUT -i eth0 -p tcp --dport 3260 -m state --state NEW,ESTABLISHED -j ACCEPT
iscsitarget# service iptables save
iscsitarget# /etc/init.d/iptables restart

Section 2:
- Check that fdisk shows the newly added disk, then partition the drive and change the partition type to LVM (8e). Save the changes and make sure the kernel is aware of the new partition table (partprobe).
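A sketch of those steps, assuming the new disk shows up as /dev/sdb (matching the pvcreate below):
iscsitarget# fdisk /dev/sdb    <<=== n (new partition), t (type 8e), w (write)
iscsitarget# partprobe /dev/sdb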

iscsitarget# pvcreate /dev/sdb1
iscsitarget# vgcreate vg_iscsi /dev/sdb1
iscsitarget# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi
iscsitarget# pvs;vgs;lvs
  PV         VG       Fmt  Attr PSize    PFree
  /dev/sdb1  vg_iscsi lvm2 a--  1016.00m    0
  VG       #PV #LV #SN Attr   VSize    VFree
  vg_iscsi   1   1   0 wz--n- 1016.00m    0
  LV       VG       Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_iscsi vg_iscsi -wi-ao---- 1016.00m
iscsitarget#

- We need to define the LUN in the target configuration, so it will be available to client machines (initiators).
iscsitarget# cat -n /etc/tgt/targets.conf
    16  default-driver iscsi
    17
    18  <target iqn.2015-04.com.test:apache:lun1>
    19          backing-store /dev/vg_iscsi/lv_iscsi
    20  </target>

Briefly, the fields of the IQN are:
- iqn, the literal prefix for an iSCSI Qualified Name
- date (yyyy-mm) on which the naming authority took ownership of the domain
- reversed domain name of the authority
- optional ":" prefixing a storage target name specified by the naming authority
The IQN used here is broken down below.
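For the target defined above, iqn.2015-04.com.test:apache:lun1 breaks down as: date 2015-04, naming authority com.test (test.com reversed), and target name apache:lun1.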

- reload the configuration
iscsitarget# /etc/init.d/tgtd reload

- show target and LUN status
iscsitarget# tgtadm --mode target --op show
Target 1: iqn.2015-04.com.test:apache:lun1  <<=== iSCSI Qualified Name
    System information: 
        Driver: iscsi
        State: ready   <<=== iSCSI is Ready to Use
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.1994-05.com.redhat:905dbea49e0
            Connection: 0
                IP Address: 192.168.229.132
        I_T nexus: 2
            Initiator: iqn.1994-05.com.redhat:488c35a56511
            Connection: 0
                IP Address: 192.168.229.133
    LUN information:
        LUN: 0  <<==== Default LUN ID reserved for the controller
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1 
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 1065 MB, Block size: 512  <<== Lun size defined in the target
            Online: Yes  <<===  Lun is ready to be used 
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vg_iscsi/lv_iscsi
            Backing store flags:
    Account information:
    ACL information:
        ALL
iscsitarget#

Section 3:

On the clients, install the initiator package:
iscsiinitiator1# yum install iscsi-initiator-utils.x86_64
iscsiinitiator1#

iscsiinitiator2# yum install iscsi-initiator-utils.x86_64
iscsiinitiator2#

- Discover the share from the target server:
iscsiinitiator1# iscsiadm -m discoverydb -t st -p 192.168.229.130 -D
192.168.229.130:3260,1 iqn.2015-04.com.test:apache:lun1
iscsiinitiator1#

iscsiinitiator2# iscsiadm -m discoverydb -t st -p 192.168.229.130 -D
192.168.229.130:3260,1 iqn.2015-04.com.test:apache:lun1
iscsiinitiator2# 

- To attach the LUN to the client's local system, log in to the target from each initiator (this target's ACL allows all initiators, so no CHAP authentication is needed here):
iscsiinitiator1# iscsiadm -m node -T iqn.2015-04.com.test:apache:lun1 -p 192.168.229.130:3260 -l
iscsiinitiator1#

iscsiinitiator2# iscsiadm -m node -T iqn.2015-04.com.test:apache:lun1 -p 192.168.229.130:3260 -l
iscsiinitiator2#
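You can verify the active session from each initiator; it should list the target portal and IQN, roughly like this (format varies by iscsiadm version):
iscsiinitiator1# iscsiadm -m session
tcp: [1] 192.168.229.130:3260,1 iqn.2015-04.com.test:apache:lun1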

- You can list the block devices on the initiators (lsblk) to see the new disk. Now you can create a partition, format it, and mount the filesystem. Optionally, append an entry to /etc/fstab if you need it mounted after reboots; a sample entry follows the listings below.
iscsiinitiator1# lsblk -i 
sdb                       8:16   0 1016M  0 disk
iscsiinitiator1#

iscsiinitiator2# lsblk -i 
sdb                       8:16   0 1016M  0 disk
iscsiinitiator2#
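A sketch of such an fstab entry (the device path, filesystem type, and mount point are assumptions for illustration); the _netdev option delays mounting until the network, and hence the iSCSI session, is up:
/dev/sdb1   /mnt/iscsi   ext4   _netdev   0 0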

Wednesday, 1 April 2015

Build your RPM

I had a local repository file that I wanted to distribute via the YUM repository, so I had to create an RPM for it, as explained below.

Objective: Demonstrate a simple scenario on how to build RPM. 

  

Environment: CentOS 6.6 (X86_64) 

Install the required packages

#yum install -y rpm-build rpmdevtools

Create user for RPM build

# useradd -m rpmbld
# passwd rpmbld

Build RPM

Log in with the rpmbld account and create the directory structure from the home directory; rpmdev-setuptree creates the directory rpmbuild with several sub-directories.

~]$ id
uid=501(rpmbld) gid=501(rpmbld) groups=501(rpmbld)
~]$ rpmdev-setuptree 
~]$ echo $?
0
~]$ 

~]$ cd rpmbuild/
~/rpmbuild]$ ls
BUILD  RPMS  SOURCES  SPECS  SRPMS
~/rpmbuild]$

Create the compressed source content

Change to the SOURCES directory and create a directory structure named <name>-<version> that mirrors the target filesystem. Here, I use /etc/yum.repos.d: copy the desired repo file into that structure, then tar and gzip all of it.

~]$ cd rpmbuild/SOURCES/
~/rpmbuild/SOURCES]$ ls
~/rpmbuild/SOURCES]$ mkdir -p localrepo-1/etc/yum.repos.d
~/rpmbuild/SOURCES]$ cp /tmp/centos66.repo localrepo-1/etc/yum.repos.d
~/rpmbuild/SOURCES]$

~/rpmbuild/SOURCES]$ ls
localrepo-1
~/rpmbuild/SOURCES]$ 

~/rpmbuild/SOURCES]$ tar -zcvf localrepo-1.tar.gz localrepo-1/
localrepo-1/
localrepo-1/etc/
localrepo-1/etc/yum.repos.d/
localrepo-1/etc/yum.repos.d/centos66.repo
~/rpmbuild/SOURCES]$

SPEC skeleton

Instructions for the build process are kept in the rpmbuild/SPECS directory. rpmdev-newspec <filename> creates a skeleton spec file in your current directory; edit it as required.

~]$ cd rpmbuild/SPECS/
~/rpmbuild/SPECS]$ 

~/rpmbuild/SPECS]$ rpmdev-newspec localrepo.spec
Skeleton specfile (minimal) has been created to "localrepo.spec".
~/rpmbuild/SPECS]$

~/rpmbuild/SPECS]$ ls
localrepo.spec
~/rpmbuild/SPECS]$ 

:~]$ cat rpmbuild/SPECS/localrepo.spec 
Name:           localrepo
Version:        1 
Release:        0
Summary:        Centos repository
Group:          System Environment/Base
License:        GPL
URL:            http://sunlnx.blogspot.in
source0:        localrepo-1.tar.gz
buildarch:      noarch
BuildRoot:      %{_tmppath}/%{name}-buildroot

%description
 Create YUM repository pointing to local centos/redhat repository.

%prep
%setup -q

%install
mkdir -p "$RPM_BUILD_ROOT"
cp -R * "$RPM_BUILD_ROOT"

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
/etc/yum.repos.d/centos66.repo
:~]$

RPMBUILD

You can now run rpmbuild to create the RPM; -bb builds only the binary RPM, without a source RPM.

:~]$ rpmbuild -v -bb rpmbuild/SPECS/localrepo.spec 
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.RjS32s
+ umask 022
+ cd /home/rpmbld/rpmbuild/BUILD
+ cd /home/rpmbld/rpmbuild/BUILD
+ rm -rf localrepo-1
+ /bin/tar -xf -
+ /usr/bin/gzip -dc /home/rpmbld/rpmbuild/SOURCES/localrepo-1.tar.gz
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd localrepo-1
+ /bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.qqYUfq
+ umask 022
+ cd /home/rpmbld/rpmbuild/BUILD
+ cd localrepo-1
+ mkdir -p /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
+ cp -R etc /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
+ /usr/lib/rpm/check-rpaths /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/brp-compress
+ /usr/lib/rpm/brp-strip
+ /usr/lib/rpm/brp-strip-static-archive
+ /usr/lib/rpm/brp-strip-comment-note
Processing files: localrepo-1-0.noarch
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
Wrote: /home/rpmbld/rpmbuild/RPMS/noarch/localrepo-1-0.noarch.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.lBmbql
+ umask 022
+ cd /home/rpmbld/rpmbuild/BUILD
+ cd localrepo-1
+ rm -rf /home/rpmbld/rpmbuild/BUILDROOT/localrepo-1-0.i386
+ exit 0
:~]$

Test your custom-built RPM:

:~]# rpm -qip /home/rpmbld/rpmbuild/RPMS/noarch/localrepo-1-0.noarch.rpm
Name        : localrepo                    Relocations: (not relocatable)
Version     : 1                                 Vendor: (none)
Release     : 0                             Build Date: Wednesday 01 April 2015 05:18:03 AM IST
Install Date: (not installed)               Build Host: redhat
Group       : System Environment/Base       Source RPM: localrepo-1-0.src.rpm
Size        : 110                              License: GPL
Signature   : (none)
URL         : http://sunlnx.blogspot.in
Summary     : Centos repository
Description :
 Create YUM repository pointing to local centos repository.
:~]# 

:~]# rpm -ivh /home/rpmbld/rpmbuild/RPMS/noarch/localrepo-1-0.noarch.rpm
Preparing...                ########################################### [100%]
   1:localrepo              ########################################### [100%]
:~]# 

:~]# ls -l /etc/yum.repos.d/centos66.repo 
-rw-r--r-- 1 root root 110 Apr  1 05:18 /etc/yum.repos.d/centos66.repo
:~]# 
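Since the payload is a yum repo file, a quick way to confirm yum picks it up (assuming the repo file points at a reachable local repository):
:~]# yum repolist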

This is how custom RPMs are built; I hope this tutorial helps you with your own custom RPM builds.
Sharing in public; re-share for all.