Tuesday, 15 July 2014

Storage replication with DRBD

Objective: Storage replication with DRBD

Environment : CentOS 6.5 (32-bit)


DRBD Version : 8.3.16

Introduction:
In this article, I am using DRBD (Distributed Replicated Block Device), a replicated storage solution that mirrors the content of block devices (hard disks) between servers. Not everyone can afford network-attached storage, but the data still needs to be kept in sync; DRBD can be thought of as network-based RAID-1.

DRBD's position within the Linux I/O stack: the DRBD device sits between the file system and the local disk, mirroring each write to the peer node over the network.

Below are the basic requirements:

- Two disks of the same size, one per node (here /dev/sdb)
- Network connectivity between the machines (drbd-node1 & drbd-node2)
- Working DNS/name resolution between the nodes (see the sketch below)
- NTP-synchronized time on both nodes
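
The simplest way to satisfy the name-resolution requirement is an /etc/hosts entry on both nodes, and ntpd keeps the clocks in sync. The IP placeholders below match the ones used in the resource file further down; replace them with your real addresses.

ALL# cat >> /etc/hosts << EOF
192.168.1.XXX   drbd-node1
192.168.1.YYY   drbd-node2
EOF
ALL# service ntpd start && chkconfig ntpd on   # assumes the ntp package is installed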

Install the DRBD packages:

drbd-node1# yum install -y  drbd83-utils kmod-drbd83
drbd-node2# yum install -y  drbd83-utils kmod-drbd83
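
These packages are not part of the base CentOS repositories; they typically come from a third-party repository such as ELRepo, so enable one if yum cannot find them. Afterwards, a quick sanity check that both packages landed on each node:

ALL# rpm -qa | grep drbd83    # should list drbd83-utils and kmod-drbd83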

Load the DRBD modules.

Either reboot both nodes or load the module manually with /sbin/modprobe drbd.
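
A quick way to confirm the module is actually loaded before continuing:

ALL# /sbin/modprobe drbd
ALL# lsmod | grep drbd    # the drbd module should appear in the output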

Partition the disk:
drbd-node1# fdisk /dev/sdb
drbd-node2# fdisk /dev/sdb
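
fdisk is interactive; a typical sequence that creates one primary partition spanning the whole disk looks roughly like this (exact prompts vary with the fdisk version):

drbd-node1# fdisk /dev/sdb
# n  -> new partition
# p  -> primary
# 1  -> partition number 1
#      accept the default first and last cylinder/sector
# w  -> write the partition table and exit
# Repeat on drbd-node2 so both nodes end up with /dev/sdb1.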

Create the Distributed Replicated Block Device (DRBD) resource file.

Adjust the values below (host names, disk device, and the 192.168.1.XXX/YYY address placeholders) to match your own servers.


drbd-node1# cat /etc/drbd.d/drbdcluster.res
resource drbdcluster {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret sync_disk;
    }
    syncer {
        rate 10M;
        al-extents 257;
        on-no-data-accessible io-error;
    }
    on drbd-node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.1.XXX:7788;
        flexible-meta-disk internal;
    }
    on drbd-node2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.1.YYY:7788;
        meta-disk internal;
    }
}
drbd-node1#

Copy the DRBD configuration to the secondary node (drbd-node2):

drbd-node1# scp /etc/drbd.d/drbdcluster.res root@192.168.1.YYY:/etc/drbd.d/drbdcluster.res
drbd-node1#
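
Optionally, check that the configuration parses cleanly on both nodes before going any further:

ALL# drbdadm dump drbdcluster    # prints the parsed resource; an error here means a config problem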

Initialize the DRBD metadata and start the service on both nodes (drbd-node1 & drbd-node2):

ALL# drbdadm create-md drbdcluster
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

ALL# service drbd start
Starting DRBD resources: [ d(drbdcluster) s(drbdcluster) n(drbdcluster) ]........
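
At this point both nodes should be connected as Secondary/Secondary with Inconsistent data, since nothing has been synchronized yet; you can confirm this with:

ALL# cat /proc/drbd    # expect cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent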

- Since both disks initially contain garbage, we have to tell DRBD which node's data set should be used as the primary copy:

drbd-node1# drbdadm -- --overwrite-data-of-peer primary drbdcluster
drbd-node1#

- The device then starts its initial sync; wait until it completes:

drbd-node1# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build32R6, 2013-09-27 15:59:12
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:123904 nr:0 dw:0 dr:124568 al:0 bm:7 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:923580
[=>..................] sync'ed: 12.2% (923580/1047484)K
finish: 0:01:29 speed: 10,324 (10,324) K/sec
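
The progress can be followed until the disk state reaches UpToDate/UpToDate, for example with:

drbd-node1# watch -n2 cat /proc/drbd    # Ctrl+C to exit once the sync has finished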

Create a file system on the device and populate it with some data.

drbd-node1# mkfs.ext4 /dev/drbd0 
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 261871 blocks
13093 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376

Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
drbd-node1#
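
The mount below assumes the /data mount point already exists on drbd-node1; if not, create it first:

drbd-node1# mkdir -p /data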

drbd-node1# mount /dev/drbd0 /data
drbd-node1# touch /data/file1
drbd-node1#

You do not need to mount the disk on the secondary machine (and cannot, while it is in the secondary role). Everything written to the /data folder on the primary is replicated to the secondary server.

To see this for yourself, unmount /data on drbd-node1, demote it to secondary, promote the secondary node to primary, and mount /dev/drbd0 on drbd-node2; you will find the same contents in /data.

drbd-node1# umount /data
drbd-node1# drbdadm secondary drbdcluster
drbd-node2# drbdadm primary drbdcluster
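
Then mount the device on drbd-node2 and check the contents (this assumes a /data mount point exists on drbd-node2 as well):

drbd-node2# mkdir -p /data
drbd-node2# mount /dev/drbd0 /data
drbd-node2# ls /data    # file1 created on drbd-node1 is visible (along with lost+found)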

We have successfully set up storage replication with DRBD.

As a further improvement, since DRBD is now functioning, I would configure a cluster and add the file system as a cluster resource. In addition to the Filesystem resource definition, we also need to tell the cluster where it is allowed to run (only on the DRBD primary) and when it is allowed to start (only after the primary has been promoted).
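
As a rough illustration only, such constraints might look like the following with Pacemaker's crm shell (Pacemaker is an assumption here and is not configured in this article; ms_drbd stands for a hypothetical master/slave resource wrapping drbdcluster):

# Filesystem resource, allowed only where DRBD is Primary, started only after promotion
crm configure primitive p_fs_data ocf:heartbeat:Filesystem \
        params device=/dev/drbd0 directory=/data fstype=ext4
crm configure colocation col_fs_on_drbd inf: p_fs_data ms_drbd:Master
crm configure order ord_fs_after_drbd inf: ms_drbd:promote p_fs_data:start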

I will publish an article on this in the near future.
