Tuesday, 25 June 2013

Shrink root(/) partition - LVM

Objective: Shrink root partition using LVM.

Environment: Red Hat 5.0 (32-bit) / CentOS release 6.3 (32-bit).

When I installed my OS a while back, I allocated too much space to the root file system, which left the other file systems short of disk space.

So, in case anyone is facing the same issue, here is a tutorial on shrinking the root partition.

Once your volume group (VG) has free space, you can extend any logical volume (LV) that resides in the same volume group.

Change Plan:
1. Find the current disk usage of the file system.
2. Since the mounted root partition needs to be resized, boot into the rescue environment to get access to the unmounted root volume.
3. Activate the logical volumes from the rescue environment.
4. Shrink the root file system.
5. Reduce the root volume.
6. Reboot the system and bring it online.
7. Check the root disk after resizing.

Technical Implementation:

1. Current disk usage:
tux# df -h
Filesystem            Size  Used Avail Use% Mounted on
                      6.7G  2.0G  4.5G  31% /
/dev/sda1              99M   12M   83M  13% /boot
tmpfs                 379M     0  379M   0% /dev/shm

2. Reboot the system and boot into the rescue environment.

3. In the rescue shell, activate the LVM volumes.
#lvm vgchange -a y

The -a y argument sets the availability to "y" or yes. As no volume group is specified, this command makes all LVM volumes it finds available to the rescue kernel.

Once the kernel is aware of all LVM volumes they will be automatically mapped as devices. These are usually located under /dev/VolGroup.

#ls /dev/VolGroup00/
LogVol00  LogVol01

Run a forced file system check on the root volume before resizing:

#e2fsck -f /dev/VolGroup00/LogVol00

4. After the check, resize2fs can be used to shrink the file system on the LVM volume. Here I shrink the root file system to 1 GB.

#resize2fs -f /dev/VolGroup00/LogVol00 1G

The e2fsck check can be run again to make sure the now significantly smaller root file system is OK.

5. Reduce the root volume.
#lvm lvreduce -L 1G /dev/VolGroup00/LogVol00

The lvreduce command reduces the size of LVM volumes. The -L option gives the new size explicitly: if the value does not begin with a minus sign, it is taken as the absolute new size of the volume; with a leading minus sign, it is the amount to subtract from the current size.

6. Remove CD/DVD and reboot the system to bring it online.

7. Check root partition after reboot.
tux# df -h
Filesystem            Size  Used Avail Use% Mounted on
                      5.9G  2.0G  3.7G  35% /
/dev/sda1              99M   12M   83M  13% /boot
tmpfs                 379M     0  379M   0% /dev/shm

The root partition was shrunk successfully.

Objective successful.

Root login on Oracle Solaris 11 / OpenIndiana - 151

Once the installation is completed, you will not be able to log in directly to the console as the user "root".

By default, root is configured as a role rather than a normal user, so it must be converted to a normal user account.

Below are the few steps needed to enable root login.

1. #vim /etc/ssh/sshd_config [ change parameter to yes ]
      PermitRootLogin yes

2. #vim /etc/default/login  [ Comment the line below ]
      #CONSOLE=/dev/console

3. Remove ";type=role" from the root entry in /etc/user_attr, or use the command below:
       #rolemod -K type=normal root

4.  Restart service 
       #svcadm restart svc:/network/ssh:default

Monday, 17 June 2013

ISO - Mount/Unmount

1. Mount the CD/DVD to the file system and copy files.
2. Create an ISO image from the CD/DVD.
3. Loop-mount the created ISO image as a file system.

Environment: CentOS release 6.3 (32-bit).


- Mount the CD/DVD to the file system, then copy files.
# mount -r -t iso9660 /dev/sr0 /mnt
# cd /mnt
# du -sh .
3.5G    CentOS_6.3/

- Create an ISO image from the CD/DVD.
WARNING: Check the free disk space before using below.
# dd if=/dev/sr0 of=/root/centos6.3.iso
# ls -l  /root/centos6.3.iso
-rw-r--r-- 1 root root 3713204224 Jun 17 10:04 centos6.3.iso

- A local ISO image created on the system can be loop-mounted to the file system.
#mount -o loop -t iso9660 centos6.3.iso /mnt/isoimage/

tux:/mnt/isoimage]# du -sh .
3.5G    .

Since my colleagues faced difficulty in creating local/remote repositories, I thought I should write this, as ISO mounting plays a crucial role in their creation.

I will post how to create a local/remote Red Hat/CentOS repository in the coming days.

Logical Volume Snapshots

Objective:  Creating and restoring manual logical volume snapshots.

Environment: CentOS release 6.3 (32-bit).

With an LV snapshot you are able to freeze your logical volumes. In other words, you can easily back up and roll back to the original logical volume state. This is similar to VMware, where you can take a snapshot of a VM and revert in case anything goes wrong.

The concept of a snapshot is much like a symbolic link: you don't copy the data, you only reference it. The two essential parts are:

1. Metadata.

2. Data blocks.

When a snapshot is created, the Logical Volume Manager simply copies all the metadata pointers to a separate logical volume. The snapshot volume only starts to grow once you begin altering data on the original logical volume (copy-on-write).

Since my server was not set up using LVM, I created two partitions and changed their type to Linux LVM.

Change plan:
1. Create two partitions on /dev/sda drive.
2. Create physical volumes on the two drives.
3. Create volume group.
4. Create a single logical volume using ext3 file system.
5. Take a snapshot and remove data.
6. Roll back the logical volume snapshot.

Technical Implementation:

1. Created two physical partitions:

tux#fdisk -l | tail -2
/dev/sda5            2305        2366      497983+  8e  Linux LVM
/dev/sda6            2367        2428      497983+  8e  Linux LVM

2. Created Physical Volumes:

tux#pvcreate /dev/sda[5-6]
  Physical volume "/dev/sda5" successfully created
  Physical volume "/dev/sda6" successfully created

3. Created volume groups:
tux#vgcreate vol_grp /dev/sda5 /dev/sda6
  /dev/hdc: open failed: No medium found
  Volume group "vol_grp" successfully created

4. Created a single logical volume of 100 MB with an ext3 file system.

tux#lvcreate -L 100M -n lv1 vol_grp
  Logical volume "lv1" created

tux#mkfs.ext3 /dev/vol_grp/lv1
mke2fs 1.39 (29-May-2006)

Finally, we have come to the point where we can take a snapshot of our logical volume. For this we will also need some sample data on the logical volume, so that once we revert from the snapshot we can confirm the entire process by comparing the original data with the data recovered from the snapshot.

Create a mount directory for the logical volume and mount it.

tux#mkdir /mnt/lv1
tux#mount /dev/vol_grp/lv1 /mnt/lv1

tux:/mnt/lv1#cp -r /sbin/ .
tux:/mnt/lv1#cp -r /bin/ .
tux:/mnt/lv1#du -s .
39312   .

5. Creating the LV snapshot.

tux:/mnt/lv1]#lvcreate -s -L 30M -n lv1_snapshot /dev/vol_grp/lv1
  Rounding up size to full physical extent 32.00 MB
  Logical volume "lv1_snapshot" created

Execute lvs to confirm that the new snapshot volume has been created:

  LV           VG      Attr   LSize   Origin Snap%  Move Log Copy%
  lv1          vol_grp owi-ao 100.00M
  lv1_snapshot vol_grp swi-a-  32.00M lv1      0.07

Now that the snapshot has been created, you can alter the data on the logical volume.

tux:/mnt/lv1]#rm -rf ./bin/
tux:/mnt/lv1]#rm -rf ./sbin/
tux:/mnt/lv1]#du -s .
13      .

6. Roll back the logical volume snapshot.

tux:/mnt/lv1]# lvconvert --merge /dev/vol_grp/lv1_snapshot
  Can't merge over open origin volume
  Merging of snapshot lv1_snapshot will start next activation.

tux#umount /mnt/lv1

Deactivate and reactivate your volume:

tux# lvchange -a n /dev/vol_grp/lv1
tux# lvchange -a y /dev/vol_grp/lv1

As a last step, mount your logical volume "lv1" again and confirm that all data has been recovered:

tux# mount /dev/vol_grp/lv1 /mnt/lv1
tux:/mnt/lv1]# ls
bin  lost+found  sbin
tux# du -s /mnt/lv1/
39312   .

The snapshot was rolled back.

Objective successful 

Sunday, 9 June 2013

SSH too long to connect ;(

In my experience, some remote systems take too long to accept an SSH connection.

I had this issue on a Red Hat 5.0 32-bit server, kernel 2.6.18, with OpenSSH version 4.3.

Below steps solved my problem.

1. Try connecting to your server with the verbose option (ssh -v). If you see any GSSAPI failures, turn GSSAPI authentication off in /etc/ssh/sshd_config: either comment the line out or set the GSSAPIAuthentication parameter to no.

With GSSAPI authentication disabled, the password prompt appears quickly.

2. If, after entering the password, there is still a long delay before you get a shell, sshd is performing a reverse DNS lookup on your client.

Edit sshd_config and append "UseDNS no".

Any change made to sshd_config requires a service restart.

Once the service is restarted, the problem should be solved.