ZFS is a file system that fundamentally changes the way file systems are administered, with features and benefits not found in other file systems available today. ZFS is robust, scalable, and easy to administer. ZFS uses the concept of storage pools to manage physical storage and eliminates volume management altogether: instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool. File systems are no longer constrained to individual devices; they share disk space with all other file systems in the pool. You no longer need to predetermine the size of a file system, because file systems grow automatically within the disk space allocated to the storage pool. When new storage is added, all file systems within the pool can immediately use the additional disk space without additional work.
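As a minimal sketch (the device names below are placeholders; substitute real disks on your system), two file systems created in one pool immediately share its free space:

```sh
# Create a mirrored pool; ZFS itself handles the volume layer.
zpool create testpool mirror c0t0d0 c0t0d1

# Create two file systems; neither needs a predetermined size.
zfs create testpool/home
zfs create testpool/projects

# Both file systems draw from the same pooled free space.
zfs list -o name,used,avail,mountpoint
```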
zpool commands | Description
--- | ---
zpool create testpool c0t0d0 | Create a simple pool named testpool from a single disk, with the default mount point /testpool. Optional: -n performs a dry run of pool creation; -f forces creation of the pool
zpool create testpool mirror c0t0d0 c0t0d1 | Create testpool as a mirror of c0t0d0 and c0t0d1, with the default mount point /testpool
zpool create -m /mypool testpool c0t0d0 | Create the pool with a mount point (/mypool) other than the default
zpool create testpool raidz c2t1d0 c2t2d0 c2t3d0 | Create RAID-Z testpool |
zpool add testpool raidz c2t4d0 c2t5d0 c2t6d0 | Add a second RAID-Z vdev of three disks to testpool
zpool create testpool raidz1 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 | Create testpool with a single-parity RAID-Z (raidz1) vdev
zpool create testpool raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 | Create testpool with a double-parity RAID-Z (raidz2) vdev
zpool add testpool spare c2t6d0 | Add spare device to the testpool |
zpool create testpool mirror c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0 | Create a pool of two mirrors (RAID 1+0): c2t1d0 mirrored with c2t2d0, and c2t3d0 mirrored with c2t4d0
zpool remove testpool c2t6d0 | Remove the device from the pool (applies to hot spares and cache devices)
zpool detach testpool c2t4d0 | Detach the disk from its mirror
zpool clear testpool c2t4d0 | Clear the error counts for a specific disk
zpool replace testpool c3t4d0 | Replace the disk like for like (after the physical disk at that location has been swapped)
zpool replace testpool c3t4d0 c3t5d0 | Replace disk c3t4d0 with c3t5d0
zpool export testpool | Export the pool from the system |
zpool import testpool | Import a specific pool
zpool import -f -D testpool | Import the destroyed testpool (-D includes destroyed pools; -f forces the import)
zpool import testpool newtestpool | Import a pool originally named testpool under new name newtestpool |
zpool import 88746667466648 | Import pool using ID |
zpool offline testpool c2t4d0 | Take the disk offline in the pool. Note: zpool offline -t testpool c2t4d0 takes it offline temporarily (until the next reboot)
zpool upgrade -a | Upgrade all pools to the latest pool version
zpool upgrade testpool | Upgrade a specific pool
zpool status -x | Show the health of all pools; only pools with problems are described in detail
zpool status testpool | Show detailed status of the pool (add -v for verbose error information)
zpool get all testpool | List all the properties of the storage pool
zpool set autoexpand=on testpool | Set a property value on the storage pool. Note: zpool get all testpool lists every property that can be set
zpool list | Lists all pools |
zpool list -o name,size,altroot | List pools, showing only the specified properties
zpool history | Display the command history of the pool. Note: once the pool is destroyed, its history is removed
zpool iostat 2 2 | Display ZFS I/O statistics, sampling every 2 seconds for 2 iterations
zpool destroy testpool | Removes the storage pool |
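The zpool commands above compose into routine maintenance flows. Below is a sketch of one such flow, reusing the placeholder disk names from the table:

```sh
# Build a pool of two mirrored pairs plus a hot spare.
zpool create testpool mirror c2t1d0 c2t2d0 mirror c2t3d0 c2t4d0
zpool add testpool spare c2t5d0

# Check health, replace a failing disk, then clear its error counts.
zpool status -x
zpool replace testpool c2t4d0 c2t6d0
zpool clear testpool c2t6d0

# Move the pool to another host: export here, import there.
zpool export testpool
zpool import testpool
```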
zfs commands | Description
--- | ---
zfs list | List ZFS datasets. Filter by type with zfs list -t filesystem, zfs list -t snapshot, or zfs list -t volume
zfs create testpool/filesystem1 | Creates ZFS filesystem on testpool storage |
zfs create -o mountpoint=/filesystem1 testpool/filesystem1 | Create the file system with a non-default mount point (/filesystem1)
zfs rename testpool/filesystem1 testpool/filesystem2 | Renames the ZFS filesystem |
zfs unmount testpool | Unmount the pool's top-level file system
zfs mount testpool | Mount the pool's top-level file system
NFS exports in ZFS | zfs share testpool shares the file system over NFS; zfs set share.nfs=on testpool makes the share persistent across reboots; svcs nfs/server verifies the NFS server is online; cat /etc/dfs/dfstab shows the exported entry; showmount -e confirms the storage pool has been exported
zfs unshare testpool | Remove NFS exports |
zfs destroy -r testpool/filesystem1 | Destroy the file system and all datasets beneath it recursively (use zpool destroy to remove the pool itself)
zfs set quota=1G testpool/filesystem1 | Set a quota of 1 GB on filesystem1
zfs set reservation=1G testpool/filesystem1 | Set a reservation of 1 GB on filesystem1
zfs set mountpoint=legacy testpool/filesystem1 | Disable ZFS automatic mounting and manage the mount through /etc/vfstab
zfs unmount testpool/filesystem1 | Unmount ZFS filesystem1 in testpool
zfs mount testpool/filesystem1 | Mount ZFS filesystem1 in testpool
zfs mount -a | Mount all ZFS file systems
zfs snapshot testpool/filesystem1@friday | Creates a snapshot of the filesystem1 |
zfs hold keep testpool/filesystem1@friday | Place a hold named keep on the snapshot; attempts to destroy it with zfs destroy will fail
zfs rename testpool/filesystem1@friday FRIDAY | Rename the snapshot (to testpool/filesystem1@FRIDAY). Note: snapshots must stay within the pool and dataset where they were created
zfs diff testpool/filesystem1@friday testpool/filesystem1@friday1 | Identify the difference between two snapshots |
zfs holds testpool/filesystem1@friday | List the holds on the snapshot
zfs rollback -r testpool/filesystem1@friday | Roll back to the friday snapshot; -r also destroys any snapshots taken after it
zfs destroy testpool/filesystem1@thursday | Destroy the thursday snapshot
zfs clone testpool/filesystem1@friday testpool/clones/friday | Create a clone from the friday snapshot. Note: a clone must be created in the same pool as the original snapshot
zfs destroy testpool/clones/friday | Destroy the clone
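Putting the snapshot commands together, here is a sketch of a snapshot, compare, and rollback cycle, reusing the dataset names from the table:

```sh
# Snapshot the file system and protect the snapshot with a hold.
zfs snapshot testpool/filesystem1@friday
zfs hold keep testpool/filesystem1@friday

# Take a later snapshot and compare the two.
zfs snapshot testpool/filesystem1@saturday
zfs diff testpool/filesystem1@friday testpool/filesystem1@saturday

# Roll back to friday; -r destroys the newer saturday snapshot.
zfs rollback -r testpool/filesystem1@friday

# Release the hold so the snapshot can be destroyed.
zfs release keep testpool/filesystem1@friday
zfs destroy testpool/filesystem1@friday
```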