ZFS Pool design

I’m in the process of moving my ZFS pool of 6 x 4TB disks to a new server, so I'm taking a moment to re-design it.

Design

ZFS provides redundancy through the concept of a 'vdev'. Each redundant vdev combines two or more disks (as a mirror or a RAIDZ group), and multiple vdevs are then striped together for larger capacity.

vdev types

  • Mirror
    2 disks written with identical information; can also be 3 or more disks.
    Capacity is the size of one disk, so a 2-disk mirror loses 50% of the raw storage.
  • RAIDZ1
    Requires a minimum of 3 disks, with one disk's worth of capacity used for parity. Survives the loss of one disk.
  • RAIDZ2
    Requires a minimum of 4 disks, with two disks' worth of capacity used for parity. Survives the loss of any two disks.
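
For illustration, here's roughly how each vdev type would be created from the command line; the pool name 'tank' and disk names da0-da3 are placeholders, not my actual devices:

# 2-way mirror: usable capacity of one disk, survives one disk failure
zpool create tank mirror da0 da1

# RAIDZ1: usable capacity of (n-1) disks, survives one disk failure
zpool create tank raidz1 da0 da1 da2

# RAIDZ2: usable capacity of (n-2) disks, survives any two disk failures
zpool create tank raidz2 da0 da1 da2 da3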

I exported the pool from the old server, moved the physical disks, re-imported bigpool as oldpool, and then added NVMe cache and log devices.
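
On plain ZFS the sequence looks roughly like this (FreeNAS actually records devices by gptid label, and the NVMe partition names below are placeholders):

# on the old server
zpool export bigpool

# on the new server: import the pool under its new name
zpool import bigpool oldpool

# add the NVMe log (SLOG) and cache (L2ARC) devices
zpool add oldpool log nvd0p1
zpool add oldpool cache nvd0p2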

The pool is composed of three 2-disk mirrors. This layout has very little CPU overhead, since there is no parity to compute, and reads can be served from either disk in each mirror.

Note that on FreeNAS most ZFS operations must be done via the GUI, otherwise the re-configuration doesn't show up in the GUI.

My ‘oldpool’ now looks like this:

root@freenas[~]# zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot    29G  1022M  28.0G        -         -     0%     3%  1.00x  ONLINE  -
oldpool       10.9T  6.14T  4.73T        -         -    38%    56%  1.00x  ONLINE  /mnt


root@freenas[~]# zpool status
 pool: oldpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 09:59:03 with 0 errors on Sun Mar 29 01:11:43 2020
config:

	NAME                                            STATE     READ WRITE CKSUM
	oldpool                                         ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/9589efdd-1392-11e9-ae38-00221992d64d  ONLINE       0     0     0
	    gptid/dfdcd37a-147a-11e9-ae38-00221992d64d  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/c72c07c4-2085-11e7-b172-00221992d64d  ONLINE       0     0     0
	    gptid/6e147570-04e0-11e7-b8fe-00221992d64d  ONLINE       0     0     0
	  mirror-2                                      ONLINE       0     0     0
	    gptid/f0ae8899-6092-11e9-9f3c-00221992d64d  ONLINE       0     0     0
	    gptid/09569e56-6114-11e9-9f3c-00221992d64d  ONLINE       0     0     0
	logs
	  gptid/c5b5fc86-71e7-11ea-ab8e-a0369f7f2d9c    ONLINE       0     0     0
	cache
	  gptid/ab62afd0-71e7-11ea-ab8e-a0369f7f2d9c    ONLINE       0     0     0

errors: No known data errors


I still had two 1.5TB drives and extra drive bays, so I created a second, smaller mirrored pool.
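
The equivalent command-line sketch, with placeholder drive names:

# two 1.5TB drives as a single mirror vdev
zpool create bigpool mirror ada6 ada7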

root@freenas[~]# zpool status -v
  pool: bigpool
 state: ONLINE
  scan: scrub repaired 0 in 0 days 02:46:01 with 0 errors on Sat Mar 28 18:00:04 2020
config:

	NAME                                            STATE     READ WRITE CKSUM
	bigpool                                         ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/d5246cd1-64d3-11ea-8356-002590f435be  ONLINE       0     0     0
	    gptid/d7154e77-64d3-1…                      ONLINE       0     0     0

root@freenas[~]# zpool status -x
all pools are healthy

This pool has no cache or slog devices. I'm primarily going to be using this for Plex media streaming. Since Plex streams are mostly large sequential reads, an L2ARC cache or SLOG wouldn't add much here.
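
If that ever changes, cache and log vdevs can be added to (and removed from) a live pool at any time. A minimal sketch, assuming a spare NVMe partition named nvd1p1 (a placeholder, not one of my devices):

# add an L2ARC read cache later if the workload turns out to benefit
zpool add bigpool cache nvd1p1

# cache (and log) vdevs can be removed again without affecting the data
zpool remove bigpool nvd1p1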
