ASRock FM2A88M Pro3+ - FreeNAS

Hardware info on main page.

History

2024-05-04: from /var/log/messages

May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): CAM status: Command timeout
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): Retrying command
May  4 20:51:19 kg-f6 ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
May  4 20:51:19 kg-f6 ada1: <ST4000LM016-1N2170 0003> s/n W801LQCD detached
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): SETFEATURES ENABLE RCACHE. ACB: ef aa 00 00 00 40 00 00 00 00 00 00
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): CAM status: Command timeout
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): Error 5, Periph was invalidated
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): SETFEATURES ENABLE WCACHE. ACB: ef 02 00 00 00 40 00 00 00 00 00 00
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): CAM status: Command timeout
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): Error 5, Periph was invalidated
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 18 a8 f3 8c 40 25 01 00 00 00 00
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): CAM status: Command timeout
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): Error 5, Periph was invalidated
May  4 20:51:19 kg-f6 GEOM_MIRROR: Device swap2: provider ada1p1 disconnected.
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): Periph destroyed

so ada1 is gone.
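A quick way to tell whether a drive like ada1 was flapping or failed outright is to tally the CAM timeout events per device. A minimal sketch; the sample file just reuses lines from the log excerpts in these notes, and on the live box you would point awk at /var/log/messages itself:

```shell
# Count "CAM status: Command timeout" events per device.
# /tmp/messages.sample is an illustrative stand-in for /var/log/messages.
cat > /tmp/messages.sample <<'EOF'
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): CAM status: Command timeout
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): Retrying command
May  4 20:51:19 kg-f6 (ada1:ahcich1:0:0:0): CAM status: Command timeout
Dec 29 21:48:46 kg-f6 (ada5:ata0:0:1:0): CAM status: Command timeout
EOF
# Split on "(" and ":" so the device name (ada1, ada5, ...) is field 4.
out=$(awk -F'[(:]' '/CAM status: Command timeout/ { n[$4]++ }
                    END { for (d in n) print d, n[d] }' /tmp/messages.sample | sort)
printf '%s\n' "$out"
```

A single timeout followed by a successful retry is often a cable or controller hiccup; a burst ending in "Periph destroyed", as above, means the drive dropped off the bus.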

2024-04-09: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:28 with 0 errors on Tue Apr  9 03:46:28 2024
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2024-03-30: clear error on freenas-boot

tingo@kg-f6$ sudo zpool clear freenas-boot

check

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sun Mar 24 03:46:27 2024
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2024-03-24: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sun Mar 24 03:46:27 2024
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     2

errors: No known data errors

2024-03-18: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 7.74M in 1 days 06:12:35 with 0 errors on Mon Mar 18 06:12:38 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/6fb3d12c-acb3-11ee-99e4-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2024-02-13: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:30 with 0 errors on Tue Feb 13 03:46:30 2024
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2024-02-05: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 24.6M in 1 days 06:35:31 with 0 errors on Mon Feb  5 06:35:43 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/6fb3d12c-acb3-11ee-99e4-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2024-01-28: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sun Jan 28 03:46:27 2024
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2024-01-20: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:30 with 0 errors on Sat Jan 20 03:46:30 2024
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2024-01-13: zpool z6 is finally resilvered

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: resilvered 3.26T in 6 days 18:21:08 with 0 errors on Sat Jan 13 12:10:47 2024
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/6fb3d12c-acb3-11ee-99e4-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2024-01-12: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:33 with 0 errors on Fri Jan 12 03:46:33 2024
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2024-01-06: sudo - apply the NOPASSWD fix again (using sudo visudo)

tingo@kg-f6$ cat /usr/local/etc/sudoers | tail -1
%wheel ALL=(ALL) NOPASSWD: ALL

2024-01-06: zpool status for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Jan  6 17:49:39 2024
    515G scanned at 1.04G/s, 6.70G issued at 13.9M/s, 28.1T total
    0 resilvered, 0.02% done, 24 days 12:49:21 to go
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/6fb3d12c-acb3-11ee-99e4-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors
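The scary "24 days to go" estimate follows directly from remaining data over the issue rate, not the (much faster) scan rate. A quick sanity check of that arithmetic, using the figures from the status output above and assuming binary units:

```shell
# ETA = (total - issued) / issue rate, figures from the zpool status above.
eta=$(awk 'BEGIN {
  total_mib  = 28.1 * 1024 * 1024   # 28.1T total
  issued_mib = 6.70 * 1024          # 6.70G issued so far
  rate_mib_s = 13.9                 # 13.9M/s issue rate
  secs = (total_mib - issued_mib) / rate_mib_s
  printf "%.1f days", secs / 86400
}')
echo "$eta"
```

This lands at roughly 24.5 days, matching the reported "24 days 12:49:21 to go" within rounding; the early estimate is pessimistic because the issue rate climbs once the scan phase settles (the resilver in fact finished in under 7 days).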

2024-01-06: after the reboot, the new drive shows up as ada5 - expected.

tingo@kg-f6$ sudo camcontrol devlist
<ST4000LM024-2U817V 0001>          at scbus0 target 0 lun 0 (ada0,pass0)
<ST4000LM016-1N2170 0003>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST4000LM016-1N2170 0003>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST4000LM016-1N2170 0003>          at scbus3 target 0 lun 0 (ada3,pass3)
<ST4000LM016-1N2170 0003>          at scbus4 target 0 lun 0 (ada4,pass4)
<ST4000LM024-2AN17V 0001>          at scbus4 target 1 lun 0 (ada5,pass5)
<ST4000LM016-1N2170 0003>          at scbus5 target 0 lun 0 (ada6,pass6)
<ST4000LM016-1N2170 0003>          at scbus5 target 1 lun 0 (ada7,pass7)
<SanDisk Cruzer Fit 1.27>          at scbus7 target 0 lun 0 (pass8,da0)

smartctl confirms that it is the right drive

tingo@kg-f6$ sudo smartctl -i /dev/ada5
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 2.5 5400
Device Model:     ST4000LM024-2AN17V
Serial Number:    WTL0F4XX
LU WWN Device Id: 5 000c50 0f193456e
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5526 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan  6 17:42:46 2024 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

and gpart confirms that it is clean (no existing partition table)

tingo@kg-f6$ gpart show -p ada5
gpart: No such geom: ada5.

back to the FreeNAS GUI, and replace the drive (from Volume Status). After a while the operation completes.

2024-01-06: I offlined the drive (ada5) in the FreeNAS GUI (Volume Status), then I pulled the drive and physically replaced it. As expected, the replacement drive did not show up in sudo camcontrol devlist, and a sudo camcontrol rescan all didn't help either. So I rebooted the machine again.

2024-01-06: I rebooted the machine, now all disk drives are back

tingo@kg-f6$ sudo camcontrol devlist
<ST4000LM024-2U817V 0001>          at scbus0 target 0 lun 0 (ada0,pass0)
<ST4000LM016-1N2170 0003>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST4000LM016-1N2170 0003>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST4000LM016-1N2170 0003>          at scbus3 target 0 lun 0 (ada3,pass3)
<ST4000LM016-1N2170 0003>          at scbus4 target 0 lun 0 (ada4,pass4)
<ST4000LM016-1N2170 0003>          at scbus4 target 1 lun 0 (ada5,pass5)
<ST4000LM016-1N2170 0003>          at scbus5 target 0 lun 0 (ada6,pass6)
<ST4000LM016-1N2170 0003>          at scbus5 target 1 lun 0 (ada7,pass7)
<SanDisk Cruzer Fit 1.27>          at scbus7 target 0 lun 0 (pass8,da0)

do a smartctl health check on every drive

tingo@kg-f6$ sudo smartctl -H /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

tingo@kg-f6$ sudo smartctl -H /dev/ada1
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

tingo@kg-f6$ sudo smartctl -H /dev/ada2
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

tingo@kg-f6$ sudo smartctl -H /dev/ada3
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

tingo@kg-f6$ sudo smartctl -H /dev/ada4
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

tingo@kg-f6$ sudo smartctl -H /dev/ada5
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Please note the following marginal Attributes:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  7 Seek_Error_Rate         0x000f   060   027   030    Pre-fail  Always   In_the_past 8425006714631

tingo@kg-f6$ sudo smartctl -H /dev/ada6
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

tingo@kg-f6$ sudo smartctl -H /dev/ada7
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

extra check on ada5

tingo@kg-f6$ sudo smartctl -H /dev/ada5
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Please note the following marginal Attributes:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  7 Seek_Error_Rate         0x000f   060   027   030    Pre-fail  Always   In_the_past 8425006720223

tingo@kg-f6$ sudo smartctl -i /dev/ada5
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Laptop HDD
Device Model:     ST4000LM016-1N2170
Serial Number:    W801AJVD
LU WWN Device Id: 5 000c50 09bef1cbf
Firmware Version: 0003
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan  6 17:09:34 2024 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

yes, I should replace it.
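The "marginal" warning means the attribute's WORST value has dipped to or below THRESH at some point (hence "In_the_past"). That condition can be checked mechanically by comparing the WORST and THRESH columns. A sketch; the sample line is copied from the smartctl output above, and on the live system the input would be smartctl -A /dev/ada5:

```shell
# Flag attributes whose WORST has ever dropped below THRESH.
# /tmp/smart.sample stands in for `smartctl -A` output.
cat > /tmp/smart.sample <<'EOF'
  7 Seek_Error_Rate         0x000f   060   027   030    Pre-fail  Always   In_the_past 8425006714631
EOF
# Column 5 is WORST, column 6 is THRESH; "+ 0" forces numeric comparison.
marginal=$(awk '$5 + 0 < $6 + 0 { print $2 }' /tmp/smart.sample)
echo "$marginal"
```

Here WORST (027) is below THRESH (030) on a Pre-fail attribute, which is why replacement is the right call even though the overall self-assessment still says PASSED.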

2023-12-30: check for ada5 - camcontrol

tingo@kg-f6$ sudo camcontrol devlist
<ST4000LM024-2U817V 0001>          at scbus0 target 0 lun 0 (ada0,pass0)
<ST4000LM016-1N2170 0003>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST4000LM016-1N2170 0003>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST4000LM016-1N2170 0003>          at scbus3 target 0 lun 0 (ada3,pass3)
<ST4000LM016-1N2170 0003>          at scbus4 target 0 lun 0 (ada4,pass4)
<ST4000LM016-1N2170 0003>          at scbus5 target 0 lun 0 (ada6,pass6)
<ST4000LM016-1N2170 0003>          at scbus5 target 1 lun 0 (ada7,pass7)
<SanDisk Cruzer Fit 1.27>          at scbus7 target 0 lun 0 (pass8,da0)

zpool status

tingo@kg-f6$ zpool status z6
  pool: z6
 state: DEGRADED
status: One or more devices has been removed by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
  scan: resilvered 19.2M in 0 days 01:01:48 with 0 errors on Fri Dec 29 20:00:38 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              DEGRADED     0     0     0
      raidz3-0                                      DEGRADED     0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        10082317651966857923                        REMOVED      0     0     0  was /dev/gptid/25d23441-9579-11e7-9009-7085c239f419
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

so it should be replaced anyway.

2023-12-29: info about ada5 from /var/log/messages

Dec 29 21:11:16 kg-f6 smartd[2582]: Device: /dev/ada5, 360 Currently unreadable (pending) sectors
Dec 29 21:11:17 kg-f6 smartd[2582]: Device: /dev/ada5, 360 Offline uncorrectable sectors

Dec 29 21:41:23 kg-f6 smartd[2582]: Device: /dev/ada5, 360 Currently unreadable (pending) sectors
Dec 29 21:41:23 kg-f6 smartd[2582]: Device: /dev/ada5, 360 Offline uncorrectable sectors
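smartd repeats the same counters every polling interval, so what matters is the latest count per device and whether it grows. A sketch that pulls the most recent pending-sector figure from messages-style log lines (sample lines inlined from above; point it at /var/log/messages on the real system):

```shell
# Report the latest "Currently unreadable (pending) sectors" count.
cat > /tmp/smartd.sample <<'EOF'
Dec 29 21:11:16 kg-f6 smartd[2582]: Device: /dev/ada5, 360 Currently unreadable (pending) sectors
Dec 29 21:41:23 kg-f6 smartd[2582]: Device: /dev/ada5, 360 Currently unreadable (pending) sectors
EOF
# Field 7 is the device (with a trailing comma), field 8 the sector count;
# the last matching line wins.
pending=$(awk '/Currently unreadable/ { dev=$7; sub(/,$/, "", dev); n=$8 }
               END { print dev, n }' /tmp/smartd.sample)
echo "$pending"
```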

Dec 29 21:48:46 kg-f6 (ada5:ata0:0:1:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
Dec 29 21:48:46 kg-f6 (ada5:ata0:0:1:0): CAM status: Command timeout
Dec 29 21:48:46 kg-f6 (ada5:ata0:0:1:0): Retrying command
Dec 29 21:48:46 kg-f6 ada5 at ata0 bus 0 scbus4 target 1 lun 0
Dec 29 21:48:46 kg-f6 ada5: <ST4000LM016-1N2170 0003> s/n W801AJVD detached
Dec 29 21:48:46 kg-f6 GEOM_MIRROR: Device swap1: provider ada5p1 disconnected.
Dec 29 21:48:47 kg-f6 ZFS: vdev state changed, pool_guid=6633318532419024550 vdev_guid=10082317651966857923
Dec 29 21:48:47 kg-f6 (ada5:ata0:0:1:0): Periph destroyed
Dec 29 21:48:47 kg-f6 ZFS: vdev state changed, pool_guid=6633318532419024550 vdev_guid=10082317651966857923

2023-12-29: startup after replacing the PSU. Everything seems to work. Pool status for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Wed Dec 27 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

pool status for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Dec 29 18:58:50 2023
    200G scanned at 536M/s, 200G issued at 536M/s, 28.1T total
    0 resilvered, 0.70% done, 0 days 15:09:10 to go
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-12-27: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Wed Dec 27 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-12-25: latest status for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: DEGRADED
status: One or more devices has been removed by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
  scan: scrub repaired 864K in 1 days 06:23:52 with 0 errors on Mon Dec 25 06:23:57 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              DEGRADED     0     0     0
      raidz3-0                                      DEGRADED     0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        10082317651966857923                        REMOVED      0     0     0  was /dev/gptid/25d23441-9579-11e7-9009-7085c239f419
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

check the devices

tingo@kg-f6$ sudo camcontrol devlist
<ST4000LM024-2U817V 0001>          at scbus0 target 0 lun 0 (ada0,pass0)
<ST4000LM016-1N2170 0003>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST4000LM016-1N2170 0003>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST4000LM016-1N2170 0003>          at scbus3 target 0 lun 0 (ada3,pass3)
<ST4000LM016-1N2170 0003>          at scbus4 target 0 lun 0 (ada4,pass4)
<ST4000LM016-1N2170 0003>          at scbus5 target 0 lun 0 (ada6,pass6)
<ST4000LM016-1N2170 0003>          at scbus5 target 1 lun 0 (ada7,pass7)
<SanDisk Cruzer Fit 1.27>          at scbus7 target 0 lun 0 (pass8,da0)

ok, only seven drives; ada5 is missing. Checking the log (/var/log/messages), here we go

Dec 24 04:26:02 kg-f6 (ada5:ata0:0:1:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
Dec 24 04:26:02 kg-f6 (ada5:ata0:0:1:0): CAM status: Command timeout
Dec 24 04:26:02 kg-f6 (ada5:ata0:0:1:0): Retrying command
Dec 24 04:26:02 kg-f6 ada5 at ata0 bus 0 scbus4 target 1 lun 0
Dec 24 04:26:02 kg-f6 ada5: <ST4000LM016-1N2170 0003> s/n W801AJVD detached
Dec 24 04:26:03 kg-f6 ZFS: vdev state changed, pool_guid=6633318532419024550 vdev_guid=10082317651966857923
Dec 24 04:26:03 kg-f6 GEOM_MIRROR: Device swap1: provider ada5p1 disconnected.
Dec 24 04:26:03 kg-f6 (ada5:ata0:0:1:0): Periph destroyed
Dec 24 04:26:04 kg-f6 ZFS: vdev state changed, pool_guid=6633318532419024550 vdev_guid=10082317651966857923
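Spotting the missing unit above was done by eyeballing the camcontrol devlist output; with eight drives that comparison can be automated. A sketch with the device lists inlined from the outputs above; on the live system the "present" list would be derived from camcontrol devlist:

```shell
# Diff the expected disk set against what the controller reports.
# Both lists must be sorted for comm(1); /tmp paths are illustrative.
printf '%s\n' ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7 > /tmp/expected
printf '%s\n' ada0 ada1 ada2 ada3 ada4 ada6 ada7       > /tmp/present
# -23 keeps lines unique to the first file: the missing drives.
missing=$(comm -23 /tmp/expected /tmp/present)
echo "missing: $missing"
```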

2023-12-19: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Tue Dec 19 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-11-13: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 864K in 1 days 09:26:23 with 0 errors on Mon Nov 13 09:26:26 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-11-01: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:30 with 0 errors on Wed Nov  1 03:46:30 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-10-17: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:30 with 0 errors on Tue Oct 17 03:46:30 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-10-09: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 5.47M in 1 days 08:55:14 with 0 errors on Mon Oct  9 08:55:16 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-10-01: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:31 with 0 errors on Sun Oct  1 03:46:31 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-09-23: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sat Sep 23 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-09-07: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:31 with 0 errors on Thu Sep  7 03:46:31 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-08-30: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:28 with 0 errors on Wed Aug 30 03:46:28 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-08-28: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 7.31M in 1 days 09:00:16 with 0 errors on Mon Aug 28 09:00:18 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-07-29: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sat Jul 29 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-07-21: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Fri Jul 21 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-07-19: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: resilvered 320K in 0 days 00:00:02 with 0 errors on Wed Jul 19 06:48:57 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-07-05: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Wed Jul  5 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-07-04: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:29:19 with 0 errors on Sun Jun  4 14:29:23 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-06-27: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:31 with 0 errors on Tue Jun 27 03:46:31 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-06-04: latest scrub for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:29:19 with 0 errors on Sun Jun  4 14:29:23 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-06-03: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:30 with 0 errors on Sat Jun  3 03:46:30 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-05-27: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0 days 00:02:33 with 0 errors on Fri May 26 03:47:33 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     6

errors: No known data errors

do a zpool clear

tingo@kg-f6$ sudo zpool clear freenas-boot
tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:02:33 with 0 errors on Fri May 26 03:47:33 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

good.

2023-04-24: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:31 with 0 errors on Mon Apr 24 03:46:31 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-04-23: latest scrub for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 25.6M in 0 days 15:08:28 with 0 errors on Sun Apr 23 15:08:30 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-04-16: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sun Apr 16 03:46:27 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-03-16: the sudo nopasswd fix had to be re-applied since the machine was powered down (sudo visudo).

tingo@kg-f6$ cat /usr/local/etc/sudoers | tail -1
%wheel ALL=(ALL) NOPASSWD: ALL

it works better if this is the last line of the file; sudoers applies the last matching entry, so a later conflicting rule can otherwise override it.
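
since a typo in sudoers can lock you out of sudo, a syntax check after editing is cheap insurance. A sketch, assuming the sudoers path shown above:

```shell
# check-only mode against the file FreeNAS's sudo uses;
# exits non-zero and prints a diagnostic on a syntax error
sudo visudo -c -f /usr/local/etc/sudoers
```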

2023-03-16: I powered down the machine and took out all the drives to verify their placement (by reading the serial numbers off the drives). After re-inserting the last drive and powering on the machine, I now see

tingo@kg-f6$ sudo camcontrol devlist
<ST4000LM024-2U817V 0001>          at scbus0 target 0 lun 0 (ada0,pass0)
<ST4000LM016-1N2170 0003>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST4000LM016-1N2170 0003>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST4000LM016-1N2170 0003>          at scbus3 target 0 lun 0 (ada3,pass3)
<ST4000LM016-1N2170 0003>          at scbus4 target 0 lun 0 (ada4,pass4)
<ST4000LM016-1N2170 0003>          at scbus4 target 1 lun 0 (ada5,pass5)
<ST4000LM016-1N2170 0003>          at scbus5 target 0 lun 0 (ada6,pass6)
<ST4000LM016-1N2170 0003>          at scbus5 target 1 lun 0 (ada7,pass7)
<SanDisk Cruzer Fit 1.27>          at scbus7 target 0 lun 0 (pass8,da0)

interesting.
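
pulling drives to read the labels could presumably be avoided next time; FreeBSD exposes the serial number as the `ident` field of `geom disk list`. A sketch (the ada0..ada7 names are taken from the devlist above):

```shell
# print the serial (ident) for each whole disk
for d in ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7; do
    printf '%s: ' "$d"
    geom disk list "$d" | awk '/ident:/ { print $2 }'
done
```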

tingo@kg-f6$ ls -l /dev/ada*
crw-r-----  1 root  operator  0x71 Mar 16 21:50 /dev/ada0
crw-r-----  1 root  operator  0x78 Mar 16 21:50 /dev/ada0p1
crw-r-----  1 root  operator  0x79 Mar 16 21:50 /dev/ada0p2
crw-r-----  1 root  operator  0x72 Mar 16 21:50 /dev/ada1
crw-r-----  1 root  operator  0x7a Mar 16 21:50 /dev/ada1p1
crw-r-----  1 root  operator  0x7b Mar 16 21:50 /dev/ada1p2
crw-r-----  1 root  operator  0x73 Mar 16 21:50 /dev/ada2
crw-r-----  1 root  operator  0x7c Mar 16 21:50 /dev/ada2p1
crw-r-----  1 root  operator  0x7d Mar 16 21:50 /dev/ada2p2
crw-r-----  1 root  operator  0x77 Mar 16 21:50 /dev/ada3
crw-r-----  1 root  operator  0x82 Mar 16 21:50 /dev/ada3p1
crw-r-----  1 root  operator  0x83 Mar 16 21:50 /dev/ada3p2
crw-r-----  1 root  operator  0x7e Mar 16 21:50 /dev/ada4
crw-r-----  1 root  operator  0x8a Mar 16 21:50 /dev/ada4p1
crw-r-----  1 root  operator  0x8b Mar 16 21:50 /dev/ada4p2
crw-r-----  1 root  operator  0x7f Mar 16 21:50 /dev/ada5
crw-r-----  1 root  operator  0x8c Mar 16 21:50 /dev/ada5p1
crw-r-----  1 root  operator  0x8d Mar 16 21:50 /dev/ada5p2
crw-r-----  1 root  operator  0x80 Mar 16 21:50 /dev/ada6
crw-r-----  1 root  operator  0x8e Mar 16 21:50 /dev/ada6p1
crw-r-----  1 root  operator  0x8f Mar 16 21:50 /dev/ada6p2
crw-r-----  1 root  operator  0x81 Mar 16 21:50 /dev/ada7
crw-r-----  1 root  operator  0x90 Mar 16 21:50 /dev/ada7p1
crw-r-----  1 root  operator  0x91 Mar 16 21:50 /dev/ada7p2

check gptid

tingo@kg-f6$ ls -l /dev/gptid
total 0
crw-r-----  1 root  operator  0x87 Mar 16 21:50 2226f441-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x89 Mar 16 21:50 231416ea-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x93 Mar 16 21:50 23fdb526-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x95 Mar 16 21:50 24edb679-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x97 Mar 16 21:50 25d23441-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x99 Mar 16 21:50 26bf7deb-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x9b Mar 16 21:50 27aac2e7-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x85 Mar 16 21:50 ab074aeb-3691-11ed-aacb-7085c239f419
crw-r-----  1 root  operator  0x75 Mar 16 21:50 dc877909-95ad-11e7-b897-7085c239f419

yes, they are there too.

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: resilvered 21.8M in 0 days 00:00:41 with 0 errors on Thu Mar 16 21:53:16 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     1
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

clear and check

tingo@kg-f6$ sudo zpool clear z6
tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: resilvered 21.8M in 0 days 00:00:41 with 0 errors on Thu Mar 16 21:53:16 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

good. We shall see how long it lasts.

2023-03-16: interesting status for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: DEGRADED
status: One or more devices has been removed by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
  scan: scrub repaired 0 in 0 days 17:06:47 with 0 errors on Sun Mar 12 17:06:50 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              DEGRADED     0     0     0
      raidz3-0                                      DEGRADED     0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        2925362242950686323                         REMOVED      0     0     0  was /dev/gptid/24edb679-9579-11e7-9009-7085c239f419
        10082317651966857923                        REMOVED      0     0     0  was /dev/gptid/25d23441-9579-11e7-9009-7085c239f419
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

let us see what we got

tingo@kg-f6$ ls -l /dev/gptid/
total 0
crw-r-----  1 root  operator  0x8f Feb  1 10:53 2226f441-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x91 Feb  1 10:53 231416ea-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x93 Feb  1 10:53 23fdb526-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x99 Feb  1 10:53 26bf7deb-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x9b Feb  1 10:53 27aac2e7-9579-11e7-9009-7085c239f419
crw-r-----  1 root  operator  0x8d Feb  1 10:53 ab074aeb-3691-11ed-aacb-7085c239f419
crw-r-----  1 root  operator  0x7a Feb  1 10:53 dc877909-95ad-11e7-b897-7085c239f419

try gptid-to-device, check usage first

tingo@kg-f6$ gptid-to-device
Usage: /mnt/z6/h-tingo/bin/gptid-to-device <gpt>
Examples: /mnt/z6/h-tingo/bin/gptid-to-device e5d96c20-2f6d-11e5-9aed-003048806658
          /mnt/z6/h-tingo/bin/gptid-to-device gptid/e5d96c20-2f6d-11e5-9aed-003048806658

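gptid-to-device is a local script; the mapping it performs can be sketched on top of `glabel status`, whose columns are Name, Status and Components. This is a hypothetical reimplementation, not the actual script:

```shell
# look up the device backing a gptid label in `glabel status` output;
# reads stdin so the parsing is separable from the privileged command
gptid_lookup() {
    # accept either "gptid/UUID" or a bare "UUID"
    awk -v g="gptid/${1#gptid/}" '$1 == g { print $3 }'
}

# usage: glabel status | gptid_lookup 2226f441-9579-11e7-9009-7085c239f419
```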
let's see

tingo@kg-f6$ gptid-to-device 2226f441-9579-11e7-9009-7085c239f419
ada1p2
tingo@kg-f6$ gptid-to-device 231416ea-9579-11e7-9009-7085c239f419
ada2p2
tingo@kg-f6$ gptid-to-device 23fdb526-9579-11e7-9009-7085c239f419
ada3p2
tingo@kg-f6$ gptid-to-device 26bf7deb-9579-11e7-9009-7085c239f419
ada6p2
tingo@kg-f6$ gptid-to-device 27aac2e7-9579-11e7-9009-7085c239f419
ada7p2
tingo@kg-f6$ gptid-to-device ab074aeb-3691-11ed-aacb-7085c239f419
ada0p2
tingo@kg-f6$ gptid-to-device dc877909-95ad-11e7-b897-7085c239f419
da0p1

and verify:

tingo@kg-f6$ ls -l /dev/ada*
crw-r-----  1 root  operator  0x71 Feb  1 10:53 /dev/ada0
crw-r-----  1 root  operator  0x7c Feb  1 10:53 /dev/ada0p1
crw-r-----  1 root  operator  0x7d Feb  1 10:53 /dev/ada0p2
crw-r-----  1 root  operator  0x72 Feb  1 10:53 /dev/ada1
crw-r-----  1 root  operator  0x7e Feb  1 10:53 /dev/ada1p1
crw-r-----  1 root  operator  0x7f Feb  1 10:53 /dev/ada1p2
crw-r-----  1 root  operator  0x73 Feb  1 10:53 /dev/ada2
crw-r-----  1 root  operator  0x80 Feb  1 10:53 /dev/ada2p1
crw-r-----  1 root  operator  0x81 Feb  1 10:53 /dev/ada2p2
crw-r-----  1 root  operator  0x74 Feb  1 10:53 /dev/ada3
crw-r-----  1 root  operator  0x82 Feb  1 10:53 /dev/ada3p1
crw-r-----  1 root  operator  0x83 Feb  1 10:53 /dev/ada3p2
crw-r-----  1 root  operator  0x77 Feb  1 10:53 /dev/ada6
crw-r-----  1 root  operator  0x88 Feb  1 10:53 /dev/ada6p1
crw-r-----  1 root  operator  0x89 Feb  1 10:53 /dev/ada6p2
crw-r-----  1 root  operator  0x78 Feb  1 10:53 /dev/ada7
crw-r-----  1 root  operator  0x8a Feb  1 10:53 /dev/ada7p1
crw-r-----  1 root  operator  0x8b Feb  1 10:53 /dev/ada7p2

yes, ada4 and ada5 are missing, and gptid-to-device can't find them either:

tingo@kg-f6$ gptid-to-device 24edb679-9579-11e7-9009-7085c239f419
tingo@kg-f6$ gptid-to-device 25d23441-9579-11e7-9009-7085c239f419

not unexpected. From the log file

Feb 27 09:25:06 kg-f6 ahcich2: Timeout on slot 24 port 0
Feb 27 09:25:06 kg-f6 ahcich2: is 00000000 cs 00000000 ss 01000000 rs 01000000 tfd 40 serr 00000000 cmd 0000f817
Feb 27 09:25:06 kg-f6 (ada2:ahcich2:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 00 28 99 40 24 01 00 00 00 00
Feb 27 09:25:06 kg-f6 (ada2:ahcich2:0:0:0): CAM status: Command timeout
Feb 27 09:25:06 kg-f6 (ada2:ahcich2:0:0:0): Retrying command

Feb 24 17:52:39 kg-f6 (ada4:ata0:0:0:0): WRITE_DMA48. ACB: 35 00 10 b5 fd 40 25 01 00 00 08 00
Feb 24 17:52:39 kg-f6 (ada4:ata0:0:0:0): CAM status: Command timeout
Feb 24 17:52:39 kg-f6 (ada4:ata0:0:0:0): Retrying command
Feb 24 17:52:39 kg-f6 ada5 at ata0 bus 0 scbus4 target 1 lun 0
Feb 24 17:52:39 kg-f6 ada5: <ST4000LM016-1N2170 0003> s/n W801AJVD detached
Feb 24 17:52:39 kg-f6 ada4 at ata0 bus 0 scbus4 target 0 lun 0
Feb 24 17:52:39 kg-f6 ada4: <ST4000LM016-1N2170 0003> s/n W801B3XV detached
Feb 24 17:52:39 kg-f6 GEOM_MIRROR: Device swap1: provider ada5p1 disconnected.
Feb 24 17:52:39 kg-f6 GEOM_MIRROR: Device swap1: provider ada4p1 disconnected.
Feb 24 17:52:39 kg-f6 GEOM_MIRROR: Device swap1: provider destroyed.
Feb 24 17:52:39 kg-f6 GEOM_MIRROR: Device swap1 destroyed.
Feb 24 17:52:39 kg-f6 GEOM_ELI: Device mirror/swap1.eli destroyed.
Feb 24 17:52:39 kg-f6 GEOM_ELI: Detached mirror/swap1.eli on last close.
Feb 24 17:52:39 kg-f6 (ada4:ata0:0:0:0): Periph destroyed
Feb 24 17:52:39 kg-f6 (ada5:ata0:0:1:0): Periph destroyed
Feb 24 17:51:58 kg-f6 daemon[2873]:     2023/02/24 17:51:58 [WARN] agent: Check 'service:nas-health' is now warning
Feb 24 17:52:37 kg-f6 ZFS: vdev state changed, pool_guid=6633318532419024550 vdev_guid=2925362242950686323

camcontrol

tingo@kg-f6$ sudo camcontrol devlist
<ST4000LM024-2U817V 0001>          at scbus0 target 0 lun 0 (ada0,pass0)
<ST4000LM016-1N2170 0003>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST4000LM016-1N2170 0003>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST4000LM016-1N2170 0003>          at scbus3 target 0 lun 0 (ada3,pass3)
<ST4000LM016-1N2170 0003>          at scbus5 target 0 lun 0 (ada6,pass6)
<ST4000LM016-1N2170 0003>          at scbus5 target 1 lun 0 (ada7,pass7)
<SanDisk Cruzer Fit 1.27>          at scbus7 target 0 lun 0 (pass8,da0)

2023-02-15: nfs exports - exporting each zfs dataset separately works. Here is /etc/exports after using the FreeNAS gui to configure things

tingo@kg-f6$ more /etc/exports
/mnt/z6/xxx  -alldirs -maproot="root":"wheel" -network 10.1.0.0/16
/mnt/z6/media  -alldirs -maproot="root":"wheel" -network 10.1.0.0/16
/mnt/z6/h-tingo  -alldirs -maproot="root":"wheel" -network 10.1.0.0/16
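
on a client inside 10.1.0.0/16, one of these exports would then be mounted along these lines (the local mount point is an example):

```shell
# FreeBSD client; on Linux the -t nfs syntax is the same
sudo mount -t nfs kg-f6.local:/mnt/z6/media /mnt/media
```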

2023-02-15: do zpool clear on z6

tingo@kg-f6$ sudo zpool clear z6

verify

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: resilvered 42.7M in 0 days 00:01:44 with 0 errors on Wed Feb  1 10:58:09 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-02-15: do a zpool clear on freenas-boot

tingo@kg-f6$ sudo zpool clear freenas-boot

verify

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Sat Feb 11 03:46:26 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2023-02-14: I set up an nfs share in FreeNAS from the gui by going to Sharing, Unix (NFS) Sharing, Add Unix (NFS) Share, filling out the necessary fields and finally enabling it. showmount reports

tingo@kg-f6$ sudo showmount -e
Exports list on localhost:
/mnt/z6                            Everyone

tingo@kg-f6$ sudo showmount -E kg-f6.local
/mnt/z6

unfortunately, only the top-level directories show up on the client.

tingo@kg-f6$ zfs get sharenfs z6
NAME  PROPERTY  VALUE     SOURCE
z6    sharenfs  off       default

and /etc/zfs/exports is empty

tingo@kg-f6$ ls -l /etc/zfs/exports
-rw-r--r--  1 root  wheel  0 Jan  6  2018 /etc/zfs/exports

does changing it help?

tingo@kg-f6$ sudo zfs set sharenfs=on z6
tingo@kg-f6$ zfs get sharenfs z6
NAME  PROPERTY  VALUE     SOURCE
z6    sharenfs  on        local

check the zfs exports file

tingo@kg-f6$ ls -l /etc/zfs/exports
-rw-------  1 root  wheel  111 Feb 14 21:10 /etc/zfs/exports

contents

tingo@kg-f6$ sudo more /etc/zfs/exports
# !!! DO NOT EDIT THIS FILE MANUALLY !!!

/mnt/z6 
/mnt/z6/h-tingo 
/mnt/z6/jails   
/mnt/z6/media   
/mnt/z6/xxx     

interesting. and then I need to share it, like so

tingo@kg-f6$ sudo zfs share -a
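
note that sharenfs set on the pool root is inherited by every child dataset, which is why all five mountpoints landed in /etc/zfs/exports. If only some datasets should be exported, the property could instead be set per dataset, roughly:

```shell
# keep the pool root unshared, export only the media dataset;
# the local setting on z6/media overrides the inherited "off"
sudo zfs set sharenfs=off z6
sudo zfs set sharenfs=on z6/media
sudo zfs share -a
```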

2023-02-11: zpool status for freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Sat Feb 11 03:46:26 2023
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     2

errors: No known data errors

2023-02-03: sudo's nopasswd fix had to be reapplied, using sudo visudo:

tingo@kg-f6$ cat /usr/local/etc/sudoers | head -1
%wheel ALL=(ALL) NOPASSWD: ALL

2023-02-01: zpool status for z6

tingo@kg-f6$ zpool history z6
History for 'z6':
cannot show history for pool 'z6': permission denied
tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: resilvered 42.7M in 0 days 00:01:44 with 0 errors on Wed Feb  1 10:58:09 2023
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     3
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     3
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2023-01-31: the machine got rebooted, because of a power outage (electrical inspection).

2022-10-07: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Fri Oct  7 03:46:29 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-09-29: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:31 with 0 errors on Thu Sep 29 03:46:31 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-09-25: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:23:37 with 0 errors on Sun Sep 25 14:23:39 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-09-21: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:42 with 0 errors on Wed Sep 21 03:46:42 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-09-20: zpool status for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: resilvered 3.51T in 2 days 16:33:57 with 0 errors on Tue Sep 20 08:39:09 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-09-17: replacing the old ada0 with the new one in the FreeNAS gui: Storage, Volumes, click on volume z6, Volume Status, select the faulted drive (it shows up as 3164759501558243128), click Replace, use the preselected ada0, and click OK. Wait a couple of minutes, and then the resilver is in progress:

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Sep 17 16:05:12 2022
    504G scanned at 2.65G/s, 3.61G issued at 19.4M/s, 28.1T total
    0 resilvered, 0.01% done, 17 days 12:40:26 to go
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/ab074aeb-3691-11ed-aacb-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors
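
for the record, the gui also partitions the new disk before replacing. A rough, untested CLI equivalent, with the caveats that the swap size is an assumption and that FreeNAS itself references members by gptid rather than by /dev/ada0p2:

```shell
# partition a blank replacement disk the way FreeNAS lays drives out
sudo gpart create -s gpt ada0
sudo gpart add -t freebsd-swap -s 2G ada0
sudo gpart add -t freebsd-zfs ada0
# replace the faulted member (guid taken from `zpool status`) with the new partition
sudo zpool replace z6 3164759501558243128 /dev/ada0p2
```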

2022-09-17: drives now

tingo@kg-f6$ sudo camcontrol devlist
<ST4000LM024-2U817V 0001>          at scbus0 target 0 lun 0 (ada0,pass0)
<ST4000LM016-1N2170 0003>          at scbus1 target 0 lun 0 (ada1,pass1)
<ST4000LM016-1N2170 0003>          at scbus2 target 0 lun 0 (ada2,pass2)
<ST4000LM016-1N2170 0003>          at scbus3 target 0 lun 0 (ada3,pass3)
<ST4000LM016-1N2170 0003>          at scbus4 target 0 lun 0 (ada4,pass4)
<ST4000LM016-1N2170 0003>          at scbus4 target 1 lun 0 (ada5,pass5)
<ST4000LM016-1N2170 0003>          at scbus5 target 0 lun 0 (ada6,pass6)
<ST4000LM016-1N2170 0003>          at scbus5 target 1 lun 0 (ada7,pass7)
<SanDisk Cruzer Fit 1.27>          at scbus7 target 0 lun 0 (pass8,da0)

2022-09-17: sudo's nopasswd fix had to be reapplied, using sudo visudo:

tingo@kg-f6$ cat /usr/local/etc/sudoers | head -1
%wheel ALL=(ALL) NOPASSWD: ALL

2022-09-17: after physically replacing drive ada0, I rebooted the machine to get all drives assigned in order again (the new drive showed up as ada7 when plugged in).

2022-08-28: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Sun Aug 28 03:46:29 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-08-14: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
    the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 0 days 16:01:23 with 0 errors on Sun Aug 14 16:01:25 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              DEGRADED     0     0     0
      raidz3-0                                      DEGRADED     0     0     0
        3164759501558243128                         UNAVAIL      0     0     0  was /dev/gptid/2cae7eee-86ed-11e8-a186-7085c239f419
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-08-14: the machine rebooted for some reason

tingo@kg-f6$ who -b
                 system boot  Aug 14 00:53 

the sudo nopasswd fix had to be reapplied, via sudo visudo:

tingo@kg-f6$ cat /usr/local/etc/sudoers | head -1
%wheel ALL=(ALL) NOPASSWD: ALL

from /var/log/messages

Aug 14 00:53:09 kg-f6 syslog-ng[1628]: syslog-ng starting up; version='3.7.3'
Aug 14 00:53:09 kg-f6 panic: I/O to pool 'z6' appears to be hung on vdev guid 2925362242950686323 at '/dev/gptid/24e
db679-9579-11e7-9009-7085c239f419'.
Aug 14 00:53:09 kg-f6 cpuid = 3
Aug 14 00:53:09 kg-f6 KDB: stack backtrace:
Aug 14 00:53:09 kg-f6 db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe083a9d7760
Aug 14 00:53:09 kg-f6 vpanic() at vpanic+0x186/frame 0xfffffe083a9d77e0
Aug 14 00:53:09 kg-f6 panic() at panic+0x43/frame 0xfffffe083a9d7840
Aug 14 00:53:09 kg-f6 vdev_deadman() at vdev_deadman+0x194/frame 0xfffffe083a9d7890
Aug 14 00:53:09 kg-f6 vdev_deadman() at vdev_deadman+0x41/frame 0xfffffe083a9d78e0
Aug 14 00:53:09 kg-f6 vdev_deadman() at vdev_deadman+0x41/frame 0xfffffe083a9d7930
Aug 14 00:53:09 kg-f6 spa_deadman() at spa_deadman+0x86/frame 0xfffffe083a9d7960
Aug 14 00:53:09 kg-f6 taskqueue_run_locked() at taskqueue_run_locked+0x147/frame 0xfffffe083a9d79c0
Aug 14 00:53:09 kg-f6 taskqueue_thread_loop() at taskqueue_thread_loop+0xb8/frame 0xfffffe083a9d79f0
Aug 14 00:53:09 kg-f6 fork_exit() at fork_exit+0x85/frame 0xfffffe083a9d7a30
Aug 14 00:53:09 kg-f6 fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe083a9d7a30
Aug 14 00:53:09 kg-f6 --- trap 0, rip = 0, rsp = 0, rbp = 0 ---
Aug 14 00:53:09 kg-f6 KDB: enter: panic

status of pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
    the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub in progress since Sun Aug 14 00:00:02 2022
    19.3T scanned at 650M/s, 16.6T issued at 558M/s, 28.1T total
    0 repaired, 58.96% done, 0 days 06:01:11 to go
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              DEGRADED     0     0     0
      raidz3-0                                      DEGRADED     0     0     0
        3164759501558243128                         UNAVAIL      0     0     0  was /dev/gptid/2cae7eee-86ed-11e8-a186-7085c239f419
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

Running gptid-to-device doesn't help much; all drives were renumbered from 0 when the machine rebooted. So I had to run smartctl -i on each of the remaining drives, and can confirm that the drive that died was ada0.
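
On FreeBSD, `glabel status` shows which device currently backs each gptid label, which is one way to do this mapping by hand. A minimal sketch of the text processing involved, runnable anywhere (the `map_gptid` helper name is mine, the gptids in the sample come from the pool listing above, and the device column is hypothetical since the real post-reboot mapping isn't recorded here):

```shell
# Extract "label -> device" pairs from `glabel status`-style output.
# Lines that don't start with gptid/ (e.g. the header) are ignored.
map_gptid() {
    awk '$1 ~ /^gptid\// { sub(/^gptid\//, "", $1); print $1 " -> " $3 }'
}

# Illustrative sample input (device names are assumed, not from this machine):
map_gptid <<'EOF'
                                      Name  Status  Components
gptid/2226f441-9579-11e7-9009-7085c239f419     N/A  ada1p2
gptid/231416ea-9579-11e7-9009-7085c239f419     N/A  ada2p2
EOF
```

The drive that is present in the pool config but absent from the mapping is the dead one; smartctl -i on the survivors confirms it by serial number.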

2022-08-12: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Fri Aug 12 03:46:29 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-08-04: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:32 with 0 errors on Thu Aug  4 03:46:32 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-07-04: clear errors on pool freenas-boot

tingo@kg-f6$ zpool clear freenas-boot
cannot clear errors for freenas-boot: permission denied
tingo@kg-f6$ sudo zpool clear freenas-boot

verify

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:02:46 with 0 errors on Sun Jul  3 03:47:46 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-07-03: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:55:30 with 0 errors on Sun Jul  3 14:55:31 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-07-03: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0 days 00:02:46 with 0 errors on Sun Jul  3 03:47:46 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     5

errors: No known data errors

2022-05-22: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:47:48 with 0 errors on Sun May 22 14:47:49 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-05-16: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Mon May 16 03:46:26 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-04-10: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:36:45 with 0 errors on Sun Apr 10 14:36:47 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-04-06: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:30 with 0 errors on Wed Apr  6 03:46:30 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-03-29: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:33 with 0 errors on Tue Mar 29 03:46:33 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-03-05: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:30 with 0 errors on Sat Mar  5 03:46:30 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-02-27: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:39:29 with 0 errors on Sun Feb 27 14:39:31 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-02-25: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:34 with 0 errors on Fri Feb 25 03:46:34 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2022-01-16: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:33:39 with 0 errors on Sun Jan 16 14:33:42 2022
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2022-01-16: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:41 with 0 errors on Sun Jan 16 03:46:41 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2021-12-31: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:28 with 0 errors on Fri Dec 31 03:46:28 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

2021-12-05: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:30:19 with 0 errors on Sun Dec  5 14:30:21 2021
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

2021-10-14: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0 days 00:01:32 with 0 errors on Wed Oct 13 03:46:32 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0    11

errors: No known data errors

try zpool clear

tingo@kg-f6$ sudo zpool clear freenas-boot
tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:32 with 0 errors on Wed Oct 13 03:46:32 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

until next time.

2021-08-27: I had to power on the machine this evening (a circuit breaker tripped while I was away). sudo nopasswd fix had to be re-applied:

tingo@kg-f6$ cat /usr/local/etc/sudoers | tail -1
%wheel ALL=(ALL) NOPASSWD: ALL

added via sudo visudo command.

2021-08-09: the machine was restarted (a circuit breaker tripped), so the nopasswd fix for sudo needed to be re-applied: run sudo visudo and add a line:

tingo@kg-f6$ cat /usr/local/etc/sudoers | tail -1
%wheel ALL=(ALL) NOPASSWD: ALL

that's all.

2021-06-27: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 15:36:49 with 0 errors on Sun Jun 27 15:36:51 2021
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2021-06-23: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Wed Jun 23 03:46:26 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2021-04-04: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:27:42 with 0 errors on Sun Apr  4 14:27:44 2021
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2021-02-21: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 15:22:07 with 0 errors on Sun Feb 21 15:22:09 2021
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2021-02-15: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:35 with 0 errors on Mon Feb 15 03:46:35 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2021-02-07: latest scrub result for pool freenas-boot

tingo@kg-f6$ date;zpool status freenas-boot
Sun Feb  7 13:18:17 CET 2021
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:31 with 0 errors on Sun Feb  7 03:46:31 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2021-01-10: latest scrub result for pool z6

tingo@kg-f6$ date;zpool status z6
Sun Jan 10 16:42:20 CET 2021
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:32:43 with 0 errors on Sun Jan 10 14:32:44 2021
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2021-01-06: latest scrub result for pool freenas-boot

tingo@kg-f6$ date;zpool status freenas-boot
Wed Jan  6 11:09:30 CET 2021
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Wed Jan  6 03:46:29 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-11-29: latest scrub result for pool z6

tingo@kg-f6$ date;zpool status z6
Sun Nov 29 16:03:37 CET 2020
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:22:30 with 0 errors on Sun Nov 29 14:22:32 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2020-11-27: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Fri Nov 27 03:46:26 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-10-26: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 1 days 04:36:35 with 0 errors on Mon Oct 26 03:36:49 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2020-10-26: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:35 with 0 errors on Mon Oct 26 03:46:35 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2020-10-25: zpool status for z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub in progress since Sun Oct 25 00:00:14 2020
    2.91T scanned at 1.55G/s, 510G issued at 271M/s, 28.1T total
    0 repaired, 1.77% done, 1 days 05:37:43 to go
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2020-10-25: zpool status for freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Sun Oct 18 03:46:26 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-10-25: sudo - fixing sudo access by running 'sudo visudo' and adding a line for the wheel group

tingo@kg-f6$ cat /usr/local/etc/sudoers | tail -1
%wheel ALL=(ALL) NOPASSWD: ALL

to the /usr/local/etc/sudoers file

2020-10-25: the machine wasn't responding over the network, so I powered it off and on again using the power button. Previous uptime was 333 days or more. I couldn't find anything conclusive in the logs; perhaps it ran out of memory.

2020-09-20: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 15:24:49 with 0 errors on Sun Sep 20 15:24:53 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2020-09-16: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Wed Sep 16 03:46:29 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-08-09: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 15:53:57 with 0 errors on Sun Aug  9 15:53:58 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2020-06-28: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:08:26 with 0 errors on Sun Jun 28 14:08:28 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2020-06-28: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status  freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:36 with 0 errors on Sun Jun 28 03:46:36 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-05-23: swap config is four two-way mirrors: swap0, swap1, swap2 and swap3

tingo@kg-f6$ gmirror status
        Name    Status  Components
mirror/swap0  COMPLETE  ada7p1 (ACTIVE)
                        ada6p1 (ACTIVE)
mirror/swap1  COMPLETE  ada5p1 (ACTIVE)
                        ada4p1 (ACTIVE)
mirror/swap2  COMPLETE  ada3p1 (ACTIVE)
                        ada2p1 (ACTIVE)
mirror/swap3  COMPLETE  ada1p1 (ACTIVE)
                        ada0p1 (ACTIVE)

good to know.
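Reading the eight disk pairings out of that output by eye is error-prone; a small parser (a sketch written against the exact `gmirror status` format captured above) prints one "mirror disk" pair per line:

```shell
# Sketch: print one "mirror disk" pair per line, parsed from `gmirror status`
# output (header line, then one line per component; the mirror name appears
# only on the first line of each group).
gmirror_pairs() {
  awk 'NR > 1 {
         if ($1 ~ /^mirror\//) { name = $1; comp = $3 } else { comp = $1 }
         sub(/p[0-9]+$/, "", comp)     # ada7p1 -> ada7
         print name, comp
       }'
}
# usage: gmirror status | gmirror_pairs
```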

2020-05-19: latest scrub results for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:36 with 0 errors on Tue May 19 03:46:36 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-05-17: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:13:51 with 0 errors on Sun May 17 14:13:52 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2020-05-11: latest scrub results for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Mon May 11 03:46:29 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-04-01: latest scrub results for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:32 with 0 errors on Wed Apr  1 03:46:32 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2020-02-23: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 13:29:30 with 0 errors on Sun Feb 23 13:29:31 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2020-02-21: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Fri Feb 21 03:46:27 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2019-11-25: reboot after power outage

tingo@kg-f6$ date;who -a
Mon Nov 25 21:02:33 CET 2019
                 - system boot  Nov 25 20:53 00:12
tingo            + pts/0        Nov 25 20:55   .   (10.1.150.52)

ok.

2019-10-27: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:27:54 with 0 errors on Sun Oct 27 13:27:57 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2019-10-25: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:35 with 0 errors on Fri Oct 25 03:46:35 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2019-09-15: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:56:24 with 0 errors on Sun Sep 15 14:56:27 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok

2019-09-15: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:37 with 0 errors on Sun Sep 15 03:46:37 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2019-08-04: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 14:18:40 with 0 errors on Sun Aug  4 14:18:42 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2019-07-28: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sun Jul 28 03:46:27 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2019-06-23: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 13:34:06 with 0 errors on Sun Jun 23 13:34:08 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2019-06-10: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Mon Jun 10 03:46:29 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2019-05-12: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 12:38:51 with 0 errors on Sun May 12 12:38:56 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2019-05-09: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:29 with 0 errors on Thu May  9 03:46:29 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2019-03-22: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:37 with 0 errors on Fri Mar 22 03:46:37 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2019-02-17: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 11:52:53 with 0 errors on Sun Feb 17 11:52:54 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2019-02-02: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sat Feb  2 03:46:27 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0    11

errors: No known data errors

try a zpool clear then

root@kg-f6:~ # zpool clear freenas-boot
root@kg-f6:~ # zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Sat Feb  2 03:46:27 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

looks good.

2019-01-06: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 11:36:18 with 0 errors on Sun Jan  6 11:36:20 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2018-12-24: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Mon Dec 24 03:46:27 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0    11

errors: No known data errors

a mixed message: da0p2 shows 11 checksum errors, yet "No known data errors"; presumably ZFS recovered from redundant (ditto) copies, so only the per-device counter moved.
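This "counter nonzero, but no data errors" condition is easy to miss in a wall of status output. A sketch that flags any device line with a nonzero READ, WRITE or CKSUM counter (parsing the `zpool status` text format as shown above):

```shell
# Sketch: flag any vdev line in `zpool status` output whose READ, WRITE or
# CKSUM counter is nonzero. Matches lines whose second column is a vdev
# state; also catches counters like "4.18K", which are non-numeric strings.
zpool_counter_errors() {
  awk '$2 ~ /^(ONLINE|DEGRADED|FAULTED)$/ && ($3 != 0 || $4 != 0 || $5 != 0) \
       { print $1, "READ=" $3, "WRITE=" $4, "CKSUM=" $5 }'
}
# usage: zpool status freenas-boot | zpool_counter_errors
```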

2018-11-25: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 04:15:17 with 0 errors on Sun Nov 25 04:15:19 2018
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2018-10-14: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Sun Oct 14 03:46:26 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2018-09-09: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:17:13 with 0 errors on Sun Sep  9 00:17:17 2018
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2018-08-19: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:33 with 0 errors on Sun Aug 19 03:46:33 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2018-08-03: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:27 with 0 errors on Fri Aug  3 03:46:27 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2018-07-29: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:18:15 with 0 errors on Sun Jul 29 00:18:15 2018
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2018-07-18: latest scrub result pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:35 with 0 errors on Wed Jul 18 03:46:35 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok

2018-07-14: pool z6 completed resilvering

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: resilvered 64.2G in 0 days 00:19:34 with 0 errors on Sat Jul 14 00:58:05 2018
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2018-07-14: replacing ada0 in the FreeNAS GUI. Under Storage, Volumes (click on volume z6), Volume Status, I see the FAULTED drive (labeled gptid/21355856-9579-11e7-9009-7085c239f419). I click "Replace", it comes up with ada0 selected, I click "Replace" again, and get "Disk is not clear, partitions or ZFS labels were found". Not surprising:

tingo@kg-f6$ sudo gpart show -p ada0
=>        34  7814037100    ada0  GPT  (3.6T) [CORRUPT]
          34      262144  ada0p1  ms-reserved  (128M)
      262178        2014          - free -  (1.0M)
      264192  7813771264  ada0p2  ms-basic-data  (3.6T)
  7814035456        1678          - free -  (839K)

but I do not need those, so I check "Force" and click "Replace" again. After a brief "please wait", it succeeds. The old drive (gptid/...) still hangs around, so I detach it; another "please wait" for a few seconds, and after that is everything ok? Yes, the pool is resilvering:

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Jul 14 00:38:31 2018
    513G scanned at 1.46G/s, 35.8G issued at 390M/s, 514G total
    3.98G resilvered, 6.98% done, 0 days 00:20:53 to go
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/2cae7eee-86ed-11e8-a186-7085c239f419  ONLINE       0     0     0  (resilvering)
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.
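For reference, the GUI steps in the entry above correspond roughly to this CLI sequence (a hedged sketch, not the exact commands FreeNAS runs; the GUI also repartitions the new disk before the replace). The wrapper dry-runs by default, only printing the commands, since they are destructive:

```shell
# Sketch of the CLI equivalent of the GUI "Replace" flow above; names
# (z6, the gptid, ada0p2) are taken from this log entry.
run() { echo "would run: $*"; }       # change to: "$@" to really execute

replace_disk() {                      # usage: replace_disk pool old-gptid new-partition
  pool=$1; old=$2; part=$3
  run gpart destroy -F "${part%p?}"   # clear the stale GPT (the "Force" checkbox)
  run zpool replace "$pool" "$old" "$part"
  run zpool status "$pool"            # watch the resilver
}

replace_disk z6 gptid/21355856-9579-11e7-9009-7085c239f419 ada0p2
```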

2018-07-14: physically replaced ada0. The new drive:

tingo@kg-f6$ sudo smartctl -i /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 2.5 5400
Device Model:     ST4000LM024-2AN17V
Serial Number:    WCK2RMNP
LU WWN Device Id: 5 000c50 0b9cdd1c8
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5526 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jul 14 00:25:42 2018 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

ok.

2018-07-11: from /var/log/messages

Jul 11 19:28:07 kg-f6 (ada0:ahcich0:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 00 48 87 6c 40 2c 01 00 01 00 00
Jul 11 19:28:07 kg-f6 (ada0:ahcich0:0:0:0): CAM status: Uncorrectable parity/CRC error
Jul 11 19:28:07 kg-f6 (ada0:ahcich0:0:0:0): Retrying command
[..]
Jul 11 19:33:43 kg-f6 (ada0:ahcich0:0:0:0): READ_FPDMA_QUEUED. ACB: 60 10 90 ba c0 40 d1 01 00 00 00 00
Jul 11 19:33:43 kg-f6 (ada0:ahcich0:0:0:0): CAM status: Uncorrectable parity/CRC error
Jul 11 19:33:43 kg-f6 (ada0:ahcich0:0:0:0): Error 5, Retries exhausted
[..]
Jul 11 19:34:48 kg-f6 ZFS: vdev state changed, pool_guid=6633318532419024550 vdev_guid=4801151775808843357
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): READ_FPDMA_QUEUED. ACB: 60 10 90 ba c0 40 d1 01 00 00 00 00
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): CAM status: Uncorrectable parity/CRC error
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): Error 5, Retries exhausted
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): READ_FPDMA_QUEUED. ACB: 60 10 90 bc c0 40 d1 01 00 00 00 00
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): CAM status: Uncorrectable parity/CRC error
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): Error 5, Retries exhausted
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 10 90 02 40 40 00 00 00 00 00 00
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): CAM status: Uncorrectable parity/CRC error
Jul 11 19:34:48 kg-f6 (ada0:ahcich0:0:0:0): Error 5, Retries exhausted
Jul 11 19:34:55 kg-f6 daemon[2862]:     2018/07/11 19:34:55 [WARN] agent: Check 'service:nas-health' is now warning

so I guess I really should replace ada0.

2018-07-11: latest scrub result pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: scrub repaired 0 in 0 days 00:17:18 with 0 errors on Wed Jul 11 19:52:15 2018
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              DEGRADED     0     0     0
      raidz3-0                                      DEGRADED     0     0     0
        gptid/21355856-9579-11e7-9009-7085c239f419  FAULTED      0 4.18K     0  too many errors
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

that's not good. Which device?

tingo@kg-f6$ gptid-to-device gptid/21355856-9579-11e7-9009-7085c239f419
ada0p2
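`gptid-to-device` looks like a local helper script; on a stock FreeBSD/FreeNAS shell the same lookup can be done against `glabel status` (a sketch, assuming the usual three-column Name/Status/Components output):

```shell
# Sketch: resolve a gptid label to its backing partition via glabel(8).
# `glabel status` prints lines like:
#   gptid/21355856-9579-11e7-9009-7085c239f419     N/A  ada0p2
lookup_gptid() {
  glabel status | awk -v id="$1" '$1 == id { print $3 }'
}
# usage: lookup_gptid gptid/21355856-9579-11e7-9009-7085c239f419
```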

smartctl says

health

tingo@kg-f6$ sudo smartctl -H /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

info

tingo@kg-f6$ sudo smartctl -i /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Laptop HDD
Device Model:     ST4000LM016-1N2170
Serial Number:    W801MQG0
LU WWN Device Id: 5 000c50 09ce567b7
Firmware Version: 0003
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Jul 13 11:27:46 2018 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

and 'all'

tingo@kg-f6$ sudo smartctl -a /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Laptop HDD
Device Model:     ST4000LM016-1N2170
Serial Number:    W801MQG0
LU WWN Device Id: 5 000c50 09ce567b7
Firmware Version: 0003
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Jul 13 11:28:56 2018 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)    Offline data collection activity
                    was completed without error.
                    Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (    0) seconds.
Offline data collection
capabilities:              (0x7b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   1) minutes.
Extended self-test routine
recommended polling time:      ( 702) minutes.
Conveyance self-test routine
recommended polling time:      (   2) minutes.
SCT capabilities:            (0x3035)    SCT Status supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   118   099   006    Pre-fail  Always       -       198683153
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       15
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   070   060   030    Pre-fail  Always       -       189402167002
  9 Power_On_Hours          0x0032   092   092   000    Old_age   Always       -       7375 (244 193 0)
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       15
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   001   001   000    Old_age   Always       -       378
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   061   056   045    Old_age   Always       -       39 (Min/Max 27/44)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       5
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       61
194 Temperature_Celsius     0x0022   039   044   000    Old_age   Always       -       39 (0 17 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   138   138   000    Old_age   Always       -       377
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       7361 (197 109 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       4549075925
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       1087348893

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

No errors logged in the SMART error log, even though the Command_Timeout (378) and UDMA_CRC_Error_Count (377) raw values are high. Hmm, ok.
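Those counters are cumulative raw values in the attribute table, not entries in the error log, so they are easy to overlook. A minimal sketch that pulls just the attribute name and raw value for the timeout/CRC-related attributes with awk; the sample lines are copied from the smartctl output above, and against a live disk you would pipe `smartctl -A /dev/ada1` (device name is an assumption) into the same filter:

```shell
# Print attribute name and raw value for IDs 188 (Command_Timeout) and
# 199 (UDMA_CRC_Error_Count); both often point at cabling/link problems
# rather than media errors. Sample lines copied from the output above.
awk '$1 ~ /^(188|199)$/ { print $2, $NF }' <<'EOF'
188 Command_Timeout         0x0032   001   001   000    Old_age   Always       -       378
199 UDMA_CRC_Error_Count    0x003e   138   138   000    Old_age   Always       -       377
EOF
```

This prints one `name value` pair per matching attribute, which is handy for tracking whether the counts are still growing between checks.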

2018-03-12: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Mon Mar 12 03:46:26 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2018-07-10: latest scrub result for pool freenas-boot

tingo@kg-f6$ zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:26 with 0 errors on Tue Jul 10 03:46:26 2018
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

ok.

2018-02-11: latest scrub result for pool z6

tingo@kg-f6$ zpool status z6
  pool: z6
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:20:32 with 0 errors on Sun Feb 11 00:20:33 2018
config:

    NAME                                            STATE     READ WRITE CKSUM
    z6                                              ONLINE       0     0     0
      raidz3-0                                      ONLINE       0     0     0
        gptid/21355856-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/2226f441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/231416ea-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/23fdb526-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/24edb679-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/25d23441-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/26bf7deb-9579-11e7-9009-7085c239f419  ONLINE       0     0     0
        gptid/27aac2e7-9579-11e7-9009-7085c239f419  ONLINE       0     0     0

errors: No known data errors

ok.

2018-01-06: after the upgrade, it now runs FreeNAS 11.1-RELEASE. From the web UI:

Build   FreeNAS-11.1-RELEASE
Platform    AMD Athlon(tm) X4 845 Quad Core Processor
Memory  32668MB

and via ssh:

tingo@10.1.161.35's password:
Last login: Sun Dec  3 15:38:02 2017 from 10.1.150.50
FreeBSD 11.1-STABLE (FreeNAS.amd64) #0 r321665+d4625dcee3e(freenas/11.1-stable): Wed Dec 13 16:33:42 UTC 2017

    FreeNAS (c) 2009-2017, The FreeNAS Development Team
    All rights reserved.
    FreeNAS is released under the modified BSD license.

    For more information, documentation, help or support, go here:
     http://freenas.org
Welcome to FreeNAS

ok.

2018-01-06: FreeNAS - from the UI I select System, Update. It lists five pending updates, so I press the "Apply Pending Updates" button. The changelog is:

25810   Add missing restart_queue initialization to isp(4)
25946   FreeBSD SA 17:06
25950   Update Samba to address current CVEs
12684   Do not create an actual /nonexistent directory
21336   Add ability to attach smaller disk to a larger one
23197   Try to validate certificate before importing it
24000   Improve FHA locality control for NFS read/write requests
24942   Register mDNS on all interfaces
25037   Fix AWS-SNS Alert Service
25236   Add Docker section to Guide
25966   Update module that reports ARC Hit Ratio
26470   Allow interfaces to be selected from netcli
26509   Autostart at boot iocage jails that have property boot=on
26531   Make sure mDNS starts
26663   Fix disk attach/detach of boot pool
26800   Fork netatalk
26990   Fix regression that prevented VNC connection
26993   Allow special characters in grub-bhyve password
27001   Fix mDNS traceback
27018   Don't create iocage datasets if no jails exist
27088   Fix iocage logging
27097   Avoid exception when number of maximum swap mirrors is reached
27098   Fix destroying system datasets on migrate
27099   Fix traceback on cloud credentials
27124   Fixes to address OpenSSL SA 17:12
27128   Do not destroy volume if wizard import fails

and the packages are:

Upgrade: base-os-11.0-U3-eb431b5a0d2479409c6acdbb77a22d5b -> base-os-11.1-RELEASE-8815a498ef028e9ba97ca6e1e9e75c74
Upgrade: docs-11.0-U3-eb431b5a0d2479409c6acdbb77a22d5b -> docs-11.1-RELEASE-8815a498ef028e9ba97ca6e1e9e75c74
Upgrade: freebsd-pkgdb-11.0-U3-eb431b5a0d2479409c6acdbb77a22d5b -> freebsd-pkgdb-11.1-RELEASE-8815a498ef028e9ba97ca6e1e9e75c74
Upgrade: freenas-pkg-tools-11.0-U3-eb431b5a0d2479409c6acdbb77a22d5b -> freenas-pkg-tools-11.1-RELEASE-8815a498ef028e9ba97ca6e1e9e75c74
Upgrade: FreeNASUI-11.0-U3-eb431b5a0d2479409c6acdbb77a22d5b -> FreeNASUI-11.1-RELEASE-8815a498ef028e9ba97ca6e1e9e75c74

It warns me that the system will be rebooted after the upgrade; that's nice.

2017-09-10: interestingly enough, on another machine I get these in /var/log/messages:

Sep 10 00:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 01:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 02:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 03:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 04:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 05:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 06:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 07:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 08:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 09:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 10:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 11:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 12:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 13:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 14:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 15:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 16:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 17:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 18:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 19:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.

Sep  9 18:26:22 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep  9 18:26:53 kg-core1 last message repeated 5 times
Sep  9 18:28:29 kg-core1 last message repeated 2 times
Sep  9 18:34:53 kg-core1 last message repeated 2 times
Sep  9 18:43:25 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.

regular as clockwork too.
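The hourly pattern is easy to confirm by bucketing the messages per hour. A small awk sketch; the sample lines are copied from the log above, and against the live log you would pipe `grep 'Invalid response packet' /var/log/messages` into the same awk instead of the here-document:

```shell
# Bucket the avahi complaints by hour: print "Month Day HH:00" for each
# matching line, then count duplicates with uniq -c. Exactly one message
# per hour shows up as a run of counts of 1, one bucket per hour.
awk '/Invalid response packet/ { split($3, t, ":"); print $1, $2, t[1] ":00" }' <<'EOF' | uniq -c
Sep 10 00:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 01:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
Sep 10 02:43:27 kg-core1 avahi-daemon[693]: Invalid response packet from host 10.1.161.35.
EOF
```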

2017-09-09: logging in via ssh

tingo@10.1.161.35's password:
Last login: Sat Sep  9 18:26:33 2017 from 10.1.150.50
FreeBSD 11.0-STABLE (FreeNAS.amd64) #0 r321665+c0805687fec(freenas/11.0-stable): Tue Sep  5 16:07:24 UTC 2017

    FreeNAS (c) 2009-2017, The FreeNAS Development Team
    All rights reserved.
    FreeNAS is released under the modified BSD license.

    For more information, documentation, help or support, go here:
     http://freenas.org
Welcome to FreeNAS
$

From ssh it now looks like this:

tingo@kg-f6:~ % date;swapinfo -h;df -h;uptime
Sat Sep  9 18:31:21 CEST 2017
Device          1K-blocks     Used    Avail Capacity
/dev/ada0p1.eli   2097152       0B     2.0G     0%
/dev/ada1p1.eli   2097152       0B     2.0G     0%
/dev/ada2p1.eli   2097152       0B     2.0G     0%
/dev/ada3p1.eli   2097152       0B     2.0G     0%
/dev/ada4p1.eli   2097152       0B     2.0G     0%
/dev/ada5p1.eli   2097152       0B     2.0G     0%
/dev/ada6p1.eli   2097152       0B     2.0G     0%
/dev/ada7p1.eli   2097152       0B     2.0G     0%
Total            16777216       0B      16G     0%
Filesystem                   Size    Used   Avail Capacity  Mounted on
freenas-boot/ROOT/default     14G    725M     14G     5%    /
devfs                        1.0K    1.0K      0B   100%    /dev
tmpfs                         32M    9.4M     23M    29%    /etc
tmpfs                        4.0M    8.0K    4.0M     0%    /mnt
tmpfs                         11G    125M     11G     1%    /var
freenas-boot/grub             14G    6.4M     14G     0%    /boot/grub
fdescfs                      1.0K    1.0K      0B   100%    /dev/fd
z6                            16T    201K     16T     0%    /mnt/z6
z6/h-tingo                    16T    283K     16T     0%    /mnt/z6/h-tingo
 6:31PM  up 59 mins, 1 users, load averages: 0.28, 0.24, 0.20

ok.

2017-09-09: FreeNAS - web UI - account - I created a group and a user (I had already created a dataset for the user).

2017-09-09: FreeNAS - web UI - services - I enabled the ssh service. SMART service was already running.

2017-09-09: FreeNAS - web UI - storage - manual setup, raidz3, I named it z6.
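For reference, that UI step corresponds roughly to a raw `zpool create` with one raidz3 vdev over the eight disks. A sketch using file-backed vdevs so it can be tried safely anywhere with ZFS; the pool name and file paths are made up, and on an actual FreeNAS box the pool should be created through the UI so the middleware database knows about it:

```shell
# Illustration only: build a raidz3 pool like z6 from eight file-backed
# vdevs (64 MB is the minimum vdev size ZFS accepts). raidz3 tolerates
# the loss of any three members. Requires root and ZFS support.
for i in 0 1 2 3 4 5 6 7; do truncate -s 64M /tmp/vdev$i; done
zpool create z6test raidz3 /tmp/vdev0 /tmp/vdev1 /tmp/vdev2 /tmp/vdev3 \
    /tmp/vdev4 /tmp/vdev5 /tmp/vdev6 /tmp/vdev7
zpool status z6test    # should show raidz3-0 with the eight vdevs ONLINE
zpool destroy z6test
rm -f /tmp/vdev?
```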

2017-09-09: FreeNAS - web UI - I change the hostname from "freenas.local" to "kg-f6.local". Info from the web UI:

Hostname kg-f6.local
Build FreeNAS-11.0-U3 (c5dcf4416)
Platform AMD Athlon(tm) X4 845 Quad Core Processor
Memory 32668MB

ok

2017-09-09: FreeNAS - web UI - initial Wizard; keep English language, select Norwegian keyboard layout, Europe/Oslo timezone. Then the wizard wants to configure a volume, but it doesn't have RAID-Z3 as a choice, so I exit it.

2017-09-09: FreeNAS - boot - the first boot takes a while, because FreeNAS is configuring things, generating keys and so on. When the boot menu shows, I get the IP address and log on to the web UI.

2017-09-09: FreeNAS - reboot - it doesn't boot from the UEFI stick by default. I changed the boot priorities in UEFI so that UEFI:SanDisk Cruzer Fit is first in the list. That makes FreeNAS boot.

2017-09-09: FreeNAS - install - after booting UEFI, I inserted a 16 GB SanDisk Cruzer Fit; it shows up as da1, and I'm installing FreeNAS on that. Install, set root password, select "Boot via UEFI", and the install starts. It takes a while, but finishes without trouble, and I'm advised to reboot and remove the install media.

2017-09-09: I booted the FreeNAS 11.0-U3 image from a USB stick. By default the machine does a BIOS boot, but I also tried the boot menu and UEFI; that works too.