FreeBSD/Linux Kernel Cross Reference
sys/contrib/openzfs/man/man8/zpool-iostat.8


    1 .\"
    2 .\" CDDL HEADER START
    3 .\"
    4 .\" The contents of this file are subject to the terms of the
    5 .\" Common Development and Distribution License (the "License").
    6 .\" You may not use this file except in compliance with the License.
    7 .\"
    8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
    9 .\" or https://opensource.org/licenses/CDDL-1.0.
   10 .\" See the License for the specific language governing permissions
   11 .\" and limitations under the License.
   12 .\"
   13 .\" When distributing Covered Code, include this CDDL HEADER in each
   14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
   15 .\" If applicable, add the following below this CDDL HEADER, with the
   16 .\" fields enclosed by brackets "[]" replaced with your own identifying
   17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
   18 .\"
   19 .\" CDDL HEADER END
   20 .\"
   21 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
   22 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
   23 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
   24 .\" Copyright (c) 2017 Datto Inc.
   25 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
   26 .\" Copyright 2017 Nexenta Systems, Inc.
   27 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
   28 .\"
   29 .Dd March 16, 2022
   30 .Dt ZPOOL-IOSTAT 8
   31 .Os
   32 .
.Sh NAME
.Nm zpool-iostat
.Nd display logical I/O statistics for ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Ar pool Ns … Ns | Ns Oo Ar pool vdev Ns … Oc Ns | Ns Ar vdev Ns … Oc
.Op Ar interval Op Ar count
.
.Sh DESCRIPTION
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O statistics may be observed via
.Xr iostat 1 .
If writes are located nearby, they may be merged into a single
larger operation.
Additional I/O may be generated depending on the level of vdev redundancy.
To filter output, you may pass in a list of pools, a pool and list of vdevs
in that pool, or a list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until killed.
If the
.Fl n
flag is specified, the headers are displayed only once; otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G Ns …
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
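.Pp
For example, the following command (the pool name
.Ar tank
is purely illustrative) prints five exact-value reports at ten-second
intervals and suppresses the since-boot summary:
.Dl # Nm zpool Cm iostat Fl yp Ar tank 10 5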
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns …
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output.
Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory.
Script names containing the slash
.Pq Sy /
character are not allowed.
The default search path can be overridden by setting the
.Sy ZPOOL_SCRIPTS_PATH
environment variable.
A privileged user can only run
.Fl c
if they have the
.Sy ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&) .
.Pp
Script output should be in the form of "name=value".
The column name is set to "name" and the value is set to "value".
Multiple lines can be used to output multiple columns.
The first line of output not in the
"name=value" format is displayed without a column title,
and no more output after that is displayed.
This can be useful for printing error messages.
Blank or NULL values are printed as a '-' to make output AWKable.
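A minimal example script is shown after the option list below.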
.Pp
The following environment variables are set before running each script:
.Bl -tag -compact -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.It Sy VDEV_UPATH
Underlying path to the vdev
.Pq Pa /dev/sd* .
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl n
Print headers only once when passed.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf vdev's I/O.
This includes histograms of individual I/O (ind) and aggregate I/O (agg).
These stats can be useful for observing how well I/O aggregation is working.
Note that TRIM I/O may exceed 16M, but will be counted as 16M.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.It Fl y
Normally the first line of output reports the statistics since boot:
suppress it.
.It Fl w
Display latency histograms:
.Bl -tag -compact -width "asyncq_read/write"
.It Sy total_wait
Total I/O time (queuing + disk I/O time).
.It Sy disk_wait
Disk I/O time (time reading/writing the disk).
.It Sy syncq_wait
Amount of time I/O spent in synchronous priority queues.
Does not include disk time.
.It Sy asyncq_wait
Amount of time I/O spent in asynchronous priority queues.
Does not include disk time.
.It Sy scrub
Amount of time I/O spent in scrub queue.
Does not include disk time.
.It Sy rebuild
Amount of time I/O spent in rebuild queue.
Does not include disk time.
.El
.It Fl l
Include average latency statistics:
.Bl -tag -compact -width "asyncq_read/write"
.It Sy total_wait
Average total I/O time (queuing + disk I/O time).
.It Sy disk_wait
Average disk I/O time (time reading/writing the disk).
.It Sy syncq_wait
Average amount of time I/O spent in synchronous priority queues.
Does not include disk time.
.It Sy asyncq_wait
Average amount of time I/O spent in asynchronous priority queues.
Does not include disk time.
.It Sy scrub
Average queuing time in scrub queue.
Does not include disk time.
.It Sy trim
Average queuing time in trim queue.
Does not include disk time.
.It Sy rebuild
Average queuing time in rebuild queue.
Does not include disk time.
.El
.It Fl q
Include active queue statistics.
Each priority queue has both pending
.Sy ( pend )
and active
.Sy ( activ )
I/O requests.
Pending requests are waiting to be issued to the disk,
and active requests have been issued to disk and are waiting for completion.
These stats are broken out by priority queue:
.Bl -tag -compact -width "asyncq_read/write"
.It Sy syncq_read/write
Current number of entries in synchronous priority
queues.
.It Sy asyncq_read/write
Current number of entries in asynchronous priority queues.
.It Sy scrubq_read
Current number of entries in scrub queue.
.It Sy trimq_write
Current number of entries in trim queue.
.It Sy rebuildq_write
Current number of entries in rebuild queue.
.El
.Pp
All queue statistics are instantaneous measurements of the number of
entries in the queues.
If you specify an interval,
the measurements will be sampled from the end of the interval.
.El
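.Pp
Latency and queue statistics can be combined in a single report; for example
(the pool name
.Ar tank
is illustrative):
.Dl # Nm zpool Cm iostat Fl vlq Ar tank 5
.Pp
A minimal
.Fl c
script (the script name
.Pa upath
and its single column are hypothetical) could be installed as
.Pa ~/.zpool.d/upath
and emit one "name=value" line per column:
.Bd -literal -offset Ds
#!/bin/sh
# Print the underlying device path supplied by zpool iostat,
# or "-" when it is not set, as a column named "upath".
echo "upath=${VDEV_UPATH:--}"
.Ed
.Pp
It could then be invoked with:
.Dl # Nm zpool Cm iostat Fl vc Pa upath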
.
.Sh EXAMPLES
.\" These are, respectively, examples 13, 16 from zpool.8
.\" Make sure to update them bidirectionally
.Ss Example 13 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.Ss Example 16 : No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.
.Sh SEE ALSO
.Xr iostat 1 ,
.Xr smartctl 8 ,
.Xr zpool-list 8 ,
.Xr zpool-status 8
