FreeBSD/Linux Kernel Cross Reference
sys/contrib/openzfs/man/man7/zfsconcepts.7


.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
.\" Copyright (c) 2014 Integros [integros.com]
.\" Copyright 2019 Richard Laager. All rights reserved.
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dt ZFSCONCEPTS 7
.Os
.
.Sh NAME
.Nm zfsconcepts
.Nd overview of ZFS concepts
.
.Sh DESCRIPTION
.Ss ZFS File System Hierarchy
A ZFS storage pool is a logical collection of devices that provide space for
datasets.
A storage pool is also the root of the ZFS file system hierarchy.
.Pp
The root of the pool can be accessed as a file system: it can be mounted and
unmounted, snapshotted, and have properties set.
The physical storage characteristics, however, are managed by the
.Xr zpool 8
command.
.Pp
See
.Xr zpool 8
for more information on creating and administering pools.
.Ss Snapshots
A snapshot is a read-only copy of a file system or volume.
Snapshots can be created extremely quickly, and initially consume no additional
space within the pool.
As data within the active dataset changes, the snapshot consumes space by
continuing to reference the old data, thus preventing that space from being
freed.
.Pp
Snapshots can have arbitrary names.
Snapshots of volumes can be cloned or rolled back.
Their visibility is determined by the
.Sy snapdev
property of the parent volume.
.Pp
File system snapshots can be accessed under the
.Pa .zfs/snapshot
directory in the root of the file system.
Snapshots are automatically mounted on demand and may be unmounted at regular
intervals.
The visibility of the
.Pa .zfs
directory can be controlled by the
.Sy snapdir
property.
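As a concrete sketch of the workflow above, a snapshot can be created, listed, and browsed through the
.Pa .zfs/snapshot
directory.
The dataset name
.Em pool/home/user
and the snapshot name are illustrative, not part of this manual:

```shell
# Create a snapshot named "monday" of an existing file system
# (pool/home/user is a hypothetical dataset).
zfs snapshot pool/home/user@monday

# List snapshots and the space they consume as the live data diverges.
zfs list -t snapshot pool/home/user

# Snapshot contents are visible read-only under .zfs/snapshot
# in the root of the mounted file system.
ls /pool/home/user/.zfs/snapshot/monday
```

These commands require an imported pool and appropriate privileges.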
.Ss Bookmarks
A bookmark is like a snapshot: a read-only reference to a file system or
volume at a point in time.
Bookmarks can be created extremely quickly, compared to snapshots, and they
consume no additional space within the pool.
Bookmarks can also have arbitrary names, much like snapshots.
.Pp
Unlike snapshots, bookmarks cannot be accessed through the file system in any
way.
From a storage standpoint, a bookmark just provides a way to reference the
point in time at which a snapshot was created.
Bookmarks are initially tied to a snapshot, not the file system or volume,
and they will survive if the snapshot itself is destroyed.
Since they are very lightweight, there is little incentive to destroy them.
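The main use of this survival property is incremental replication: the bookmark alone can serve as the
.Dq from
point of an incremental send after the snapshot is gone.
A sketch, with hypothetical dataset and snapshot names:

```shell
# Turn an existing snapshot into a bookmark (pool/data is hypothetical).
zfs bookmark pool/data@snap1 pool/data#snap1

# The snapshot can now be destroyed; the bookmark costs no space
# but still marks the point in time snap1 was taken.
zfs destroy pool/data@snap1

# Later, use the bookmark as the incremental source for a send.
zfs send -i pool/data#snap1 pool/data@snap2 | zfs receive backup/data
```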
.Ss Clones
A clone is a writable volume or file system whose initial contents are the same
as another dataset.
As with snapshots, creating a clone is nearly instantaneous, and initially
consumes no additional space.
.Pp
Clones can only be created from a snapshot.
When a snapshot is cloned, it creates an implicit dependency between the parent
and child.
Even though the clone is created somewhere else in the dataset hierarchy, the
original snapshot cannot be destroyed as long as a clone exists.
The
.Sy origin
property exposes this dependency, and the
.Cm destroy
command lists any such dependencies, if they exist.
.Pp
The clone parent-child dependency relationship can be reversed by using the
.Cm promote
subcommand.
This causes the
.Qq origin
file system to become a clone of the specified file system, which makes it
possible to destroy the file system that the clone was created from.
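The clone-and-promote cycle described above can be sketched as follows (all dataset names are hypothetical):

```shell
# Clone a snapshot into a new, writable file system.
zfs clone pool/ws@v1 pool/ws-test

# The origin property records the implicit dependency on the snapshot.
zfs get origin pool/ws-test

# Reverse the dependency: pool/ws becomes a clone of pool/ws-test,
# and the snapshots migrate to the promoted file system.
zfs promote pool/ws-test

# The original file system can now be destroyed.
zfs destroy pool/ws
```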
.Ss "Mount Points"
Creating a ZFS file system is a simple operation, so the number of file systems
per system is likely to be large.
To cope with this, ZFS automatically manages mounting and unmounting file
systems without the need to edit the
.Pa /etc/fstab
file.
All automatically managed file systems are mounted by ZFS at boot time.
.Pp
By default, file systems are mounted under
.Pa /path ,
where
.Ar path
is the name of the file system in the ZFS namespace.
Directories are created and destroyed as needed.
.Pp
A file system can also have a mount point set in the
.Sy mountpoint
property.
This directory is created as needed, and ZFS automatically mounts the file
system when the
.Nm zfs Cm mount Fl a
command is invoked
.Po without editing
.Pa /etc/fstab
.Pc .
The
.Sy mountpoint
property can be inherited, so if
.Em pool/home
has a mount point of
.Pa /export/stuff ,
then
.Em pool/home/user
automatically inherits a mount point of
.Pa /export/stuff/user .
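The inheritance behaviour just described can be exercised directly; using the
.Em pool/home
example from the text:

```shell
# Set a mount point on the parent file system.
zfs set mountpoint=/export/stuff pool/home

# Children inherit it: pool/home/user now mounts at /export/stuff/user.
# The SOURCE column distinguishes "local" from "inherited" values.
zfs get -r mountpoint pool/home
```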
.Pp
A file system
.Sy mountpoint
property of
.Sy none
prevents the file system from being mounted.
.Pp
If needed, ZFS file systems can also be managed with traditional tools
.Po
.Nm mount ,
.Nm umount ,
.Pa /etc/fstab
.Pc .
If a file system's mount point is set to
.Sy legacy ,
ZFS makes no attempt to manage the file system, and the administrator is
responsible for mounting and unmounting the file system.
Because pools must be imported before a legacy mount can succeed,
administrators should ensure that legacy mounts are only attempted after the
zpool import process finishes at boot time.
For example, on machines using systemd, the mount option
.Pp
.Nm x-systemd.requires=zfs-import.target
.Pp
will ensure that zfs-import completes before systemd attempts to mount the
file system.
See
.Xr systemd.mount 5
for details.
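Putting the pieces together, a legacy mount on a systemd machine might be expressed as an
.Pa /etc/fstab
entry like the following (the dataset and mount point names are illustrative):

```
# <dataset>   <mountpoint>  <type>  <options>                                      <dump> <pass>
pool/data     /mnt/data     zfs     defaults,x-systemd.requires=zfs-import.target  0      0
```

This assumes the dataset's
.Sy mountpoint
property has been set to
.Sy legacy .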
.Ss Deduplication
Deduplication is the process of removing redundant data at the block level,
reducing the total amount of data stored.
If a file system has the
.Sy dedup
property enabled, duplicate data blocks are removed synchronously.
The result is that only unique data is stored and common components are shared
among files.
.Pp
Deduplicating data is a very resource-intensive operation.
It is generally recommended that you have at least 1.25 GiB of RAM
per 1 TiB of storage when you enable deduplication.
Calculating the exact requirement depends heavily
on the type of data stored in the pool.
.Pp
Enabling deduplication on an improperly-designed system can result in
performance issues (slow I/O and administrative operations).
It can potentially lead to problems importing a pool due to memory exhaustion.
Deduplication can consume significant processing power (CPU) and memory as well
as generate additional disk I/O.
.Pp
Before creating a pool with deduplication enabled, ensure that you have planned
your hardware requirements appropriately and implemented appropriate recovery
practices, such as regular backups.
Consider using the
.Sy compression
property as a less resource-intensive alternative.
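As a minimal sketch of the two options weighed above (the dataset name is hypothetical):

```shell
# Enable deduplication on a dataset; duplicate blocks written afterwards
# are detected synchronously and stored only once.
zfs set dedup=on pool/data

# Compression is the less resource-intensive alternative mentioned above.
zfs set compression=on pool/data

# The pool-wide deduplication ratio can be inspected afterwards.
zpool list -o name,dedupratio pool
```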
