
        --- GEOM BASED DISK SCHEDULERS FOR FREEBSD ---

This code contains a framework for GEOM-based disk schedulers and a
couple of sample scheduling algorithms that use the framework and
implement two forms of "anticipatory scheduling" (see below for more
details).

As a quick example of what this code can give you, try running "dd",
"tar", or some other program with a highly SEQUENTIAL access pattern
together with "cvs", "cvsup", "svn" or other programs with highly
RANDOM access patterns (this is not a made-up example: it is pretty
common for developers to have one or more apps doing random accesses
and others doing sequential accesses, e.g., loading large binaries
from disk, checking the integrity of tarballs, watching media
streams and so on).

These are the results we get on a local machine (AMD BE2400 dual
core CPU, SATA 250GB disk):

    /mnt is a partition mounted on /dev/ad0s1f

    cvs:        cvs -d /mnt/home/ncvs-local update -Pd /mnt/ports
    dd-read:    dd bs=128k of=/dev/null if=/dev/ad0 (or ad0.sched.)
    dd-write:   dd bs=128k if=/dev/zero of=/mnt/largefile

                        NO SCHEDULER            RR SCHEDULER
                        dd      cvs             dd      cvs

    dd-read only        72 MB/s ---             72 MB/s ---
    dd-write only       55 MB/s ---             55 MB/s ---
    dd-read+cvs          6 MB/s ok              30 MB/s ok
    dd-write+cvs        55 MB/s slooow          14 MB/s ok

As you can see, when cvs runs concurrently with dd, performance
drops dramatically, and depending on read or write mode, one of
the two is severely penalized.  Using the RR scheduler in this
example makes the dd reader go much faster when competing with
cvs, and lets cvs make progress when competing with a writer.

To try it out:

1. PLEASE MAKE SURE THAT THE DISK THAT YOU WILL BE USING FOR TESTS
   DOES NOT CONTAIN PRECIOUS DATA.
    This is experimental code, so we make no guarantees, though
    I am routinely using it on my desktop and laptop.

2. EXTRACT AND BUILD THE PROGRAMS
    A 'make install' in the directory should work (with root privs),
    or you can even try the binary modules.
    If you want to build the modules yourself, look at the Makefile.

3. LOAD THE MODULE, CREATE A GEOM NODE, RUN TESTS

    The scheduler's module must be loaded first:

      # kldload gsched_rr

    Substitute gsched_rr with gsched_as to test AS.  Then, supposing
    that you are using /dev/ad0 for testing, a scheduler can be
    attached to it with:

      # geom sched insert ad0

    The scheduler is inserted transparently in the geom chain, so
    mounted partitions and filesystems will keep working, but
    requests will now go through the scheduler.

    To change the scheduler on the fly, you can reconfigure the geom:

      # geom sched configure -a as ad0.sched.

    assuming that gsched_as was loaded previously.

4. SCHEDULER REMOVAL

    In principle it is possible to remove the scheduler module
    even on an active chain by doing

        # geom sched destroy ad0.sched.

    However, there is a race in the geom subsystem which makes
    the removal unsafe if there are active requests on a chain.
    So, in order to reduce the risk of data loss, make sure
    you don't remove a scheduler from a chain with ongoing transactions.

--- NOTES ON THE SCHEDULERS ---

The important contribution of this code is the framework to experiment
with different scheduling algorithms.  'Anticipatory scheduling'
is a very powerful technique based on the following reasoning:

    Disk throughput is much better when serving sequential requests.
    If we have a mix of sequential and random requests, and we see a
    non-sequential request, do not serve it immediately but instead
    wait a little bit (2..5ms) to see if another one is coming that
    the disk can serve more efficiently.

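To make the idea concrete, here is a minimal sketch of that decision
in C.  This is NOT the code in gs_rr.c: the softc layout and the
start_timer() helper are made up for illustration only.

    #include <sys/param.h>
    #include <sys/bio.h>

    struct as_softc {
        off_t   last_offset;    /* where the last served request ended */
        int     waiting;        /* are we currently anticipating? */
    };

    /* Hypothetical helper: arm a one-shot timer that re-enters the
     * scheduler after "ms" milliseconds (not implemented here). */
    static void start_timer(struct as_softc *sc, int ms);

    /* Decide whether to serve "bp" now or hold it back a little. */
    static struct bio *
    as_pick(struct as_softc *sc, struct bio *bp)
    {
        if (bp->bio_offset == sc->last_offset) {
            /* Sequential with the previous request: serve it now. */
            sc->waiting = 0;
            sc->last_offset = bp->bio_offset + bp->bio_length;
            return (bp);
        }
        if (!sc->waiting) {
            /* Non-sequential: hold it and wait 2..5ms, hoping a
             * sequential request arrives in the meantime. */
            sc->waiting = 1;
            start_timer(sc, 4 /* ms */);
            return (NULL);
        }
        /* Already waited once: give up anticipation and serve it. */
        sc->waiting = 0;
        sc->last_offset = bp->bio_offset + bp->bio_length;
        return (bp);
    }
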
There are many details that should be added to make sure that the
mechanism is effective with different workloads and systems, to
gain a few extra percent in performance, to improve fairness,
isolation among processes, etc.  A discussion of the vast literature
on the subject is beyond the purpose of this short note.

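As for the framework side, a scheduler plugs into g_sched.c by
supplying a small set of methods.  The sketch below is purely
illustrative: the real method table is declared in gs_scheduler.h
and differs in names and detail.

    #include <sys/param.h>
    #include <sys/bio.h>
    #include <geom/geom.h>

    /* Illustrative method table, NOT the real gs_scheduler.h API. */
    struct gs_ops {
        void *(*init)(struct g_geom *gp);          /* per-device state */
        void  (*fini)(void *sc);                   /* tear it down */
        int   (*start)(void *sc, struct bio *bp);  /* enqueue a request */
        struct bio *(*next)(void *sc);             /* pick one to serve */
    };

    /* A trivial FIFO scheduler is then little more than a bioq: */
    static int
    fifo_start(void *sc, struct bio *bp)
    {
        bioq_insert_tail((struct bio_queue_head *)sc, bp);
        return (0);
    }

    static struct bio *
    fifo_next(void *sc)
    {
        return (bioq_takefirst((struct bio_queue_head *)sc));
    }
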
--------------------------------------------------------------------------

TRANSPARENT INSERT/DELETE

geom_sched is an ordinary geom module; however, it is convenient
to plug it transparently into the geom graph, so that one can
enable or disable scheduling on a mounted filesystem, and the
names in /etc/fstab do not depend on the presence of the scheduler.

To understand how this works in practice, remember that in GEOM
we have "provider" and "geom" objects.
Say that we want to hook a scheduler on provider "ad0",
accessible through pointer 'pp'.  Originally, pp is attached to
geom "ad0" (same name, different object), accessible through
pointer old_gp.

  BEFORE        ---> [ pp    --> old_gp ...]

A normal "geom sched create ad0" call would create a new geom node
on top of provider ad0/pp, and export a newly created provider
("ad0.sched.", accessible through pointer newpp).

  AFTER create  ---> [ newpp --> gp --> cp ] ---> [ pp    --> old_gp ... ]

On top of newpp, a whole tree will be created automatically, and we
can, e.g., mount partitions on /dev/ad0.sched.s1d, and those requests
will go through the scheduler, whereas any partition mounted on
the pre-existing device entries will not.

With the transparent insert mechanism, the original provider "ad0"/pp
is instead hooked to the newly created geom, as follows:

  AFTER insert  ---> [ pp    --> gp --> cp ] ---> [ newpp --> old_gp ... ]

so anything that was previously using provider pp will now have
its requests routed through the scheduler node.

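Conceptually, the insert boils down to exchanging which geom each of
the two providers hangs off.  The fragment below is a rough sketch of
that swap only, not the actual g_sched.c code: the real path also has
to fix up list linkage, reference counts and locking.

    #include <sys/param.h>
    #include <geom/geom.h>

    /* pp is the original "ad0" provider, newpp the "ad0.sched."
     * provider; gp is the scheduler geom, old_gp the disk geom.
     * Assumes the GEOM topology lock is held. */
    static void
    swap_providers(struct g_provider *pp, struct g_provider *newpp,
        struct g_geom *gp, struct g_geom *old_gp)
    {
        pp->geom    = gp;       /* pp now exports the scheduler */
        newpp->geom = old_gp;   /* newpp now exports the raw disk */
    }
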
A removal ("geom sched destroy ad0.sched.") will restore the original
configuration.

# $FreeBSD$