This is the CFS scheduler.

80% of CFS's design can be summed up in a single sentence: CFS basically
models an "ideal, precise multi-tasking CPU" on real hardware.

An "ideal multi-tasking CPU" is a (non-existent :-)) CPU that has 100%
physical power and can run each task at precisely equal speed, in
parallel, each at 1/nr_running speed. For example: if there are 2 tasks
running then it runs each at 50% physical power - totally in parallel.

On real hardware, we can run only a single task at once, so while that
one task runs, the other tasks that are waiting for the CPU are at a
disadvantage - the current task gets an unfair amount of CPU time. In
CFS this fairness imbalance is expressed and tracked via the per-task
p->wait_runtime (nanosec-unit) value. "wait_runtime" is the amount of
time the task should now run on the CPU for it to become completely fair
and balanced.

( small detail: on 'ideal' hardware, the p->wait_runtime value would
  always be zero - no task would ever get 'out of balance' from the
  'ideal' share of CPU time. )

CFS's task picking logic is based on this p->wait_runtime value and it
is thus very simple: it always tries to run the task with the largest
p->wait_runtime value. In other words, CFS tries to run the task with
the 'gravest need' for more CPU time. So CFS always tries to split up
CPU time between runnable tasks as close to 'ideal multitasking
hardware' as possible.
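
As an illustration, here is a minimal user-space C sketch of that
picking rule. The task structure and the flat array are hypothetical
stand-ins for the kernel's internal structures, not the actual CFS
code:

        #include <stdio.h>

        /* hypothetical, simplified stand-in for the kernel's task struct */
        struct task {
                const char *name;
                long long wait_runtime;         /* nanoseconds owed */
        };

        /* pick the task with the largest wait_runtime: 'gravest need' */
        static struct task *pick_next(struct task *tasks, int nr_running)
        {
                struct task *best = &tasks[0];

                for (int i = 1; i < nr_running; i++)
                        if (tasks[i].wait_runtime > best->wait_runtime)
                                best = &tasks[i];
                return best;
        }

        int main(void)
        {
                struct task tasks[] = {
                        { "editor",  3000000 }, /* owed 3 ms */
                        { "compile", 9000000 }, /* owed 9 ms -> picked */
                        { "daemon",  1000000 }, /* owed 1 ms */
                };

                printf("next: %s\n", pick_next(tasks, 3)->name);
                return 0;
        }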

Most of the rest of CFS's design just falls out of this really simple
concept, with a few add-on embellishments like nice levels,
multiprocessing and various algorithm variants to recognize sleepers.

In practice it works like this: the system runs a task a bit, and when
the task schedules (or a scheduler tick happens) the task's CPU usage is
'accounted for': the (small) time it just spent using the physical CPU
is deducted from p->wait_runtime [minus the 'fair share' of CPU time it
would have gotten anyway]. Once p->wait_runtime gets low enough so that
another task becomes the 'leftmost task' of the time-ordered rbtree it
maintains (plus a small amount of 'granularity' distance relative to the
leftmost task so that we do not over-schedule tasks and thrash the
cache), the new leftmost task is picked and the current task is
preempted.
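
A rough sketch of that bookkeeping step, continuing the hypothetical
'struct task' from the sketch above (the names and the simplified
fair-share arithmetic are illustrative, not the kernel's actual code):

        /* charge one accounting period to the currently running task */
        static void account_tick(struct task *curr, long long delta_exec,
                                 int nr_running)
        {
                /* the fair share curr was entitled to during delta_exec */
                long long fair_share = delta_exec / nr_running;

                /* it used delta_exec but deserved only fair_share of it */
                curr->wait_runtime -= (delta_exec - fair_share);
        }

        /* preempt only once the would-be next task is ahead by more than
           the granularity, so we do not over-schedule and thrash the
           cache */
        static int should_preempt(const struct task *curr,
                                  const struct task *leftmost,
                                  long long granularity_ns)
        {
                return leftmost->wait_runtime >
                       curr->wait_runtime + granularity_ns;
        }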

The rq->fair_clock value tracks the 'CPU time a runnable task would have
fairly gotten, had it been runnable during that time'. So by using
rq->fair_clock values we can accurately timestamp and measure the
'expected CPU time' a task should have gotten. All runnable tasks are
sorted in the rbtree by the "rq->fair_clock - p->wait_runtime" key, and
CFS picks the 'leftmost' task and sticks to it. As the system progresses
forwards, newly woken tasks are put into the tree more and more to the
right - slowly but surely giving a chance for every task to become the
'leftmost task' and thus get on the CPU within a deterministic amount of
time.
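
The key itself is simple enough to sketch, continuing the hypothetical
structures above; the real rbtree insertion is elided in favor of the
ordering an insert would use:

        /* hypothetical, simplified runqueue with its fair clock */
        struct rq {
                unsigned long long fair_clock;  /* ns, monotonic */
        };

        /* the sort key: smaller keys sit further to the left */
        static long long rbtree_key(const struct rq *rq,
                                    const struct task *p)
        {
                return (long long)rq->fair_clock - p->wait_runtime;
        }

        /* ordering an rbtree insert would use to place a task */
        static int task_before(const struct rq *rq,
                               const struct task *a, const struct task *b)
        {
                return rbtree_key(rq, a) < rbtree_key(rq, b);
        }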

Some implementation details:

 - the introduction of Scheduling Classes: an extensible hierarchy of
   scheduler modules. These modules encapsulate scheduling policy
   details and are handled by the scheduler core without the core code
   making too many assumptions about them.
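
   The shape of such an interface can be sketched as a table of hooks;
   the names below are illustrative only, not the kernel's actual
   sched_class definition:

        /* hypothetical sketch of a scheduling-class hook table */
        struct sched_class_ops {
                void (*enqueue_task)(struct rq *rq, struct task *p);
                void (*dequeue_task)(struct rq *rq, struct task *p);
                struct task *(*pick_next_task)(struct rq *rq);
                void (*task_tick)(struct rq *rq, struct task *curr);
        };

   The core then dispatches through these pointers, so a new policy can
   be added without touching the core's picking and accounting paths.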

 - sched_fair.c implements the 'CFS desktop scheduler': it is a
   replacement for the vanilla scheduler's SCHED_OTHER interactivity
   code.

   I'd like to give credit to Con Kolivas for the general approach here:
   he has proven via RSDL/SD that 'fair scheduling' is possible and that
   it results in better desktop scheduling. Kudos Con!

   The CFS patch uses a completely different approach and implementation
   from RSDL/SD. My goal was to make CFS's interactivity quality exceed
   that of RSDL/SD, which is a high standard to meet :-) Testing
   feedback is welcome to decide this one way or another. [ and, in any
   case, all of SD's logic could be added via a kernel/sched_sd.c module
   as well, if Con is interested in such an approach. ]

   CFS's design is quite radical: it does not use runqueues, it uses a
   time-ordered rbtree to build a 'timeline' of future task execution,
   and thus has no 'array switch' artifacts (by which both the vanilla
   scheduler and RSDL/SD are affected).

   CFS uses nanosecond granularity accounting and does not rely on any
   jiffies or other HZ detail. Thus the CFS scheduler has no notion of
   'timeslices' and has no heuristics whatsoever. There is only one
   central tunable (you have to switch on CONFIG_SCHED_DEBUG):

         /proc/sys/kernel/sched_granularity_ns

   which can be used to tune the scheduler from 'desktop' (low
   latencies) to 'server' (good batching) workloads. It defaults to a
   setting suitable for desktop workloads. SCHED_BATCH is handled by the
   CFS scheduler module too.
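
   For instance, a small user-space program can read the current value
   (assuming a kernel built with CONFIG_SCHED_DEBUG, so that the file
   exists):

        #include <stdio.h>

        int main(void)
        {
                unsigned long long ns;
                FILE *f = fopen("/proc/sys/kernel/sched_granularity_ns",
                                "r");

                if (!f) {
                        perror("fopen");
                        return 1;
                }
                if (fscanf(f, "%llu", &ns) != 1) {
                        fclose(f);
                        return 1;
                }
                fclose(f);
                printf("granularity: %llu ns\n", ns);
                return 0;
        }

   Raising the value batches work more aggressively; lowering it favors
   latency.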

   Due to its design, the CFS scheduler is not prone to any of the
   'attacks' that exist today against the heuristics of the stock
   scheduler: fiftyp.c, thud.c, chew.c, ring-test.c and massive_intr.c
   all work fine, do not impact interactivity and produce the expected
   behavior.

   The CFS scheduler has much stronger handling of nice levels and
   SCHED_BATCH: both types of workloads should be isolated much more
   aggressively than under the vanilla scheduler.

   ( another detail: due to nanosec accounting and timeline sorting,
     sched_yield() support is very simple under CFS, and in fact under
     CFS sched_yield() behaves much better than under any other
     scheduler I have tested so far. )

 - sched_rt.c implements SCHED_FIFO and SCHED_RR semantics, in a simpler
   way than the vanilla scheduler does. It uses 100 runqueues (for all
   100 RT priority levels, instead of 140 in the vanilla scheduler)
   and it needs no expired array.
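
   The structure can be sketched as one list head per RT priority,
   continuing the hypothetical 'struct task' above; a real implementation
   would typically also keep a bitmap of non-empty levels so the scan
   below becomes a find-first-bit operation:

        #define RT_PRIOS 100

        /* hypothetical sketch: one queue head per RT priority level */
        struct rt_rq {
                struct task *queue[RT_PRIOS];   /* NULL when empty */
        };

        /* highest-priority (lowest-numbered) non-empty queue, or -1 */
        static int rt_pick_prio(const struct rt_rq *rt_rq)
        {
                for (int prio = 0; prio < RT_PRIOS; prio++)
                        if (rt_rq->queue[prio])
                                return prio;
                return -1;
        }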

 - reworked/sanitized SMP load-balancing: the runqueue-walking
   assumptions are gone from the load-balancing code now, and
   iterators of the scheduling modules are used. The balancing code got
   quite a bit simpler as a result.
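
   Such an iterator can be sketched as a pair of callbacks plus an
   opaque cursor; again these names are illustrative, not the kernel's
   actual interface:

        /* hypothetical iterator a scheduling module could expose so the
           balancer need not know the module's runqueue layout */
        struct rq_iterator {
                void *arg;                          /* private cursor  */
                struct task *(*start)(void *arg);   /* first candidate */
                struct task *(*next)(void *arg);    /* next candidate  */
        };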


Group scheduler extension to CFS
================================

Normally the scheduler operates on individual tasks and strives to provide
fair CPU time to each task. Sometimes, it may be desirable to group tasks
and provide fair CPU time to each such task group. For example, it may
be desirable to first provide fair CPU time to each user on the system
and then to each task belonging to a user.

CONFIG_FAIR_GROUP_SCHED strives to achieve exactly that. It lets
SCHED_NORMAL/BATCH tasks be grouped and divides CPU time fairly among such
groups. At present, there are two (mutually exclusive) mechanisms to group
tasks for CPU bandwidth control purposes:

        - Based on user id (CONFIG_FAIR_USER_SCHED)
                In this option, tasks are grouped according to their user id.
        - Based on the "cgroup" pseudo filesystem (CONFIG_FAIR_CGROUP_SCHED)
                This option lets the administrator create arbitrary groups
                of tasks, using the "cgroup" pseudo filesystem. See
                Documentation/cgroups.txt for more information about this
                filesystem.

Only one of these grouping options can be chosen, not both.

Group scheduler tunables:

When CONFIG_FAIR_USER_SCHED is defined, a directory is created in sysfs for
each new user and a "cpu_share" file is added in that directory.

        # cd /sys/kernel/uids
        # cat 512/cpu_share             # Display user 512's CPU share
        1024
        # echo 2048 > 512/cpu_share     # Modify user 512's CPU share
        # cat 512/cpu_share             # Display user 512's CPU share
        2048
        #

CPU bandwidth between two users is divided in the ratio of their CPU shares.
For example: if you would like user "root" to get twice the bandwidth of
user "guest", then set the cpu_share for both users such that "root"'s
cpu_share is twice "guest"'s cpu_share.
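
In general, a group's fraction of CPU bandwidth is its share divided by
the sum of all shares. A tiny illustrative helper (the function is
hypothetical, not part of any kernel interface):

        /* e.g. root=2048, guest=1024 -> root gets 2/3, guest gets 1/3 */
        static double cpu_fraction(unsigned long share,
                                   unsigned long total_shares)
        {
                return (double)share / total_shares;
        }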


When CONFIG_FAIR_CGROUP_SCHED is defined, a "cpu.shares" file is created
for each group created using the pseudo filesystem. See the example steps
below to create task groups and modify their CPU share using the "cgroup"
pseudo filesystem:

        # mkdir /dev/cpuctl
        # mount -t cgroup -ocpu none /dev/cpuctl
        # cd /dev/cpuctl

        # mkdir multimedia      # create "multimedia" group of tasks
        # mkdir browser         # create "browser" group of tasks

        # # Configure the multimedia group to receive twice the CPU
        # # bandwidth of the browser group

        # echo 2048 > multimedia/cpu.shares
        # echo 1024 > browser/cpu.shares

        # firefox &     # Launch firefox and move it to "browser" group
        # echo <firefox_pid> > browser/tasks

        # # Launch gmplayer (or your favourite movie player)
        # echo <movie_player_pid> > multimedia/tasks
