
FreeBSD/Linux Kernel Cross Reference
sys/kern/sched_prim.c


    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1987-1993 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        sched_prim.c,v $
   29  * Revision 2.25  93/11/17  17:23:36  dbg
   30  *      Replaced evc_notify_abort with evc_wait_interrupt, called only
   31  *      if thread is halted (like mach_msg_interrupt).
   32  *      [93/08/20            dbg]
   33  * 
   34  *      Break up thread lock.
   35  *      [93/05/26            dbg]
   36  * 
   37  *      Removed thread->depress_timer.
   38  *      [93/04/08            dbg]
   39  * 
   40  *      Changed timeouts to seconds/nanoseconds.
   41  * 
   42  *      Restructured:
   43  *      kern/sched_prim.c       thread state machine,
   44  *                              wait queues,
   45  *                              context switch primitives
   46  *      kern/run_queues.c       run queues
   47  *      kern/priority.c         Mach time-sharing priority computation
   48  *      sched_policy/mk_ts.c    thread_setrun, context switch check
   49  *      [93/01/28            dbg]
   50  * 
   51  * 
   52  * Revision 2.23  93/01/14  17:36:13  danner
   53  *      Added assert_wait argument casts.
   54  *      [93/01/12            danner]
   55  * 
   56  *      Added ANSI function prototypes.
   57  *      [92/12/28            dbg]
   58  * 
   59  *      64bit cleanup. Proper spl typing.
   60  *      NOTE NOTE NOTE: events must be 'vm_offset_t' and don't you forget it.
   61  *      [92/12/01            af]
   62  * 
   63  *      Added function prototypes.
   64  *      [92/11/23            dbg]
   65  * 
   66  *      Fixed locking in do_thread_scan: thread lock must be taken
   67  *      before runq lock.  Removed obsolete swap states from
   68  *      documentation of state machine.
   69  *      [92/10/29            dbg]
   70  * 
   71  * Revision 2.22  92/08/05  18:05:25  jfriedl
   72  *      Added call to  machine_idle in idle loop if power_save option is
   73  *      selected.
   74  *      [92/08/05            mrt]
   75  * 
   76  * Revision 2.21  92/07/20  13:32:37  cmaeda
   77  *      Fast tas support: Check if current thread is in ras during 
   78  *      thread_block.
   79  *      [92/05/11  14:34:42  cmaeda]
   80  * 
   81  * Revision 2.20  92/05/21  17:15:31  jfriedl
   82  *      Added void to fcns that still needed it.
   83  *      Added init to 'restart_needed' in do_thread_scan().
   84  *      [92/05/16            jfriedl]
   85  * 
   86  * Revision 2.19  92/02/19  16:06:35  elf
   87  *      Added argument to compute_priority.  We don't always want to do
   88  *      a reschedule right away.
   89  *      [92/01/19            rwd]
   90  * 
   91  * Revision 2.18  91/09/04  11:28:26  jsb
   92  *      Add a temporary hack to thread_dispatch for i860 support:
   93  *      don't panic if thread->state is TH_WAIT.
   94  *      [91/09/04  09:24:43  jsb]
   95  * 
   96  * Revision 2.17  91/08/24  11:59:46  af
   97  *      Final form of do_priority_computation was missing a pair of parentheses.
   98  *      [91/07/19            danner]
   99  * 
  100  * Revision 2.16  91/07/31  17:47:27  dbg
  101  *      When re-invoking the running thread (thread_block, thread_run):
  102  *      . Mark the thread interruptible.
  103  *      . If there is a continuation, call it instead of returning.
  104  *      [91/07/26            dbg]
  105  * 
  106  *      Fix timeout race.
  107  *      [91/05/23            dbg]
  108  * 
  109  *      Revised scheduling state machine.
  110  *      [91/05/22            dbg]
  111  * 
  112  * Revision 2.15  91/05/18  14:32:58  rpd
  113  *      Added check_simple_locks to thread_block and thread_run.
  114  *      [91/05/02            rpd]
  115  *      Changed recompute_priorities to use a private timer.
  116  *      Changed thread_timeout_setup to initialize depress_timer.
  117  *      [91/03/31            rpd]
  118  * 
  119  *      Updated thread_invoke to check stack_privilege.
  120  *      [91/03/30            rpd]
  121  * 
  122  * Revision 2.14  91/05/14  16:46:16  mrt
  123  *      Correcting copyright
  124  * 
  125  * Revision 2.13  91/05/08  12:48:33  dbg
  126  *      Distinguish processor sets from run queues in choose_pset_thread!
  127  *      Remove (long dead) 'someday' code.
  128  *      [91/04/26  14:43:26  dbg]
  129  * 
  130  * Revision 2.12  91/03/16  14:51:09  rpd
  131  *      Added idle_thread_continue, sched_thread_continue.
  132  *      [91/01/20            rpd]
  133  * 
  134  *      Allow swapped threads on the run queues.
  135  *      Added thread_invoke, thread_select.
  136  *      Reorganized thread_block, thread_run.
  137  *      Changed the AST interface; idle_thread checks for ASTs now.
  138  *      [91/01/17            rpd]
  139  * 
  140  * Revision 2.11  91/02/05  17:28:57  mrt
  141  *      Changed to new Mach copyright
  142  *      [91/02/01  16:16:51  mrt]
  143  * 
  144  * Revision 2.10  91/01/08  15:16:38  rpd
  145  *      Added KEEP_STACKS support.
  146  *      [91/01/06            rpd]
  147  *      Added thread_continue_calls counter.
  148  *      [91/01/03  22:07:43  rpd]
  149  * 
  150  *      Added continuation argument to thread_run.
  151  *      [90/12/11            rpd]
  152  *      Added continuation argument to thread_block/thread_continue.
  153  *      Removed FAST_CSW conditionals.
  154  *      [90/12/08            rpd]
  155  * 
  156  *      Removed thread_swap_tick.
  157  *      [90/11/11            rpd]
  158  * 
  159  * Revision 2.9  90/09/09  14:32:36  rpd
  160  *      Removed do_pset_scan call from sched_thread.
  161  *      [90/08/30            rpd]
  162  * 
  163  * Revision 2.8  90/08/07  22:22:54  rpd
  164  *      Fixed casting of a volatile comparison: the non-volatile side
  165  *      must be cast, or else.
  166  *      Removed silly set_leds() mips thingy.
  167  *      [90/08/07  15:56:08  af]
  168  * 
  169  * Revision 2.7  90/08/07  17:58:55  rpd
  170  *      Removed sched_debug; converted set_pri, update_priority,
  171  *      and compute_my_priority to real functions.
  172  *      Picked up fix for thread_block, to check for processor set mismatch.
  173  *      Record last processor info on all multiprocessors.
  174  *      [90/08/07            rpd]
  175  * 
  176  * Revision 2.6  90/06/02  14:55:49  rpd
  177  *      Updated to new scheduling technology.
  178  *      [90/03/26  22:16:03  rpd]
  179  * 
  180  * Revision 2.5  90/01/11  11:43:51  dbg
  181  *      Check for cpu shutting down on exit from idle loop - next_thread
  182  *      will be THREAD_NULL in this case.
  183  *      [90/01/03            dbg]
  184  * 
  185  *      Make sure cpu is marked active on all exits from idle loop.
  186  *      [89/12/11            dbg]
  187  * 
  188  *      Removed more lint.
  189  *      [89/12/05            dbg]
  190  * 
  191  *      DLB's scheduling changes in thread_block don't work if partially
  192  *      applied; a thread can run in two places at once.  Revert to old
  193  *      code, pending a complete merge.
  194  *      [89/12/04            dbg]
  195  * 
  196  * Revision 2.4  89/11/29  14:09:11  af
  197  *      On Mips, delay setting of active_threads inside load_context,
  198  *      or we might take exceptions in an embarrassing state.
  199  *      [89/11/03  17:00:04  af]
  200  * 
  201  *      Long overdue fix: the pointers that the idle thread uses to check
  202  *      for someone to become runnable are now "volatile".  This prevents
  203  *      smart compilers from overoptimizing (e.g. Mips).
  204  * 
  205  *      While looking for someone to run in the idle_thread(), rotate
  206  *      console lights on Mips to show we're alive [useful when machine
  207  *      becomes catatonic].
  208  *      [89/10/28            af]
  209  * 
  210  * Revision 2.3  89/09/08  11:26:22  dbg
  211  *      Add extra thread state cases to thread_switch, since it may now
  212  *      be called by a thread about to block.
  213  *      [89/08/22            dbg]
  214  * 
  215  * 19-Dec-88  David Golub (dbg) at Carnegie-Mellon University
  216  *      Changes for MACH_KERNEL:
  217  *      . Import timing definitions from sys/time_out.h.
  218  *      . Split uses of PMAP_ACTIVATE and PMAP_DEACTIVATE into
  219  *        separate _USER and _KERNEL macros.
  220  *
  221  * Revision 2.8  88/12/19  02:46:33  mwyoung
  222  *      Corrected include file references.  Use <kern/macro_help.h>.
  223  *      [88/11/22            mwyoung]
  224  *      
  225  *      In thread_wakeup_with_result(), only lock threads that have the
  226  *      appropriate wait_event.  Both the wait_event and the hash bucket
  227  *      links are only modified with both the thread *and* hash bucket
  228  *      locked, so it should be safe to read them with either locked.
  229  *      
  230  *      Documented the wait event mechanism.
  231  *      
  232  *      Summarized ancient history.
  233  *      [88/11/21            mwyoung]
  234  * 
  235  * Revision 2.7  88/08/25  18:18:00  mwyoung
  236  *      Corrected include file references.
  237  *      [88/08/22            mwyoung]
  238  *      
  239  *      Avoid unsigned computation in wait_hash.
  240  *      [88/08/16  00:29:51  mwyoung]
  241  *      
  242  *      Add priority check to thread_check; make queue index unsigned,
  243  *      so that checking works correctly at all.
  244  *      [88/08/11  18:47:55  mwyoung]
  245  * 
  246  * 11-Aug-88  David Black (dlb) at Carnegie-Mellon University
  247  *      Support ast mechanism for threads.  Thread from local_runq gets
  248  *      minimum quantum to start.
  249  *
  250  *  9-Aug-88  David Black (dlb) at Carnegie-Mellon University
  251  *      Moved logic to detect and clear next_thread[] dispatch to
  252  *      idle_thread() from thread_block().
  253  *      Maintain first_quantum field in thread instead of runrun.
  254  *      Changed preempt logic in thread_setrun.
  255  *      Avoid context switch if current thread is still runnable and
  256  *      processor would go idle as a result.
  257  *      Added scanner to unstick stuck threads.
  258  *
  259  * Revision 2.6  88/08/06  18:25:03  rpd
  260  * Eliminated use of kern/mach_ipc_defs.h.
  261  * 
  262  * 10-Jul-88  David Golub (dbg) at Carnegie-Mellon University
  263  *      Check for negative priority (BUG) in thread_setrun.
  264  *
  265  * Revision 2.5  88/07/20  16:39:35  rpd
  266  * Changed "NCPUS > 1" conditionals that were eliminating dead
  267  * simple locking code to MACH_SLOCKS conditionals.
  268  * 
  269  *  7-Jul-88  David Golub (dbg) at Carnegie-Mellon University
  270  *      Split uses of PMAP_ACTIVATE and PMAP_DEACTIVATE into separate
  271  *      _USER and _KERNEL macros.
  272  *
  273  * 15-Jun-88  Michael Young (mwyoung) at Carnegie-Mellon University
  274  *      Removed excessive thread_unlock() occurrences in thread_wakeup.
  275  *      Problem discovered and solved by Richard Draves.
  276  *
  277  * Historical summary:
  278  *
  279  *      Redo priority recomputation. [dlb, 29 feb 88]
  280  *      New accurate timing. [dlb, 19 feb 88]
  281  *      Simplified choose_thread and thread_block. [dlb, 18 dec 87]
  282  *      Add machine-dependent hooks in idle loop. [dbg, 24 nov 87]
  283  *      Quantum scheduling changes. [dlb, 14 oct 87]
  284  *      Replaced scheduling logic with a state machine, and included
  285  *       timeout handling. [dbg, 05 oct 87]
  286  *      Deactivate kernel pmap in idle_thread. [dlb, 23 sep 87]
  287  *      Favor local_runq in choose_thread. [dlb, 23 sep 87]
  288  *      Hacks for master processor handling. [rvb, 12 sep 87]
  289  *      Improved idle cpu and idle threads logic. [dlb, 24 aug 87]
  290  *      Priority computation improvements. [dlb, 26 jun 87]
  291  *      Quantum-based scheduling. [avie, dlb, apr 87]
  292  *      Improved thread swapper. [avie, 13 mar 87]
  293  *      Lots of bug fixes. [dbg, mar 87]
  294  *      Accurate timing support. [dlb, 27 feb 87]
  295  *      Reductions in scheduler lock contention. [dlb, 18 feb 87]
  296  *      Revise thread suspension mechanism. [avie, 17 feb 87]
  297  *      Real thread handling [avie, 31 jan 87]
  298  *      Direct idle cpu dispatching. [dlb, 19 jan 87]
  299  *      Initial processor binding. [avie, 30 sep 86]
  300  *      Initial sleep/wakeup. [dbg, 12 jun 86]
  301  *      Created. [avie, 08 apr 86]
  302  */
  303 /*
  304  *      File:   sched_prim.c
  305  *      Author: Avadis Tevanian, Jr.
  306  *      Date:   1986
  307  *
  308  *      Scheduling primitives
  309  *
  310  */
  311 
  312 #include <cpus.h>
  313 #include <fast_tas.h>
  314 
  315 #include <mach/boolean.h>
  316 #include <mach/kern_return.h>
  317 
  318 #include <ipc/mach_msg.h>
  319 
  320 #include <kern/ast.h>
  321 #include <kern/counters.h>
  322 #include <kern/cpu_number.h>
  323 #include <kern/lock.h>
  324 #include <kern/mach_timer.h>
  325 #include <kern/processor.h>
  326 #include <kern/queue.h>
  327 #include <kern/sched_policy.h>
  328 #include <kern/sched_prim.h>
  329 #include <kern/stack.h>
  330 #include <kern/thread.h>
  331 #include <kern/thread_swap.h>
  332 
  333 #include <machine/machspl.h>    /* For def'n of splsched() */
  334 
  335 /*
  336  *      State machine
  337  *
  338  * states are combinations of:
  339  *  R   running
  340  *  W   waiting (or on wait queue)
  341  *  S   suspended (or will suspend)
  342  *  N   non-interruptible
  343  *
  344  * initial state / action:
  345  *      assert_wait     thread_block    clear_wait      suspend resume
  346  *
  347  * R    RW,  RWN        R;   setrun     -               RS      -
  348  * RS   RWS, RWNS       S;              -               -       R
  349  *                       suspend_wakeup
  350  * RN   RWN             RN;  setrun     -               RNS     -
  351  * RNS  RWNS            RNS; setrun     -               -       RN
  352  *
  353  * RW                   W               R               RWS     -
  354  * RWN                  WN              RN              RWNS    -
  355  * RWS                  WS;             RS              -       RW
  356  *                       suspend_wakeup
  357  * RWNS                 WNS             RNS             -       RWN
  358  *
  359  * W                                    R;   setrun     WS      -
  360  * WN                                   RN;  setrun     WNS     -
  361  * WNS                                  RNS; setrun     -       WN
  362  *
  363  * S                                    -               -       R
  364  * WS                                   S               -       W
  365  */
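      /*
       *      Example: reading the table as bit operations (a sketch,
       *      assuming the TH_* state bits from <kern/thread.h>):
       *
       *              int state = TH_RUN;        R    running
       *              state |= TH_WAIT;          RW   after assert_wait
       *              state |= TH_SUSP;          RWS  after suspend
       *              state &= ~TH_WAIT;         RS   after clear_wait
       *              state &= ~TH_SUSP;         R    after resume
       *
       *      Each transition above is a bit-set or bit-clear on the
       *      thread state word, performed under the thread's
       *      scheduling lock.
       */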
  366 
  367 /*
  368  *      Waiting protocols and implementation:
  369  *
  370  *      Each thread may be waiting for exactly one event; this event
  371  *      is set using assert_wait().  That thread may be awakened either
  372  *      by performing a thread_wakeup_prim() on its event,
  373  *      or by directly waking that thread up with clear_wait().
  374  *
  375  *      The implementation of wait events uses a hash table.  Each
  376  *      bucket is a queue of threads whose events have the same hash
  377  *      value; the queue is chained through the thread's run queue
  378  *      field.  [It is not possible to be waiting and runnable at the
  379  *      same time.]
  380  *
  381  *      Locks on both the thread and on the hash buckets govern the
  382  *      wait event field and the queue chain field.  Because wakeup
  383  *      operations only have the event as an argument, the event hash
  384  *      bucket must be locked before any thread.
  385  *
  386  *      Scheduling operations may also occur at interrupt level; therefore,
  387  *      interrupts below splsched() must be prevented when holding
  388  *      thread or hash bucket locks.
  389  *
  390  *      The wait event hash table declarations are as follows:
  391  */
  392 
  393 #define NUMQUEUES       59
  394 
  395 queue_head_t            wait_queue[NUMQUEUES];
  396 decl_simple_lock_data(, wait_lock[NUMQUEUES])
  397 
  398 /* NOTE: we want a small non-negative integer out of this */
  399 #define wait_hash(event) \
  400         ((int)((((integer_t)(event) < 0) ? ~(integer_t)(event) \
  401                                          :  (integer_t)(event)) % NUMQUEUES))
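      /*
       *      Example: hashing a hypothetical event address to its
       *      bucket (the address is made up; the other names are
       *      declared above):
       *
       *              event_t ev    = (event_t) 0xC0011234;
       *              int     index = wait_hash(ev);       in [0, 58]
       *              queue_t q     = &wait_queue[index];
       *
       *      Per the locking rules above, wait_lock[index] must be
       *      taken before any thread lock when walking q.
       */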
  402 
  403 void sched_init(void)
  404 {
  405         register int i;
  406 
  407         for (i = 0; i < NUMQUEUES; i++) {
  408                 queue_init(&wait_queue[i]);
  409                 simple_lock_init(&wait_lock[i]);
  410         }
  411 }
  412 
  413 /*
  414  *      Routine:        thread_will_wait
  415  *      Purpose:
  416  *              Assert that the thread intends to block.
  417  *              An event is not supplied; the thread
  418  *              will be awakened with clear_wait or
  419  *              thread_go.
  420  */
  421 
  422 void
  423 thread_will_wait(
  424         thread_t thread)
  425 {
  426         spl_t s;
  427 
  428         s = splsched();
  429         thread_sched_lock(thread);
  430 
  431         assert(thread->wait_result = -1);       /* deliberate assignment, for later assertions */
  432         thread->state |= TH_WAIT;
  433 
  434         thread_sched_unlock(thread);
  435         splx(s);
  436 }
  437 
  438 /*
  439  *      Routine:        thread_will_wait_with_timeout
  440  *      Purpose:
  441  *              Assert that the thread intends to block,
  442  *              with a timeout.
  443  */
  444 void thread_timeout(
  445         void    *param);        /*  forward */
  446 
  447 void
  448 thread_will_wait_with_timeout(
  449         thread_t thread,
  450         mach_msg_timeout_t msecs)
  451 {
  452         time_spec_t     interval;
  453         spl_t s;
  454 
  455         milliseconds_to_time_spec(msecs, interval);
  456 
  457         s = splsched();
  458         thread_sched_lock(thread);
  459 
  460         assert(thread->wait_result = -1);       /* deliberate assignment, for later assertions */
  461         thread->state |= TH_WAIT;
  462 
  463         thread->timer.te_fcn = thread_timeout;
  464         timer_elt_enqueue(&thread->timer, interval, FALSE);
  465 
  466         thread_sched_unlock(thread);
  467         splx(s);
  468 }
  469 
  470 /*
  471  *      Routine:        thread_go
  472  *      Purpose:
  473  *              Start a thread running.
  474  *      Conditions:
  475  *              IPC locks may be held.
  476  */
  477 
  478 void
  479 thread_go(
  480         thread_t thread)
  481 {
  482         int state;
  483         spl_t s;
  484 
  485         s = splsched();
  486         thread_sched_lock(thread);
  487 
  488         timer_elt_remove(&thread->timer);
  489 
  490         state = thread->state;
  491         switch (state & TH_SCHED_STATE) {
  492 
  493             case TH_WAIT | TH_SUSP | TH_UNINT:
  494             case TH_WAIT           | TH_UNINT:
  495             case TH_WAIT:
  496                 /*
  497                  *      Sleeping and not suspendable - put
  498                  *      on run queue.
  499                  */
  500                 thread->state = (state &~ TH_WAIT) | TH_RUN;
  501                 thread->wait_result = THREAD_AWAKENED;
  502                 thread_setrun(thread, TRUE);
  503                 break;
  504 
  505             case          TH_WAIT | TH_SUSP:
  506             case TH_RUN | TH_WAIT:
  507             case TH_RUN | TH_WAIT | TH_SUSP:
  508             case TH_RUN | TH_WAIT           | TH_UNINT:
  509             case TH_RUN | TH_WAIT | TH_SUSP | TH_UNINT:
  510                 /*
  511                  *      Either already running, or suspended.
  512                  */
  513                 thread->state = state & ~TH_WAIT;
  514                 thread->wait_result = THREAD_AWAKENED;
  515                 break;
  516 
  517             default:
  518                 /*
  519                  *      Not waiting.
  520                  */
  521                 break;
  522         }
  523 
  524         thread_sched_unlock(thread);
  525         splx(s);
  526 }
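      /*
       *      Example: an event-less wait paired with thread_go (a
       *      sketch; th names some other thread):
       *
       *              thread_will_wait(th);           th intends to block
       *                  ... th calls thread_block(CONTINUE_NULL) ...
       *
       *              thread_go(th);                  later, from another
       *                                              context: wake th
       */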
  527 
  528 /*
  529  *      assert_wait:
  530  *
  531  *      Assert that the current thread is about to go to
  532  *      sleep until the specified event occurs.
  533  */
  534 void assert_wait(
  535         event_t         event,
  536         boolean_t       interruptible)
  537 {
  538         register queue_t        q;
  539         register int            index;
  540         register thread_t       thread;
  541 #if     MACH_SLOCKS
  542         register simple_lock_t  lock;
  543 #endif  /* MACH_SLOCKS */
  544         spl_t                   s;
  545 
  546         thread = current_thread();
  547         if (thread->wait_event != 0) {
  548                 panic("assert_wait: already asserted event %#x\n",
  549                       thread->wait_event);
  550         }
  551         s = splsched();
  552         if (event != 0) {
  553                 index = wait_hash(event);
  554                 q = &wait_queue[index];
  555 #if     MACH_SLOCKS
  556                 lock = &wait_lock[index];
  557 #endif  /* MACH_SLOCKS */
  558                 simple_lock(lock);
  559                 thread_sched_lock(thread);
  560                 enqueue_tail(q, (queue_entry_t) thread);
  561                 thread->wait_event = event;
  562                 if (interruptible)
  563                         thread->state |= TH_WAIT;
  564                 else
  565                         thread->state |= TH_WAIT | TH_UNINT;
  566                 thread_sched_unlock(thread);
  567                 simple_unlock(lock);
  568         }
  569         else {
  570                 thread_sched_lock(thread);
  571                 if (interruptible)
  572                         thread->state |= TH_WAIT;
  573                 else
  574                         thread->state |= TH_WAIT | TH_UNINT;
  575                 thread_sched_unlock(thread);
  576         }
  577         splx(s);
  578 }
  579 
  580 /*
  581  *      clear_wait:
  582  *
  583  *      Clear the wait condition for the specified thread.  Start the thread
  584  *      executing if that is appropriate.
  585  *
  586  *      parameters:
  587  *        thread                thread to awaken
  588  *        result                Wakeup result the thread should see
  589  *        interrupt_only        Don't wake up the thread if it isn't
  590  *                              interruptible.
  591  */
  592 void clear_wait(
  593         register thread_t       thread,
  594         int                     result,
  595         boolean_t               interrupt_only)
  596 {
  597         register int            index;
  598         register queue_t        q;
  599 #if     MACH_SLOCKS
  600         register simple_lock_t  lock;
  601 #endif  /* MACH_SLOCKS */
  602         register event_t        event;
  603         spl_t                   s;
  604 
  605         s = splsched();
  606         thread_sched_lock(thread);
  607         if (interrupt_only && (thread->state & TH_UNINT)) {
  608                 /*
  609                  *      can`t interrupt thread
  610                  */
  611                 thread_sched_unlock(thread);
  612                 splx(s);
  613                 return;
  614         }
  615 
  616         event = thread->wait_event;
  617         if (event != 0) {
  618                 thread_sched_unlock(thread);
  619                 index = wait_hash(event);
  620                 q = &wait_queue[index];
  621 #if     MACH_SLOCKS
  622                 lock = &wait_lock[index];
  623 #endif  /* MACH_SLOCKS */
  624                 simple_lock(lock);
  625                 /*
  626                  *      If the thread is still waiting on that event,
  627                  *      then remove it from the list.  If it is waiting
  628                  *      on a different event, or no event at all, then
  629                  *      someone else did our job for us.
  630                  */
  631                 thread_sched_lock(thread);
  632                 if (thread->wait_event == event) {
  633                         remqueue(q, (queue_entry_t)thread);
  634                         thread->wait_event = 0;
  635                         event = 0;              /* run the wakeup code below */
  636                 }
  637                 simple_unlock(lock);
  638         }
  639         if (event == 0) {
  640                 register int    state = thread->state;
  641 
  642                 timer_elt_remove(&thread->timer);
  643 
  644                 switch (state & TH_SCHED_STATE) {
  645                     case          TH_WAIT | TH_SUSP | TH_UNINT:
  646                     case          TH_WAIT           | TH_UNINT:
  647                     case          TH_WAIT:
  648                         /*
  649                          *      Sleeping and not suspendable - put
  650                          *      on run queue.
  651                          */
  652                         thread->state = (state &~ TH_WAIT) | TH_RUN;
  653                         thread->wait_result = result;
  654                         thread_setrun(thread, TRUE);
  655                         break;
  656 
  657                     case          TH_WAIT | TH_SUSP:
  658                     case TH_RUN | TH_WAIT:
  659                     case TH_RUN | TH_WAIT | TH_SUSP:
  660                     case TH_RUN | TH_WAIT           | TH_UNINT:
  661                     case TH_RUN | TH_WAIT | TH_SUSP | TH_UNINT:
  662                         /*
  663                          *      Either already running, or suspended.
  664                          */
  665                         thread->state = state &~ TH_WAIT;
  666                         thread->wait_result = result;
  667                         break;
  668 
  669                     default:
  670                         /*
  671                          *      Not waiting.
  672                          */
  673                         break;
  674                 }
  675         }
  676         thread_sched_unlock(thread);
  677         splx(s);
  678 }
  679 
  680 /*
  681  *      thread_wakeup_prim:
  682  *
  683  *      Common routine for thread_wakeup, thread_wakeup_with_result,
  684  *      and thread_wakeup_one.
  685  *
  686  */
  687 void thread_wakeup_prim(
  688         event_t         event,
  689         boolean_t       one_thread,
  690         int             result)
  691 {
  692         register queue_t        q;
  693         register int            index;
  694         register thread_t       thread, next_th;
  695 #if     MACH_SLOCKS
  696         register simple_lock_t  lock;
  697 #endif  /* MACH_SLOCKS */
  698         spl_t                   s;
  699         register int            state;
  700 
  701         index = wait_hash(event);
  702         q = &wait_queue[index];
  703         s = splsched();
  704 #if     MACH_SLOCKS
  705         lock = &wait_lock[index];
  706 #endif  /* MACH_SLOCKS */
  707         simple_lock(lock);
  708         thread = (thread_t) queue_first(q);
  709         while (!queue_end(q, (queue_entry_t)thread)) {
  710                 next_th = (thread_t) queue_next((queue_t) thread);
  711 
  712                 if (thread->wait_event == event) {
  713                         thread_sched_lock(thread);
  714                         remqueue(q, (queue_entry_t) thread);
  715                         thread->wait_event = 0;
  716                         timer_elt_remove(&thread->timer);
  717 
  718                         state = thread->state;
  719                         switch (state & TH_SCHED_STATE) {
  720 
  721                             case          TH_WAIT | TH_SUSP | TH_UNINT:
  722                             case          TH_WAIT           | TH_UNINT:
  723                             case          TH_WAIT:
  724                                 /*
  725                                  *      Sleeping and not suspendable - put
  726                                  *      on run queue.
  727                                  */
  728                                 thread->state = (state &~ TH_WAIT) | TH_RUN;
  729                                 thread->wait_result = result;
  730                                 thread_setrun(thread, TRUE);
  731                                 break;
  732 
  733                             case          TH_WAIT | TH_SUSP:
  734                             case TH_RUN | TH_WAIT:
  735                             case TH_RUN | TH_WAIT | TH_SUSP:
  736                             case TH_RUN | TH_WAIT           | TH_UNINT:
  737                             case TH_RUN | TH_WAIT | TH_SUSP | TH_UNINT:
  738                                 /*
  739                                  *      Either already running, or suspended.
  740                                  */
  741                                 thread->state = state &~ TH_WAIT;
  742                                 thread->wait_result = result;
  743                                 break;
  744 
  745                             default:
  746                                 panic("thread_wakeup");
  747                                 break;
  748                         }
  749                         thread_sched_unlock(thread);
  750                         if (one_thread)
  751                                 break;
  752                 }
  753                 thread = next_th;
  754         }
  755         simple_unlock(lock);
  756         splx(s);
  757 }
  758 
  759 /*
  760  *      thread_sleep:
  761  *
  762  *      Cause the current thread to wait until the specified event
  763  *      occurs.  The specified lock is unlocked before releasing
  764  *      the cpu.  (This is a convenient way to sleep without manually
  765  *      calling assert_wait).
  766  */
  767 void thread_sleep(
  768         event_t         event,
  769         simple_lock_t   lock,
  770         boolean_t       interruptible)
  771 {
  772         assert_wait(event, interruptible);      /* assert event */
  773         simple_unlock(lock);                    /* release the lock */
  774         thread_block(CONTINUE_NULL);            /* block ourselves */
  775 }
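      /*
       *      Example: the sleep/wakeup pairing (a sketch; obj, its
       *      simple lock, and its busy flag are hypothetical):
       *
       *      waiter:
       *              simple_lock(&obj->lock);
       *              while (obj->busy) {
       *                      thread_sleep((event_t) &obj->busy,
       *                                   &obj->lock, TRUE);
       *                      simple_lock(&obj->lock);  sleep dropped it
       *              }
       *
       *      waker:
       *              obj->busy = FALSE;
       *              thread_wakeup((event_t) &obj->busy);
       */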
  776 
  777 /*
  778  *      Routine:        thread_handoff
  779  *      Purpose:
  780  *              Switch to a new thread (new), leaving the current
  781  *              thread (old) blocked.  If successful, moves the
  782  *              kernel stack from old to new and returns as the
  783  *              new thread.  An explicit continuation for the old thread
  784  *              must be supplied.
  785  *
  786  *              NOTE:  Although we wake up new, we don't set new->wait_result.
  787  *      Returns:
  788  *              TRUE if the handoff happened.
  789  */
  790 
  791 boolean_t
  792 thread_handoff(
  793         register thread_t old,
  794         register continuation_t continuation,
  795         register thread_t new)
  796 {
  797         spl_t s;
  798 
  799         assert(current_thread() == old);
  800 
  801         /*
  802          *      XXX Dubious things here:
  803          *      I don't check the idle_count on the processor set.
  804          *      No scheduling priority or policy checks.
  805          *      I assume the new thread is interruptible.
  806          */
  807 
  808         s = splsched();
  809         thread_sched_lock(new);
  810 
  811         /*
  812          *      The first thing we must do is check the state
  813          *      of the threads, to ensure we can handoff.
  814          *      This check uses current_processor()->processor_set,
  815          *      which we can read without locking.
  816          */
  817 
  818         if ((old->stack_privilege == current_stack()) ||
  819             (new->state != (TH_WAIT|TH_SWAPPED)) ||
  820              !check_processor_set(new) ||
  821              !check_bound_processor(new)) {
  822                 thread_sched_unlock(new);
  823                 splx(s);
  824 
  825                 counter_always(c_thread_handoff_misses++);
  826                 return FALSE;
  827         }
  828 
  829         timer_elt_remove(&new->timer);
  830 
  831         new->state = TH_RUN;
  832         thread_sched_unlock(new);
  833 
  834 #if     NCPUS > 1
  835         new->last_processor = current_processor();
  836 #endif  /* NCPUS > 1 */
  837 
  838         ast_context(new, cpu_number());
  839         timer_switch(&new->system_timer);
  840 
  841         /*
  842          *      stack_handoff is machine-dependent.  It does the
  843          *      machine-dependent components of a context-switch, like
  844          *      changing address spaces.  It updates active_threads.
  845          */
  846 
  847         stack_handoff(old, new);
  848 
  849         /*
  850          *      Now we must dispose of the old thread.
  851          *      This is like thread_continue, except
  852          *      that the old thread isn't waiting yet.
  853          */
  854 
  855         thread_sched_lock(old);
  856         old->swap_func = continuation;
  857         assert(old->wait_result = -1);          /* deliberate assignment, for later assertions */
  858 
  859         if (old->state == TH_RUN) {
  860                 /*
  861                  *      This is our fast path.
  862                  */
  863 
  864                 old->state = TH_WAIT|TH_SWAPPED;
  865         }
  866         else if (old->state == (TH_RUN|TH_SUSP)) {
  867                 /*
  868                  *      Somebody is trying to suspend the thread.
  869                  */
  870 
  871                 old->state = TH_WAIT|TH_SUSP|TH_SWAPPED;
  872                 if (old->suspend_wait) {
  873                         /*
  874                          *      Someone wants to know when the thread
  875                          *      really stops.
  876                          */
  877                         old->suspend_wait = FALSE;
  878                         thread_sched_unlock(old);
  879                         thread_wakeup((event_t) &old->suspend_wait);
  880                         goto after_old_thread;
  881                 }
  882         } else
  883                 panic("thread_handoff");
  884 
  885         thread_sched_unlock(old);
  886     after_old_thread:
  887         splx(s);
  888 
  889         counter_always(c_thread_handoff_hits++);
  890         return TRUE;
  891 }
  892 
  893 /*
  894  *      [internal]
  895  *
  896  *      Stop running the current thread and start running the new thread.
  897  *      If continuation is non-zero, and the current thread is blocked,
  898  *      then it will resume by executing continuation on a new stack.
  899  *      Returns TRUE if the hand-off succeeds.
  900  *      Assumes splsched.
  901  */
  902 
  903 boolean_t thread_invoke(
  904         register thread_t old_thread,
  905         continuation_t    continuation,
  906         register thread_t new_thread)
  907 {
  908         register int    mycpu = cpu_number();
  909 
  910         /*
  911          *      Check for invoking the same thread.
  912          */
  913         if (old_thread == new_thread) {
  914             /*
  915              *  Mark thread interruptible.
  916              *  Run continuation if there is one.
  917              */
  918             thread_sched_lock(new_thread);
  919             new_thread->state &= ~TH_UNINT;
  920             thread_sched_unlock(new_thread);
  921 
  922             if (continuation != (void (*)(void)) 0) {
  923                 (void) spl0();
  924 
  925                 /*
  926                  * Check for asynchronous kernel activities
  927                  * here - we expected to context switch.
  928                  */
  929                 AST_KERNEL_CHECK(mycpu);
  930                 call_continuation(continuation);
  931                 /*NOTREACHED*/
  932             }
  933             return TRUE;
  934         }
  935 
  936         /*
  937          *      Check for stack-handoff.
  938          */
  939         thread_sched_lock(new_thread);
  940         if ((old_thread->stack_privilege != current_stack()) &&
  941             (continuation != (void (*)(void)) 0))
  942         {
  943             switch (new_thread->state & TH_SWAP_STATE) {
  944                 case TH_SWAPPED:
  945 
  946                     new_thread->state &= ~(TH_SWAPPED | TH_UNINT);
  947                     thread_sched_unlock(new_thread);
  948 
  949 #if     NCPUS > 1
  950                     new_thread->last_processor = current_processor();
  951 #endif  /* NCPUS > 1 */
  952 
  953                     /*
  954                      *  Set up ast context of new thread and
  955                      *  switch to its timer.
  956                      */
  957                     ast_context(new_thread, mycpu);
  958                     timer_switch(&new_thread->system_timer);
  959 
  960                     stack_handoff(old_thread, new_thread);
  961 
  962                     /*
  963                      *  We can dispatch the old thread now.
  964                      *  This is like thread_dispatch, except
  965                      *  that the old thread is left swapped
  966                      *  *without* freeing its stack.
  967                      *  This path is also much more frequent
  968                      *  than actual calls to thread_dispatch.
  969                      */
  970 
  971                     thread_sched_lock(old_thread);
  972                     old_thread->swap_func = continuation;
  973 
  974                     switch (old_thread->state) {
  975                         case TH_RUN           | TH_SUSP:
  976                         case TH_RUN           | TH_SUSP | TH_HALTED:
  977                         case TH_RUN | TH_WAIT | TH_SUSP:
  978                             /*
  979                              *  Suspend the thread
  980                              */
  981                             old_thread->state = (old_thread->state & ~TH_RUN)
  982                                                 | TH_SWAPPED;
  983                             if (old_thread->suspend_wait) {
  984                                 old_thread->suspend_wait = FALSE;
  985                                 thread_sched_unlock(old_thread);
  986                                 thread_wakeup(
  987                                         (event_t) &old_thread->suspend_wait);
  988                                 goto after_old_thread;
  989                             }
  990                             break;
  991 
  992                         case TH_RUN           | TH_SUSP | TH_UNINT:
  993                         case TH_RUN                     | TH_UNINT:
  994                         case TH_RUN:
  995                             /*
  996                              *  We can`t suspend the thread yet,
  997                              *  or it`s still running.
  998                              *  Put back on a run queue.
  999                              */
 1000                             old_thread->state |= TH_SWAPPED;
 1001                             thread_setrun(old_thread, FALSE);
 1002                             break;
 1003 
 1004                         case TH_RUN | TH_WAIT | TH_SUSP | TH_UNINT:
 1005                         case TH_RUN | TH_WAIT           | TH_UNINT:
 1006                         case TH_RUN | TH_WAIT:
 1007                             /*
 1008                              *  Waiting, and not suspendable.
 1009                              */
 1010                             old_thread->state = (old_thread->state & ~TH_RUN)
 1011                                                 | TH_SWAPPED;
 1012                             break;
 1013 
 1014                         case TH_RUN | TH_IDLE:
 1015                             /*
 1016                              *  Drop idle thread -- it is already in
 1017                              *  idle_thread_array.
 1018                              */
 1019                             old_thread->state = TH_RUN | TH_IDLE | TH_SWAPPED;
 1020                             break;
 1021 
 1022                         default:
 1023                             panic("thread_invoke");
 1024                     }
 1025                     thread_sched_unlock(old_thread);
 1026                 after_old_thread:
 1027 
 1028                     /*
 1029                      *  call_continuation calls the continuation
 1030                      *  after resetting the current stack pointer
 1031                      *  to recover stack space.  If we called
 1032                      *  the continuation directly, we would risk
 1033                      *  running out of stack.
 1034                      */
 1035 
 1036                     counter_always(c_thread_invoke_hits++);
 1037                     (void) spl0();
 1038                     AST_KERNEL_CHECK(mycpu);
 1039                     call_continuation(new_thread->swap_func);
 1040                     /*NOTREACHED*/
 1041                     return TRUE; /* help for the compiler */
 1042 
 1043                 case TH_SW_COMING_IN:
 1044                     /*
 1045                      *  Waiting for a stack
 1046                      */
 1047                     thread_swapin(new_thread);
 1048                     thread_sched_unlock(new_thread);
 1049                     counter_always(c_thread_invoke_misses++);
 1050                     return FALSE;
 1051 
 1052                 case 0:
 1053                     /*
 1054                      *  Already has a stack - can`t handoff.
 1055                      */
 1056                     break;
 1057             }
 1058         }
 1059 
 1060         else {
 1061             /*
 1062              *  Check that the thread is swapped-in.
 1063              */
 1064             if (new_thread->state & TH_SWAPPED) {
 1065                 if ((new_thread->state & TH_SW_COMING_IN) ||
 1066                     !stack_alloc_try(new_thread, thread_continue))
 1067                 {
 1068                     thread_swapin(new_thread);
 1069                     thread_sched_unlock(new_thread);
 1070                     counter_always(c_thread_invoke_misses++);
 1071                     return FALSE;
 1072                 }
 1073             }
 1074         }
 1075 
 1076         new_thread->state &= ~(TH_SWAPPED | TH_UNINT);
 1077         thread_sched_unlock(new_thread);
 1078 
 1079         /*
 1080          *      Thread is now interruptible.
 1081          */
 1082 #if     NCPUS > 1
 1083         new_thread->last_processor = current_processor();
 1084 #endif  /* NCPUS > 1 */
 1085 
 1086         /*
 1087          *      Set up ast context of new thread and switch to its timer.
 1088          */
 1089         ast_context(new_thread, mycpu);
 1090         timer_switch(&new_thread->system_timer);
 1091 
 1092         /*
 1093          *      switch_context is machine-dependent.  It does the
 1094          *      machine-dependent components of a context-switch, like
 1095          *      changing address spaces.  It updates active_threads.
 1096          *      It returns only if a continuation is not supplied.
 1097          */
 1098         counter_always(c_thread_invoke_csw++);
 1099         old_thread = switch_context(old_thread, continuation, new_thread);
 1100 
 1101         /*
 1102          *      We're back.  Now old_thread is the thread that resumed
 1103          *      us, and we have to dispatch it.
 1104          */
 1105         thread_dispatch(old_thread);
 1106 
 1107         return TRUE;
 1108 }
 1109 
 1110 /*
 1111  *      thread_continue:
 1112  *
 1113  *      Called when the current thread is given a new stack.
 1114  *      Called at splsched.
 1115  */
 1116 no_return thread_continue(
 1117         register thread_t old_thread)
 1118 {
 1119         /*
 1120          *      We must dispatch the old thread and then
 1121          *      call the current thread's continuation.
 1122          *      There might not be an old thread, if we are
 1123          *      the first thread to run on this processor.
 1124          */
 1125 
 1126         if (old_thread != THREAD_NULL)
 1127                 thread_dispatch(old_thread);
 1128         (void) spl0();
 1129 
 1130         /*
 1131          *      Before starting the new thread, check for
 1132          *      asynchronous kernel activities.
 1133          */
 1134         AST_KERNEL_CHECK(cpu_number());
 1135 
 1136         (*current_thread()->swap_func)();
 1137 
 1138         /*NOTREACHED*/
 1139 }
 1140 
 1141 
 1142 /*
 1143  *      thread_block:
 1144  *
 1145  *      Block the current thread.  If the thread is runnable
 1146  *      then someone must have woken it up between its request
 1147  *      to sleep and now.  In this case, it goes back on a
 1148  *      run queue.
 1149  *
 1150  *      If a continuation is specified, then thread_block will
 1151  *      attempt to discard the thread's kernel stack.  When the
 1152  *      thread resumes, it will execute the continuation function
 1153  *      on a new kernel stack.
 1154  */
 1155 
 1156 void thread_block(
 1157         continuation_t  continuation)
 1158 {
 1159         int     mycpu = cpu_number();
 1160         register thread_t thread = active_threads[mycpu];
 1161         register processor_t myprocessor = cpu_to_processor(mycpu);
 1162         register thread_t new_thread;
 1163         spl_t s;
 1164 
 1165         check_simple_locks();
 1166 
 1167         s = splsched();
 1168 
 1169 #if     FAST_TAS
 1170         {
 1171             extern void recover_ras(thread_t);
 1172 
 1173             if (csw_needed(thread, myprocessor))
 1174                 recover_ras(thread);
 1175         }
 1176 #endif  /* FAST_TAS */
 1177         
 1178         ast_off(mycpu, AST_BLOCK);
 1179 
 1180         do
 1181                 new_thread = thread_select(myprocessor);
 1182         while (!thread_invoke(thread, continuation, new_thread));
 1183 
 1184         /*
 1185          *      Check for asynchronous kernel activities before
 1186          *      resuming the new thread.  We already hold splhigh.
 1187          */
 1188         AST_KERNEL_CHECK_HIGH(mycpu);
 1189 
 1190         splx(s);
 1191 }
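      /*
       *      Example: a continuation-style block (a sketch;
       *      my_continuation and the event are hypothetical):
       *
       *              no_return my_continuation(void)
       *              {
       *                      ... resumes on a fresh kernel stack;
       *                      current_thread()->wait_result is valid ...
       *              }
       *
       *              assert_wait(event, TRUE);
       *              thread_block(my_continuation);
       *              ... not reached if the stack was discarded ...
       */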
 1192 
 1193 /*
 1194  *      thread_run:
 1195  *
 1196  *      Switch directly from the current thread to a specified
 1197  *      thread.  Both the current and new threads must be
 1198  *      runnable.
 1199  *
 1200  *      If a continuation is specified, then thread_run will
 1201  *      attempt to discard the current thread's kernel stack.  When the
 1202  *      thread resumes, it will execute the continuation function
 1203  *      on a new kernel stack.
 1204  */
 1205 void thread_run(
 1206         continuation_t          continuation,
 1207         register thread_t       new_thread)
 1208 {
 1209         int     mycpu = cpu_number();
 1210         register thread_t thread = active_threads[mycpu];
 1211         register processor_t myprocessor = cpu_to_processor(mycpu);
 1212         spl_t   s;
 1213 
 1214         check_simple_locks();
 1215 
 1216         s = splsched();
 1217 
 1218         while (!thread_invoke(thread, continuation, new_thread))
 1219                 new_thread = thread_select(myprocessor);
 1220 
 1221         /*
 1222          *      Check for asynchronous kernel activities before
 1223          *      resuming the new thread.  We already hold splhigh.
 1224          */
 1225         AST_KERNEL_CHECK_HIGH(mycpu);
 1226 
 1227         splx(s);
 1228 }
 1229 
 1230 /*
 1231  *      Dispatches a running thread that is not on a runq.
 1232  *      Called at splsched.
 1233  */
 1234 
 1235 void thread_dispatch(
 1236         register thread_t       thread)
 1237 {
 1238         /*
 1239          *      If we are discarding the thread's stack, we must do it
 1240          *      before the thread has a chance to run.
 1241          */
 1242 
 1243         thread_sched_lock(thread);
 1244 
 1245         if (thread->swap_func != 0) {
 1246                 assert((thread->state & TH_SWAP_STATE) == 0);
 1247                 thread->state |= TH_SWAPPED;
 1248                 stack_free(thread);
 1249         }
 1250 
 1251         switch (thread->state &~ TH_SWAP_STATE) {
 1252             case TH_RUN           | TH_SUSP:
 1253             case TH_RUN           | TH_SUSP | TH_HALTED:
 1254             case TH_RUN | TH_WAIT | TH_SUSP:
 1255                 /*
 1256                  *      Suspend the thread
 1257                  */
 1258                 thread->state &= ~TH_RUN;
 1259                 if (thread->suspend_wait) {
 1260                     thread->suspend_wait = FALSE;
 1261                     thread_sched_unlock(thread);
 1262                     thread_wakeup((event_t) &thread->suspend_wait);
 1263                     return;
 1264                 }
 1265                 break;
 1266 
 1267             case TH_RUN           | TH_SUSP | TH_UNINT:
 1268             case TH_RUN                     | TH_UNINT:
 1269             case TH_RUN:
 1270                 /*
 1271                  *      No reason to stop.  Put back on a run queue.
 1272                  */
 1273                 thread_setrun(thread, FALSE);
 1274                 break;
 1275 
 1276             case TH_RUN | TH_WAIT | TH_SUSP | TH_UNINT:
 1277             case TH_RUN | TH_WAIT           | TH_UNINT:
 1278             case TH_RUN | TH_WAIT:
 1279                 /*
 1280                  *      Waiting, and not suspendable.
 1281                  */
 1282                 thread->state &= ~TH_RUN;
 1283                 break;
 1284 
 1285             case TH_RUN | TH_IDLE:
 1286                 /*
 1287                  *      Drop idle thread -- it is already in
 1288                  *      idle_thread_array.
 1289                  */
 1290                 break;
 1291 
 1292             default:
 1293                 panic("thread_dispatch");
 1294         }
 1295         thread_sched_unlock(thread);
 1296 }
 1297 
 1298 
 1299 /*
 1300  *      Thread timeout routine, called when timer expires.
 1301  *      Called at splsoftclock.
 1302  */
 1303 void thread_timeout(
 1304         void    *param)
 1305 {
 1306         thread_t        thread = (thread_t) param;
 1307 
 1308         assert(thread->timer.set == TELT_UNSET);
 1309 
 1310         clear_wait(thread, THREAD_TIMED_OUT, FALSE);
 1311 }
 1312 
 1313 /*
 1314  *      thread_set_timeout:
 1315  *
 1316  *      Set a timer for the current thread, if the thread
 1317  *      is ready to wait.  Must be called between assert_wait()
 1318  *      and thread_block().
 1319  */
 1320  
 1321 void thread_set_timeout(
 1322         int     msecs)          /* timeout interval in milliseconds */
 1323 {
 1324         thread_t        thread = current_thread();
 1325         time_spec_t     interval;
 1326         spl_t s;
 1327 
 1328         milliseconds_to_time_spec(msecs, interval);
 1329 
 1330         s = splsched();
 1331         thread_sched_lock(thread);
 1332         if ((thread->state & TH_WAIT) != 0) {
 1333                 thread->timer.te_fcn = thread_timeout;
 1334                 timer_elt_enqueue(&thread->timer, interval, FALSE);
 1335         }
 1336         thread_sched_unlock(thread);
 1337         splx(s);
 1338 }
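      /*
       *      Example: a timed wait using the rule above (a sketch; the
       *      event and the 50 msec interval are made up):
       *
       *              assert_wait(event, TRUE);
       *              thread_set_timeout(50);
       *              thread_block(CONTINUE_NULL);
       *              if (current_thread()->wait_result == THREAD_TIMED_OUT)
       *                      ... the timer fired: thread_timeout ran and
       *                      called clear_wait(THREAD_TIMED_OUT) ...
       */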
 1339 
 1340 /*
 1341  *      Halt a thread at a clean point, leaving it suspended.
 1342  *
 1343  *      must_halt indicates whether the thread must be halted, even
 1344  *      at the cost of waiting; if FALSE, the operation may fail instead.
 1345  */
 1346 kern_return_t thread_halt(
 1347         register thread_t       thread,
 1348         boolean_t               must_halt)
 1349 {
 1350         register thread_t       cur_thread = current_thread();
 1351         register kern_return_t  ret;
 1352         spl_t   s;
 1353 #if     MACH_HOST
 1354         processor_set_t         old_pset = PROCESSOR_SET_NULL;
 1355 #endif
 1356 
 1357         if (thread == cur_thread)
 1358                 panic("thread_halt: trying to halt current thread.");
 1359 
 1360         /*
 1361          *      If must_halt is FALSE, then a check must be made for
 1362          *      a cycle of halt operations.
 1363          */
 1364         if (!must_halt) {
 1365                 /*
 1366                  *      Grab both thread locks.
 1367                  */
 1368                 s = splsched();
 1369                 if ((vm_offset_t)thread < (vm_offset_t)cur_thread) {
 1370                         thread_sched_lock(thread);
 1371                         thread_sched_lock(cur_thread);
 1372                 }
 1373                 else {
 1374                         thread_sched_lock(cur_thread);
 1375                         thread_sched_lock(thread);
 1376                 }
 1377 
 1378                 /*
 1379                  *      If target thread is already halted, grab a hold
 1380                  *      on it and return.
 1381                  */
 1382                 if (thread->state & TH_HALTED) {
 1383                         thread->suspend_count++;
 1384                         thread_sched_unlock(cur_thread);
 1385                         thread_sched_unlock(thread);
 1386                         splx(s);
 1387                         return KERN_SUCCESS;
 1388                 }
 1389 
 1390                 /*
 1391                  *      If someone is trying to halt us, we have a potential
 1392                  *      halt cycle.  Break the cycle by interrupting anyone
 1393                  *      who is trying to halt us, and causing this operation
 1394                  *      to fail; retry logic will only retry operations
 1395                  *      that cannot deadlock.  (If must_halt is TRUE, this
 1396                  *      operation can never cause a deadlock.)
 1397                  */
 1398                 if (cur_thread->ast & AST_HALT) {
 1399                         thread_wakeup_with_result(&cur_thread->suspend_wait,
 1400                                 THREAD_INTERRUPTED);
 1401                         thread_sched_unlock(thread);
 1402                         thread_sched_unlock(cur_thread);
 1403                         splx(s);
 1404                         return KERN_FAILURE;
 1405                 }
 1406 
 1407                 thread_sched_unlock(cur_thread);
 1408         
 1409         }
 1410         else {
 1411                 /*
 1412                  *      Lock thread and check whether it is already halted.
 1413                  */
 1414                 s = splsched();
 1415                 thread_sched_lock(thread);
 1416                 if (thread->state & TH_HALTED) {
 1417                         thread->suspend_count++;
 1418                         thread_sched_unlock(thread);
 1419                         splx(s);
 1420                         return KERN_SUCCESS;
 1421                 }
 1422         }
 1423 
 1424         /*
 1425          *      Suspend thread - inline version of thread_hold() because
 1426          *      thread is already locked.
 1427          */
 1428         thread->suspend_count++;
 1429         thread->state |= TH_SUSP;
 1430 
 1431         /*
 1432          *      If someone else is halting it, wait for that to complete.
 1433          *      Fail if wait interrupted and must_halt is false.
 1434          */
 1435         while ((thread->ast & AST_HALT) && (!(thread->state & TH_HALTED))) {
 1436                 thread->suspend_wait = TRUE;
 1437                 thread_sleep(&thread->suspend_wait,
 1438                         simple_lock_addr(thread->sched_lock), TRUE);
 1439 
 1440                 if (thread->state & TH_HALTED) {
 1441                         splx(s);
 1442                         return KERN_SUCCESS;
 1443                 }
 1444                 if ((current_thread()->wait_result != THREAD_AWAKENED)
 1445                     && !(must_halt)) {
 1446                         splx(s);
 1447                         thread_release(thread);
 1448                         return KERN_FAILURE;
 1449                 }
 1450                 thread_sched_lock(thread);
 1451         }
 1452 
 1453         /*
 1454  *      Otherwise, we have to do it ourselves.
 1455          */
 1456
 1457         thread_ast_set(thread, AST_HALT);
 1458 
 1459         while (TRUE) {
 1460                 /*
 1461                  *      Wait for thread to stop.
 1462                  */
 1463                 thread_sched_unlock(thread);
 1464                 splx(s);
 1465 
 1466                 ret = thread_dowait(thread, must_halt);
 1467 
 1468                 /*
 1469                  *      If the dowait failed, so do we.  Drop AST_HALT, and
 1470                  *      wake up anyone else who might be waiting for it.
 1471                  */
 1472                 if (ret != KERN_SUCCESS) {
 1473                         s = splsched();
 1474                         thread_sched_lock(thread);
 1475                         thread_ast_clear(thread, AST_HALT);
 1476                         thread_wakeup_with_result(&thread->suspend_wait,
 1477                                 THREAD_INTERRUPTED);
 1478                         thread_sched_unlock(thread);
 1479                         splx(s);
 1480 
 1481                         thread_release(thread);
 1482                         break;
 1483                 }
 1484 
 1485                 /*
 1486                  *      Clear any interruptible wait.
 1487                  */
 1488                 clear_wait(thread, THREAD_INTERRUPTED, TRUE);
 1489 
 1490                 /*
 1491                  *      If the thread's at a clean point, we're done.
 1492                  *      Don't need a lock because it really is stopped.
 1493                  */
 1494                 if (thread->state & TH_HALTED) {
 1495                         ret = KERN_SUCCESS;
 1496                         break;
 1497                 }
 1498 
 1499                 /*
 1500                  *      If the thread is at a nice continuation,
 1501                  *      or a continuation with a cleanup routine,
 1502                  *      call the cleanup routine.
 1503                  */
 1504                 if (((thread->swap_func == mach_msg_continue ||
 1505                       thread->swap_func == mach_msg_receive_continue) &&
 1506                      mach_msg_interrupt(thread))
 1507                  || thread->swap_func == thread_exception_return
 1508                  || thread->swap_func == thread_bootstrap_return
 1509                  || evc_wait_interrupt(thread)) {
 1510                         s = splsched();
 1511                         thread_sched_lock(thread);
 1512                         thread->state |= TH_HALTED;
 1513                         thread_ast_clear(thread, AST_HALT);
 1514                         thread_sched_unlock(thread);
 1515                         splx(s);
 1516 
 1517                         ret = KERN_SUCCESS;
 1518                         break;
 1519                 }
 1520 
 1521                 /*
 1522                  *      Force the thread to stop at a clean
 1523                  *      point, and arrange to wait for it.
 1524                  *
 1525                  *      Set it running, so it can notice.  Override
 1526                  *      the suspend count.  We know that the thread
 1527                  *      is suspended and not waiting.
 1528                  *
 1529                  *      Since the thread may hit an interruptible wait
 1530                  *      before it reaches a clean point, we must force it
 1531                  *      to wake us up when it does so.  This involves some
 1532                  *      trickery:
 1533                  *        We mark the thread SUSPENDED so that thread_block
 1534                  *      will suspend it and wake us up.
 1535                  *        We mark the thread RUNNING so that it will run.
 1536                  *        We mark the thread UN-INTERRUPTIBLE (!) so that
 1537                  *      some other thread trying to halt or suspend it won't
 1538                  *      take it off the run queue before it runs.  Since
 1539                  *      dispatching a thread (the tail of thread_invoke) marks
 1540                  *      the thread interruptible, it will stop at the next
 1541                  *      context switch or interruptible wait.
 1542                  */
 1543 
 1544 #if     MACH_HOST
 1545                 /*
 1546                  *      But first, if the thread is a member of
 1547                  *      an empty processor set, we must move it
 1548                  *      to the default processor set so it can
 1549                  *      execute.
 1550                  */
 1551                 if (old_pset == PROCESSOR_SET_NULL)
 1552                         old_pset = thread_assign_if_empty(thread);
 1553                                 /* donates a reference */
 1554 #endif
 1555 
 1556                 s = splsched();
 1557                 thread_sched_lock(thread);
 1558                 if ((thread->state & TH_SCHED_STATE) != TH_SUSP)
 1559                         panic("thread_halt");
 1560                 thread->state |= TH_RUN | TH_UNINT;
 1561                 thread_setrun(thread, FALSE);
 1562 
 1563                 /*
 1564                  *      Continue loop and wait for thread to stop.
 1565                  */
 1566         }
 1567 
 1568 #if     MACH_HOST
 1569         /*
 1570          *      Reassign thread to its original processor set.
 1571          */
 1572         if (old_pset != PROCESSOR_SET_NULL) {
 1573                 (void) thread_assign(thread, old_pset);
 1574                 pset_deallocate(old_pset);      /* remove extra reference */
 1575         }
 1576 #endif
 1577 
 1578         /*
 1579          *      Thread is now halted.
 1580          */
 1581 
 1582         return ret;
 1583 }
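
/*
 * Editorial sketch, not part of the original file:  how a caller
 * uses must_halt == FALSE to stay deadlock-free.  If two threads
 * try to halt each other, one loses the race, gets KERN_FAILURE,
 * and is expected to back out and retry from a restartable point.
 * halt_peer_sketch() is a hypothetical helper, hence the #if 0.
 */
#if 0
kern_return_t halt_peer_sketch(thread_t target)
{
        kern_return_t kr;

        kr = thread_halt(target, FALSE);
        if (kr == KERN_FAILURE) {
                /*
                 * Someone is halting us and our own suspend_wait
                 * was interrupted; unwind and let the caller retry
                 * the whole operation, which cannot deadlock.
                 */
                return kr;
        }

        /* target is halted, with one extra suspend count held */
        thread_release(target);         /* undo the hold when done */
        return KERN_SUCCESS;
}
#endif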
 1584 
 1585 /*
 1586  *      Thread calls this routine on exit from the kernel when it
 1587  *      notices a halt request.
 1588  */
 1589 void thread_halt_self(
 1590         continuation_t  continuation)
 1591 {
 1592         register thread_t       thread = current_thread();
 1593         spl_t   s;
 1594 
 1595         /*
 1596          *      Thread was asked to halt - show that it
 1597          *      has done so.
 1598          */
 1599         s = splsched();
 1600         thread_sched_lock(thread);
 1601         thread->state |= TH_HALTED;
 1602         thread_ast_clear(thread, AST_HALT);
 1603         thread_sched_unlock(thread);
 1604         splx(s);
 1605 
 1606         counter(c_thread_halt_self_block++);
 1607         thread_block(continuation);
 1608 
 1609         /*
 1610          *      thread_release resets TH_HALTED.
 1611          */
 1612 }
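
/*
 * Editorial sketch, not part of the original file:  the path by
 * which a thread reaches thread_halt_self().  An AST handler
 * notices AST_HALT on the way out of the kernel; ast_halt_sketch()
 * is a simplified, hypothetical stand-in for the real AST code,
 * and using thread_exception_return as the continuation is an
 * assumption, hence the #if 0 guard.
 */
#if 0
void ast_halt_sketch(void)
{
        thread_t self = current_thread();

        if (self->ast & AST_HALT) {
                /* marks TH_HALTED, clears AST_HALT, then blocks */
                thread_halt_self(thread_exception_return);
                /*NOTREACHED*/
        }
}
#endif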
 1613 
 1614 /*
 1615  *      thread_hold:
 1616  *
 1617  *      Suspend execution of the specified thread.
 1618  *      This is a recursive-style suspension of the thread:  a count
 1619  *      of suspends is maintained.
 1620  */
 1621 void thread_hold(
 1622         register thread_t       thread)
 1623 {
 1624         spl_t   s;
 1625 
 1626         s = splsched();
 1627         thread_sched_lock(thread);
 1628         thread->suspend_count++;
 1629         thread->state |= TH_SUSP;
 1630         thread_sched_unlock(thread);
 1631         splx(s);
 1632 }
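
/*
 * Editorial sketch, not part of the original file:  the canonical
 * hold/dowait/release pairing that thread_suspend() and
 * thread_resume() below are built from.  stop_and_inspect_sketch()
 * is a hypothetical helper, hence the #if 0 guard.
 */
#if 0
kern_return_t stop_and_inspect_sketch(thread_t target)
{
        assert(target != current_thread());

        thread_hold(target);                 /* count++, set TH_SUSP */
        (void) thread_dowait(target, TRUE);  /* wait until it stops  */

        /* target is stopped here; its saved state may be examined */

        thread_release(target);              /* count--, resume at 0 */
        return KERN_SUCCESS;
}
#endif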
 1633 
 1634 /*
 1635  *      thread_dowait:
 1636  *
 1637  *      Wait for a thread to actually enter stopped state.
 1638  *
 1639  *      The must_halt argument indicates whether the wait may fail
 1640  *      on interruption (FALSE only from thread_abort via thread_halt).
 1641  */
 1642 kern_return_t
 1643 thread_dowait(
 1644         register thread_t       thread,
 1645         boolean_t               must_halt)
 1646 {
 1647         register boolean_t      need_wakeup;
 1648         register kern_return_t  ret = KERN_SUCCESS;
 1649         spl_t                   s;
 1650 
 1651         if (thread == current_thread())
 1652                 panic("thread_dowait");
 1653 
 1654         /*
 1655          *      If a thread is not interruptible, it may not be suspended
 1656          *      until it becomes interruptible.  In this case, we wait for
 1657          *      the thread to stop itself, and indicate that we are waiting
 1658          *      for it to stop so that it can wake us up when it does stop.
 1659          *
 1660          *      If the thread is interruptible, we may be able to suspend
 1661          *      it immediately.  There are several cases:
 1662          *
 1663          *      1) The thread is already stopped (trivial)
 1664          *      2) The thread is runnable (marked RUN and on a run queue).
 1665          *         We pull it off the run queue and mark it stopped.
 1666          *      3) The thread is running.  We wait for it to stop.
 1667          */
 1668 
 1669         need_wakeup = FALSE;
 1670         s = splsched();
 1671         thread_sched_lock(thread);
 1672 
 1673         for (;;) {
 1674             switch (thread->state & TH_SCHED_STATE) {
 1675                 case                    TH_SUSP:
 1676                 case          TH_WAIT | TH_SUSP:
 1677                     /*
 1678                      *  Thread is already suspended, or sleeping in an
 1679                      *  interruptible wait.  We win!
 1680                      */
 1681                     break;
 1682 
 1683                 case TH_RUN           | TH_SUSP:
 1684                     /*
 1685                      *  The thread is interruptible.  If we can pull
 1686                      *  it off a runq, stop it here.
 1687                      */
 1688                     if (rem_runq(thread) != RUN_QUEUE_HEAD_NULL) {
 1689                         thread->state &= ~TH_RUN;
 1690                         need_wakeup = thread->suspend_wait;
 1691                         thread->suspend_wait = FALSE;
 1692                         break;
 1693                     }
 1694 #if     NCPUS > 1
 1695                     /*
 1696                      *  The thread must be running, so make its
 1697                      *  processor execute ast_check().  This
 1698                      *  should cause the thread to take an ast and
 1699                      *  context switch to suspend for us.
 1700                      */
 1701                     cause_ast_check(thread->last_processor);
 1702 #endif  /* NCPUS > 1 */
 1703 
 1704                     /*
 1705                      *  Fall through to wait for thread to stop.
 1706                      */
 1707 
 1708                 case TH_RUN           | TH_SUSP | TH_UNINT:
 1709                 case TH_RUN | TH_WAIT | TH_SUSP:
 1710                 case TH_RUN | TH_WAIT | TH_SUSP | TH_UNINT:
 1711                 case          TH_WAIT | TH_SUSP | TH_UNINT:
 1712                     /*
 1713                      *  Wait for the thread to stop, or sleep interruptibly
 1714                      *  (thread_block will stop it in the latter case).
 1715                      *  Check for failure if interrupted.
 1716                      */
 1717                     thread->suspend_wait = TRUE;
 1718                     thread_sleep(&thread->suspend_wait,
 1719                                 simple_lock_addr(thread->sched_lock), TRUE);
 1720                     thread_sched_lock(thread);
 1721                     if ((current_thread()->wait_result != THREAD_AWAKENED) &&
 1722                             !must_halt) {
 1723                         ret = KERN_FAILURE;
 1724                         break;
 1725                     }
 1726 
 1727                     /*
 1728              *  Repeat loop to check thread's state.
 1729                      */
 1730                     continue;
 1731             }
 1732             /*
 1733              *  Thread is stopped at this point.
 1734              */
 1735             break;
 1736         }
 1737 
 1738         thread_sched_unlock(thread);
 1739         splx(s);
 1740 
 1741         if (need_wakeup)
 1742             thread_wakeup(&thread->suspend_wait);
 1743 
 1744         return ret;
 1745 }
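
/*
 * Editorial sketch, not part of the original file:  the condition
 * the loop above converges on.  A suspended thread counts as
 * stopped once its TH_RUN bit is clear; this hypothetical
 * predicate assumes TH_SCHED_STATE masks TH_RUN, TH_WAIT, TH_SUSP
 * and TH_UNINT as in the Mach thread-state definitions, hence the
 * #if 0 guard.
 */
#if 0
boolean_t thread_stopped_sketch(thread_t thread)
{
        int state = thread->state & TH_SCHED_STATE;

        /* the two "we win" cases of the switch in thread_dowait() */
        return state == TH_SUSP ||
               state == (TH_WAIT | TH_SUSP);
}
#endif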
 1746 
 1747 void thread_release(
 1748         register thread_t       thread)
 1749 {
 1750         spl_t                   s;
 1751 
 1752         s = splsched();
 1753         thread_sched_lock(thread);
 1754         if (--thread->suspend_count == 0) {
 1755                 thread->state &= ~(TH_SUSP | TH_HALTED);
 1756                 if ((thread->state & (TH_WAIT | TH_RUN)) == 0) {
 1757                         /* was only suspended */
 1758                         thread->state |= TH_RUN;
 1759                         thread_setrun(thread, TRUE);
 1760                 }
 1761         }
 1762         thread_sched_unlock(thread);
 1763         splx(s);
 1764 }
 1765 
 1766 kern_return_t thread_suspend(
 1767         register thread_t       thread)
 1768 {
 1769         register boolean_t      hold;
 1770         spl_t                   s;
 1771 
 1772         if (thread == THREAD_NULL)
 1773                 return KERN_INVALID_ARGUMENT;
 1774 
 1775         hold = FALSE;
 1776         s = splsched();
 1777         thread_sched_lock(thread);
 1778         if (thread->user_stop_count++ == 0) {
 1779                 hold = TRUE;
 1780                 thread->suspend_count++;
 1781                 thread->state |= TH_SUSP;
 1782         }
 1783         thread_sched_unlock(thread);
 1784         splx(s);
 1785 
 1786         /*
 1787  *      Now wait for the thread if necessary.
 1788          */
 1789         if (hold) {
 1790                 if (thread == current_thread()) {
 1791                         /*
 1792                          *      We want to call thread_block on our way out,
 1793                          *      to stop running.
 1794                          */
 1795                         s = splsched();
 1796                         ast_on(cpu_number(), AST_BLOCK);
 1797                         splx(s);
 1798                 } else
 1799                         (void) thread_dowait(thread, TRUE);
 1800         }
 1801         return KERN_SUCCESS;
 1802 }
 1803 
 1804 
 1805 kern_return_t thread_resume(
 1806         register thread_t       thread)
 1807 {
 1808         register kern_return_t  ret;
 1809         spl_t                   s;
 1810 
 1811         if (thread == THREAD_NULL)
 1812                 return KERN_INVALID_ARGUMENT;
 1813 
 1814         ret = KERN_SUCCESS;
 1815 
 1816         s = splsched();
 1817         thread_sched_lock(thread);
 1818         if (thread->user_stop_count > 0) {
 1819             if (--thread->user_stop_count == 0) {
 1820                 if (--thread->suspend_count == 0) {
 1821                     thread->state &= ~(TH_SUSP | TH_HALTED);
 1822                     if ((thread->state & (TH_WAIT | TH_RUN)) == 0) {
 1823                             /* was only suspended */
 1824                             thread->state |= TH_RUN;
 1825                             thread_setrun(thread, TRUE);
 1826                     }
 1827                 }
 1828             }
 1829         }
 1830         else {
 1831                 ret = KERN_FAILURE;
 1832         }
 1833 
 1834         thread_sched_unlock(thread);
 1835         splx(s);
 1836 
 1837         return ret;
 1838 }
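
/*
 * Editorial sketch, not part of the original file:  user-visible
 * suspends nest, so every thread_suspend() must be balanced by a
 * thread_resume(); an unbalanced resume returns KERN_FAILURE.
 * nesting_sketch() is a hypothetical helper, hence the #if 0.
 */
#if 0
kern_return_t nesting_sketch(thread_t target)
{
        (void) thread_suspend(target);  /* user_stop_count 0 -> 1: stops */
        (void) thread_suspend(target);  /* 1 -> 2: already stopped       */
        (void) thread_resume(target);   /* 2 -> 1: still stopped         */
        (void) thread_resume(target);   /* 1 -> 0: runnable again        */
        return thread_resume(target);   /* 0: fails with KERN_FAILURE    */
}
#endif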
 1839 
 1840 /*
 1841  *      Special version of thread_suspend that is only
 1842  *      used by timer expiration.  This is called from
 1843  *      an AST, so it cannot block.
 1844  *
 1845  *      The thread may still be running or in an uninterruptible
 1846  *      wait on exit.  User-level code must determine that the
 1847  *      thread has stopped (by calling thread_abort or thread_get_status).
 1848  */
 1849 void thread_suspend_nowait(
 1850         thread_t        thread)
 1851 {
 1852         spl_t           s;
 1853 
 1854         assert(thread != THREAD_NULL);
 1855 
 1856         /*
 1857          *      Increment the user-visible suspend count.
 1858          */
 1859         s = splsched();
 1860         thread_sched_lock(thread);
 1861         if (thread->user_stop_count++ == 0) {
 1862             /*
 1863              *  Not suspended yet - do it now.
 1864              */
 1865             thread->suspend_count++;
 1866             thread->state |= TH_SUSP;
 1867 
 1868             if (thread == current_thread()) {
 1869                 /*
 1870                  *      Stop the thread on the way out of the kernel
 1871                  */
 1872                 ast_on(cpu_number(), AST_BLOCK);
 1873             }
 1874             else {
 1875                 /*
 1876                  *      If the thread is on a run queue, pull it off.
 1877                  */
 1878                 switch (thread->state & TH_SCHED_STATE) {
 1879                     case TH_WAIT | TH_SUSP:
 1880                         /*
 1881                          *      Already stopped
 1882                          */
 1883                         break;
 1884 
 1885                     case TH_RUN | TH_SUSP:
 1886                         if (rem_runq(thread) != RUN_QUEUE_HEAD_NULL) {
 1887                             thread->state &= ~TH_RUN;
 1888                             assert(!thread->suspend_wait);
 1889                             break;
 1890                         }
 1891 #if     NCPUS > 1
 1892                         cause_ast_check(thread->last_processor);
 1893 #endif
 1894                         /* fall through to ... */
 1895 
 1896                     default:
 1897                         /*
 1898                          *      Running, or in an uninterruptible wait.
 1899                          *      We can't do anything more, since we
 1900                          *      cannot block.
 1901                          */
 1902                         break;
 1903                 }
 1904             }
 1905         }
 1906         thread_sched_unlock(thread);
 1907         splx(s);
 1908 }
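
/*
 * Editorial sketch, not part of the original file:  because
 * thread_suspend_nowait() runs from an AST and cannot block, the
 * target may still be running when it returns.  A follow-up from
 * a context that may block can use thread_dowait() to be certain
 * the thread has stopped; timer_expire_sketch() is hypothetical,
 * hence the #if 0 guard.
 */
#if 0
void timer_expire_sketch(thread_t target)
{
        thread_suspend_nowait(target);       /* safe at AST level */

        /* ...later, from a context that is allowed to block... */
        (void) thread_dowait(target, TRUE);  /* now surely stopped */
}
#endif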
 1909 