
FreeBSD/Linux Kernel Cross Reference
sys/kern/task.c


/*
 * Mach Operating System
 * Copyright (c) 1993-1988 Carnegie Mellon University
 * All Rights Reserved.
 *
 * Permission to use, copy, modify and distribute this software and its
 * documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
 * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie Mellon
 * the rights to redistribute these changes.
 */
/*
 * HISTORY
 * $Log:        task.c,v $
 * Revision 2.27  93/11/17  17:28:56  dbg
 *      Rewrote task_terminate to iterate once down the list of threads
 *      in the task.  Removed the call to thread_block() in the thread
 *      termination loop and replaced it with logic added earlier
 *      (21-Jun-88) but lost in a later merge:
 *
 *      > Loop in task_terminate to terminate threads was incorrect; if
 *      > another component of the system had a reference to the thread,
 *      > the thread would remain in the thread_list for the task, and the
 *      > loop would never terminate.  Rewrote it to run down the list
 *      > like task_hold: take a reference to each thread, and release
 *      > the reference on the next iteration of the loop.
 *
 *      When the current thread is in the task being terminated,
 *      task_terminate uses thread_terminate to prepare the current
 *      thread for termination, but not actually terminate it.
 *      [93/08/26            dbg]
 *
 *      Changed termination protocol to simultaneously clear 'active'
 *      and remove task port association.
 *      [93/07/12            dbg]
 *
 *      Added separate task_ref_lock to simplify interactions between
 *      task locks and processor set locks.
 *      [93/06/10            dbg]
 *
 *      Break up thread lock, to simplify interactions between thread
 *      lock and processor set lock and to reduce the amount of code
 *      that runs with interrupts blocked.  There are now three locks:
 *      . thread_ref_lock       locks the reference count
 *      . thread_sched_lock     locks fields involved with scheduling
 *                              state machine
 *      . thread_lock           locks everything else.
 *      [93/05/26            dbg]
 *
 *      Removed priority field.  New threads are set to the
 *      processor set's default policy (timesharing or background)
 *      and can be changed after creation.
 *
 *      Converted time_values to time_specs, internally.  task_info still
 *      returns time_values.
 *      [93/05/21            dbg]
 *
 * Revision 2.26  93/08/10  15:12:10  mrt
 *      Conditionalized atm hooks.
 *      [93/07/30            cmaeda]
 *      Included network interface hooks.
 *      [93/06/09  15:43:04  jcb]
 *
 * Revision 2.25  93/08/03  12:31:22  mrt
 *      [93/07/30  10:30:30  bershad]
 *
 *      Change way in which kernel tasks share the same kernel map to avoid
 *      fault on current thread reference during bootstrap.
 *      [93/07/30  10:24:25  bershad]
 *
 * Revision 2.24  93/05/15  18:47:52  mrt
 *      machparam.h -> machspl.h
 *
 * Revision 2.23  93/01/24  13:19:56  danner
 *      We must explicitly set "new_thread->pc_sample.buffer = 0;" so
 *      that we don't think we have a sampling buffer.
 *      [93/01/13            rvb]
 *
 * Revision 2.22  93/01/21  12:22:15  danner
 *      fast tas changes.
 *      [93/01/20            bershad]
 *
 * Revision 2.21  93/01/14  17:36:41  danner
 *      Added ANSI function prototypes.
 *      [92/12/29            dbg]
 *
 *      Proper spl typing. 64bit cleanup.
 *      [92/12/01            af]
 *
 *      Fixed pset locking.  Pset lock must be taken before task or
 *      thread lock.
 *      [92/10/28            dbg]
 *
 * Revision 2.20  92/08/03  17:39:45  jfriedl
 *      removed silly prototypes
 *      [92/08/02            jfriedl]
 *
 * Revision 2.19  92/07/20  13:32:53  cmaeda
 *      Added fast tas support:
 *              Added task_set_ras_pc.
 *              Inherit ras addresses when forking.
 *      [92/05/11  14:36:17  cmaeda]
 *
 * Revision 2.18  92/05/21  17:16:22  jfriedl
 *      tried prototypes.
 *      [92/05/20            jfriedl]
 *
 * Revision 2.17  92/04/01  10:54:11  rpd
 *      Initialize kernel_task to TASK_NULL to support ddb use before the
 *      bss is zeroed. Remove duplicate include of machine/machparam.h.
 *      Update copyright.
 *      [92/03/21            danner]
 *
 * Revision 2.16  91/12/11  08:42:30  jsb
 *      Fixed assert_wait/thread_wakeup rendezvous in task_assign.
 *      [91/11/26            rpd]
 *
 * Revision 2.15  91/11/15  14:11:59  rpd
 *      NORMA_TASK: initialize new child_node field in task upon creation.
 *      [91/09/23  09:20:23  jsb]
 *
 * Revision 2.14  91/06/25  10:29:32  rpd
 *      Updated convert_thread_to_port usage.
 *      [91/05/27            rpd]
 *
 * Revision 2.13  91/06/17  15:47:19  jsb
 *      Added norma_task hooks. See norma/kern_task.c for code.
 *      [91/06/17  10:53:30  jsb]
 *
 * Revision 2.12  91/05/14  16:48:05  mrt
 *      Correcting copyright
 *
 * Revision 2.11  91/03/16  14:52:24  rpd
 *      Can't use thread_dowait on the current thread now.
 *      [91/01/20            rpd]
 *
 * Revision 2.10  91/02/05  17:29:55  mrt
 *      Changed to new Mach copyright
 *      [91/02/01  16:19:00  mrt]
 *
 * Revision 2.9  91/01/08  15:17:44  rpd
 *      Added consider_task_collect, task_collect_scan.
 *      [91/01/03            rpd]
 *      Added continuation argument to thread_block.
 *      [90/12/08            rpd]
 *
 * Revision 2.8  90/10/25  14:45:26  rwd
 *      From OSF: Add thread_block() to loop that forcibly terminates
 *      threads in task_terminate() to fix livelock.  Also hold
 *      reference to thread when calling thread_force_terminate().
 *      [90/10/19            rpd]
 *
 * Revision 2.7  90/06/19  22:59:41  rpd
 *      Fixed task_info to return the correct base_priority.
 *      [90/06/18            rpd]
 *
 * Revision 2.6  90/06/02  14:56:40  rpd
 *      Moved trap versions of kernel calls to kern/ipc_mig.c.
 *      [90/05/31            rpd]
 *
 *      Removed references to kernel_vm_space, keep_wired_memory.
 *      [90/04/29            rpd]
 *      Converted to new IPC and scheduling technology.
 *      [90/03/26  22:22:19  rpd]
 *
 * Revision 2.5  90/05/29  18:36:51  rwd
 *      Added trap versions of task routines from rfr.
 *      [90/04/20            rwd]
 *      Add TASK_THREAD_TIMES_INFO flavor to task_info, to get times for
 *      all live threads.
 *      [90/04/03            dbg]
 *
 *      Use kmem_alloc_wired instead of vm_allocate in task_threads.
 *      [90/03/28            dbg]
 *
 * Revision 2.4  90/05/03  15:46:58  dbg
 *      Add TASK_THREAD_TIMES_INFO flavor to task_info, to get times for
 *      all live threads.
 *      [90/04/03            dbg]
 *
 *      Use kmem_alloc_wired instead of vm_allocate in task_threads.
 *      [90/03/28            dbg]
 *
 * Revision 2.3  90/01/11  11:44:17  dbg
 *      Removed task_halt (unused).  De-linted.
 *      [89/12/12            dbg]
 *
 * Revision 2.2  89/09/08  11:26:37  dbg
 *      Initialize keep_wired_memory in task_create.
 *      [89/07/17            dbg]
 *
 * 19-May-89  David Golub (dbg) at Carnegie-Mellon University
 *      Changed task_info to check for kernel_task, not first_task.
 *
 * 19-Oct-88  David Golub (dbg) at Carnegie-Mellon University
 *      Moved all syscall_emulation routine calls here.  Removed
 *      all non-MACH data structures.  Added routine to create
 *      new tasks running in the kernel.  Changed kernel_task
 *      creation to create it as a normal task.
 *
 * Revision 2.6  88/10/11  10:21:38  rpd
 *      Changed includes to the new style.
 *      Rewrote task_threads; the old version could return
 *      an inconsistent picture of the task.
 *      [88/10/05  10:28:13  rpd]
 *
 * Revision 2.5  88/08/06  18:25:53  rpd
 * Changed to use ipc_task_lock/ipc_task_unlock macros.
 * Eliminated use of kern/mach_ipc_defs.h.
 * Enable kernel_task for IPC access.  (See hack in task_by_unix_pid to
 * allow a user to get the kernel_task's port.)
 * Made kernel_task's ref_count > 0, so that task_reference/task_deallocate
 * works on it.  (Previously the task_deallocate would try to destroy it.)
 *
 * Revision 2.4  88/07/20  16:40:17  rpd
 * Removed task_ports (replaced by port_names).
 * Didn't leave xxx form, because it wasn't implemented.
 *
 * Revision 2.3  88/07/17  17:55:52  mwyoung
 * Split up uses of task.kernel_only field.  Condensed history.
 *
 * Revision 2.2.1.1  88/06/28  20:46:20  mwyoung
 * Split up uses of task.kernel_only field.  Condensed history.
 *
 * 21-Jun-88  Michael Young (mwyoung) at Carnegie-Mellon University.
 *      Split up uses of task.kernel_only field.
 *
 * 21-Jun-88  David Golub (dbg) at Carnegie-Mellon University
 *      Loop in task_terminate to terminate threads was incorrect; if
 *      another component of the system had a reference to the thread,
 *      the thread would remain in the thread_list for the task, and the
 *      loop would never terminate.  Rewrote it to run down the list
 *      like task_hold.  Thread_create terminates new thread if
 *      task_terminate occurs simultaneously.
 *
 * 27-Jan-88  Douglas Orr (dorr) at Carnegie-Mellon University
 *      Init user space library structures.
 *
 * 21-Jan-88  David Golub (dbg) at Carnegie-Mellon University
 *      Task_create no longer returns the data port.  Task_status and
 *      task_set_notify are obsolete (use task_{get,set}_special_port).
 *
 * 21-Jan-88  Karl Hauth (hauth) at Carnegie-Mellon University
 *      task_info(kernel_task, ...) now looks explicitly in the
 *      kernel_map, so it actually returns useful numbers.
 *
 * 17-Jan-88  David Golub (dbg) at Carnegie-Mellon University
 *      Added new task interfaces: task_suspend, task_resume,
 *      task_info, task_get_special_port, task_set_special_port.
 *      Old interfaces remain (temporarily) for binary
 *      compatibility, prefixed with 'xxx_'.
 *
 * 29-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Delinted.
 *
 * 23-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Added task_halt to halt all threads in a task.
 *
 * 15-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Check for null task pointer in task_reference and
 *      task_deallocate.
 *
 *  9-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Removed extra thread reference from task_terminate for new thread
 *      termination code.
 *
 *  8-Dec-87  David Black (dlb) at Carnegie-Mellon University
 *      Added call to ipc_task_disable.
 *
 *  3-Dec-87  David Black (dlb) at Carnegie-Mellon University
 *      Implemented better task termination based on the task active field:
 *              1.  task_terminate sets active field to false.
 *              2.  All but the most simple task operations check the
 *                      active field and abort if it is false.
 *              3.  task_{hold, dowait, release} now return kern_return_t's.
 *              4.  task_dowait has a second parameter to ignore active
 *                      field if called from task_terminate.
 *      Task terminate acquires extra reference to current thread before
 *      terminating it (see thread_terminate()).
 *
 * 19-Nov-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Eliminated TT conditionals.
 *
 * 13-Oct-87  David Black (dlb) at Carnegie-Mellon University
 *      Use counts for suspend and resume primitives.
 *
 * 13-Oct-87  David Golub (dbg) at Carnegie-Mellon University
 *      Added port reference counting to task_set_notify.
 *
 *  5-Oct-87  David Golub (dbg) at Carnegie-Mellon University
 *      Completely replaced old scheduling state machine.
 *
 * 14-Sep-87  Michael Young (mwyoung) at Carnegie-Mellon University
 *      De-linted.
 *
 * 25-Aug-87  Robert Baron (rvb) at Carnegie-Mellon University
 *      Must initialize the kernel_task->lock (at least on the Sequent)
 *
 *  6-Aug-87  David Golub (dbg) at Carnegie-Mellon University
 *      Moved ipc_task_terminate to task_terminate, to shut down other
 *      threads that are manipulating the task via its task_port.
 *      Changed task_terminate to terminate all threads in the task.
 *
 * 29-Jul-87  David Golub (dbg) at Carnegie-Mellon University
 *      Fix task_suspend not to hold the task if the task has been
 *      resumed.  Change task_hold/task_wait so that if the current
 *      thread is in the task, it is not held until after all of the
 *      other threads in the task have stopped.  Make task_terminate be
 *      able to terminate the current task.
 *
 *  9-Jul-87  Karl Hauth (hauth) at Carnegie-Mellon University
 *      Modified task_statistics to reflect changes in the structure.
 *
 * 10-Jun-87  Karl Hauth (hauth) at Carnegie-Mellon University
 *      Added code to fill in the task_statistics structure with
 *      zeros and to make mig happier by returning something.
 *
 *  1-Jun-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Added task_statistics stub.
 *
 * 27-Apr-87  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Move ipc_task_init into task_create; it *should* return
 *      the data port (with a reference) at some point.
 *
 * 20-Apr-87  David Black (dlb) at Carnegie-Mellon University
 *      Fixed task_suspend to ignore multiple suspends.
 *      Fixed task_dowait to work if current thread is in the affected task.
 *
 * 24-Feb-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Rewrote task_suspend/task_hold and added task_wait for new user
 *      synchronization paradigm.
 *
 * 10-Feb-87  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Add task.kernel_only initialization.
 *
 * 31-Jan-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Merged in my changes for real thread implementation.
 *
 *  7-Nov-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Fixed up stubs for eventual task calls.
 *
 * 30-Sep-86  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Make floating u-area work, add all_task list management.
 *
 * 26-Sep-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Added argument to ipc_task_init to get parent.
 *
 *  1-Aug-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Added initialization for Mach IPC.
 *
 * 20-Jul-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Added kernel_task.
 */
/*
 *      File:   kern/task.c
 *      Author: Avadis Tevanian, Jr., Michael Wayne Young, David Golub,
 *              David Black
 *
 *      Task management primitives implementation.
 */

#include <fast_tas.h>
#include <mach_host.h>
#include <mach_pcsample.h>
#include <net_atm.h>
#include <norma_task.h>

#include <mach/machine/vm_types.h>
#include <mach/vm_param.h>
#include <mach/task_info.h>
#include <mach/task_special_ports.h>
#include <ipc/ipc_space.h>
#include <kern/mach_param.h>
#include <kern/task.h>
#include <kern/thread.h>
#include <kern/zalloc.h>
#include <kern/kalloc.h>
#include <kern/memory.h>
#include <kern/processor.h>
#include <kern/sched.h>         /* for sched_tick */
#include <kern/sched_prim.h>    /* for thread_wakeup */
#include <kern/ipc_tt.h>
#include <kern/syscall_emulation.h>
#include <sched_policy/standard.h>
#include <vm/vm_kern.h>         /* for kernel_map, ipc_kernel_map */
#include <machine/machspl.h>    /* for splsched */

#if     NET_ATM
#include <chips/nw_mk.h>
#endif

#if     NORMA_TASK
#define task_create     task_create_local
#endif  /* NORMA_TASK */

task_t  kernel_task = TASK_NULL;
zone_t  task_zone;


void task_init(void)
{
        task_zone = zinit(
                        sizeof(struct task),
                        TASK_MAX * sizeof(struct task),
                        TASK_CHUNK * sizeof(struct task),
                        FALSE, "tasks");

        eml_init();

        /*
         * Create the kernel task as the first task.
         * Task_create must assign to kernel_task as a side effect,
         * for other initialization. (:-()
         */
        (void) task_create(TASK_NULL, FALSE, &kernel_task);
}

/*
 * Create a task running in the kernel address space.  It may
 * have its own map of size map_size (if 0, it uses the kernel map),
 * and may have ipc privileges.
 */
task_t  kernel_task_create(
        task_t          parent_task,
        vm_size_t       map_size)
{
        task_t          new_task;
        vm_offset_t     min, max;

        /*
         * Create the task.
         */
        (void) task_create(parent_task, FALSE, &new_task);

        /*
         * Task_create creates the task with a user-space map.
         * Remove the map and replace it with the kernel map
         * or a submap of the kernel map.
         */
        vm_map_deallocate(new_task->map);
        if (map_size == 0)
            new_task->map = kernel_map;
        else
            new_task->map = kmem_suballoc(kernel_map, &min, &max,
                                          map_size, FALSE);

        return new_task;
}
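
/*
 *      Example (hypothetical sketch): a kernel subsystem that wants a
 *      private 4 MB submap of the kernel address space could create
 *      its housekeeping task like this.  pager_task, pager_bootstrap
 *      and the size are illustrative names only, not real kernel
 *      interfaces.
 */
#if 0   /* sketch, not compiled */
task_t  pager_task;

void pager_bootstrap(void)
{
        /* child of the kernel task; a size of 0 would share kernel_map */
        pager_task = kernel_task_create(kernel_task, 4 * 1024 * 1024);
}
#endif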

kern_return_t task_create(
        task_t          parent_task,
        boolean_t       inherit_memory,
        task_t          *child_task)            /* OUT */
{
        register task_t new_task;
        register processor_set_t        pset;

        new_task = (task_t) zalloc(task_zone);
        if (new_task == TASK_NULL) {
                panic("task_create: no memory for task structure");
        }

        /* one ref for just being alive; one for our caller */
        new_task->ref_count = 2;

        if (child_task == &kernel_task)  {
                new_task->map = kernel_map;
        } else if (inherit_memory) {
                new_task->map = vm_map_fork(parent_task->map);
        } else {
                new_task->map = vm_map_create(pmap_create(0),
                                        round_page(VM_MIN_ADDRESS),
                                        trunc_page(VM_MAX_ADDRESS), TRUE);
        }

        simple_lock_init(&new_task->lock);
        simple_lock_init(&new_task->ref_lock);
        queue_init(&new_task->thread_list);
        new_task->suspend_count = 0;
        new_task->active = TRUE;
        new_task->user_stop_count = 0;
        new_task->thread_count = 0;

        eml_task_reference(new_task, parent_task);

        ipc_task_init(new_task, parent_task);

#if     NET_ATM
        new_task->nw_ep_owned = 0;
#endif

        new_task->total_user_time.seconds = 0;
        new_task->total_user_time.nanoseconds = 0;
        new_task->total_system_time.seconds = 0;
        new_task->total_system_time.nanoseconds = 0;

        if (parent_task != TASK_NULL) {
                task_lock(parent_task);
                task_inherit_default_policy(parent_task, new_task);
                pset = parent_task->processor_set;
                if (pset == PROCESSOR_SET_NULL || !pset->active)
                        pset = &default_pset;
                pset_reference(pset);
                task_unlock(parent_task);
        }
        else {
                task_inherit_default_policy(TASK_NULL, new_task);
                pset = &default_pset;
                pset_reference(pset);
        }
        pset_lock(pset);
        pset_add_task(pset, new_task);
        pset_unlock(pset);

        new_task->may_assign = TRUE;
        new_task->assign_wait = FALSE;

#if     MACH_PCSAMPLE
        new_task->pc_sample.buffer = 0;
        new_task->pc_sample.seqno = 0;
        new_task->pc_sample.sampletypes = 0;
#endif

#if     FAST_TAS
    {
        int i;

        for (i = 0; i < TASK_FAST_TAS_NRAS; i++)  {
            if (inherit_memory) {
                new_task->fast_tas_base[i] = parent_task->fast_tas_base[i];
                new_task->fast_tas_end[i]  = parent_task->fast_tas_end[i];
            } else {
                new_task->fast_tas_base[i] = (vm_offset_t)0;
                new_task->fast_tas_end[i]  = (vm_offset_t)0;
            }
        }
    }
#endif  /* FAST_TAS */

        ipc_task_enable(new_task);

#if     NORMA_TASK
        new_task->child_node = -1;
#endif  /* NORMA_TASK */

        *child_task = new_task;
        return KERN_SUCCESS;
}
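
/*
 *      Example (hypothetical sketch): the contract of task_create is
 *      that the new task starts with ref_count == 2 - one reference
 *      for being alive (given up by task_terminate) and one owned by
 *      the caller, which must eventually be dropped with
 *      task_deallocate.  spawn_empty_task below is an illustrative
 *      caller, not an existing routine.
 */
#if 0   /* sketch, not compiled */
kern_return_t spawn_empty_task(
        task_t  parent)
{
        task_t          child;
        kern_return_t   kr;

        kr = task_create(parent, FALSE, &child); /* caller gets a ref */
        if (kr != KERN_SUCCESS)
                return kr;
        /* ... create threads, set policy, etc. ... */
        task_deallocate(child);         /* drop the caller's reference */
        return KERN_SUCCESS;
}
#endif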

/*
 *      task_deallocate:
 *
 *      Give up a reference to the specified task and destroy it if there
 *      are no other references left.  It is assumed that the current thread
 *      is never in this task.
 */
void task_deallocate(
        register task_t task)
{
        if (task == TASK_NULL)
                return;

        task_ref_lock(task);
        if (--task->ref_count > 0) {
            task_ref_unlock(task);
            return;
        }

        /*
         *      No more references - we can remove task.
         */
        task_ref_unlock(task);

#if     NORMA_TASK
        if (task->map == VM_MAP_NULL) {
                /* norma placeholder task */
                zfree(task_zone, (vm_offset_t) task);
                return;
        }
#endif  /* NORMA_TASK */

        eml_task_deallocate(task);

        assert(task->processor_set == PROCESSOR_SET_NULL);

        vm_map_deallocate(task->map);
        is_release(task->itk_space);
        zfree(task_zone, (vm_offset_t) task);
}

void task_reference(
        register task_t task)
{
        if (task == TASK_NULL)
                return;

        task_ref_lock(task);
        task->ref_count++;
        task_ref_unlock(task);
}
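
/*
 *      Note that task_ref_lock protects only ref_count (see the
 *      93/06/10 history entry), so reference and deallocate never
 *      contend with holders of the main task lock.  A hypothetical
 *      helper showing the usual pairing around work that may block:
 */
#if 0   /* sketch, not compiled */
void with_task_ref(
        task_t  task,
        void    (*op)(task_t))
{
        task_reference(task);   /* both calls tolerate TASK_NULL */
        (*op)(task);            /* task stays alive even if op blocks */
        task_deallocate(task);  /* frees the task on the last reference */
}
#endif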

/*
 *      task_terminate:
 *
 *      Terminate the specified task.  See comments on thread_terminate
 *      (kern/thread.c) about problems with terminating the "current task."
 */
kern_return_t task_terminate(
        task_t  task)
{
        thread_t        thread, cur_thread, prev_thread;
        task_t          cur_task;
        processor_set_t pset;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        cur_thread = current_thread();
        cur_task = cur_thread->task;

#if     NET_ATM
        /*
         *      Shut down networking.
         */
        mk_endpoint_collect(task);
#endif

        /*
         *      Deactivate task so that it can't be terminated again,
         *      and so lengthy operations in progress will abort.
         *
         *      If the current thread is in this task, remove it
         *      from IPC control.
         */
        if (task == cur_task) {
                task_lock(task);
                if (!task->active) {
                        /*
                         *      Task is already being terminated.
                         *      Terminate the current thread (known
                         *      to be in this task) to ensure that
                         *      a thread terminating itself never
                         *      returns.
                         */
                        task_unlock(task);
                        (void) thread_terminate(cur_thread);
                        return KERN_FAILURE;
                }

                /*
                 *      Make sure the current thread is not being
                 *      terminated.
                 *
                 *      We do this by trying to terminate the current
                 *      thread.  If thread_terminate succeeds, it does
                 *      not actually terminate the thread; but it marks
                 *      the thread inactive, disables its IPC access
                 *      (so that the thread will remain alive to finish
                 *      task_terminate), and sets AST_TERMINATE, so that
                 *      the thread will terminate itself on exit.  If
                 *      thread_terminate fails, it is because the thread
                 *      is being terminated.  In this case, we exit.
                 *
                 *      This knows ENTIRELY too much about thread_terminate,
                 *      but it avoids much duplication of code.
                 */

                if (thread_terminate(cur_thread) != KERN_SUCCESS) {
                        task_unlock(task);
                        return KERN_FAILURE;
                }

                /*
                 *      Task_terminate can proceed.  The current thread
                 *      has been marked inactive, and removed from IPC
                 *      control.
                 */
        }
        else {
                /*
                 *      Lock both current and victim task to check for
                 *      potential deadlock.
                 */
                if ((vm_offset_t)task < (vm_offset_t)cur_task) {
                        task_lock(task);
                        task_lock(cur_task);
                }
                else {
                        task_lock(cur_task);
                        task_lock(task);
                }
                /*
                 *      Check if current thread or task is being terminated.
                 */
                thread_lock(cur_thread);
                if (!cur_task->active || !cur_thread->active) {
                        /*
                         * Current task or thread is being terminated.
                         */
                        thread_unlock(cur_thread);
                        task_unlock(task);
                        task_unlock(cur_task);
                        (void) thread_terminate(cur_thread);
                        return KERN_FAILURE;
                }
                thread_unlock(cur_thread);
                task_unlock(cur_task);

                if (!task->active) {
                        /*
                         *      Task is already being terminated.
                         */
                        task_unlock(task);
                        return KERN_FAILURE;
                }
        }

        /*
         *      Mark the task inactive, and disable IPC access.
         *      Any pending thread_create operations will notice
         *      that the task is inactive, and abort.
         */

        task->active = FALSE;
        ipc_task_disable(task);

        /*
         *      Mark the task as suspended.
         */
        task->suspend_count++;

        /*
         *      Terminate each thread in the task, except for the
         *      current thread (if it is within the task).  Since
         *      the task port is disabled, no new threads can be
         *      created.  Thus the loop will terminate.  We call
         *      thread_force_terminate instead of thread_terminate
         *      to avoid deadlock checks.
         */
        prev_thread = THREAD_NULL;
        queue_iterate(&task->thread_list, thread, thread_t, thread_list) {

            if (thread != cur_thread) {

                /*
                 *      Take a reference to the thread,
                 *      so it will remain in the task`s
                 *      thread list.
                 */
                thread_reference(thread);
                task_unlock(task);

                /*
                 *      Deallocate the reference to the
                 *      previous thread (this should free
                 *      the thread).
                 */
                if (prev_thread != THREAD_NULL)
                        thread_deallocate(prev_thread);

                /*
                 *      Hold the thread, and wait for
                 *      it to stop.
                 */
                (void) thread_hold(thread);
                (void) thread_dowait(thread, TRUE);

                /*
                 *      Terminate the thread.  The extra
                 *      reference will keep it in the thread
                 *      list, so that the next_thread field
                 *      is valid.  Save the thread to be
                 *      deallocated on the next iteration.
                 */
                thread_force_terminate(thread);
                prev_thread = thread;

                task_lock(task);
            }
        }

        task_unlock(task);
        if (prev_thread != THREAD_NULL)
            thread_deallocate(prev_thread);

        /*
         *      Shut down IPC.
         */
        ipc_task_terminate(task);

        /*
         *      Remove the task from the processor set.
         */
        task_lock(task);
        pset = task->processor_set;
        pset_lock(pset);
        pset_remove_task(pset,task);
        task->processor_set = PROCESSOR_SET_NULL;
        pset_unlock(pset);
        pset_deallocate(pset);
        task_unlock(task);

        /*
         *      Deallocate the task's reference to itself.
         */
        task_deallocate(task);

        /*
         *      Return.  If the current thread is in this
         *      task, it has already had AST_TERMINATE set,
         *      so it will terminate itself on exit from
         *      the kernel.  Since it holds the last reference
         *      to the task, terminating it will deallocate
         *      the task.
         */

        return KERN_SUCCESS;
}
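
/*
 *      The thread loop in task_terminate above uses the
 *      reference-and-lag pattern from the 21-Jun-88 history entry:
 *      the extra reference keeps each victim on task->thread_list, so
 *      the queue_iterate linkage stays valid while the task lock is
 *      dropped, and the reference is released only one iteration
 *      later, once the next thread has been captured.  Reduced to its
 *      skeleton (illustrative, with the real work elided):
 */
#if 0   /* sketch, not compiled */
        prev_thread = THREAD_NULL;
        task_lock(task);
        queue_iterate(&task->thread_list, thread, thread_t, thread_list) {
                thread_reference(thread);       /* pin list linkage */
                task_unlock(task);
                if (prev_thread != THREAD_NULL)
                        thread_deallocate(prev_thread); /* may free it */
                /* ... blocking work on thread ... */
                prev_thread = thread;
                task_lock(task);
        }
        task_unlock(task);
        if (prev_thread != THREAD_NULL)
                thread_deallocate(prev_thread);
#endif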

/*
 *      task_hold:
 *
 *      Suspend execution of the specified task.
 *      This is a recursive-style suspension of the task; a count of
 *      suspends is maintained.
 */
kern_return_t task_hold(
        register task_t task)
{
        register queue_head_t   *list;
        register thread_t       thread, cur_thread;

        cur_thread = current_thread();

        task_lock(task);
        if (!task->active) {
                task_unlock(task);
                return KERN_FAILURE;
        }

        task->suspend_count++;

        /*
         *      Iterate through all the threads and hold them.
         *      Do not hold the current thread if it is within the
         *      task.
         */
        list = &task->thread_list;
        queue_iterate(list, thread, thread_t, thread_list) {
                if (thread != cur_thread)
                        thread_hold(thread);
        }
        task_unlock(task);
        return KERN_SUCCESS;
}

/*
 *      task_dowait:
 *
 *      Wait until the task has really been suspended (all of the threads
 *      are stopped).  Skip the current thread if it is within the task.
 *
 *      If task is deactivated while waiting, return a failure code unless
 *      must_wait is true.
 */
kern_return_t task_dowait(
        register task_t task,
        boolean_t must_wait)
{
        register queue_head_t   *list;
        register thread_t       thread, cur_thread, prev_thread;
        register kern_return_t  ret = KERN_SUCCESS;

        /*
         *      Iterate through all the threads.
         *      While waiting for each thread, we gain a reference to it
         *      to prevent it from going away on us.  This guarantees
         *      that the "next" thread in the list will be a valid thread.
         *
         *      We depend on the fact that if threads are created while
         *      we are looping through the threads, they will be held
         *      automatically.  We don't care about threads that get
         *      deallocated along the way (the reference prevents it
         *      from happening to the thread we are working with).
         *
         *      If the current thread is in the affected task, it is skipped.
         *
         *      If the task is deactivated before we're done, and we don't
         *      have to wait for it (must_wait is FALSE), just bail out.
         */
        cur_thread = current_thread();

        list = &task->thread_list;
        prev_thread = THREAD_NULL;
        task_lock(task);
        queue_iterate(list, thread, thread_t, thread_list) {
                if (!(task->active) && !(must_wait)) {
                        ret = KERN_FAILURE;
                        break;
                }
                if (thread != cur_thread) {
                        thread_reference(thread);
                        task_unlock(task);
                        if (prev_thread != THREAD_NULL)
                                thread_deallocate(prev_thread);
                                                             /* may block */
                        (void) thread_dowait(thread, TRUE);  /* may block */
                        prev_thread = thread;
                        task_lock(task);
                }
        }
        task_unlock(task);
        if (prev_thread != THREAD_NULL)
                thread_deallocate(prev_thread);         /* may block */
        return ret;
}

kern_return_t task_release(
        register task_t task)
{
        register queue_head_t   *list;
        register thread_t       thread, next;

        task_lock(task);
        if (!task->active) {
                task_unlock(task);
                return KERN_FAILURE;
        }

        task->suspend_count--;

        /*
         *      Iterate through all the threads and release them.
         */
        list = &task->thread_list;
        thread = (thread_t) queue_first(list);
        while (!queue_end(list, (queue_entry_t) thread)) {
                next = (thread_t) queue_next(&thread->thread_list);
                thread_release(thread);
                thread = next;
        }
        task_unlock(task);
        return KERN_SUCCESS;
}
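
/*
 *      task_hold and task_release nest: suspend_count records the
 *      holds, and threads run again only after every hold has been
 *      released.  A hypothetical debugger-style stop built from the
 *      three primitives above:
 */
#if 0   /* sketch, not compiled */
kern_return_t debugger_stop(
        task_t  victim)
{
        if (task_hold(victim) != KERN_SUCCESS)  /* count the hold */
                return KERN_FAILURE;
        if (task_dowait(victim, FALSE) != KERN_SUCCESS) {
                /* task died while we waited; undo the hold */
                (void) task_release(victim);
                return KERN_FAILURE;
        }
        /* ... every thread is now stopped; inspect state ... */
        return task_release(victim);
}
#endif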

kern_return_t task_threads(
        task_t          task,
        thread_array_t  *thread_list,
        natural_t       *count)
{
        unsigned int actual;    /* this many threads */
        thread_t thread;
        thread_t *threads;
        int i;

        vm_size_t size, size_needed;
        vm_offset_t addr;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        size = 0; addr = 0;

        for (;;) {
                task_lock(task);
                if (!task->active) {
                        task_unlock(task);
                        return KERN_FAILURE;
                }

                actual = task->thread_count;

                /* do we have the memory we need? */

                size_needed = actual * sizeof(mach_port_t);
                if (size_needed <= size)
                        break;

                /* unlock the task and allocate more memory */
                task_unlock(task);

                if (size != 0)
                        kfree(addr, size);

                assert(size_needed > 0);
                size = size_needed;

                addr = kalloc(size);
                if (addr == 0)
                        return KERN_RESOURCE_SHORTAGE;
        }

        /* OK, have memory and the task is locked & active */

        threads = (thread_t *) addr;

        for (i = 0, thread = (thread_t) queue_first(&task->thread_list);
             i < actual;
             i++, thread = (thread_t) queue_next(&thread->thread_list)) {
                /* take ref for convert_thread_to_port */
                thread_reference(thread);
                threads[i] = thread;
        }
        assert(queue_end(&task->thread_list, (queue_entry_t) thread));

        /* can unlock task now that we've got the thread refs */
        task_unlock(task);

        if (actual == 0) {
                /* no threads, so return null pointer and deallocate memory */

                *thread_list = 0;
                *count = 0;

                if (size != 0)
                        kfree(addr, size);
        } else {
                /* if we allocated too much, must copy */

                if (size_needed < size) {
                        vm_offset_t newaddr;

                        newaddr = kalloc(size_needed);
                        if (newaddr == 0) {
                                for (i = 0; i < actual; i++)
                                        thread_deallocate(threads[i]);
                                kfree(addr, size);
                                return KERN_RESOURCE_SHORTAGE;
                        }

                        bcopy((void *) addr, (void *) newaddr, size_needed);
                        kfree(addr, size);
                        threads = (thread_t *) newaddr;
                }

                *thread_list = (mach_port_t *) threads;
                *count = actual;

                /* do the conversion that Mig should handle */

                for (i = 0; i < actual; i++)
                        ((ipc_port_t *) threads)[i] =
                                convert_thread_to_port(threads[i]);
        }

        return KERN_SUCCESS;
}
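
/*
 *      From user space this call arrives through MIG as the
 *      task_threads() RPC: the caller gets back send rights and an
 *      out-of-line buffer that it then owns.  A hedged user-level
 *      sketch, assuming the standard Mach user interface (mach.h,
 *      mach_port_deallocate, vm_deallocate):
 */
#if 0   /* user-space sketch, not part of the kernel */
#include <mach.h>
#include <stdio.h>

void show_thread_count(task_t task)
{
        thread_array_t  threads;
        natural_t       count, i;

        if (task_threads(task, &threads, &count) != KERN_SUCCESS)
                return;
        printf("task has %u threads\n", count);
        for (i = 0; i < count; i++)             /* drop the send rights */
                (void) mach_port_deallocate(mach_task_self(), threads[i]);
        (void) vm_deallocate(mach_task_self(),  /* and the OOL buffer */
                             (vm_address_t) threads,
                             count * sizeof(threads[0]));
}
#endif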

kern_return_t task_suspend(
        register task_t task)
{
        register boolean_t      hold;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        hold = FALSE;
        task_lock(task);
        if ((task->user_stop_count)++ == 0)
                hold = TRUE;
        task_unlock(task);

        /*
         *      If the stop count was positive, the task is
         *      already stopped and we can exit.
         */
        if (!hold) {
                return KERN_SUCCESS;
        }

        /*
         *      Hold all of the threads in the task, and wait for
         *      them to stop.  If the current thread is within
         *      this task, hold it separately so that all of the
         *      other threads can stop first.
         */

        if (task_hold(task) != KERN_SUCCESS)
                return KERN_FAILURE;

        if (task_dowait(task, FALSE) != KERN_SUCCESS)
                return KERN_FAILURE;

        if (current_task() == task) {
                spl_t s;

                thread_hold(current_thread());
                /*
                 *      We want to call thread_block on our way out,
                 *      to stop running.
                 */
                s = splsched();
                ast_on(cpu_number(), AST_BLOCK);
                splx(s);
        }

        return KERN_SUCCESS;
}

kern_return_t task_resume(
        register task_t task)
{
        register boolean_t      release;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        release = FALSE;
        task_lock(task);
        if (task->user_stop_count > 0) {
                if (--(task->user_stop_count) == 0)
                        release = TRUE;
        }
        else {
                task_unlock(task);
                return KERN_FAILURE;
        }
        task_unlock(task);

        /*
         *      Release the task if necessary.
         */
        if (release)
                return task_release(task);

        return KERN_SUCCESS;
}
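
/*
 *      Because user_stop_count is a count, task_suspend and
 *      task_resume nest: n suspends need n resumes, and an unmatched
 *      task_resume fails rather than letting the count go negative.
 *      A hedged user-level sketch of the nesting:
 */
#if 0   /* user-space sketch, not part of the kernel */
#include <mach.h>

void pause_and_go(task_t victim)
{
        (void) task_suspend(victim);    /* stop count 0 -> 1: stops   */
        (void) task_suspend(victim);    /* stop count 1 -> 2: no-op   */
        (void) task_resume(victim);     /* stop count 2 -> 1: stopped */
        (void) task_resume(victim);     /* stop count 1 -> 0: runs    */
        /* another task_resume() here would return KERN_FAILURE */
}
#endif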

kern_return_t task_info(
        task_t                  task,
        int                     flavor,
        task_info_t             task_info_out,  /* pointer to OUT array */
        natural_t               *task_info_count)       /* IN/OUT */
{
        vm_map_t                map;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        switch (flavor) {
            case TASK_BASIC_INFO:
            {
                register task_basic_info_t      basic_info;

                if (*task_info_count < TASK_BASIC_INFO_COUNT) {
                    return KERN_INVALID_ARGUMENT;
                }

                basic_info = (task_basic_info_t) task_info_out;

                map = (task == kernel_task) ? kernel_map : task->map;

                basic_info->virtual_size  = map->size;
                basic_info->resident_size = pmap_resident_count(map->pmap)
                                                   * PAGE_SIZE;

                task_lock(task);
                switch (task->sched_policy->name) {
                    case POLICY_BACKGROUND:
                        basic_info->base_priority = 32;         /* XXX */
                        break;

                    case POLICY_TIMESHARE:
                    {
                        struct policy_info_timeshare    info;
                        natural_t                       count;

                        count = POLICY_INFO_TIMESHARE_COUNT;
                        (void) TASK_GET_PARAM(task,
                                              (policy_param_t) &info,
                                              &count);
                        basic_info->base_priority = info.base_priority;
                        break;
                    }

                    default:
                        basic_info->base_priority = 0;
                        break;
                }

                basic_info->suspend_count = task->user_stop_count;
                basic_info->user_time.seconds
                                = task->total_user_time.seconds;
                basic_info->user_time.microseconds
                                = task->total_user_time.nanoseconds / 1000;
                basic_info->system_time.seconds
                                = task->total_system_time.seconds;
                basic_info->system_time.microseconds
                                = task->total_system_time.nanoseconds / 1000;
                task_unlock(task);

                *task_info_count = TASK_BASIC_INFO_COUNT;
                break;
            }

            case TASK_THREAD_TIMES_INFO:
            {
                register task_thread_times_info_t times_info;
                register thread_t       thread;
                time_spec_t             total_user_time;
                time_spec_t             total_system_time;

                if (*task_info_count < TASK_THREAD_TIMES_INFO_COUNT) {
                    return KERN_INVALID_ARGUMENT;
                }

                times_info = (task_thread_times_info_t) task_info_out;

                total_user_time.seconds = 0;
                total_user_time.nanoseconds = 0;
                total_system_time.seconds = 0;
                total_system_time.nanoseconds = 0;

                task_lock(task);
                queue_iterate(&task->thread_list, thread,
                              thread_t, thread_list)
                {
                    time_spec_t user_time, system_time;
                    spl_t       s;

                    s = splsched();
                    thread_sched_lock(thread);

                    thread_read_times(thread, &user_time, &system_time);

                    thread_sched_unlock(thread);
                    splx(s);

                    time_spec_add(total_user_time, user_time);
                    time_spec_add(total_system_time, system_time);
                }
                task_unlock(task);

                times_info->user_time.seconds = total_user_time.seconds;
                times_info->user_time.microseconds =
                                        total_user_time.nanoseconds / 1000;
                times_info->system_time.seconds = total_system_time.seconds;
                times_info->system_time.microseconds =
                                        total_system_time.nanoseconds / 1000;

                *task_info_count = TASK_THREAD_TIMES_INFO_COUNT;
                break;
            }

            default:
                return KERN_INVALID_ARGUMENT;
        }

        return KERN_SUCCESS;
}
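
/*
 *      task_info is a get-by-flavor interface: the caller supplies a
 *      buffer and its size in natural_t units, and the kernel writes
 *      back how much of it was filled in.  A hedged user-level sketch
 *      for the TASK_BASIC_INFO flavor handled above:
 */
#if 0   /* user-space sketch, not part of the kernel */
#include <mach.h>
#include <stdio.h>

void show_basic_info(task_t task)
{
        struct task_basic_info  info;
        natural_t               count = TASK_BASIC_INFO_COUNT;

        if (task_info(task, TASK_BASIC_INFO,
                      (task_info_t) &info, &count) != KERN_SUCCESS)
                return;
        printf("virtual %lu, resident %lu, suspends %d\n",
               (unsigned long) info.virtual_size,
               (unsigned long) info.resident_size,
               info.suspend_count);
}
#endif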
 1266 
 1267 #if     MACH_HOST
 1268 /*
 1269  *      task_assign:
 1270  *
 1271  *      Change the assigned processor set for the task
 1272  */
 1273 kern_return_t
 1274 task_assign(
 1275         task_t          task,
 1276         processor_set_t new_pset,
 1277         boolean_t       assign_threads)
 1278 {
 1279         kern_return_t           ret = KERN_SUCCESS;
 1280         register thread_t       thread, prev_thread;
 1281         register queue_head_t   *list;
 1282         register processor_set_t        pset;
 1283 
 1284         if (task == TASK_NULL || new_pset == PROCESSOR_SET_NULL) {
 1285                 return KERN_INVALID_ARGUMENT;
 1286         }
 1287 
 1288         /*
 1289          *      Freeze task`s assignment.  Prelude to assigning
 1290          *      task.  Only one freeze may be held per task.
 1291          */
 1292 
 1293         task_lock(task);
 1294         while (task->may_assign == FALSE) {
 1295                 task->assign_wait = TRUE;
 1296                 assert_wait((event_t)&task->processor_set, TRUE);
 1297                 task_unlock(task);
 1298                 thread_block(CONTINUE_NULL);
 1299                 task_lock(task);
 1300         }
 1301 
 1302         /*
 1303          *      Avoid work if task already in this processor set.
 1304          */
 1305         if (task->processor_set == new_pset)  {
 1306                 /*
 1307                  *      No need for task->assign_wait wakeup:
 1308                  *      task->may_assign is still TRUE.
 1309                  */
 1310                 task_unlock(task);
 1311                 return KERN_SUCCESS;
 1312         }
 1313 
 1314         task->may_assign = FALSE;
 1315 
 1316         /*
 1317          *      Safe to get the task`s pset: it cannot change while
 1318          *      task is frozen.
 1319          */
 1320         pset = task->processor_set;
 1321 
 1322         /*
 1323          *      Lock both psets now.  Use ordering to avoid deadlock.
 1324          */
 1325     Restart:
 1326         if ((vm_offset_t) pset < (vm_offset_t) new_pset) {
 1327             pset_lock(pset);
 1328             pset_lock(new_pset);
 1329         }
 1330         else {
 1331             pset_lock(new_pset);
 1332             pset_lock(pset);
 1333         }
 1334 
 1335         /*
 1336          *      Check if new_pset is ok to assign to.  If not,
 1337          *      reassign to default_pset.
 1338          */
 1339         if (!new_pset->active) {
 1340             pset_unlock(pset);
 1341             pset_unlock(new_pset);
 1342             new_pset = &default_pset;
 1343             goto Restart;
 1344         }
 1345 
 1346         pset_reference(new_pset);
 1347 
 1348         /*
 1349          *      Now grab the task lock and move the task.
 1350          */
 1351 
 1352         pset_remove_task(pset, task);
 1353         pset_add_task(new_pset, task);
 1354 
 1355         pset_unlock(pset);
 1356         pset_unlock(new_pset);
 1357 
 1358         if (assign_threads == FALSE) {
 1359                 /*
 1360                  *      We leave existing threads at their
 1361                  *      old assignments.  Unfreeze task`s
 1362                  *      assignment.
 1363                  */
 1364                 task->may_assign = TRUE;
 1365                 if (task->assign_wait) {
 1366                         task->assign_wait = FALSE;
 1367                         thread_wakeup((event_t) &task->processor_set);
 1368                 }
 1369                 task_unlock(task);
 1370                 pset_deallocate(pset);
 1371                 return KERN_SUCCESS;
 1372         }
 1373 
 1374         /*
 1375          *      Iterate down the thread list reassigning all the threads.
 1376          *      New threads pick up task's new processor set automatically.
 1377          *      Do current thread last because new pset may be empty.
 1378          */
 1379         list = &task->thread_list;
 1380         prev_thread = THREAD_NULL;
 1381         queue_iterate(list, thread, thread_t, thread_list) {
 1382                 if (!(task->active)) {
 1383                         ret = KERN_FAILURE;
 1384                         break;
 1385                 }
 1386                 if (thread != current_thread()) {
 1387                         thread_reference(thread);
 1388                         task_unlock(task);
 1389                         if (prev_thread != THREAD_NULL)
 1390                             thread_deallocate(prev_thread); /* may block */
 1391                         (void) thread_assign(thread,new_pset);
 1392                                                             /* may block */
 1393                         prev_thread = thread;
 1394                         task_lock(task);
 1395                 }
 1396         }
 1397 
 1398         /*
 1399          *      Done, wakeup anyone waiting for us.
 1400          */
 1401         task->may_assign = TRUE;
 1402         if (task->assign_wait) {
 1403                 task->assign_wait = FALSE;
 1404                 thread_wakeup((event_t)&task->processor_set);
 1405         }
 1406         task_unlock(task);
 1407         if (prev_thread != THREAD_NULL)
 1408                 thread_deallocate(prev_thread);         /* may block */
 1409 
 1410         /*
 1411          *      Finish assignment of current thread.
 1412          */
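              /*
               *      Deferred to last: if the new pset has no
               *      processors, the current thread may block here
               *      until one is added, so all wakeups and unlocks
               *      above must already be done.
               */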
 1413         if (current_thread()->task == task)
 1414                 (void) thread_assign(current_thread(), new_pset);
 1415 
 1416         pset_deallocate(pset);
 1417 
 1418         return ret;
 1419 }
 1420 #else   /* MACH_HOST */
 1421 /*
 1422  *      task_assign:
 1423  *
 1424  *      Change the assigned processor set for the task
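       *
       *      Without MACH_HOST there is only the default processor
       *      set, so reassignment always fails.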
 1425  */
 1426 kern_return_t
 1427 task_assign(
 1428         task_t          task,
 1429         processor_set_t new_pset,
 1430         boolean_t       assign_threads)
 1431 {
 1432         return KERN_FAILURE;
 1433 }
 1434 #endif  /* MACH_HOST */
 1435
 1436 
 1437 /*
 1438  *      task_assign_default:
 1439  *
 1440  *      Version of task_assign to assign to default processor set.
 1441  */
 1442 kern_return_t
 1443 task_assign_default(
 1444         task_t          task,
 1445         boolean_t       assign_threads)
 1446 {
 1447         return task_assign(task, &default_pset, assign_threads);
 1448 }
 1449 
 1450 /*
 1451  *      task_get_assignment
 1452  *
 1453  *      Return name of processor set that task is assigned to.
 1454  */
 1455 kern_return_t task_get_assignment(
 1456         task_t          task,
 1457         processor_set_t *pset)
 1458 {
 1459         if (!task->active)
 1460                 return KERN_FAILURE;
 1461 
 1462         *pset = task->processor_set;
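              /*
               *      The caller receives a reference on the pset and
               *      is responsible for releasing it with
               *      pset_deallocate.
               */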
 1463         pset_reference(*pset);
 1464         return KERN_SUCCESS;
 1465 }
 1466 
 1467 /*
 1468  *      [ obsolete ]
 1469  *      task_priority
 1470  *
 1471  *      Set priority of task; used only for newly created threads.
 1472  *      Optionally change priorities of threads.
 1473  *
 1474  *      The task priority itself is ignored.
 1475  */
 1476 kern_return_t
 1477 task_priority(
 1478         task_t          task,
 1479         int             priority,
 1480         boolean_t       change_threads)
 1481 {
 1482         kern_return_t   ret;
 1483         thread_t        thread;
 1484         struct policy_param_timeshare   param;
 1485 
 1486         if (task == TASK_NULL)
 1487                 return KERN_INVALID_ARGUMENT;
 1488 
 1489         if (!change_threads)
 1490             return KERN_SUCCESS;        /* ignored */
 1491 
 1492         ret = KERN_SUCCESS;
 1493         param.priority = priority;
 1494 
 1495         /*
 1496          *      change_threads still works: on timesharing
 1497          *      threads only.
 1498          */
 1499         task_lock(task);
 1500         queue_iterate(&task->thread_list, thread, thread_t, thread_list) {
 1501             if (thread->sched_policy->name == POLICY_TIMESHARE) {
 1502                 if (thread_set_policy_param(thread,
 1503                                             FALSE,
 1504                                             (policy_param_t)&param,
 1505                                             POLICY_PARAM_TIMESHARE_COUNT)
 1506                     != KERN_SUCCESS)
 1507                 {
 1508                     ret = KERN_FAILURE;
 1509                 }
 1510             }
 1511         }
 1512         task_unlock(task);
 1513 
 1514         return ret;
 1515 }
 1516 
 1517 /*
 1518  *      task_collect_scan:
 1519  *
 1520  *      Attempt to free resources owned by tasks.
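       *      Walks every task in every processor set and calls
       *      pmap_collect on each task's physical map to reclaim
       *      unused mapping resources.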
 1521  */
 1522 
 1523 void task_collect_scan(void)
 1524 {
 1525         register task_t         task, prev_task;
 1526         processor_set_t         pset, prev_pset;
 1527 
 1528         prev_task = TASK_NULL;
 1529         prev_pset = PROCESSOR_SET_NULL;
 1530 
 1531         simple_lock(&all_psets_lock);
 1532         queue_iterate(&all_psets, pset, processor_set_t, all_psets) {
 1533                 pset_lock(pset);
 1534                 queue_iterate(&pset->tasks, task, task_t, pset_tasks) {
 1535                         task_reference(task);
 1536                         pset_reference(pset);
 1537                         pset_unlock(pset);
 1538                         simple_unlock(&all_psets_lock);
 1539 
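                              /*
                               *      All locks are dropped around the
                               *      possibly-blocking pmap_collect;
                               *      the task and pset references
                               *      taken above keep the queue links
                               *      valid.  The previous iteration's
                               *      references are released only
                               *      here, outside the locks, since
                               *      deallocation may block.
                               */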
 1540                         pmap_collect(task->map->pmap);
 1541 
 1542                         if (prev_task != TASK_NULL)
 1543                                 task_deallocate(prev_task);
 1544                         prev_task = task;
 1545 
 1546                         if (prev_pset != PROCESSOR_SET_NULL)
 1547                                 pset_deallocate(prev_pset);
 1548                         prev_pset = pset;
 1549 
 1550                         simple_lock(&all_psets_lock);
 1551                         pset_lock(pset);
 1552                 }
 1553                 pset_unlock(pset);
 1554         }
 1555         simple_unlock(&all_psets_lock);
 1556 
 1557         if (prev_task != TASK_NULL)
 1558                 task_deallocate(prev_task);
 1559         if (prev_pset != PROCESSOR_SET_NULL)
 1560                 pset_deallocate(prev_pset);
 1561 }
 1562 
 1563 boolean_t task_collect_allowed = TRUE;
 1564 unsigned task_collect_last_tick = 0;
 1565 unsigned task_collect_max_rate = 0;             /* in ticks */
 1566 
 1567 /*
 1568  *      consider_task_collect:
 1569  *
 1570  *      Called by the pageout daemon when the system needs more free pages.
 1571  */
 1572 
 1573 void consider_task_collect(void)
 1574 {
 1575         /*
 1576          *      By default, don't attempt task collection more frequently
 1577          *      than once a minute.
 1578          */
 1579 
 1580         if (task_collect_max_rate == 0)
 1581                 task_collect_max_rate = 60;
 1582 
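              /*
               *      task_collect_max_rate is in sched_tick units;
               *      sched_tick advances roughly once per second, so
               *      60 ticks approximates the one-minute default
               *      above.
               */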
 1583         if (task_collect_allowed &&
 1584             (sched_tick > (task_collect_last_tick + task_collect_max_rate))) {
 1585                 task_collect_last_tick = sched_tick;
 1586                 task_collect_scan();
 1587         }
 1588 }
 1589 
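      /*
       *      task_ras_control:
       *
       *      Manage the task's table of restartable atomic sequences
       *      (RAS): user-mode address ranges [pc, endpc) that, if the
       *      thread is preempted while executing inside one, are
       *      restarted from the beginning.  Used to provide atomic
       *      sequences (e.g. test-and-set) without hardware support.
       */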
 1590 kern_return_t
 1591 task_ras_control(
 1592         task_t task,
 1593         vm_offset_t pc,
 1594         vm_offset_t endpc,
 1595         int flavor)
 1596 {
 1597 #if     FAST_TAS
 1598         int i;
 1599         kern_return_t   ret = KERN_SUCCESS;
 1600 
 1601         task_lock(task);
 1602         switch (flavor) {
 1603             case TASK_RAS_CONTROL_PURGE_ALL:
 1604                 /* remove all RAS */
 1605 
 1606                 for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
 1607                     task->fast_tas_base[i] = 0;
 1608                     task->fast_tas_end[i] = 0;
 1609                 }
 1610                 break;
 1611 
 1612             case TASK_RAS_CONTROL_PURGE_ONE:
 1613                 /* remove this RAS, collapse remaining */
 1614 
 1615                 for (i = 0; i < TASK_FAST_TAS_NRAS; i++)  {
 1616                     if (task->fast_tas_base[i] == pc &&
 1617                         task->fast_tas_end[i] == endpc)
 1618                     {
 1619                         while (i < TASK_FAST_TAS_NRAS-1)  {
 1620                             task->fast_tas_base[i] = task->fast_tas_base[i+1];
 1621                             task->fast_tas_end[i] = task->fast_tas_end[i+1];
 1622                             i++;
 1623                         }
 1624                         task->fast_tas_base[TASK_FAST_TAS_NRAS-1] = 0;
 1625                         task->fast_tas_end[TASK_FAST_TAS_NRAS-1] = 0;
 1626                         break;
 1627                     }
 1628                 }
 1629                 if (i == TASK_FAST_TAS_NRAS) {
 1630                     ret = KERN_INVALID_ADDRESS;
 1631                 }
 1632                 break;
 1633 
 1634             case TASK_RAS_CONTROL_PURGE_ALL_AND_INSTALL_ONE: 
 1635                 /* remove all RAS and install this RAS */
 1636 
 1637                 for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
 1638                     task->fast_tas_base[i] = 0;
 1639                     task->fast_tas_end[i] = 0;
 1640                 }
 1641                 /* FALL THROUGH */
 1642 
 1643             case TASK_RAS_CONTROL_INSTALL_ONE:
 1644                 /* install this RAS */
 1645 
 1646                 for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
 1647                     if (task->fast_tas_base[i] == pc &&
 1648                         task->fast_tas_end[i] == endpc)
 1649                     {
 1650                         /* already installed */
 1651                         break;
 1652                     }
 1653                     if (task->fast_tas_base[i] == 0 &&
 1654                         task->fast_tas_end[i] == 0) 
 1655                     {
 1656                         task->fast_tas_base[i] = pc;
 1657                         task->fast_tas_end[i] = endpc;
 1658                         break;
 1659                     }
 1660                 }
 1661                 if (i == TASK_FAST_TAS_NRAS) {
 1662                     ret = KERN_RESOURCE_SHORTAGE;
 1663                 }
 1664                 break;
 1665 
 1666             default:
 1667                 ret = KERN_INVALID_VALUE;
 1668                 break;
 1669         }
 1670 
 1671         task_unlock(task);
 1672         return ret;
 1673 #else   /* FAST_TAS */
 1674         return KERN_FAILURE;            /* not implemented */
 1675 #endif  /* FAST_TAS */
 1676 }
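      /*
       *      A minimal usage sketch (hypothetical caller; ras_begin
       *      and ras_end are assumed labels delimiting the atomic
       *      sequence in the task's address space):
       *
       *              kr = task_ras_control(current_task(),
       *                              (vm_offset_t) ras_begin,
       *                              (vm_offset_t) ras_end,
       *                              TASK_RAS_CONTROL_INSTALL_ONE);
       *
       *      KERN_RESOURCE_SHORTAGE here means all
       *      TASK_FAST_TAS_NRAS slots are in use.
       */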
