FreeBSD/Linux Kernel Cross Reference
sys/kern/task.c


/*
 * Mach Operating System
 * Copyright (c) 1993-1988 Carnegie Mellon University
 * All Rights Reserved.
 *
 * Permission to use, copy, modify and distribute this software and its
 * documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
 * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie Mellon
 * the rights to redistribute these changes.
 */
/*
 * HISTORY
 * $Log:        task.c,v $
 * Revision 2.26  93/08/10  15:12:10  mrt
 *      Conditionalized atm hooks.
 *      [93/07/30            cmaeda]
 *      Included network interface hooks.
 *      [93/06/09  15:43:04  jcb]
 *
 * Revision 2.25  93/08/03  12:31:22  mrt
 *      [93/07/30  10:30:30  bershad]
 *
 *      Change way in which kernel tasks share the same kernel map to avoid
 *      fault on current thread reference during bootstrap.
 *      [93/07/30  10:24:25  bershad]
 *
 * Revision 2.24  93/05/15  18:47:52  mrt
 *      machparam.h -> machspl.h
 *
 * Revision 2.23  93/01/24  13:19:56  danner
 *      We must explicitly set "new_thread->pc_sample.buffer = 0;" so
 *      that we don't think we have a sampling buffer.
 *      [93/01/13            rvb]
 *
 * Revision 2.22  93/01/21  12:22:15  danner
 *      fast tas changes.
 *      [93/01/20            bershad]
 *
 * Revision 2.21  93/01/14  17:36:41  danner
 *      Added ANSI function prototypes.
 *      [92/12/29            dbg]
 *
 *      Fixed pset locking.  Pset lock must be taken before task or
 *      thread lock.
 *      [92/10/28            dbg]
 *      Proper spl typing. 64bit cleanup.
 *      [92/12/01            af]
 *
 *      Fixed pset locking.  Pset lock must be taken before task or
 *      thread lock.
 *      [92/10/28            dbg]
 *
 * Revision 2.20  92/08/03  17:39:45  jfriedl
 *      removed silly prototypes
 *      [92/08/02            jfriedl]
 *
 * Revision 2.19  92/07/20  13:32:53  cmaeda
 *      Added fast tas support:
 *              Added task_set_ras_pc.
 *              Inherit ras addresses when forking.
 *      [92/05/11  14:36:17  cmaeda]
 *
 * Revision 2.18  92/05/21  17:16:22  jfriedl
 *      tried prototypes.
 *      [92/05/20            jfriedl]
 *
 * Revision 2.17  92/04/01  10:54:11  rpd
 *      Initialize kernel_task to TASK_NULL to support ddb use before the
 *       bss is zeroed. Remove duplicate include of machine/machparam.h.
 *      Update copyright.
 *      [92/03/21            danner]
 *
 * Revision 2.16  91/12/11  08:42:30  jsb
 *      Fixed assert_wait/thread_wakeup rendezvous in task_assign.
 *      [91/11/26            rpd]
 *
 * Revision 2.15  91/11/15  14:11:59  rpd
 *      NORMA_TASK: initialize new child_node field in task upon creation.
 *      [91/09/23  09:20:23  jsb]
 *
 * Revision 2.14  91/06/25  10:29:32  rpd
 *      Updated convert_thread_to_port usage.
 *      [91/05/27            rpd]
 *
 * Revision 2.13  91/06/17  15:47:19  jsb
 *      Added norma_task hooks. See norma/kern_task.c for code.
 *      [91/06/17  10:53:30  jsb]
 *
 * Revision 2.12  91/05/14  16:48:05  mrt
 *      Correcting copyright
 *
 * Revision 2.11  91/03/16  14:52:24  rpd
 *      Can't use thread_dowait on the current thread now.
 *      [91/01/20            rpd]
 *
 * Revision 2.10  91/02/05  17:29:55  mrt
 *      Changed to new Mach copyright
 *      [91/02/01  16:19:00  mrt]
 *
 * Revision 2.9  91/01/08  15:17:44  rpd
 *      Added consider_task_collect, task_collect_scan.
 *      [91/01/03            rpd]
 *      Added continuation argument to thread_block.
 *      [90/12/08            rpd]
 *
 * Revision 2.8  90/10/25  14:45:26  rwd
 *      From OSF: Add thread_block() to loop that forcibly terminates
 *      threads in task_terminate() to fix livelock.  Also hold
 *      reference to thread when calling thread_force_terminate().
 *      [90/10/19            rpd]
 *
 * Revision 2.7  90/06/19  22:59:41  rpd
 *      Fixed task_info to return the correct base_priority.
 *      [90/06/18            rpd]
 *
 * Revision 2.6  90/06/02  14:56:40  rpd
 *      Moved trap versions of kernel calls to kern/ipc_mig.c.
 *      [90/05/31            rpd]
 *
 *      Removed references to kernel_vm_space, keep_wired_memory.
 *      [90/04/29            rpd]
 *      Converted to new IPC and scheduling technology.
 *      [90/03/26  22:22:19  rpd]
 *
 * Revision 2.5  90/05/29  18:36:51  rwd
 *      Added trap versions of task routines from rfr.
 *      [90/04/20            rwd]
 *      Add TASK_THREAD_TIMES_INFO flavor to task_info, to get times for
 *      all live threads.
 *      [90/04/03            dbg]
 *
 *      Use kmem_alloc_wired instead of vm_allocate in task_threads.
 *      [90/03/28            dbg]
 *
 * Revision 2.4  90/05/03  15:46:58  dbg
 *      Add TASK_THREAD_TIMES_INFO flavor to task_info, to get times for
 *      all live threads.
 *      [90/04/03            dbg]
 *
 *      Use kmem_alloc_wired instead of vm_allocate in task_threads.
 *      [90/03/28            dbg]
 *
 * Revision 2.3  90/01/11  11:44:17  dbg
 *      Removed task_halt (unused).  De-linted.
 *      [89/12/12            dbg]
 *
 * Revision 2.2  89/09/08  11:26:37  dbg
 *      Initialize keep_wired_memory in task_create.
 *      [89/07/17            dbg]
 *
 * 19-May-89  David Golub (dbg) at Carnegie-Mellon University
 *      Changed task_info to check for kernel_task, not first_task.
 *
 * 19-Oct-88  David Golub (dbg) at Carnegie-Mellon University
 *      Moved all syscall_emulation routine calls here.  Removed
 *      all non-MACH data structures.  Added routine to create
 *      new tasks running in the kernel.  Changed kernel_task
 *      creation to create it as a normal task.
 *
 * Revision 2.6  88/10/11  10:21:38  rpd
 *      Changed includes to the new style.
 *      Rewrote task_threads; the old version could return
 *      an inconsistent picture of the task.
 *      [88/10/05  10:28:13  rpd]
 *
 * Revision 2.5  88/08/06  18:25:53  rpd
 * Changed to use ipc_task_lock/ipc_task_unlock macros.
 * Eliminated use of kern/mach_ipc_defs.h.
 * Enable kernel_task for IPC access.  (See hack in task_by_unix_pid to
 * allow a user to get the kernel_task's port.)
 * Made kernel_task's ref_count > 0, so that task_reference/task_deallocate
 * works on it.  (Previously the task_deallocate would try to destroy it.)
 *
 * Revision 2.4  88/07/20  16:40:17  rpd
 * Removed task_ports (replaced by port_names).
 * Didn't leave xxx form, because it wasn't implemented.
 *
 * Revision 2.3  88/07/17  17:55:52  mwyoung
 * Split up uses of task.kernel_only field.  Condensed history.
 *
 * Revision 2.2.1.1  88/06/28  20:46:20  mwyoung
 * Split up uses of task.kernel_only field.  Condensed history.
 *
 * 21-Jun-88  Michael Young (mwyoung) at Carnegie-Mellon University.
 *      Split up uses of task.kernel_only field.
 *
 * 21-Jun-88  David Golub (dbg) at Carnegie-Mellon University
 *      Loop in task_terminate to terminate threads was incorrect; if
 *      another component of the system had a reference to the thread,
 *      the thread would remain in the thread_list for the task, and the
 *      loop would never terminate.  Rewrote it to run down the list
 *      like task_hold.  Thread_create terminates new thread if
 *      task_terminate occurs simultaneously.
 *
 * 27-Jan-88  Douglas Orr (dorr) at Carnegie-Mellon University
 *      Init user space library structures.
 *
 * 21-Jan-88  David Golub (dbg) at Carnegie-Mellon University
 *      Task_create no longer returns the data port.  Task_status and
 *      task_set_notify are obsolete (use task_{get,set}_special_port).
 *
 * 21-Jan-88  Karl Hauth (hauth) at Carnegie-Mellon University
 *      task_info(kernel_task, ...) now looks explicitly in the
 *      kernel_map, so it actually returns useful numbers.
 *
 * 17-Jan-88  David Golub (dbg) at Carnegie-Mellon University
 *      Added new task interfaces: task_suspend, task_resume,
 *      task_info, task_get_special_port, task_set_special_port.
 *      Old interfaces remain (temporarily) for binary
 *      compatibility, prefixed with 'xxx_'.
 *
 * 29-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Delinted.
 *
 * 23-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Added task_halt to halt all threads in a task.
 *
 * 15-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Check for null task pointer in task_reference and
 *      task_deallocate.
 *
 *  9-Dec-87  David Golub (dbg) at Carnegie-Mellon University
 *      Removed extra thread reference from task_terminate for new thread
 *      termination code.
 *
 *  8-Dec-87  David Black (dlb) at Carnegie-Mellon University
 *      Added call to ipc_task_disable.
 *
 *  3-Dec-87  David Black (dlb) at Carnegie-Mellon University
 *      Implemented better task termination based on task active field:
 *              1.  task_terminate sets active field to false.
 *              2.  All but the most simple task operations check the
 *                      active field and abort if it is false.
 *              3.  task_{hold, dowait, release} now return kern_return_t's.
 *              4.  task_dowait has a second parameter to ignore active
 *                      field if called from task_terminate.
 *      Task terminate acquires extra reference to current thread before
 *      terminating it (see thread_terminate()).
 *
 * 19-Nov-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Eliminated TT conditionals.
 *
 * 13-Oct-87  David Black (dlb) at Carnegie-Mellon University
 *      Use counts for suspend and resume primitives.
 *
 * 13-Oct-87  David Golub (dbg) at Carnegie-Mellon University
 *      Added port reference counting to task_set_notify.
 *
 *  5-Oct-87  David Golub (dbg) at Carnegie-Mellon University
 *      Completely replaced old scheduling state machine.
 *
 * 14-Sep-87  Michael Young (mwyoung) at Carnegie-Mellon University
 *      De-linted.
 *
 * 25-Aug-87  Robert Baron (rvb) at Carnegie-Mellon University
 *      Must initialize the kernel_task->lock (at least on the Sequent)
 *
 *  6-Aug-87  David Golub (dbg) at Carnegie-Mellon University
 *      Moved ipc_task_terminate to task_terminate, to shut down other
 *      threads that are manipulating the task via its task_port.
 *      Changed task_terminate to terminate all threads in the task.
 *
 * 29-Jul-87  David Golub (dbg) at Carnegie-Mellon University
 *      Fix task_suspend not to hold the task if the task has been
 *      resumed.  Change task_hold/task_wait so that if the current
 *      thread is in the task, it is not held until after all of the
 *      other threads in the task have stopped.  Make task_terminate be
 *      able to terminate the current task.
 *
 *  9-Jul-87  Karl Hauth (hauth) at Carnegie-Mellon University
 *      Modified task_statistics to reflect changes in the structure.
 *
 * 10-Jun-87  Karl Hauth (hauth) at Carnegie-Mellon University
 *      Added code to fill in the task_statistics structure with
 *      zeros and to make mig happier by returning something.
 *
 *  1-Jun-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Added task_statistics stub.
 *
 * 27-Apr-87  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Move ipc_task_init into task_create; it *should* return
 *      the data port (with a reference) at some point.
 *
 * 20-Apr-87  David Black (dlb) at Carnegie-Mellon University
 *      Fixed task_suspend to ignore multiple suspends.
 *      Fixed task_dowait to work if current thread is in the affected task.
 *
 * 24-Feb-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Rewrote task_suspend/task_hold and added task_wait for new user
 *      synchronization paradigm.
 *
 * 10-Feb-87  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Add task.kernel_only initialization.
 *
 * 31-Jan-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Merged in my changes for real thread implementation.
 *
 *  7-Nov-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Fixed up stubs for eventual task calls.
 *
 * 30-Sep-86  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Make floating u-area work, add all_task list management.
 *
 * 26-Sep-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Added argument to ipc_task_init to get parent.
 *
 *  1-Aug-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Added initialization for Mach IPC.
 *
 * 20-Jul-86  Michael Young (mwyoung) at Carnegie-Mellon University
 *      Added kernel_task.
 */
/*
 *      File:   kern/task.c
 *      Author: Avadis Tevanian, Jr., Michael Wayne Young, David Golub,
 *              David Black
 *
 *      Task management primitives implementation.
 */

#include <mach_host.h>
#include <norma_task.h>
#include <fast_tas.h>
#include <net_atm.h>

#include <mach/machine/vm_types.h>
#include <mach/vm_param.h>
#include <mach/task_info.h>
#include <mach/task_special_ports.h>
#include <ipc/ipc_space.h>
#include <kern/mach_param.h>
#include <kern/task.h>
#include <kern/thread.h>
#include <kern/zalloc.h>
#include <kern/kalloc.h>
#include <kern/processor.h>
#include <kern/sched_prim.h>    /* for thread_wakeup */
#include <kern/ipc_tt.h>
#include <vm/vm_kern.h>         /* for kernel_map, ipc_kernel_map */
#include <machine/machspl.h>    /* for splsched */

#if     NET_ATM
#include <chips/nw_mk.h>
#endif

#if     NORMA_TASK
#define task_create     task_create_local
#endif  /* NORMA_TASK */

task_t  kernel_task = TASK_NULL;
zone_t  task_zone;

extern void eml_init(void);
extern void eml_task_reference(task_t, task_t);
extern void eml_task_deallocate(task_t);

void task_init(void)
{
        task_zone = zinit(
                        sizeof(struct task),
                        TASK_MAX * sizeof(struct task),
                        TASK_CHUNK * sizeof(struct task),
                        FALSE, "tasks");

        eml_init();

        /*
         * Create the kernel task as the first task.
         * Task_create must assign to kernel_task as a side effect,
         * for other initialization. (:-()
         */
        (void) task_create(TASK_NULL, FALSE, &kernel_task);
}

/*
 * Create a task running in the kernel address space.  It may
 * have its own map of size map_size (if 0, it uses the kernel map),
 * and may have ipc privileges.
 */
task_t  kernel_task_create(
        task_t          parent_task,
        vm_size_t       map_size)
{
        task_t          new_task;
        vm_offset_t     min, max;

        /*
         * Create the task.
         */
        (void) task_create(parent_task, FALSE, &new_task);

        /*
         * Task_create creates the task with a user-space map.
         * Remove the map and replace it with the kernel map
         * or a submap of the kernel map.
         */
        vm_map_deallocate(new_task->map);
        if (map_size == 0)
            new_task->map = kernel_map;
        else
            new_task->map = kmem_suballoc(kernel_map, &min, &max,
                                          map_size, FALSE);

        return new_task;
}

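/*
 * Illustrative sketch (not part of the original file): how a kernel
 * subsystem might use kernel_task_create.  A map_size of 0 shares the
 * kernel map outright; a nonzero size gives the new task a private
 * submap of kernel_map.  The function name and the 1 MB figure are
 * hypothetical.
 */
#if 0   /* example only */
task_t example_start_kernel_service(void)
{
        /* Share the kernel map; suitable for most in-kernel tasks. */
        return kernel_task_create(kernel_task, 0);

        /* Alternatively, kernel_task_create(kernel_task, 1024*1024)
         * would carve out a private 1 MB submap of kernel_map. */
}
#endif
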
kern_return_t task_create(
        task_t          parent_task,
        boolean_t       inherit_memory,
        task_t          *child_task)            /* OUT */
{
        register task_t new_task;
        register processor_set_t        pset;
        int i;

        new_task = (task_t) zalloc(task_zone);
        if (new_task == TASK_NULL) {
                panic("task_create: no memory for task structure");
        }

        /* one ref for just being alive; one for our caller */
        new_task->ref_count = 2;

        if (child_task == &kernel_task) {
                new_task->map = kernel_map;
        } else if (inherit_memory) {
                new_task->map = vm_map_fork(parent_task->map);
        } else {
                new_task->map = vm_map_create(pmap_create(0),
                                        round_page(VM_MIN_ADDRESS),
                                        trunc_page(VM_MAX_ADDRESS), TRUE);
        }

        simple_lock_init(&new_task->lock);
        queue_init(&new_task->thread_list);
        new_task->suspend_count = 0;
        new_task->active = TRUE;
        new_task->user_stop_count = 0;
        new_task->thread_count = 0;

        eml_task_reference(new_task, parent_task);

        ipc_task_init(new_task, parent_task);

#if     NET_ATM
        new_task->nw_ep_owned = 0;
#endif

        new_task->total_user_time.seconds = 0;
        new_task->total_user_time.microseconds = 0;
        new_task->total_system_time.seconds = 0;
        new_task->total_system_time.microseconds = 0;

        if (parent_task != TASK_NULL) {
                task_lock(parent_task);
                pset = parent_task->processor_set;
                if (!pset->active)
                        pset = &default_pset;
                pset_reference(pset);
                new_task->priority = parent_task->priority;
                task_unlock(parent_task);
        }
        else {
                pset = &default_pset;
                pset_reference(pset);
                new_task->priority = BASEPRI_USER;
        }
        pset_lock(pset);
        pset_add_task(pset, new_task);
        pset_unlock(pset);

        new_task->may_assign = TRUE;
        new_task->assign_active = FALSE;

        new_task->pc_sample.buffer = 0;
#if     FAST_TAS
        for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
            if (inherit_memory) {
                new_task->fast_tas_base[i] = parent_task->fast_tas_base[i];
                new_task->fast_tas_end[i]  = parent_task->fast_tas_end[i];
            } else {
                new_task->fast_tas_base[i] = (vm_offset_t)0;
                new_task->fast_tas_end[i]  = (vm_offset_t)0;
            }
        }
#endif  /* FAST_TAS */

        ipc_task_enable(new_task);

#if     NORMA_TASK
        new_task->child_node = -1;
#endif  /* NORMA_TASK */

        *child_task = new_task;
        return KERN_SUCCESS;
}

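/*
 * Illustrative sketch (not part of the original file): the reference
 * contract of task_create.  A new task starts with ref_count == 2:
 * one reference for being alive (consumed by task_terminate) and one
 * owned by the caller, which must be dropped with task_deallocate.
 * The function name is hypothetical.
 */
#if 0   /* example only */
kern_return_t example_create_and_release(task_t parent)
{
        task_t child;
        kern_return_t kr;

        kr = task_create(parent, FALSE, &child);
        if (kr != KERN_SUCCESS)
                return kr;

        /* ... use the child task ... */

        task_deallocate(child);         /* drop the caller's reference */
        return KERN_SUCCESS;
}
#endif
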
/*
 *      task_deallocate:
 *
 *      Give up a reference to the specified task and destroy it if there
 *      are no other references left.  It is assumed that the current thread
 *      is never in this task.
 */
void task_deallocate(
        register task_t task)
{
        register int c;
        register processor_set_t pset;

        if (task == TASK_NULL)
                return;

        task_lock(task);
        c = --(task->ref_count);
        task_unlock(task);
        if (c != 0)
                return;

#if     NORMA_TASK
        if (task->map == VM_MAP_NULL) {
                /* norma placeholder task */
                zfree(task_zone, (vm_offset_t) task);
                return;
        }
#endif  /* NORMA_TASK */

        eml_task_deallocate(task);

        pset = task->processor_set;
        pset_lock(pset);
        pset_remove_task(pset, task);
        pset_unlock(pset);
        pset_deallocate(pset);
        vm_map_deallocate(task->map);
        is_release(task->itk_space);
        zfree(task_zone, (vm_offset_t) task);
}

void task_reference(
        register task_t task)
{
        if (task == TASK_NULL)
                return;

        task_lock(task);
        task->ref_count++;
        task_unlock(task);
}

/*
 *      task_terminate:
 *
 *      Terminate the specified task.  See comments on thread_terminate
 *      (kern/thread.c) about problems with terminating the "current task."
 */
kern_return_t task_terminate(
        register task_t task)
{
        register thread_t       thread, cur_thread;
        register queue_head_t   *list;
        register task_t         cur_task;
        spl_t                   s;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        list = &task->thread_list;
        cur_task = current_task();
        cur_thread = current_thread();

#if     NET_ATM
        /*
         *      Shut down networking.
         */
        mk_endpoint_collect(task);
#endif

        /*
         *      Deactivate task so that it can't be terminated again,
         *      and so lengthy operations in progress will abort.
         *
         *      If the current thread is in this task, remove it from
         *      the task's thread list to keep the thread-termination
         *      loop simple.
         */
        if (task == cur_task) {
                task_lock(task);
                if (!task->active) {
                        /*
                         *      Task is already being terminated.
                         */
                        task_unlock(task);
                        return KERN_FAILURE;
                }
                /*
                 *      Make sure current thread is not being terminated.
                 */
                s = splsched();
                thread_lock(cur_thread);
                if (!cur_thread->active) {
                        thread_unlock(cur_thread);
                        (void) splx(s);
                        task_unlock(task);
                        thread_terminate(cur_thread);
                        return KERN_FAILURE;
                }
                task->active = FALSE;
                queue_remove(list, cur_thread, thread_t, thread_list);
                thread_unlock(cur_thread);
                (void) splx(s);
                task_unlock(task);

                /*
                 *      Shut down this thread's ipc now because it must
                 *      be left alone to terminate the task.
                 */
                ipc_thread_disable(cur_thread);
                ipc_thread_terminate(cur_thread);
        }
        else {
                /*
                 *      Lock both current and victim task to check for
                 *      potential deadlock.
                 */
                if ((vm_offset_t)task < (vm_offset_t)cur_task) {
                        task_lock(task);
                        task_lock(cur_task);
                }
                else {
                        task_lock(cur_task);
                        task_lock(task);
                }
                /*
                 *      Check if current thread or task is being terminated.
                 */
                s = splsched();
                thread_lock(cur_thread);
                if ((!cur_task->active) || (!cur_thread->active)) {
                        /*
                         *      Current task or thread is being terminated.
                         */
                        thread_unlock(cur_thread);
                        (void) splx(s);
                        task_unlock(task);
                        task_unlock(cur_task);
                        thread_terminate(cur_thread);
                        return KERN_FAILURE;
                }
                thread_unlock(cur_thread);
                (void) splx(s);
                task_unlock(cur_task);

                if (!task->active) {
                        /*
                         *      Task is already being terminated.
                         */
                        task_unlock(task);
                        return KERN_FAILURE;
                }
                task->active = FALSE;
                task_unlock(task);
        }

        /*
         *      Prevent further execution of the task.  ipc_task_disable
         *      prevents further task operations via the task port.
         *      If this is the current task, the current thread will
         *      be left running.
         */
        ipc_task_disable(task);
        (void) task_hold(task);
        (void) task_dowait(task, TRUE);                 /* may block */

        /*
         *      Terminate each thread in the task.
         *
         *      The task_port is closed down, so no more thread_create
         *      operations can be done.  Thread_force_terminate closes the
         *      thread port for each thread; when that is done, the
         *      thread will eventually disappear.  Thus the loop will
         *      terminate.  Call thread_force_terminate instead of
         *      thread_terminate to avoid deadlock checks.  Need
         *      to call thread_block() inside loop because some other
         *      thread (e.g., the reaper) may have to run to get rid
         *      of all references to the thread; it won't vanish from
         *      the task's thread list until the last one is gone.
         */
        task_lock(task);
        while (!queue_empty(list)) {
                thread = (thread_t) queue_first(list);
                thread_reference(thread);
                task_unlock(task);
                thread_force_terminate(thread);
                thread_deallocate(thread);
                thread_block((void (*)()) 0);
                task_lock(task);
        }
        task_unlock(task);

        /*
         *      Shut down IPC.
         */
        ipc_task_terminate(task);

        /*
         *      Deallocate the task's reference to itself.
         */
        task_deallocate(task);

        /*
         *      If the current thread is in this task, it has not yet
         *      been terminated (since it was removed from the task's
         *      thread-list).  Put it back in the thread list (for
         *      completeness), and terminate it.  Since it holds the
         *      last reference to the task, terminating it will deallocate
         *      the task.
         */
        if (cur_thread->task == task) {
                task_lock(task);
                s = splsched();
                queue_enter(list, cur_thread, thread_t, thread_list);
                (void) splx(s);
                task_unlock(task);
                (void) thread_terminate(cur_thread);
        }

        return KERN_SUCCESS;
}

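/*
 * Illustrative sketch (not part of the original file): the
 * deadlock-avoidance idiom task_terminate uses above when it must
 * hold two task locks at once.  Locks are always taken in ascending
 * address order, so two threads locking the same pair can never each
 * hold one lock while waiting for the other.  task_assign applies the
 * same rule to processor-set locks.  Assumes a != b; the function
 * name is hypothetical.
 */
#if 0   /* example only */
void example_lock_task_pair(task_t a, task_t b)
{
        if ((vm_offset_t) a < (vm_offset_t) b) {
                task_lock(a);
                task_lock(b);
        } else {
                task_lock(b);
                task_lock(a);
        }
        /* ... operate on both tasks ... */
        task_unlock(b);
        task_unlock(a);
}
#endif
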
/*
 *      task_hold:
 *
 *      Suspend execution of the specified task.
 *      This is a recursive-style suspension of the task; a count of
 *      suspends is maintained.
 */
kern_return_t task_hold(
        register task_t task)
{
        register queue_head_t   *list;
        register thread_t       thread, cur_thread;

        cur_thread = current_thread();

        task_lock(task);
        if (!task->active) {
                task_unlock(task);
                return KERN_FAILURE;
        }

        task->suspend_count++;

        /*
         *      Iterate through all the threads and hold them.
         *      Do not hold the current thread if it is within the
         *      task.
         */
        list = &task->thread_list;
        queue_iterate(list, thread, thread_t, thread_list) {
                if (thread != cur_thread)
                        thread_hold(thread);
        }
        task_unlock(task);
        return KERN_SUCCESS;
}

/*
 *      task_dowait:
 *
 *      Wait until the task has really been suspended (all of the threads
 *      are stopped).  Skip the current thread if it is within the task.
 *
 *      If task is deactivated while waiting, return a failure code unless
 *      must_wait is true.
 */
kern_return_t task_dowait(
        register task_t task,
        boolean_t must_wait)
{
        register queue_head_t   *list;
        register thread_t       thread, cur_thread, prev_thread;
        register kern_return_t  ret = KERN_SUCCESS;

        /*
         *      Iterate through all the threads.
         *      While waiting for each thread, we gain a reference to it
         *      to prevent it from going away on us.  This guarantees
         *      that the "next" thread in the list will be a valid thread.
         *
         *      We depend on the fact that if threads are created while
         *      we are looping through the threads, they will be held
         *      automatically.  We don't care about threads that get
         *      deallocated along the way (the reference prevents it
         *      from happening to the thread we are working with).
         *
         *      If the current thread is in the affected task, it is skipped.
         *
         *      If the task is deactivated before we're done, and we don't
         *      have to wait for it (must_wait is FALSE), just bail out.
         */
        cur_thread = current_thread();

        list = &task->thread_list;
        prev_thread = THREAD_NULL;
        task_lock(task);
        queue_iterate(list, thread, thread_t, thread_list) {
                if (!(task->active) && !(must_wait)) {
                        ret = KERN_FAILURE;
                        break;
                }
                if (thread != cur_thread) {
                        thread_reference(thread);
                        task_unlock(task);
                        if (prev_thread != THREAD_NULL)
                                thread_deallocate(prev_thread);
                                                        /* may block */
                        (void) thread_dowait(thread, TRUE);  /* may block */
                        prev_thread = thread;
                        task_lock(task);
                }
        }
        task_unlock(task);
        if (prev_thread != THREAD_NULL)
                thread_deallocate(prev_thread);         /* may block */
        return ret;
}

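/*
 * Illustrative sketch (not part of the original file): the "previous
 * element" reference pattern used by task_dowait above (and by
 * task_assign below).  Holding a reference on the thread just visited
 * keeps its queue linkage valid across the blocking call, so the
 * iteration can resume safely once the task lock is retaken.  The
 * function and the visit callback are hypothetical.
 */
#if 0   /* example only */
void example_visit_threads(task_t task, void (*visit)(thread_t))
{
        register thread_t thread, prev_thread = THREAD_NULL;

        task_lock(task);
        queue_iterate(&task->thread_list, thread, thread_t, thread_list) {
                thread_reference(thread);       /* pin this element */
                task_unlock(task);
                if (prev_thread != THREAD_NULL)
                        thread_deallocate(prev_thread); /* may block */
                (*visit)(thread);                       /* may block */
                prev_thread = thread;
                task_lock(task);
        }
        task_unlock(task);
        if (prev_thread != THREAD_NULL)
                thread_deallocate(prev_thread);         /* may block */
}
#endif
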
kern_return_t task_release(
        register task_t task)
{
        register queue_head_t   *list;
        register thread_t       thread, next;

        task_lock(task);
        if (!task->active) {
                task_unlock(task);
                return KERN_FAILURE;
        }

        task->suspend_count--;

        /*
         *      Iterate through all the threads and release them.
         */
        list = &task->thread_list;
        thread = (thread_t) queue_first(list);
        while (!queue_end(list, (queue_entry_t) thread)) {
                next = (thread_t) queue_next(&thread->thread_list);
                thread_release(thread);
                thread = next;
        }
        task_unlock(task);
        return KERN_SUCCESS;
}

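/*
 * Illustrative sketch (not part of the original file): the stop
 * protocol built from the three primitives above, which is also the
 * sequence task_suspend performs.  task_hold marks every thread held,
 * task_dowait waits until they have actually stopped, and
 * task_release undoes one level of hold.  The function name is
 * hypothetical.
 */
#if 0   /* example only */
kern_return_t example_stop_inspect_resume(task_t task)
{
        if (task_hold(task) != KERN_SUCCESS)
                return KERN_FAILURE;
        if (task_dowait(task, FALSE) != KERN_SUCCESS)   /* may block */
                return KERN_FAILURE;

        /* ... examine or modify the stopped task here ... */

        return task_release(task);
}
#endif
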
kern_return_t task_threads(
        task_t          task,
        thread_array_t  *thread_list,
        natural_t       *count)
{
        unsigned int actual;    /* this many threads */
        thread_t thread;
        thread_t *threads;
        int i;

        vm_size_t size, size_needed;
        vm_offset_t addr;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        size = 0; addr = 0;

        for (;;) {
                task_lock(task);
                if (!task->active) {
                        task_unlock(task);
                        return KERN_FAILURE;
                }

                actual = task->thread_count;

                /* do we have the memory we need? */

                size_needed = actual * sizeof(mach_port_t);
                if (size_needed <= size)
                        break;

                /* unlock the task and allocate more memory */
                task_unlock(task);

                if (size != 0)
                        kfree(addr, size);

                assert(size_needed > 0);
                size = size_needed;

                addr = kalloc(size);
                if (addr == 0)
                        return KERN_RESOURCE_SHORTAGE;
        }

        /* OK, have memory and the task is locked & active */

        threads = (thread_t *) addr;

        for (i = 0, thread = (thread_t) queue_first(&task->thread_list);
             i < actual;
             i++, thread = (thread_t) queue_next(&thread->thread_list)) {
                /* take ref for convert_thread_to_port */
                thread_reference(thread);
                threads[i] = thread;
        }
        assert(queue_end(&task->thread_list, (queue_entry_t) thread));

        /* can unlock task now that we've got the thread refs */
        task_unlock(task);

        if (actual == 0) {
                /* no threads, so return null pointer and deallocate memory */

                *thread_list = 0;
                *count = 0;

                if (size != 0)
                        kfree(addr, size);
        } else {
                /* if we allocated too much, must copy */

                if (size_needed < size) {
                        vm_offset_t newaddr;

                        newaddr = kalloc(size_needed);
                        if (newaddr == 0) {
                                for (i = 0; i < actual; i++)
                                        thread_deallocate(threads[i]);
                                kfree(addr, size);
                                return KERN_RESOURCE_SHORTAGE;
                        }

                        bcopy((char *) addr, (char *) newaddr, size_needed);
                        kfree(addr, size);
                        threads = (thread_t *) newaddr;
                }

                *thread_list = (mach_port_t *) threads;
                *count = actual;

                /* do the conversion that Mig should handle */

                for (i = 0; i < actual; i++)
                        ((ipc_port_t *) threads)[i] =
                                convert_thread_to_port(threads[i]);
        }

        return KERN_SUCCESS;
}

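/*
 * Illustrative sketch (not part of the original file): the
 * allocate-then-recheck pattern at the heart of task_threads.  Memory
 * cannot be allocated while the task lock is held, so the buffer is
 * sized, the lock is dropped for kalloc, and the loop retries in case
 * the thread count grew in the meantime.  The function name is
 * hypothetical.
 */
#if 0   /* example only */
kern_return_t example_sized_snapshot(task_t task)
{
        vm_offset_t addr = 0;
        vm_size_t size = 0, size_needed;

        for (;;) {
                task_lock(task);
                size_needed = task->thread_count * sizeof(mach_port_t);
                if (size_needed <= size)
                        break;          /* buffer fits; stay locked */

                task_unlock(task);      /* never allocate under the lock */
                if (size != 0)
                        kfree(addr, size);
                size = size_needed;
                addr = kalloc(size);
                if (addr == 0)
                        return KERN_RESOURCE_SHORTAGE;
        }

        /* ... copy data out under the lock, as task_threads does ... */

        task_unlock(task);
        if (size != 0)
                kfree(addr, size);
        return KERN_SUCCESS;
}
#endif
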
kern_return_t task_suspend(
        register task_t task)
{
        register boolean_t      hold;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        hold = FALSE;
        task_lock(task);
        if ((task->user_stop_count)++ == 0)
                hold = TRUE;
        task_unlock(task);

        /*
         *      If the stop count was positive, the task is
         *      already stopped and we can exit.
         */
        if (!hold) {
                return KERN_SUCCESS;
        }

        /*
         *      Hold all of the threads in the task, and wait for
         *      them to stop.  If the current thread is within
         *      this task, hold it separately so that all of the
         *      other threads can stop first.
         */

        if (task_hold(task) != KERN_SUCCESS)
                return KERN_FAILURE;

        if (task_dowait(task, FALSE) != KERN_SUCCESS)
                return KERN_FAILURE;

        if (current_task() == task) {
                spl_t s;

                thread_hold(current_thread());
                /*
                 *      We want to call thread_block on our way out,
                 *      to stop running.
                 */
                s = splsched();
                ast_on(cpu_number(), AST_BLOCK);
                (void) splx(s);
        }

        return KERN_SUCCESS;
}

kern_return_t task_resume(
        register task_t task)
{
        register boolean_t      release;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        release = FALSE;
        task_lock(task);
        if (task->user_stop_count > 0) {
                if (--(task->user_stop_count) == 0)
                        release = TRUE;
        }
        else {
                task_unlock(task);
                return KERN_FAILURE;
        }
        task_unlock(task);

        /*
         *      Release the task if necessary.
         */
        if (release)
                return task_release(task);

        return KERN_SUCCESS;
}

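/*
 * Illustrative sketch (not part of the original file): task_suspend
 * and task_resume nest.  Only the transition of user_stop_count from
 * 0 to 1 stops the task, and only the matching 1-to-0 resume restarts
 * it, so independent clients can bracket their own critical regions
 * without coordinating.  The function name is hypothetical.
 */
#if 0   /* example only */
void example_nested_suspend(task_t task)
{
        (void) task_suspend(task);   /* count 0 -> 1: task stops       */
        (void) task_suspend(task);   /* count 1 -> 2: already stopped  */

        (void) task_resume(task);    /* count 2 -> 1: still stopped    */
        (void) task_resume(task);    /* count 1 -> 0: task runs again  */
}
#endif
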
kern_return_t task_info(
        task_t                  task,
        int                     flavor,
        task_info_t             task_info_out,  /* pointer to OUT array */
        natural_t               *task_info_count)       /* IN/OUT */
{
        vm_map_t                map;

        if (task == TASK_NULL)
                return KERN_INVALID_ARGUMENT;

        switch (flavor) {
            case TASK_BASIC_INFO:
            {
                register task_basic_info_t      basic_info;

                if (*task_info_count < TASK_BASIC_INFO_COUNT) {
                    return KERN_INVALID_ARGUMENT;
                }

                basic_info = (task_basic_info_t) task_info_out;

                map = (task == kernel_task) ? kernel_map : task->map;

                basic_info->virtual_size  = map->size;
                basic_info->resident_size = pmap_resident_count(map->pmap)
                                                   * PAGE_SIZE;

                task_lock(task);
                basic_info->base_priority = task->priority;
                basic_info->suspend_count = task->user_stop_count;
                basic_info->user_time.seconds
                                = task->total_user_time.seconds;
                basic_info->user_time.microseconds
                                = task->total_user_time.microseconds;
                basic_info->system_time.seconds
                                = task->total_system_time.seconds;
                basic_info->system_time.microseconds
                                = task->total_system_time.microseconds;
                task_unlock(task);

                *task_info_count = TASK_BASIC_INFO_COUNT;
                break;
            }

            case TASK_THREAD_TIMES_INFO:
            {
                register task_thread_times_info_t times_info;
                register thread_t       thread;

                if (*task_info_count < TASK_THREAD_TIMES_INFO_COUNT) {
                    return KERN_INVALID_ARGUMENT;
                }

                times_info = (task_thread_times_info_t) task_info_out;
                times_info->user_time.seconds = 0;
                times_info->user_time.microseconds = 0;
                times_info->system_time.seconds = 0;
                times_info->system_time.microseconds = 0;

                task_lock(task);
                queue_iterate(&task->thread_list, thread,
                              thread_t, thread_list)
                {
                    time_value_t user_time, system_time;
                    spl_t        s;

                    s = splsched();
                    thread_lock(thread);

                    thread_read_times(thread, &user_time, &system_time);

                    thread_unlock(thread);
                    splx(s);

                    time_value_add(&times_info->user_time, &user_time);
                    time_value_add(&times_info->system_time, &system_time);
                }
                task_unlock(task);

                *task_info_count = TASK_THREAD_TIMES_INFO_COUNT;
                break;
            }

            default:
                return KERN_INVALID_ARGUMENT;
        }

        return KERN_SUCCESS;
}

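/*
 * Illustrative sketch (not part of the original file): calling
 * task_info with the TASK_BASIC_INFO flavor.  task_info_count is
 * IN/OUT: the caller passes the capacity of its buffer and the
 * routine writes back how much it filled in.  The function name is
 * hypothetical; task_basic_info_data_t and TASK_BASIC_INFO_COUNT
 * come from <mach/task_info.h>.
 */
#if 0   /* example only */
kern_return_t example_report_task_times(task_t task)
{
        task_basic_info_data_t info;
        natural_t count = TASK_BASIC_INFO_COUNT;
        kern_return_t kr;

        kr = task_info(task, TASK_BASIC_INFO, (task_info_t) &info, &count);
        if (kr != KERN_SUCCESS)
                return kr;

        printf("user %d.%06d sys %d.%06d\n",
               info.user_time.seconds, info.user_time.microseconds,
               info.system_time.seconds, info.system_time.microseconds);
        return KERN_SUCCESS;
}
#endif
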
#if     MACH_HOST
/*
 *      task_assign:
 *
 *      Change the assigned processor set for the task.
 */
kern_return_t
task_assign(
        task_t          task,
        processor_set_t new_pset,
        boolean_t       assign_threads)
{
        kern_return_t           ret = KERN_SUCCESS;
        register thread_t       thread, prev_thread;
        register queue_head_t   *list;
        register processor_set_t        pset;

        if (task == TASK_NULL || new_pset == PROCESSOR_SET_NULL) {
                return KERN_INVALID_ARGUMENT;
        }

        /*
         *      Freeze the task's assignment.  Prelude to assigning
         *      the task.  Only one freeze may be held per task.
         */

        task_lock(task);
        while (task->may_assign == FALSE) {
                task->assign_active = TRUE;
                assert_wait((event_t)&task->assign_active, TRUE);
                task_unlock(task);
                thread_block((void (*)()) 0);
                task_lock(task);
        }

        /*
         *      Avoid work if the task is already in this processor set.
         */
        if (task->processor_set == new_pset) {
                /*
                 *      No need for task->assign_active wakeup:
                 *      task->may_assign is still TRUE.
                 */
                task_unlock(task);
                return KERN_SUCCESS;
        }

        task->may_assign = FALSE;
        task_unlock(task);

        /*
         *      Safe to get the task's pset: it cannot change while
         *      the task is frozen.
         */
        pset = task->processor_set;

        /*
         *      Lock both psets now.  Use ordering to avoid deadlock.
         */
    Restart:
        if ((vm_offset_t) pset < (vm_offset_t) new_pset) {
            pset_lock(pset);
            pset_lock(new_pset);
        }
        else {
            pset_lock(new_pset);
            pset_lock(pset);
        }

        /*
         *      Check if new_pset is ok to assign to.  If not,
         *      reassign to default_pset.
         */
        if (!new_pset->active) {
            pset_unlock(pset);
            pset_unlock(new_pset);
            new_pset = &default_pset;
            goto Restart;
        }

        pset_reference(new_pset);

        /*
         *      Now grab the task lock and move the task.
         */

        task_lock(task);
        pset_remove_task(pset, task);
        pset_add_task(new_pset, task);

        pset_unlock(pset);
        pset_unlock(new_pset);

        if (assign_threads == FALSE) {
                /*
                 *      We leave existing threads at their
                 *      old assignments.  Unfreeze the task's
                 *      assignment.
                 */
                task->may_assign = TRUE;
                if (task->assign_active) {
                        task->assign_active = FALSE;
                        thread_wakeup((event_t) &task->assign_active);
                }
                task_unlock(task);
                pset_deallocate(pset);
                return KERN_SUCCESS;
        }

        /*
         *      If the current thread is in the task, freeze its assignment.
         */
        if (current_thread()->task == task) {
                task_unlock(task);
                thread_freeze(current_thread());
                task_lock(task);
        }

        /*
         *      Iterate down the thread list reassigning all the threads.
         *      New threads pick up the task's new processor set
         *      automatically.  Do the current thread last because the
         *      new pset may be empty.
         */
        list = &task->thread_list;
        prev_thread = THREAD_NULL;
        queue_iterate(list, thread, thread_t, thread_list) {
                if (!(task->active)) {
                        ret = KERN_FAILURE;
                        break;
                }
                if (thread != current_thread()) {
                        thread_reference(thread);
                        task_unlock(task);
                        if (prev_thread != THREAD_NULL)
                            thread_deallocate(prev_thread); /* may block */
                        thread_assign(thread, new_pset);    /* may block */
                        prev_thread = thread;
                        task_lock(task);
                }
        }

        /*
         *      Done; wake up anyone waiting for us.
         */
        task->may_assign = TRUE;
        if (task->assign_active) {
                task->assign_active = FALSE;
                thread_wakeup((event_t)&task->assign_active);
        }
        task_unlock(task);
        if (prev_thread != THREAD_NULL)
                thread_deallocate(prev_thread);         /* may block */

        /*
         *      Finish assignment of the current thread.
         */
        if (current_thread()->task == task)
                thread_doassign(current_thread(), new_pset, TRUE);

        pset_deallocate(pset);

        return ret;
}
#else   /* MACH_HOST */
/*
 *      task_assign:
 *
 *      Change the assigned processor set for the task.
 */
kern_return_t
task_assign(
        task_t          task,
        processor_set_t new_pset,
        boolean_t       assign_threads)
{
        return KERN_FAILURE;
}
#endif  /* MACH_HOST */

/*
 *      task_assign_default:
 *
 *      Version of task_assign to assign to default processor set.
 */
kern_return_t
task_assign_default(
        task_t          task,
        boolean_t       assign_threads)
{
        return task_assign(task, &default_pset, assign_threads);
}

/*
 *      task_get_assignment:
 *
 *      Return name of processor set that task is assigned to.
 */
kern_return_t task_get_assignment(
        task_t          task,
        processor_set_t *pset)
{
        if (!task->active)
                return KERN_FAILURE;

        *pset = task->processor_set;
        pset_reference(*pset);
        return KERN_SUCCESS;
}

/*
 *      task_priority:
 *
 *      Set priority of task; used only for newly created threads.
 *      Optionally change priorities of threads.
 */
kern_return_t
task_priority(
        task_t          task,
        int             priority,
        boolean_t       change_threads)
{
        kern_return_t   ret = KERN_SUCCESS;

        if (task == TASK_NULL || invalid_pri(priority))
                return KERN_INVALID_ARGUMENT;

        task_lock(task);
        task->priority = priority;

        if (change_threads) {
                register thread_t       thread;
                register queue_head_t   *list;

                list = &task->thread_list;
                queue_iterate(list, thread, thread_t, thread_list) {
                        if (thread_priority(thread, priority, FALSE)
                                != KERN_SUCCESS)
                                        ret = KERN_FAILURE;
                }
        }

        task_unlock(task);
        return ret;
}

/*
 *      task_collect_scan:
 *
 *      Attempt to free resources owned by tasks.
 */
void task_collect_scan(void)
{
        register task_t         task, prev_task;
        processor_set_t         pset, prev_pset;

        prev_task = TASK_NULL;
        prev_pset = PROCESSOR_SET_NULL;

        simple_lock(&all_psets_lock);
        queue_iterate(&all_psets, pset, processor_set_t, all_psets) {
                pset_lock(pset);
                queue_iterate(&pset->tasks, task, task_t, pset_tasks) {
                        task_reference(task);
                        pset_reference(pset);
                        pset_unlock(pset);
                        simple_unlock(&all_psets_lock);

                        pmap_collect(task->map->pmap);

                        if (prev_task != TASK_NULL)
                                task_deallocate(prev_task);
                        prev_task = task;

                        if (prev_pset != PROCESSOR_SET_NULL)
                                pset_deallocate(prev_pset);
                        prev_pset = pset;

                        simple_lock(&all_psets_lock);
                        pset_lock(pset);
                }
                pset_unlock(pset);
        }
        simple_unlock(&all_psets_lock);

        if (prev_task != TASK_NULL)
                task_deallocate(prev_task);
        if (prev_pset != PROCESSOR_SET_NULL)
                pset_deallocate(prev_pset);
}

boolean_t task_collect_allowed = TRUE;
unsigned task_collect_last_tick = 0;
unsigned task_collect_max_rate = 0;             /* in ticks */

/*
 *      consider_task_collect:
 *
 *      Called by the pageout daemon when the system needs more free pages.
 */
void consider_task_collect(void)
{
        /*
         *      By default, don't attempt task collection more frequently
         *      than once a second.
         */

        if (task_collect_max_rate == 0)
                task_collect_max_rate = hz;

        if (task_collect_allowed &&
            (sched_tick > (task_collect_last_tick + task_collect_max_rate))) {
                task_collect_last_tick = sched_tick;
                task_collect_scan();
        }
}

kern_return_t
task_ras_control(
        task_t task,
        vm_offset_t pc,
        vm_offset_t endpc,
        int flavor)
{
    kern_return_t ret = KERN_FAILURE;

#if     FAST_TAS
    int i;

    ret = KERN_SUCCESS;
    task_lock(task);
    switch (flavor) {
    case TASK_RAS_CONTROL_PURGE_ALL:  /* remove all RAS */
        for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
            task->fast_tas_base[i] = task->fast_tas_end[i] = 0;
        }
        break;
    case TASK_RAS_CONTROL_PURGE_ONE:  /* remove this RAS, collapse remaining */
        for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
            if ((task->fast_tas_base[i] == pc)
                && (task->fast_tas_end[i] == endpc)) {
                while (i < TASK_FAST_TAS_NRAS-1) {
                    task->fast_tas_base[i] = task->fast_tas_base[i+1];
                    task->fast_tas_end[i] = task->fast_tas_end[i+1];
                    i++;
                }
                task->fast_tas_base[TASK_FAST_TAS_NRAS-1] = 0;
                task->fast_tas_end[TASK_FAST_TAS_NRAS-1] = 0;
                break;
            }
        }
        if (i == TASK_FAST_TAS_NRAS) {
            ret = KERN_INVALID_ADDRESS;
        }
        break;
    case TASK_RAS_CONTROL_PURGE_ALL_AND_INSTALL_ONE:
        /* remove all RAS and install this RAS */
        for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
            task->fast_tas_base[i] = task->fast_tas_end[i] = 0;
        }
        /* FALL THROUGH */
    case TASK_RAS_CONTROL_INSTALL_ONE: /* install this RAS */
        for (i = 0; i < TASK_FAST_TAS_NRAS; i++) {
            if ((task->fast_tas_base[i] == pc)
                && (task->fast_tas_end[i] == endpc)) {
                /* already installed */
                break;
            }
            if ((task->fast_tas_base[i] == 0) && (task->fast_tas_end[i] == 0)) {
                task->fast_tas_base[i] = pc;
                task->fast_tas_end[i] = endpc;
                break;
            }
        }
        if (i == TASK_FAST_TAS_NRAS) {
            ret = KERN_RESOURCE_SHORTAGE;
        }
        break;
    default:
        ret = KERN_INVALID_VALUE;
        break;
    }
    task_unlock(task);
#endif  /* FAST_TAS */
    return ret;
}
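
/*
 * Illustrative sketch (not part of the original file): registering a
 * restartable atomic sequence (RAS) with task_ras_control, as a user
 * library's lock implementation might arrange on machines without a
 * real test-and-set instruction.  If a thread is preempted inside
 * [_tas_start, _tas_end), it is resumed at the start of the sequence
 * rather than mid-update.  The bracketing symbols and the function
 * name are hypothetical.
 */
#if 0   /* example only */
extern char _tas_start[], _tas_end[];

kern_return_t example_install_tas_region(task_t task)
{
        return task_ras_control(task,
                                (vm_offset_t) _tas_start,
                                (vm_offset_t) _tas_end,
                                TASK_RAS_CONTROL_INSTALL_ONE);
}
#endif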
