
FreeBSD/Linux Kernel Cross Reference
sys/kern/processor.c


    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1988-1993 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        processor.c,v $
   29  * Revision 2.13  93/11/17  17:18:10  dbg
   30  *      pset_add_processor needs to move the correct 'last' runq value
   31  *      to the processor runq always.  Add a special check for threads
   32  *      on the bound runq for the processor.
   33  *      [93/08/20            dbg]
   34  * 
   35  *      Use runq pointers in processor if NCPUS > 1 - processor_shutdown
   36  *      does not depend on MACH_HOST.
   37  *      [93/07/21            dbg]
   38  * 
   39  *      Old processor_set_policy_{enable,disable} operate as if kernel
   40  *      were compiled with MACH_FIXPRI == 0.
   41  *      [93/07/20            dbg]
   42  * 
   43  *      Change order of locking to thread_lock -> pset_lock ->
   44  *      thread_ref_lock.  Pset now holds explicit references to tasks
   45  *      and threads.
   46  *      [93/05/26            dbg]
   47  * 
   48  *      Replaced processor_set_policy_enable,
   49  *      processor_set_policy_disable, processor_set_max_priority by
   50  *      per-policy routines.
   51  *      [93/05/10            dbg]
   52  * 
   53  *      Changed to a more flexible run queue structure, that also
   54  *      doubles as a list of scheduling policies enabled on the
   55  *      processor set.  Moved processor_set_policy_enable and
   56  *      processor_set_policy_disable to kern/sched_policy.c.
   57  *      [93/04/09            dbg]
   58  * 
   59  *      Always enable fixed-priority threads.
   60  *      [93/03/27            dbg]
   61  * 
   62  *      Pick up default scheduler policy in pset_init.
   63  *      [93/01/28            dbg]
   64  * 
   65  * Revision 2.12  93/01/14  17:35:48  danner
   66  *      Fixed indirection in cpu_control call.
   67  *      [93/01/12            danner]
   68  *      Fixed type of count argument to processor_control.
   69  *      [93/01/12            danner]
   70  *      Explicitly include <kern/task.h>
   71  *      [93/01/12            dbg]
   72  *      64bit cleanup.
   73  *      [92/12/01            af]
   74  * 
   75  * 28-Oct-92  David Golub (dbg) at Carnegie-Mellon University
   76  *      Added separate pset_ref_lock, only governing reference count, to
   77  *      fix lock ordering to avoid deadlocks.  Converted function declarations
   78  *      to use prototypes.
   79  *
   80  * Revision 2.11  92/08/03  17:38:44  jfriedl
   81  *      removed silly prototypes
   82  *      [92/08/02            jfriedl]
   83  * 
   84  * Revision 2.10  92/05/21  17:15:08  jfriedl
   85  *      Added void to functions that still needed it.
   86  *      [92/05/16            jfriedl]
   87  * 
   88  * Revision 2.9  91/05/14  16:45:27  mrt
   89  *      Correcting copyright
   90  * 
   91  * Revision 2.8  91/05/08  12:48:02  dbg
   92  *      Changed pset_sys_init to give each processor a control port,
   93  *      even when it is not running.  Without a control port, there is
   94  *      no way to start an inactive processor.
   95  *      [91/04/26  14:42:59  dbg]
   96  * 
   97  * Revision 2.7  91/02/05  17:28:27  mrt
   98  *      Changed to new Mach copyright
   99  *      [91/02/01  16:15:59  mrt]
  100  * 
  101  * Revision 2.6  90/09/09  14:32:26  rpd
  102  *      Removed pset_is_garbage, do_pset_scan.
  103  *      Changed processor_set_create to return the actual processor set.
  104  *      [90/08/30            rpd]
  105  * 
  106  * Revision 2.5  90/08/27  22:03:17  dbg
  107  *      Fixed processor_set_info to return the correct count.
  108  *      [90/08/23            rpd]
  109  * 
  110  * Revision 2.4  90/08/27  11:52:07  dbg
  111  *      Fix type mismatch in processor_set_create.
  112  *      [90/07/18            dbg]
  113  * 
  114  * Revision 2.3  90/06/19  22:59:17  rpd
  115  *      Fixed bug in processor_set_things.
  116  *      [90/06/14            rpd]
  117  * 
  118  * Revision 2.2  90/06/02  14:55:32  rpd
  119  *      Created for new host/processor technology.
  120  *      [90/03/26  23:49:46  rpd]
  121  * 
  122  *      Move includes
  123  *      [89/08/02            dlb]
  124  *      Eliminate interrupt protection for pset locks.
  125  *      Add init for quantum_adj_lock.
  126  *      [89/06/14            dlb]
  127  * 
  128  *      Add processor_set_{tasks,threads}.  Use common internals.
  129  *      Maintain all_psets_count for host_processor_sets.
  130  *      [89/06/09            dlb]
  131  * 
  132  *      Add processor_set_policy, sched flavor of processor_set_info.
  133  *      [89/05/18            dlb]
  134  * 
  135  *      Add processor_set_max_priority.
  136  *      [89/05/12            dlb]
  137  * 
  138  *      Add wait argument to processor_assign call.
  139  *      [89/05/10            dlb]
  140  *      Move processor reassignment to processor_set_destroy from
  141  *      pset_deallocate.
  142  *      [89/03/13            dlb]
  143  * 
  144  *      Fix interrupt protection in processor_set_{create,destroy}.
  145  *      [89/03/09            dlb]
  146  *      Remove reference count decrements from pset_remove_{task,thread}.
  147  *      Callers must explicitly call pset_deallocate().
  148  *      [89/02/17            dlb]
  149  *      Add load factor/average inits.  Make info available to users.
  150  *      [89/02/09            dlb]
  151  * 
  152  *      24-Sep-1988  David Black (dlb) at Carnegie-Mellon University
  153  * 
  154  * Revision 2.5.2.2  90/02/22  23:20:24  rpd
  155  *      Changed to use kalloc/kfree instead of ipc_kernel_map.
  156  *      Fixed calls to convert_task_to_port/convert_thread_to_port.
  157  * 
  158  * Revision 2.5.2.1  90/02/20  22:21:47  rpd
  159  *      Revised for new IPC.
  160  *      [90/02/19  23:36:11  rpd]
  161  * 
  162  * Revision 2.5  89/12/22  15:52:54  rpd
  163  *      Changes to implement pset garbage collection:
  164  *      1.  Add pset_is_garbage to detect abandoned processor sets.
  165  *              Add do_pset_scan to look for them.
  166  *      2.  Pass back the actual ports from processor_set_create, so they
  167  *              will always have extra references; this way a newly created
  168  *              processor set never looks like garbage.
  169  * 
  170  *      Also optimized processor_set_destroy.
  171  *      [89/12/15            dlb]
  172  *      Put all fixed priority support code under MACH_FIXPRI switch.
  173  *      Add thread_change_psets for use by thread_assign.
  174  *      [89/11/10            dlb]
  175  *      Check for null processor set in pset_deallocate.
  176  *      [89/11/06            dlb]
  177  * 
  178  * Revision 2.4  89/11/20  11:23:45  mja
  179  *      Put all fixed priority support code under MACH_FIXPRI switch.
  180  *      Add thread_change_psets for use by thread_assign.
  181  *      [89/11/10            dlb]
  182  *      Check for null processor set in pset_deallocate.
  183  *      [89/11/06            dlb]
  184  * 
  185  * Revision 2.3  89/10/15  02:05:04  rpd
  186  *      Minor cleanups.
  187  * 
  188  * Revision 2.2  89/10/11  14:20:11  dlb
  189  *      Add processor_set_{tasks,threads}.  Use common internals.
  190  *      Maintain all_psets_count for host_processor_sets.
  191  *      Add processor_set_policy, sched flavor of processor_set_info.
  192  *      Add processor_set_max_priority.
  193  *      Remove reference count decrements from pset_remove_{task,thread}.
  194  *      Callers must explicitly call pset_deallocate().
  195  *      Add load factor/average inits.  Make info available to users.
  196  * 
  197  *      Created
  198  */
  199 
  200 /*
  201  *      processor.c: processor and processor_set manipulation routines.
  202  */
  203 
  204 #include <cpus.h>
  205 #include <mach_host.h>
  206 
  207 #include <mach/policy.h>
  208 #include <mach/processor_info.h>
  209 #include <mach/vm_param.h>
  210 #include <kern/cpu_number.h>
  211 #include <kern/host.h>
  212 #include <kern/lock.h>
  213 #include <kern/machine.h>
  214 #include <kern/memory.h>
  215 #include <kern/processor.h>
  216 #include <kern/task.h>
  217 #include <kern/thread.h>
  218 #include <kern/ipc_host.h>
  219 #include <kern/ipc_tt.h>
  220 #include <kern/quantum.h>
  221 #include <ipc/ipc_port.h>
  222 #include <machine/machspl.h>
  223 
  224 #if     MACH_HOST
  225 #include <kern/zalloc.h>
  226 #include <sched_policy/bp.h>
  227 zone_t  pset_zone;
  228 #endif  /* MACH_HOST */
  229 
  230 /*
  231  *      Exported variables.
  232  */
  233 struct processor_set default_pset;
  234 struct processor processor_array[NCPUS];
  235 
  236 queue_head_t            all_psets;
  237 int                     all_psets_count;
  238 decl_simple_lock_data(, all_psets_lock);
  239 
  240 processor_t     master_processor;
  241 processor_t     processor_ptr[NCPUS];
  242 
  243 /*
  244  * Forward declarations.
  245  */
  246 void pset_init(processor_set_t);
  247 void processor_init(processor_t, int);
  248 
  249 /*
  250  *      Bootstrap the processor/pset system so the scheduler can run.
  251  */
  252 void pset_sys_bootstrap(void)
  253 {
  254         register int    i;
  255 
  256         pset_init(&default_pset);
  257         default_pset.empty = FALSE;
  258         for (i = 0; i < NCPUS; i++) {
  259                 /*
  260                  *      Initialize processor data structures.
  261                  *      Note that cpu_to_processor(i) is processor_ptr[i].
  262                  */
  263                 processor_ptr[i] = &processor_array[i];
  264                 processor_init(processor_ptr[i], i);
  265         }
  266         master_processor = cpu_to_processor(master_cpu);
  267         queue_init(&all_psets);
  268         simple_lock_init(&all_psets_lock);
  269         queue_enter(&all_psets, &default_pset, processor_set_t, all_psets);
  270         all_psets_count = 1;
  271         default_pset.active = TRUE;
  272         default_pset.empty = FALSE;
  273 
  274         /*
  275          *      Note: the default_pset has a max_priority of BASEPRI_USER.
  276          *      Internal kernel threads override this in kernel_thread.
  277          */
  278 
  279 }
  280 
  281 #if     MACH_HOST
  282 /*
  283  *      Rest of pset system initializations.
  284  */
  285 void pset_sys_init(void)
  286 {
  287         register int    i;
  288         register processor_t    processor;
  289 
  290         /*
  291          * Allocate the zone for processor sets.
  292          */
  293         pset_zone = zinit(sizeof(struct processor_set), 128*PAGE_SIZE,
  294                 PAGE_SIZE, FALSE, "processor sets");
  295 
  296         /*
  297          * Give each processor a control port.
  298          * The master processor already has one.
  299          */
  300         for (i = 0; i < NCPUS; i++) {
  301             processor = cpu_to_processor(i);
  302             if (processor != master_processor &&
  303                 machine_slot[i].is_cpu)
  304             {
  305                 ipc_processor_init(processor);
  306             }
  307         }
  308 }
  309 #endif  /* MACH_HOST */
  310 
  311 /*
  312  *      Initialize the given processor_set structure.
  313  */
  314 
  315 void pset_init(
  316         register processor_set_t        pset)
  317 {
  318 #if     NCPUS > 1
  319         int     i;
  320 #endif
  321 
  322         run_queue_head_init(&pset->runq);
  323 #if     NCPUS > 1
  324         queue_init(&pset->idle_queue);
  325 #endif
  326         pset->idle_count = 0;
  327         simple_lock_init(&pset->idle_lock);
  328         queue_init(&pset->processors);
  329         pset->processor_count = 0;
  330         pset->empty = TRUE;
  331         queue_init(&pset->tasks);
  332         pset->task_count = 0;
  333         queue_init(&pset->threads);
  334         pset->thread_count = 0;
  335         pset->ref_count = 1;
  336         simple_lock_init(&pset->ref_lock);
  337         queue_init(&pset->all_psets);
  338         pset->active = FALSE;
  339         simple_lock_init(&pset->lock);
  340         pset->pset_self = IP_NULL;
  341         pset->pset_name_self = IP_NULL;
  342         pset->set_quantum = min_quantum;
  343 #if     NCPUS > 1
  344         pset->quantum_adj_index = 0;
  345         simple_lock_init(&pset->quantum_adj_lock);
  346 
  347         for (i = 0; i <= NCPUS; i++) {
  348             pset->machine_quantum[i] = min_quantum;
  349         }
  350 #endif  /* NCPUS > 1 */
  351         pset->mach_factor = 0;
  352         pset->load_average = 0;
  353         pset->sched_load = SCHED_SCALE;         /* i.e. 1 */
  354 
  355         processor_set_default_policies(pset);
  356 }
  357 
  358 /*
  359  *      Initialize the given processor structure for the processor in
  360  *      the slot specified by slot_num.
  361  */
  362 
  363 void processor_init(
  364         register processor_t pr,
  365         int             slot_num)
  366 {
  367 #if     MACH_IO_BINDING
  368         run_queue_head_init(&pr->runq);
  369 #else
  370 #if     NCPUS > 1
  371         pr->runq.runqs[BOUND_POLICY_INDEX] = bp_run_queue_alloc(pr);
  372         pr->runq.last = -1;     /* none */
  373 #endif  /* NCPUS > 1 */
  374 #endif  /* MACH_IO_BINDING */
  375 
  376         queue_init(&pr->processor_queue);
  377         pr->state = PROCESSOR_OFF_LINE;
  378         pr->next_thread = THREAD_NULL;
  379         pr->idle_thread = THREAD_NULL;
  380         pr->quantum = 0;
  381         pr->first_quantum = FALSE;
  382         pr->last_quantum = 0;
  383         pr->processor_set = PROCESSOR_SET_NULL;
  384         pr->processor_set_next = PROCESSOR_SET_NULL;
  385         queue_init(&pr->processors);
  386         simple_lock_init(&pr->lock);
  387         pr->processor_self = IP_NULL;
  388         pr->slot_num = slot_num;
  389 }
  390 
  391 /*
  392  *      pset_remove_processor() removes a processor from a processor_set.
  393  *      It can only be called on the current processor.  Caller must
  394  *      hold lock on current processor and processor set.
  395  */
  396 
  397 void pset_remove_processor(
  398         processor_set_t pset,
  399         processor_t     processor)
  400 {
  401         if (pset != processor->processor_set)
  402                 panic("pset_remove_processor: wrong pset");
  403 
  404         queue_remove(&pset->processors, processor, processor_t, processors);
  405         processor->processor_set = PROCESSOR_SET_NULL;
  406         pset->processor_count--;
  407         quantum_set(pset);
  408 }
  409 
  410 /*
   411  *      pset_add_processor() adds a processor to a processor_set.
   412  *      It can only be called on the current processor.  Caller must
   413  *      hold lock on current processor and on pset.  No reference counting on
   414  *      processors.  Processor reference to pset is implicit.
  415  */
  416 
  417 void pset_add_processor(
  418         processor_set_t pset,
  419         processor_t     processor)
  420 {
  421         queue_enter(&pset->processors, processor, processor_t, processors);
  422         processor->processor_set = pset;
  423         pset->processor_count++;
  424         quantum_set(pset);
  425 
   426 #if     NCPUS > 1 && !MACH_IO_BINDING
   427         /*
   428          *      Copy run queue pointers and last runq hint
   429          *      into processor structure.  Don't disturb bound
   430          *      thread runq pointer.
   431          */
  432         {
  433             int i;
  434 
  435             for (i = 0; i < BOUND_POLICY_INDEX; i++)
  436                 processor->runq.runqs[i] = pset->runq.runqs[i];
  437             if (processor->runq.last < BOUND_POLICY_INDEX)
  438                 processor->runq.last = pset->runq.last;
  439         }
  440 #endif
  441 }
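The pointer copy above can be sketched in isolation. This is a minimal, hypothetical model (the structs are illustrative stand-ins, not the real Mach types): the processor shares the pset's run queues for every policy except its private bound-thread slot, and only takes the pset's `last` hint when its own hint does not already point at the bound queue (which would mean bound work is pending).

```c
#include <assert.h>

#define NPOLICIES           4
#define BOUND_POLICY_INDEX  (NPOLICIES - 1)   /* last slot is per-processor */

struct run_queue { int dummy; };

struct run_queue_head {
    struct run_queue *runqs[NPOLICIES];
    int               last;   /* highest non-empty queue index, -1 if none */
};

void copy_pset_runqs(struct run_queue_head *proc,
                     const struct run_queue_head *pset)
{
    int i;

    /* Share every policy queue except the processor's private bound queue. */
    for (i = 0; i < BOUND_POLICY_INDEX; i++)
        proc->runqs[i] = pset->runqs[i];

    /* A hint at the bound queue means bound work is pending: keep it. */
    if (proc->last < BOUND_POLICY_INDEX)
        proc->last = pset->last;
}
```

The guard on `last` is the special check the revision log entry of 93/11/17 describes: without it, a processor with threads already on its bound run queue would lose that hint when joining a pset.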
  442 
  443 /*
  444  *      pset_remove_task() removes a task from a processor_set.
  445  *      Caller must hold locks on pset and task.  Pset reference count
  446  *      is not decremented; caller must explicitly pset_deallocate.
   447  *      Pset's reference to task is removed.
  448  */
  449 
  450 void pset_remove_task(
  451         processor_set_t pset,
  452         task_t          task)
  453 {
  454         if (pset != task->processor_set)
  455                 return;
  456 
  457         queue_remove(&pset->tasks, task, task_t, pset_tasks);
  458         task->processor_set = PROCESSOR_SET_NULL;
  459         pset->task_count--;
  460         task_deallocate(task);
  461 }
  462 
  463 /*
  464  *      pset_add_task() adds a task to a processor_set.
  465  *      Caller must hold locks on pset and task.  Adds
  466  *      a new reference to the task.
  467  */
  468 
  469 void pset_add_task(
  470         processor_set_t pset,
  471         task_t          task)
  472 {
  473         queue_enter(&pset->tasks, task, task_t, pset_tasks);
  474         task->processor_set = pset;
  475         pset->task_count++;
  476         task_reference(task);
  477 }
  478 
  479 /*
  480  *      pset_remove_thread() removes a thread from a processor_set.
  481  *      Caller must hold locks on pset and thread.  Pset reference count
  482  *      is not decremented; caller must explicitly pset_deallocate.
   483  *      Pset's reference to thread is removed.
  484  */
  485 
  486 void pset_remove_thread(
  487         processor_set_t pset,
  488         thread_t        thread)
  489 {
  490         queue_remove(&pset->threads, thread, thread_t, pset_threads);
  491         thread->processor_set = PROCESSOR_SET_NULL;
  492         pset->thread_count--;
  493         thread_deallocate(thread);
  494 }
  495 
  496 /*
   497  *      pset_add_thread() adds a thread to a processor_set.
  498  *      Caller must hold locks on pset and thread.  Adds a
  499  *      new reference to the thread.
  500  */
  501 
  502 void pset_add_thread(
  503         processor_set_t pset,
  504         thread_t        thread)
  505 {
  506         queue_enter(&pset->threads, thread, thread_t, pset_threads);
  507         thread->processor_set = pset;
  508         pset->thread_count++;
  509         thread_reference(thread);
  510 }
  511 
  512 /*
  513  *      thread_change_psets() changes the pset of a thread.  Caller must
  514  *      hold locks on both psets and thread.  The old pset must be
   515  *      explicitly pset_deallocate()'d by caller.  Thread reference moves
  516  *      from old pset to new pset.
  517  */
  518 
  519 void thread_change_psets(
  520         thread_t        thread,
  521         processor_set_t old_pset,
  522         processor_set_t new_pset)
  523 {
  524         queue_remove(&old_pset->threads, thread, thread_t, pset_threads);
  525         old_pset->thread_count--;
  526         queue_enter(&new_pset->threads, thread, thread_t, pset_threads);
  527         thread->processor_set = new_pset;
  528         new_pset->thread_count++;
  529 }       
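The comment's "reference moves from old pset to new pset" is the key idiom: membership in a pset counts as one reference, so switching sets in a single step needs no increment or decrement. A minimal sketch with toy types (everything here is hypothetical, not the Mach structures):

```c
#include <assert.h>

struct set;

struct elem {
    int         ref_count;
    struct set *owner;
};

struct set {
    int count;   /* members; each member is one reference to itself held */
};

/* Adding to a set takes a new reference, as pset_add_thread does. */
void set_add(struct set *s, struct elem *e)
{
    s->count++;
    e->owner = s;
    e->ref_count++;          /* the set now holds a reference */
}

/* Moving between sets changes the sets' counts only: the element's
 * reference transfers from the old owner to the new one untouched. */
void set_change(struct elem *e, struct set *from, struct set *to)
{
    from->count--;
    to->count++;
    e->owner = to;           /* ref_count deliberately unchanged */
}
```

This is why the caller must still `pset_deallocate()` the old pset afterwards: the element's reference moved, but the caller's own reference to the old set did not.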
  530 
  531 /*
  532  *      pset_deallocate:
  533  *
  534  *      Remove one reference to the processor set.  Destroy processor_set
  535  *      if this was the last reference.
  536  */
  537 void pset_deallocate(
  538         processor_set_t pset)
  539 {
  540         if (pset == PROCESSOR_SET_NULL)
  541                 return;
  542 
  543         pset_ref_lock(pset);
  544         if (--pset->ref_count > 0) {
  545                 pset_ref_unlock(pset);
  546                 return;
  547         }
  548 #if     !MACH_HOST
  549         panic("pset_deallocate: default_pset destroyed");
  550 #endif  /* !MACH_HOST */
  551 
  552 #if     MACH_HOST
  553         /*
  554          *      Reference count is zero, however the all_psets list
  555          *      holds an implicit reference and may make new ones.
  556          *      Its lock also dominates the pset lock.  To check for this,
  557          *      temporarily restore one reference, and then lock the
  558          *      other structures in the right order.
  559          */
  560         pset->ref_count = 1;
  561         pset_ref_unlock(pset);
  562         
  563         simple_lock(&all_psets_lock);
  564         pset_ref_lock(pset);
  565         if (--pset->ref_count > 0) {
  566                 /*
  567                  *      Made an extra reference.
  568                  */
  569                 pset_ref_unlock(pset);
  570                 simple_unlock(&all_psets_lock);
  571                 return;
  572         }
  573 
  574         /*
  575          *      Ok to destroy pset.  Make a few paranoia checks.
  576          */
  577 
  578         if ((pset == &default_pset) || (pset->thread_count > 0) ||
  579             (pset->task_count > 0) || pset->processor_count > 0) {
  580                 panic("pset_deallocate: destroy default or active pset");
  581         }
  582         /*
  583          *      Remove from all_psets queue.
  584          */
  585         queue_remove(&all_psets, pset, processor_set_t, all_psets);
  586         all_psets_count--;
  587 
  588         pset_ref_unlock(pset);
  589         simple_unlock(&all_psets_lock);
  590 
  591         /*
  592          *      That's it, free data structure.
  593          */
  594         zfree(pset_zone, (vm_offset_t)pset);
  595 #endif  /* MACH_HOST */
  596 }
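The "temporarily restore one reference" dance above is a general idiom for dropping the last reference when a global list can still mint new ones and its lock ranks above the reference lock. A condensed, single-threaded sketch (the locking is elided into comments; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct obj {
    int  ref_count;
    bool destroyed;
};

/* Returns true if the object was actually destroyed. */
bool obj_release(struct obj *o)
{
    /* ... ref lock held ... */
    if (--o->ref_count > 0)
        return false;                 /* others still hold references */

    /*
     * Count reached zero, but the global list holds an implicit reference
     * and may make new ones, and its lock must be taken first.  Restore
     * one reference, then retry with the locks in the legal order:
     * list lock, then ref lock.
     */
    o->ref_count = 1;
    /* ... drop ref lock; take list lock; retake ref lock ... */
    if (--o->ref_count > 0)
        return false;                 /* resurrected in the window */

    o->destroyed = true;              /* no new references can appear now */
    return true;
}
```

The second decrement can legitimately fail: between the unlock and the relock, a lookup through the all_psets list may have taken a fresh reference, which is exactly the "made an extra reference" early return in the code above.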
  597 
  598 /*
  599  *      pset_reference:
  600  *
  601  *      Add one reference to the processor set.
  602  */
  603 void pset_reference(
  604         processor_set_t pset)
  605 {
  606         pset_ref_lock(pset);
  607         pset->ref_count++;
  608         pset_ref_unlock(pset);
  609 }
  610 
  611 kern_return_t
  612 processor_info(
  613         register processor_t    processor,
  614         int                     flavor,
  615         host_t                  *host,
  616         processor_info_t        info,
  617         natural_t               *count)
  618 {
  619         register int    slot_num, state;
  620         register processor_basic_info_t         basic_info;
  621 
  622         if (processor == PROCESSOR_NULL)
  623                 return KERN_INVALID_ARGUMENT;
  624 
  625         if (flavor != PROCESSOR_BASIC_INFO ||
  626                 *count < PROCESSOR_BASIC_INFO_COUNT)
  627                         return KERN_FAILURE;
  628 
  629         basic_info = (processor_basic_info_t) info;
  630 
  631         slot_num = processor->slot_num;
  632         basic_info->cpu_type = machine_slot[slot_num].cpu_type;
  633         basic_info->cpu_subtype = machine_slot[slot_num].cpu_subtype;
  634         state = processor->state;
  635         if (state == PROCESSOR_SHUTDOWN || state == PROCESSOR_OFF_LINE)
  636                 basic_info->running = FALSE;
  637         else
  638                 basic_info->running = TRUE;
  639         basic_info->slot_num = slot_num;
  640         if (processor == master_processor) 
  641                 basic_info->is_master = TRUE;
  642         else
  643                 basic_info->is_master = FALSE;
  644 
  645         *count = PROCESSOR_BASIC_INFO_COUNT;
  646         *host = &realhost;
  647         return KERN_SUCCESS;
  648 }
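processor_info follows the standard Mach info-call convention: the caller passes a buffer and its capacity through `*count`, the routine rejects an unknown flavor or a short buffer, fills the buffer, and writes back the number of units actually used. A simplified sketch of that calling convention (the flavor, field values, and sizes here are made-up stand-ins):

```c
#include <assert.h>

#define KERN_SUCCESS       0
#define KERN_FAILURE       5
#define BASIC_INFO_FLAVOR  1
#define BASIC_INFO_COUNT   3          /* ints of payload */

int get_info(int flavor, int *info, unsigned *count)
{
    /* Reject unknown flavors and buffers too small for the payload. */
    if (flavor != BASIC_INFO_FLAVOR || *count < BASIC_INFO_COUNT)
        return KERN_FAILURE;

    info[0] = 7;   /* e.g. cpu_type     (illustrative values) */
    info[1] = 1;   /* e.g. cpu_subtype */
    info[2] = 1;   /* e.g. running */

    *count = BASIC_INFO_COUNT;        /* report how much was written */
    return KERN_SUCCESS;
}
```

Writing the used size back through `*count` lets the same interface grow new, larger info structures later without breaking callers compiled against the old size.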
  649 
  650 kern_return_t processor_start(
  651         processor_t     processor)
  652 {
  653         if (processor == PROCESSOR_NULL)
  654                 return KERN_INVALID_ARGUMENT;
  655 #if     NCPUS > 1
  656         return cpu_start(processor->slot_num);
  657 #else   /* NCPUS > 1 */
  658         return KERN_FAILURE;
  659 #endif  /* NCPUS > 1 */
  660 }
  661 
  662 kern_return_t processor_exit(
  663         processor_t     processor)
  664 {
  665         if (processor == PROCESSOR_NULL)
  666                 return KERN_INVALID_ARGUMENT;
  667 
  668 #if     NCPUS > 1
  669         return processor_shutdown(processor);
  670 #else   /* NCPUS > 1 */
  671         return KERN_FAILURE;
  672 #endif  /* NCPUS > 1 */
  673 }
  674 
  675 kern_return_t
  676 processor_control(
  677         processor_t     processor,
  678         processor_info_t info,
  679         natural_t        count)
  680 {
  681         if (processor == PROCESSOR_NULL)
  682                 return KERN_INVALID_ARGUMENT;
  683 
  684 #if     NCPUS > 1
  685         return cpu_control(processor->slot_num, info, count);
  686 #else   /* NCPUS > 1 */
  687         return KERN_FAILURE;
  688 #endif  /* NCPUS > 1 */
  689 }
  690 
  691 #if     MACH_HOST
  692 /*
  693  *      processor_set_create:
  694  *
  695  *      Create and return a new processor set.
  696  */
  697 
  698 kern_return_t
  699 processor_set_create(
  700         host_t          host,
  701         processor_set_t *new_set,
  702         processor_set_t *new_name)
  703 {
  704         processor_set_t pset;
  705 
  706         if (host == HOST_NULL)
  707                 return KERN_INVALID_ARGUMENT;
  708 
  709         pset = (processor_set_t) zalloc(pset_zone);
  710         pset_init(pset);
  711         pset_reference(pset);   /* for new_set out argument */
  712         pset_reference(pset);   /* for new_name out argument */
  713         ipc_pset_init(pset);
  714         pset->active = TRUE;
  715 
  716         simple_lock(&all_psets_lock);
  717         queue_enter(&all_psets, pset, processor_set_t, all_psets);
  718         all_psets_count++;
  719         simple_unlock(&all_psets_lock);
  720 
  721         ipc_pset_enable(pset);
  722 
  723         *new_set = pset;
  724         *new_name = pset;
  725         return KERN_SUCCESS;
  726 }
  727 
  728 /*
  729  *      processor_set_destroy:
  730  *
  731  *      destroy a processor set.  Any tasks, threads or processors
  732  *      currently assigned to it are reassigned to the default pset.
  733  */
  734 kern_return_t processor_set_destroy(
  735         processor_set_t pset)
  736 {
  737         register queue_entry_t  elem;
  738         register queue_head_t   *list;
  739 
  740         if (pset == PROCESSOR_SET_NULL || pset == &default_pset)
  741                 return KERN_INVALID_ARGUMENT;
  742 
  743         /*
  744          *      Handle multiple termination race.  First one through sets
  745          *      active to FALSE and disables ipc access.
  746          */
  747         pset_lock(pset);
  748         if (!(pset->active)) {
  749                 pset_unlock(pset);
  750                 return KERN_FAILURE;
  751         }
  752 
  753         pset->active = FALSE;
  754         ipc_pset_disable(pset);
  755 
  756 
  757         /*
  758          *      Now reassign everything in this set to the default set.
  759          *      Since pset is inactive, nothing further can be assigned
  760          *      to it.
  761          */
  762 
  763         if (pset->task_count > 0) {
  764             list = &pset->tasks;
  765             while (!queue_empty(list)) {
  766                 elem = queue_first(list);
  767                 task_reference((task_t) elem);
  768                 pset_unlock(pset);
  769                 task_assign((task_t) elem, &default_pset, FALSE);
  770                 task_deallocate((task_t) elem);
  771                 pset_lock(pset);
  772             }
  773         }
  774 
  775         if (pset->thread_count > 0) {
  776             list = &pset->threads;
  777             while (!queue_empty(list)) {
  778                 elem = queue_first(list);
  779                 thread_reference((thread_t) elem);
  780                 pset_unlock(pset);
  781                 thread_assign((thread_t) elem, &default_pset);
  782                 thread_deallocate((thread_t) elem);
  783                 pset_lock(pset);
  784             }
  785         }
  786         
  787         if (pset->processor_count > 0) {
  788             list = &pset->processors;
  789             while(!queue_empty(list)) {
  790                 elem = queue_first(list);
  791                 pset_unlock(pset);
  792                 processor_assign((processor_t) elem, &default_pset, TRUE);
  793                 pset_lock(pset);
  794             }
  795         }
  796 
  797         pset_unlock(pset);
  798 
  799         /*
  800          *      Deallocate run queues.
  801          */
  802         run_queue_head_dealloc(&pset->runq);
  803 
  804         /*
  805          *      Destroy ipc state.
  806          */
  807         ipc_pset_terminate(pset);
  808 
  809         /*
  810          *      Deallocate pset's reference to itself.
  811          */
  812         pset_deallocate(pset);
  813         return KERN_SUCCESS;
  814 }
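Each drain loop in processor_set_destroy uses the same pattern: read the head of the list, take a reference so the element survives, drop the pset lock, do the (possibly blocking) reassignment, and retake the lock before re-reading the head, since the list may have changed in the window. A toy single-threaded sketch of that loop shape (the set, the locking, and `reassign` are all hypothetical; in the kernel the reassignment itself removes the element from the set):

```c
#include <assert.h>

#define MAXE 8

struct set { int count; int elems[MAXE]; };   /* toy stand-in for a queue */

static int moved[MAXE], nmoved;

/* "Reassign" one element; in the kernel this runs with the set unlocked
 * because it may block.  Here it just records the element. */
static void reassign(int e) { moved[nmoved++] = e; }

void set_drain(struct set *s)
{
    /* ... set lock held on entry ... */
    while (s->count > 0) {
        int e = s->elems[0];          /* elem = queue_first(list) */
        /* take a reference to e; drop the set lock */
        reassign(e);                  /* may block; the set can change */
        /* retake the set lock; in the kernel, reassign removed e for us,
         * so re-reading the head picks up the next element.  Model that
         * removal explicitly here: */
        for (int i = 1; i < s->count; i++)
            s->elems[i - 1] = s->elems[i];
        s->count--;
    }
}
```

Re-checking `queue_empty` at the top of every pass, rather than iterating a saved pointer, is what makes the loop safe against concurrent additions and removals while the lock is dropped.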
  815 
  816 #else   /* MACH_HOST */
  817             
  818 kern_return_t
  819 processor_set_create(
  820         host_t          host,
  821         processor_set_t *new_set,
  822         processor_set_t *new_name)
  823 {
  824 #ifdef  lint
  825         host++; new_set++; new_name++;
  826 #endif  /* lint */
  827         return KERN_FAILURE;
  828 }
  829 
  830 kern_return_t processor_set_destroy(
  831         processor_set_t pset)
  832 {
  833 #ifdef  lint
  834         pset++;
  835 #endif  /* lint */
  836         return KERN_FAILURE;
  837 }
  838 
  839 #endif  /* MACH_HOST */
  840 
kern_return_t
processor_get_assignment(
        processor_t     processor,
        processor_set_t *pset)
{
        int state;

        state = processor->state;
        if (state == PROCESSOR_SHUTDOWN || state == PROCESSOR_OFF_LINE)
                return KERN_FAILURE;

        *pset = processor->processor_set;
        pset_reference(*pset);
        return KERN_SUCCESS;
}

kern_return_t
processor_set_info(
        processor_set_t         pset,
        int                     flavor,
        host_t                  *host,
        processor_set_info_t    info,
        natural_t               *count)
{
        if (pset == PROCESSOR_SET_NULL)
                return KERN_INVALID_ARGUMENT;

        if (flavor == PROCESSOR_SET_BASIC_INFO) {
                register processor_set_basic_info_t     basic_info;

                if (*count < PROCESSOR_SET_BASIC_INFO_COUNT)
                        return KERN_FAILURE;

                basic_info = (processor_set_basic_info_t) info;

                pset_lock(pset);
                basic_info->processor_count = pset->processor_count;
                basic_info->task_count = pset->task_count;
                basic_info->thread_count = pset->thread_count;
                basic_info->mach_factor = pset->mach_factor;
                basic_info->load_average = pset->load_average;
                pset_unlock(pset);

                *count = PROCESSOR_SET_BASIC_INFO_COUNT;
                *host = &realhost;
                return KERN_SUCCESS;
        }
        else if (flavor == PROCESSOR_SET_SCHED_INFO) {
                register processor_set_sched_info_t     sched_info;
                sched_policy_t                          ts_policy;
                run_queue_t                             rq;

                if (*count < PROCESSOR_SET_SCHED_INFO_COUNT)
                        return KERN_FAILURE;

                sched_info = (processor_set_sched_info_t) info;

                pset_lock(pset);
                sched_info->policies = pset->runq.last; /* XXX */
                ts_policy = sched_policy_lookup(POLICY_TIMESHARE);
                if ((rq = pset->runq.runqs[ts_policy->rank]) != 0) {
                    struct policy_param_timeshare       pi;
                    natural_t                           pi_count;
                    pi_count = POLICY_PARAM_TIMESHARE_COUNT;
                    (void) RUNQ_GET_LIMIT(rq, (policy_param_t)&pi, &pi_count);
                    sched_info->max_priority = pi.priority;
                }
                else {
                    sched_info->max_priority = 0;
                }
                pset_unlock(pset);

                *count = PROCESSOR_SET_SCHED_INFO_COUNT;
                *host = &realhost;
                return KERN_SUCCESS;
        }

        *host = HOST_NULL;
        return KERN_INVALID_ARGUMENT;
}

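`processor_set_info` follows the common Mach "flavor" convention: the caller passes a generic integer buffer plus an in/out count, the kernel rejects buffers that are too small, fills in the flavor-specific struct, and writes back the number of elements actually used. A minimal user-space sketch of that contract follows; the `demo_*` names and constant values are invented for illustration, not the real Mach API:

```c
#include <string.h>

typedef int kern_return_t;
#define KERN_SUCCESS            0
#define KERN_INVALID_ARGUMENT   4
#define KERN_FAILURE            5

#define DEMO_BASIC_INFO 1

struct demo_basic_info {
        int processor_count;
        int task_count;
};
#define DEMO_BASIC_INFO_COUNT \
        (sizeof(struct demo_basic_info) / sizeof(int))

/*
 * Flavor-dispatched info call: 'info' is a generic int array and
 * 'count' is in/out -- capacity on entry, elements written on return.
 */
kern_return_t
demo_info(int flavor, int *info, unsigned int *count)
{
        if (flavor == DEMO_BASIC_INFO) {
                struct demo_basic_info bi;

                if (*count < DEMO_BASIC_INFO_COUNT)
                        return KERN_FAILURE;    /* caller's buffer too small */

                bi.processor_count = 4;         /* placeholder values */
                bi.task_count = 10;
                memcpy(info, &bi, sizeof(bi));
                *count = DEMO_BASIC_INFO_COUNT;
                return KERN_SUCCESS;
        }
        return KERN_INVALID_ARGUMENT;           /* unknown flavor */
}
```

The in/out count is what lets old clients keep working when a flavor's struct later grows: a too-small buffer fails cleanly instead of being overrun.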
/*
 *      [ obsolete ]
 *      processor_set_max_priority:
 *
 *      Specify max priority permitted on processor set.  This affects
 *      newly created and assigned threads.  Optionally change existing
 *      ones.
 *
 *      This call affects only timesharing threads.
 */
kern_return_t
processor_set_max_priority(
        processor_set_t pset,
        int             max_priority,
        boolean_t       change_threads)
{
        struct policy_param_timeshare   limit;

        limit.priority = max_priority;
        return processor_set_policy_limit(pset,
                                          POLICY_TIMESHARE,
                                          (policy_param_t)&limit,
                                          POLICY_PARAM_TIMESHARE_COUNT,
                                          change_threads);
}

/*
 *      [ obsolete ]
 *      processor_set_policy_enable:
 *
 *      Allow indicated policy on processor set.
 *
 *      Only allows timesharing: the old fixed-priority policy
 *      is no longer supported.
 */
kern_return_t
processor_set_policy_enable(
        processor_set_t pset,
        int             policy)
{
        if (pset == PROCESSOR_SET_NULL)
            return KERN_INVALID_ARGUMENT;

        if (policy == POLICY_TIMESHARE)
            return KERN_SUCCESS;
        else
            return KERN_FAILURE;
}

/*
 *      [ obsolete ]
 *      processor_set_policy_disable:
 *
 *      Forbid indicated policy on processor set.  Time sharing cannot
 *      be forbidden.
 *
 *      Since the old fixed-priority policy is no longer supported,
 *      this does nothing.
 */
kern_return_t
processor_set_policy_disable(
        processor_set_t pset,
        int             policy,
        boolean_t       change_threads)
{
        if (pset == PROCESSOR_SET_NULL || policy != POLICY_FIXEDPRI)
                return KERN_INVALID_ARGUMENT;

        return KERN_SUCCESS;
}

#define THING_TASK      0
#define THING_THREAD    1

/*
 *      processor_set_things:
 *
 *      Common internals for processor_set_{threads,tasks}.
 */
kern_return_t
processor_set_things(
        processor_set_t pset,
        mach_port_t     **thing_list,
        natural_t       *count,
        int             type)
{
        unsigned int actual;    /* this many things */
        unsigned int i;

        vm_size_t size, size_needed;
        vm_offset_t addr;

        if (pset == PROCESSOR_SET_NULL)
                return KERN_INVALID_ARGUMENT;

        size = 0; addr = 0;

        for (;;) {
                pset_lock(pset);
                if (!pset->active) {
                        pset_unlock(pset);
                        return KERN_FAILURE;
                }

                if (type == THING_TASK)
                        actual = pset->task_count;
                else
                        actual = pset->thread_count;

                /* do we have the memory we need? */

                size_needed = actual * sizeof(mach_port_t);
                if (size_needed <= size)
                        break;

                /* unlock the pset and allocate more memory */
                pset_unlock(pset);

                if (size != 0)
                        kfree(addr, size);

                assert(size_needed > 0);
                size = size_needed;

                addr = kalloc(size);
                if (addr == 0)
                        return KERN_RESOURCE_SHORTAGE;
        }

        /* OK, have memory and the processor_set is locked & active */

        switch (type) {
            case THING_TASK: {
                task_t *tasks = (task_t *) addr;
                task_t task;

                for (i = 0, task = (task_t) queue_first(&pset->tasks);
                     i < actual;
                     i++, task = (task_t) queue_next(&task->pset_tasks)) {
                        /* take ref for convert_task_to_port */
                        task_reference(task);
                        tasks[i] = task;
                }
                assert(queue_end(&pset->tasks, (queue_entry_t) task));
                break;
            }

            case THING_THREAD: {
                thread_t *threads = (thread_t *) addr;
                thread_t thread;

                for (i = 0, thread = (thread_t) queue_first(&pset->threads);
                     i < actual;
                     i++,
                     thread = (thread_t) queue_next(&thread->pset_threads)) {
                        /* take ref for convert_thread_to_port */
                        thread_reference(thread);
                        threads[i] = thread;
                }
                assert(queue_end(&pset->threads, (queue_entry_t) thread));
                break;
            }
        }

        /* can unlock processor set now that we have the task/thread refs */
        pset_unlock(pset);

        if (actual == 0) {
                /* no things, so return null pointer and deallocate memory */
                *thing_list = 0;
                *count = 0;

                if (size != 0)
                        kfree(addr, size);
        } else {
                /* if we allocated too much, must copy */

                if (size_needed < size) {
                        vm_offset_t newaddr;

                        newaddr = kalloc(size_needed);
                        if (newaddr == 0) {
                                switch (type) {
                                    case THING_TASK: {
                                        task_t *tasks = (task_t *) addr;

                                        for (i = 0; i < actual; i++)
                                                task_deallocate(tasks[i]);
                                        break;
                                    }

                                    case THING_THREAD: {
                                        thread_t *threads = (thread_t *) addr;

                                        for (i = 0; i < actual; i++)
                                                thread_deallocate(threads[i]);
                                        break;
                                    }
                                }
                                kfree(addr, size);
                                return KERN_RESOURCE_SHORTAGE;
                        }

                        bcopy((void *) addr, (void *) newaddr, size_needed);
                        kfree(addr, size);
                        addr = newaddr;
                }

                *thing_list = (mach_port_t *) addr;
                *count = actual;

                /* do the conversion that MIG should handle */

                switch (type) {
                    case THING_TASK: {
                        task_t *tasks = (task_t *) addr;

                        for (i = 0; i < actual; i++)
                            ((mach_port_t *) tasks)[i] =
                                (mach_port_t)convert_task_to_port(tasks[i]);
                        break;
                    }

                    case THING_THREAD: {
                        thread_t *threads = (thread_t *) addr;

                        for (i = 0; i < actual; i++)
                            ((mach_port_t *) threads)[i] =
                                (mach_port_t)convert_thread_to_port(threads[i]);
                        break;
                    }
                }
        }

        return KERN_SUCCESS;
}

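The retry loop at the top of `processor_set_things` is a general kernel pattern: you cannot allocate memory while holding a lock, so you size the buffer under the lock, drop the lock to allocate, then retake the lock and re-check whether the set grew in the meantime. A self-contained user-space sketch of the same loop, using `malloc` and a pthread mutex (the `items`/`snapshot_items` names are invented for illustration):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t set_lock = PTHREAD_MUTEX_INITIALIZER;
static int items[64];           /* the protected "set" */
static size_t item_count;       /* protected by set_lock */

/*
 * Snapshot the set into a freshly allocated array using the same
 * allocate-then-recheck loop as processor_set_things(): allocation
 * happens with the lock dropped, so the required size must be
 * re-validated after relocking.  Returns NULL on allocation failure
 * (or for an empty set), with *out_count set only on success.
 */
int *
snapshot_items(size_t *out_count)
{
        int *addr = NULL;
        size_t size = 0, size_needed;

        for (;;) {
                pthread_mutex_lock(&set_lock);
                size_needed = item_count * sizeof(int);
                if (size_needed <= size)
                        break;          /* buffer big enough; keep the lock */

                /* too small: drop the lock, then (re)allocate */
                pthread_mutex_unlock(&set_lock);
                free(addr);
                size = size_needed;
                addr = malloc(size);
                if (addr == NULL)
                        return NULL;    /* resource shortage */
        }

        /* locked, and addr holds at least item_count entries */
        if (item_count > 0)
                memcpy(addr, items, item_count * sizeof(int));
        *out_count = item_count;
        pthread_mutex_unlock(&set_lock);
        return addr;
}
```

The kernel version additionally takes a reference on each object while still locked, so the snapshot stays valid after the lock is released; that step has no analogue in this plain-array sketch.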

/*
 *      processor_set_tasks:
 *
 *      List all tasks in the processor set.
 */
kern_return_t
processor_set_tasks(
        processor_set_t pset,
        task_array_t    *task_list,
        natural_t       *count)
{
        return processor_set_things(pset, task_list, count, THING_TASK);
}

/*
 *      processor_set_threads:
 *
 *      List all threads in the processor set.
 */
kern_return_t
processor_set_threads(
        processor_set_t pset,
        thread_array_t  *thread_list,
        natural_t       *count)
{
        return processor_set_things(pset, thread_list, count, THING_THREAD);
}
