FreeBSD/Linux Kernel Cross Reference
sys/kern/ast.c


/*
 * Mach Operating System
 * Copyright (c) 1991,1990,1989,1988,1987 Carnegie Mellon University
 * All Rights Reserved.
 *
 * Permission to use, copy, modify and distribute this software and its
 * documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
 * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie Mellon
 * the rights to redistribute these changes.
 */
/*
 * HISTORY
 * $Log:        ast.c,v $
 * Revision 2.14  93/05/15  18:53:51  mrt
 *      machparam.h -> machspl.h
 *
 * Revision 2.13  93/01/14  17:33:33  danner
 *      Proper spl typing.
 *      [92/11/30            af]
 *
 * Revision 2.12  92/08/03  17:36:32  jfriedl
 *      removed silly prototypes
 *      [92/08/02            jfriedl]
 *
 * Revision 2.11  92/05/21  17:12:45  jfriedl
 *      tried prototypes.
 *      [92/05/20            jfriedl]
 *
 * Revision 2.10  91/08/28  11:14:16  jsb
 *      Renamed AST_CLPORT to AST_NETIPC.
 *      [91/08/14  21:39:25  jsb]
 *
 * Revision 2.9  91/06/17  15:46:48  jsb
 *      Renamed NORMA conditionals.
 *      [91/06/17  10:48:46  jsb]
 *
 * Revision 2.8  91/06/06  17:06:43  jsb
 *      Added AST_CLPORT.
 *      [91/05/13  17:34:31  jsb]
 *
 * Revision 2.7  91/05/14  16:39:48  mrt
 *      Correcting copyright
 *
 * Revision 2.6  91/05/08  12:47:06  dbg
 *      Add volatile declarations where needed.
 *      [91/04/18            dbg]
 *
 *      Add missing argument to ast_on in assign/shutdown case.
 *      [91/03/21            dbg]
 *
 * Revision 2.5  91/03/16  14:49:23  rpd
 *      Cleanup.
 *      [91/02/13            rpd]
 *      Changed the AST interface.
 *      [91/01/17            rpd]
 *
 * Revision 2.4  91/02/05  17:25:33  mrt
 *      Changed to new Mach copyright
 *      [91/02/01  16:11:01  mrt]
 *
 * Revision 2.3  90/06/02  14:53:30  rpd
 *      Updated with new processor/processor-set technology.
 *      [90/03/26  22:02:00  rpd]
 *
 * Revision 2.2  90/02/22  20:02:37  dbg
 *      Remove lint.
 *      [90/01/29            dbg]
 *
 * Revision 2.1  89/08/03  15:42:10  rwd
 * Created.
 *
 *  2-Feb-89  David Golub (dbg) at Carnegie-Mellon University
 *      Moved swtch to this file.
 *
 * 23-Nov-88  David Black (dlb) at Carnegie-Mellon University
 *      Hack up swtch() again.  Drop priority just low enough to run
 *      something else if it's runnable.  Do missing priority updates.
 *      Make sure to lock thread and double check whether update is needed.
 *      Yet more cruft until I can get around to doing it right.
 *
 *  6-Sep-88  David Golub (dbg) at Carnegie-Mellon University
 *      Removed all non-MACH code.
 *
 * 11-Aug-88  David Black (dlb) at Carnegie-Mellon University
 *      csw_check is now the csw_needed macro in sched.h.  Rewrite
 *      ast_check for new ast mechanism.
 *
 *  9-Aug-88  David Black (dlb) at Carnegie-Mellon University
 *      Rewrote swtch to check runq counts directly.
 *
 *  9-Aug-88  David Black (dlb) at Carnegie-Mellon University
 *      Delete runrun.  Rewrite csw_check so it can become a macro.
 *
 *  4-May-88  David Black (dlb) at Carnegie-Mellon University
 *      Moved cpu not running check to ast_check().
 *      New preempt priority logic.
 *      Increment runrun if ast is for context switch.
 *      Give absolute priority to local run queues.
 *
 * 20-Apr-88  David Black (dlb) at Carnegie-Mellon University
 *      New signal check logic.
 *
 * 18-Nov-87  Avadis Tevanian (avie) at Carnegie-Mellon University
 *      Flushed conditionals, reset history.
 */

/*
 *      This file contains routines to check whether an ast is needed.
 *
 *      ast_check() - check whether an ast is needed for an interrupt or
 *      context switch.  Usually called by the clock interrupt handler.
 */

#include <cpus.h>
#include <mach_fixpri.h>
#include <norma_ipc.h>

#include <kern/ast.h>
#include <kern/counters.h>
#include <kern/cpu_number.h>
#include <kern/queue.h>
#include <kern/sched.h>
#include <kern/sched_prim.h>
#include <kern/thread.h>
#include <kern/processor.h>
#include <machine/machspl.h>    /* for splsched */

#if     MACH_FIXPRI
#include <mach/policy.h>
#endif  /* MACH_FIXPRI */

volatile ast_t need_ast[NCPUS];

void
ast_init()
{
#ifndef MACHINE_AST
        register int i;

        for (i = 0; i < NCPUS; i++)
                need_ast[i] = 0;
#endif  /* MACHINE_AST */
}

void
ast_taken()
{
        register thread_t self = current_thread();
        register ast_t reasons;

        /*
         *      Interrupts are still disabled.
         *      We must clear need_ast and then enable interrupts.
         */

        reasons = need_ast[cpu_number()];
        need_ast[cpu_number()] = AST_ZILCH;
        (void) spl0();

        /*
         *      These actions must not block.
         */

        if (reasons & AST_NETWORK)
                net_ast();

#if     NORMA_IPC
        if (reasons & AST_NETIPC)
                netipc_ast();
#endif  /* NORMA_IPC */

        /*
         *      Make darn sure that we don't call thread_halt_self
         *      or thread_block from the idle thread.
         */

        if (self != current_processor()->idle_thread) {
                while (thread_should_halt(self))
                        thread_halt_self();

                /*
                 *      One of the previous actions might well have
                 *      woken a high-priority thread, so we use
                 *      csw_needed in addition to AST_BLOCK.
                 */

                if ((reasons & AST_BLOCK) ||
                    csw_needed(self, current_processor())) {
                        counter(c_ast_taken_block++);
                        thread_block(thread_exception_return);
                }
        }
}

void
ast_check()
{
        register int            mycpu = cpu_number();
        register processor_t    myprocessor;
        register thread_t       thread = current_thread();
        register run_queue_t    rq;
        spl_t                   s = splsched();

        /*
         *      Check processor state for ast conditions.
         */
        myprocessor = cpu_to_processor(mycpu);
        switch (myprocessor->state) {
            case PROCESSOR_OFF_LINE:
            case PROCESSOR_IDLE:
            case PROCESSOR_DISPATCHING:
                /*
                 *      No ast.
                 */
                break;

#if     NCPUS > 1
            case PROCESSOR_ASSIGN:
            case PROCESSOR_SHUTDOWN:
                /*
                 *      Need ast to force action thread onto processor.
                 *
                 * XXX  Should check if action thread is already there.
                 */
                ast_on(mycpu, AST_BLOCK);
                break;
#endif  /* NCPUS > 1 */

            case PROCESSOR_RUNNING:

                /*
                 *      Propagate thread ast to processor.  If we already
                 *      need an ast, don't look for more reasons.
                 */
                ast_propagate(thread, mycpu);
                if (ast_needed(mycpu))
                        break;

                /*
                 *      Context switch check.  The csw_needed macro isn't
                 *      used here because the rq->low hint may be wrong,
                 *      and fixing it here avoids an extra ast.
                 *      First check the easy cases.
                 */
                if ((thread->state & TH_SUSP) || myprocessor->runq.count > 0) {
                        ast_on(mycpu, AST_BLOCK);
                        break;
                }

                /*
                 *      Update the lazily evaluated runq->low if only timesharing.
                 */
#if     MACH_FIXPRI
                if (myprocessor->processor_set->policies & POLICY_FIXEDPRI) {
                    if (csw_needed(thread, myprocessor)) {
                        ast_on(mycpu, AST_BLOCK);
                        break;
                    }
                    else {
                        /*
                         *      For fixed priority threads, set first_quantum
                         *      so the entire new quantum is used.
                         */
                        if (thread->policy == POLICY_FIXEDPRI)
                            myprocessor->first_quantum = TRUE;
                    }
                }
                else {
#endif  /* MACH_FIXPRI */
                rq = &(myprocessor->processor_set->runq);
                if (!(myprocessor->first_quantum) && (rq->count > 0)) {
                    register queue_t    q;
                    /*
                     *  This is not the first quantum, and there may
                     *  be something in the processor_set runq.
                     *  Check whether the low hint is accurate.
                     */
                    q = rq->runq + *(volatile int *)&rq->low;
                    if (queue_empty(q)) {
                        register int i;

                        /*
                         *      Need to recheck and possibly update hint.
                         */
                        simple_lock(&rq->lock);
                        q = rq->runq + rq->low;
                        if (rq->count > 0) {
                            for (i = rq->low; i < NRQS; i++) {
                                if (!queue_empty(q))
                                    break;
                                q++;
                            }
                            rq->low = i;
                        }
                        simple_unlock(&rq->lock);
                    }

                    if (rq->low <= thread->sched_pri) {
                        ast_on(mycpu, AST_BLOCK);
                        break;
                    }
                }
#if     MACH_FIXPRI
                }
#endif  /* MACH_FIXPRI */
                break;

            default:
                panic("ast_check: Bad processor state");
        }

        (void) splx(s);
}



This page is part of the FreeBSD/Linux Kernel Cross-Reference, and was automatically generated using a modified version of the LXR engine.