
FreeBSD/Linux Kernel Cross Reference
sys/kern/mach_clock.c


    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1993-1988 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        mach_clock.c,v $
   29  * Revision 2.27  93/11/17  17:14:19  dbg
   30  *      Maintain clock->check_seconds to allow reading time without
   31  *      locking clock.  Eliminate old 'time' variable.
   32  *      [93/06/18            dbg]
   33  * 
   34  *      Just use division to update old mapped-time... pay the
   35  *      consequences later...
   36  *      [93/06/07            dbg]
   37  * 
   38  *      Added correction_delta, correction_count.  Moved some common
   39  *      routines to device/clock_dev.c.
   40  *      [93/06/02            dbg]
   41  * 
   42  *      Change mmap routines to return physical address instead of
   43  *      physical page number.
   44  *      [93/05/24            dbg]
   45  * 
   46  *      Changes for new clocks and timers:
   47  *      . Accommodate multiple clocks.
   48  *      . Time is now measured in seconds/nanoseconds.
   49  *      . Timeout routines are run from ASTs, not softclock
   50  *        interrupts.
   51  *      . Use per-scheduling-policy routine to update thread`s
   52  *        usage and scheduling parameters at clock tick.
   53  *      Added ANSI function prototypes.
   54  *      [93/05/21            dbg]
   55  * 
   56  * Revision 2.26  93/08/03  12:31:12  mrt
   57  *      Flavor support for sampling.
   58  *      [93/07/30  10:21:52  bershad]
   59  * 
   60  * Revision 2.25  93/05/15  18:53:40  mrt
   61  *      machparam.h -> machspl.h
   62  * 
   63  * Revision 2.24  93/05/10  17:47:45  rvb
   64  *      Rudy asked for this change for xntp and dbg thought it would
   65  *      do no harm.  (I think that HZ is pretty always 100 so that this
   66  *      code and the previous version always go the tickadj = 1; route.)
   67  *      [93/05/10  15:52:26  rvb]
   68  * 
   69  * Revision 2.23  93/03/09  10:55:07  danner
   70  *      Removed gratuitous casts to ints.
   71  *      [93/03/05            af]
   72  * 
   73  * Revision 2.22  93/01/27  09:33:55  danner
   74  *      take_pc_sample() is void.
   75  *      [93/01/25            jfriedl]
   76  * 
   77  * Revision 2.21  93/01/24  13:19:29  danner
   78  *      Add pc sampling from C Maeda.  Make it conditional on thread or
   79  *      task sampling being enabled.
   80  *      [93/01/12            rvb]
   81  * 
   82  * Revision 2.20  93/01/14  17:35:12  danner
   83  *      Proper spl typing.
   84  *      [92/12/01            af]
   85  * 
   86  * Revision 2.19  92/08/03  17:38:09  jfriedl
   87  *      removed silly prototypes
   88  *      [92/08/02            jfriedl]
   89  * 
   90  * Revision 2.18  92/05/21  17:14:33  jfriedl
   91  *      Added void to fcns that yet needed it.
   92  *      [92/05/16            jfriedl]
   93  * 
   94  * Revision 2.17  92/03/10  16:26:41  jsb
   95  *      Removed NORMA_IPC code.
   96  *      [92/01/17  11:38:55  jsb]
   97  * 
   98  * Revision 2.16  91/08/03  18:18:56  jsb
   99  *      NORMA_IPC: added call to netipc_timeout in hardclock.
  100  *      [91/07/24  22:30:22  jsb]
  101  * 
  102  * Revision 2.15  91/07/31  17:45:57  dbg
  103  *      Fixed timeout race.  Implemented host_adjust_time.
  104  *      [91/07/30  17:03:54  dbg]
  105  * 
  106  * Revision 2.14  91/05/18  14:32:29  rpd
  107  *      Fixed timeout/untimeout to use a fixed-size array of timers
  108  *      instead of a zone.
  109  *      [91/03/31            rpd]
  110  *      Fixed host_set_time to update the mapped time value.
  111  *      Changed the mapped time value to include a check field.
  112  *      [91/03/19            rpd]
  113  * 
  114  * Revision 2.13  91/05/14  16:44:06  mrt
  115  *      Correcting copyright
  116  * 
  117  * Revision 2.12  91/03/16  14:50:45  rpd
  118  *      Updated for new kmem_alloc interface.
  119  *      [91/03/03            rpd]
  120  *      Use counter macros to track thread and stack usage.
  121  *      [91/03/01  17:43:15  rpd]
  122  * 
  123  * Revision 2.11  91/02/05  17:27:45  mrt
  124  *      Changed to new Mach copyright
  125  *      [91/02/01  16:14:47  mrt]
  126  * 
  127  * Revision 2.10  91/01/08  15:16:22  rpd
  128  *      Added continuation argument to thread_block.
  129  *      [90/12/08            rpd]
  130  * 
  131  * Revision 2.9  90/11/05  14:31:27  rpd
  132  *      Unified untimeout and untimeout_try.
  133  *      [90/10/29            rpd]
  134  * 
  135  * Revision 2.8  90/10/12  18:07:29  rpd
  136  *      Fixed calls to thread_bind in host_set_time.
  137  *      Fix from Philippe Bernadat.
  138  *      [90/10/10            rpd]
  139  * 
  140  * Revision 2.7  90/09/09  14:32:18  rpd
  141  *      Use decl_simple_lock_data.
  142  *      [90/08/30            rpd]
  143  * 
  144  * Revision 2.6  90/08/27  22:02:48  dbg
  145  *      Add untimeout_try for multiprocessors.  Reduce lint.
  146  *      [90/07/17            dbg]
  147  * 
  148  * Revision 2.5  90/06/02  14:55:04  rpd
  149  *      Converted to new IPC and new host port technology.
  150  *      [90/03/26  22:10:04  rpd]
  151  * 
  152  * Revision 2.4  90/01/11  11:43:31  dbg
  153  *      Switch to master CPU in host_set_time.
  154  *      [90/01/03            dbg]
  155  * 
  156  * Revision 2.3  89/08/09  14:33:09  rwd
  157  *      Include mach/vm_param.h and use PAGE_SIZE instead of NBPG.
  158  *      [89/08/08            rwd]
  159  *      Removed timemmap to machine/model_dep.c
  160  *      [89/08/08            rwd]
  161  * 
  162  * Revision 2.2  89/08/05  16:07:11  rwd
  163  *      Added mappable time code.
  164  *      [89/08/02            rwd]
  165  * 
  166  * 14-Jan-89  David Golub (dbg) at Carnegie-Mellon University
  167  *      Split into two new files: mach_clock (for timing) and priority
  168  *      (for priority calculation).
  169  *
  170  *  8-Dec-88  David Golub (dbg) at Carnegie-Mellon University
  171  *      Use sentinel for root of timer queue, to speed up search loops.
  172  *
  173  * 30-Jun-88  David Golub (dbg) at Carnegie-Mellon University
  174  *      Created.
  175  *
  176  */ 
  177 /*
  178  *      File:   clock_prim.c
  179  *      Author: Avadis Tevanian, Jr.
  180  *      Date:   1986
  181  *
  182  *      Clock primitives.
  183  */
  184 #include <cpus.h>
  185 #include <mach_pcsample.h>
  186 #include <stat_time.h>
  187 
  188 #include <mach/boolean.h>
  189 #include <mach/time_value.h>
  190 #include <mach/vm_param.h>
  191 #include <mach/vm_prot.h>
  192 
  193 #include <kern/clock.h>
  194 #include <kern/counters.h>
  195 #include <kern/cpu_number.h>
  196 #include <kern/host.h>
  197 #include <kern/lock.h>
  198 #include <kern/mach_param.h>
  199 #include <kern/mach_timer.h>
  200 #include <kern/memory.h>
  201 #include <kern/processor.h>
  202 #include <kern/quantum.h>
  203 #include <kern/sched_policy.h>
  204 #include <kern/sched_prim.h>
  205 #include <kern/thread.h>
  206 
  207 #include <vm/vm_kern.h>
  208 #include <vm/pmap.h>
  209 
  210 #include <device/clock_dev.h>
  211 #include <device/clock_status.h>
  212 
  213 #include <machine/mach_param.h> /* HZ */
  214 #include <machine/machspl.h>
  215 
  216 #if     MACH_PCSAMPLE
  217 #include <kern/pc_sample.h>
  218 #endif
  219 
  220 /*
  221  * For debugging, count the maximum number of clock ticks skipped
  222  * between two calls to timer_ast.
  223  */
  224 #define TIMER_DEBUG     0
  225 
  226 #ifdef  TIMER_DEBUG
  227 time_spec_t     timer_ast_requested_time = { 0, 0 };
  228 time_spec_t     timer_ast_max_time_missed = { 0, 0 };
  229 time_spec_t     timer_ast_total_time_missed = { 0, 0 };
  230 unsigned long   timer_ast_calls = 0;
  231 #endif
  232 
  233 /*
  234  *      Generalized clock support.
  235  */
  236 
  237 mach_clock_t    clock_list = 0;                 /* all clocks in system */
  238 
  239 /*
  240  *      Clock queue maintainance.
  241  */
  242 /*
  243  *      Insert a timer element on its clock queue, in the
  244  *      proper place.
  245  */
  246 #define timer_elt_insque(head, telt)                            \
  247     MACRO_BEGIN                                                 \
  248         timer_elt_t     next;                                   \
  249                                                                 \
  250         for (next = (timer_elt_t) queue_first(&(head)->chain);  \
  251              ;                                                  \
  252              next = (timer_elt_t) queue_next(&next->te_chain))  \
  253         {                                                       \
  254             if (!time_spec_leq(next->te_expire_time,            \
  255                               (telt)->te_expire_time))          \
  256                 break;                                          \
  257         }                                                       \
  258                                                                 \
  259         enqueue_tail_macro((queue_t) next, (queue_entry_t)(telt)); \
  260     MACRO_END
  261 
  262 /*
  263  *      Remove a timer element from its clock queue.
  264  */
  265 #define timer_elt_remque(head, telt)                            \
  266     MACRO_BEGIN                                                 \
  267         remqueue_macro(&(head)->chain, (queue_entry_t) telt);   \
  268     MACRO_END
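/*
 * The insertion macro above never needs an end-of-queue test: clock_init()
 * (below) gives the queue head an "infinite" expiration time, so the head
 * acts as a sentinel and the comparison is guaranteed to stop the walk
 * before it runs off the list.  A minimal standalone sketch of the same
 * idea on a circular singly-linked list, using simplified, hypothetical
 * types instead of the real queue_t machinery (and assuming no real
 * element ever carries the sentinel value):
 */
#define SENTINEL_EXPIRE         (~0UL)

struct s_elt {
        struct s_elt    *next;
        unsigned long   expire;         /* head holds SENTINEL_EXPIRE */
};

void sorted_insert(
        struct s_elt    *head,          /* head->expire == SENTINEL_EXPIRE */
        struct s_elt    *elt)
{
        struct s_elt    *prev = head;
        struct s_elt    *cur  = head->next;

        /*
         * No end-of-list test: if the walk wraps around to the head,
         * the sentinel value stops the loop there at the latest.
         */
        while (cur->expire <= elt->expire) {
                prev = cur;
                cur  = cur->next;
        }
        elt->next  = cur;               /* insert before first later element */
        prev->next = elt;
}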
  269 
  270 /*
  271  *      Add a timer element to its clock queue.
  272  *      if absolute, time is absolute expiration time;
  273  *      otherwise, time is interval from clock time.
  274  *      Returns FALSE if expiration time is in the past.
  275  */
  276 boolean_t timer_elt_enqueue(
  277         timer_elt_t     elt,
  278         time_spec_t     time,
  279         boolean_t       absolute)
  280 {
  281         mach_clock_t    clock;
  282         spl_t           s;
  283 
  284         s = splsched();
  285 
  286         clock = elt->te_clock;
  287         clock_queue_lock(clock);        /* locks time value */
  288 
  289         if (absolute) {
  290             elt->te_expire_time = time;
  291             elt->te_flags |= TELT_ABSOLUTE;
  292         }
  293         else {
  294             elt->te_expire_time = clock->time;
  295             time_spec_add(elt->te_expire_time, time);
  296         }
  297         if (time_spec_leq(elt->te_expire_time, clock->time)) {
  298             clock_queue_unlock(clock);
  299             splx(s);
  300             return FALSE;
  301         }
  302 
  303         timer_elt_insque(&clock->head, elt);
  304         elt->te_flags |= TELT_SET;
  305 
  306         clock_queue_unlock(clock);
  307         splx(s);
  308         return TRUE;
  309 }
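/*
 * A minimal, hypothetical sketch of arming a private one-shot timer with
 * timer_elt_enqueue().  The te_fcn, te_param and te_clock fields are filled
 * in the same way timeout() (at the end of this file) fills them; the
 * element, handler and two-second interval are invented for illustration.
 */
timer_elt_data_t        example_telt;   /* must remain allocated while set */

void example_handler(
        void            *param)
{
        /* runs from timer_service(), outside the clock-queue lock */
}

void example_arm_timer(void)
{
        time_spec_t     interval;

        example_telt.te_fcn   = example_handler;
        example_telt.te_param = (void *) 0;
        example_telt.te_clock = sys_clock;      /* the system clock, declared below */

        interval.seconds     = 2;               /* fire about two seconds from now */
        interval.nanoseconds = 0;

        if (!timer_elt_enqueue(&example_telt, interval, FALSE))
                example_handler((void *) 0);    /* expiration already in the past */
}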
  310 
  311 /*
  312  *      Remove a timer element from its clock queue.
  313  *      Return TRUE if the timer element was on the queue.
  314  */
  315 boolean_t timer_elt_dequeue(
  316         timer_elt_t     telt)
  317 {
  318         mach_clock_t    clock;
  319         spl_t           s;
  320 
  321         s = splsched();
  322 
  323         clock = telt->te_clock;
  324         clock_queue_lock(clock);
  325 
  326         if (telt->te_flags & TELT_SET) {
  327             timer_elt_remque(&clock->head, (queue_entry_t)telt);
  328             telt->te_flags &= ~(TELT_SET | TELT_ALLOC);
  329             clock_queue_unlock(clock);
  330             splx(s);
  331             return TRUE;
  332         }
  333         else {
  334             clock_queue_unlock(clock);
  335             splx(s);
  336             return FALSE;
  337         }
  338 }
  339 
  340 /*
  341  * If a clock`s time is changed, adjust the time of all relative
  342  * timers on that clock.
  343  *
  344  * Called with the clock locked.
  345  */
  346 void clock_timer_adjust(
  347         mach_clock_t    clock,
  348         time_spec_t     delta)
  349 {
  350         timer_elt_t     elt;
  351 
  352         queue_iterate(&clock->head.chain, elt, timer_elt_t, te_chain) {
  353             if ((elt->te_flags & TELT_ABSOLUTE) == 0) {
  354                 time_spec_add(elt->te_expire_time, delta);
  355             }
  356         }
  357 }
  358 
  359 /*
  360  * Service timers on one clock.
  361  */
  362 void timer_service(
  363         mach_clock_t    clock)
  364 {
  365         void            (*funct)(void *);
  366         void            *param;
  367         timer_elt_t     telt;
  368         spl_t           s;
  369 
  370         /*
  371          * Lock the clock queue.
  372          */
  373         s = splsched();
  374         clock_queue_lock(clock);
  375 
  376         /*
  377          * Handle all of the timer elements that have expired.
  378          */
  379         while (telt = (timer_elt_t) queue_first(&clock->head.chain),
  380                time_spec_leq(telt->te_expire_time, clock->time))
  381         {
  382             /*
  383              * Remove expired element from the queue.
  384              * If it is periodic, increment its expiration
  385              * time and re-add it.  Otherwise, mark it
  386              * as not on the clock queue.
  387              */
  388             timer_elt_remque(&clock->head, (queue_entry_t)telt);
  389 
  390             if (telt->te_flags & TELT_PERIODIC) {
  391                 time_spec_add(telt->te_expire_time, telt->te_period);
  392                 timer_elt_insque(&clock->head, telt);
  393             }
  394             else {
  395                 telt->te_flags = TELT_UNSET;
  396             }
  397 
  398             /*
  399              * Drop lock and interrupt protection around
  400              * call to timeout routine.  Save the function
  401              * and parameter; the timer element is not
  402              * accessible once the lock is dropped.
  403              */
  404             funct = telt->te_fcn;
  405             param = telt->te_param;
  406 
  407             clock_queue_unlock(clock);
  408             splx(s);
  409 
  410             (*funct)(param);
  411 
  412             s = splsched();
  413             clock_queue_lock(clock);
  414         }
  415 
  416         clock_queue_unlock(clock);
  417         splx(s);
  418 }
  419 
  420 /*
  421  *      Set a new period for the clock.
  422  *      Called with clock locked, at splsched.
  423  */
  424 void set_new_clock_period(
  425         mach_clock_t    clock)
  426 {
  427         clock->resolution = clock->new_resolution;
  428         clock->skew       = clock->new_skew;
  429 
  430         clock->new_resolution = 0;
   431         clock->new_skew = 0;
  432 
  433         /*
  434          *      Set the hardware clock.
  435          */
  436         (*clock->ops->set_resolution)(clock);
  437 }
  438 
  439 /*
  440  *      Initialize a clock.
  441  */
  442 void clock_init(
  443         mach_clock_t    clock,
  444         struct clock_ops *ops)
  445 {
  446         clock->time.seconds     = 0;
  447         clock->time.nanoseconds = 0;
  448         clock->check_seconds    = 0;
  449         clock->resolution       = 0;
  450         clock->skew             = 0;
  451         clock->correction_delta = 0;
  452         clock->correction_count = 0;
  453         queue_init(&clock->head.chain);
  454         time_spec_set_infinite(clock->head.expire_time);
  455                                 /* sentinel at end of timer list */
  456         simple_lock_init(&clock->queue_lock);
  457         clock->new_resolution   = 0;
  458         clock->new_skew         = 0;
  459         clock->mtime            = 0;
  460         clock->ops              = ops;
  461 
  462         /*
  463          *      Add the clock to the list.
  464          *      No locking - we find all clocks at system startup,
  465          *      and never remove them.
  466          */
  467         clock->next = clock_list;
  468         clock_list = clock;
  469 }
  470 
  471 /*
  472  *      Clock interrupt for standard clock (not system clock).
  473  *      Update the clock time, and check for pending timeouts.
  474  */
  475 void clock_interrupt(
  476         mach_clock_t    clock)
  477 {
  478         spl_t   s;
  479 
  480         s = splsched();
  481         clock_queue_lock(clock);
  482 
  483         /*
  484          *      If seconds do not change, only update nanoseconds.
  485          *      Otherwise, update check_seconds first so that
  486          *      clock can be read without locking.
  487          */
  488     {
  489         register unsigned int   nsec, sec;
  490 
  491         nsec = clock->time.nanoseconds +
  492                         (clock->resolution + clock->correction_delta);
  493         if (nsec < NANOSEC_PER_SEC) {
  494             clock->time.nanoseconds = nsec;
  495         }
  496         else {
  497             nsec -= NANOSEC_PER_SEC;
  498             sec   = clock->time.seconds + 1;
  499 
  500             clock->check_seconds = sec;
  501             clock->time.nanoseconds = nsec;
  502             clock->time.seconds = sec;
  503         }
  504         if (clock->correction_count != 0) {
  505             if (--clock->correction_count == 0)
  506                 clock->correction_delta = 0;
  507         }
  508     }
  509 
  510         /*
  511          *      Request a timer AST if timeouts are pending.
  512          */
  513     {
  514         register timer_elt_t    telt;
  515 
  516         telt = (timer_elt_t)queue_first(&clock->head.chain);
  517         if (time_spec_leq(telt->te_expire_time, clock->time)) {
  518             int         my_cpu = cpu_number();
  519             ast_on(my_cpu, AST_TIMER);
  520 #ifdef  TIMER_DEBUG
  521             timer_ast_requested_time = sys_clock->time;
  522 #endif
  523         }
  524     }
  525 
  526         /*
  527          *      Update mapped time.
  528          */
  529         clock_set_mtime(clock);
  530 
  531         /*
  532          *      Change the clock resolution here if
  533          *      a change was requested.
  534          */
  535         if (clock->new_resolution != 0)
  536             set_new_clock_period(clock);
  537 
  538         clock_queue_unlock(clock);
  539         splx(s);
  540 }
  541 
  542 /*
  543  *      Run timeout code.
  544  *      Called from AST level.
  545  */
  546 void timer_ast(void)
  547 {
  548         /*
  549          *      Handle timeouts.
  550          */
  551         mach_clock_t    clock;
  552 
  553 #ifdef  TIMER_DEBUG
  554         /*
  555          * Find the maximum number of ticks lost.
  556          */
  557         {
  558             spl_t       s;
  559             time_spec_t diff;
  560 
  561             s = splsched();
  562             diff = sys_clock->time;
  563             time_spec_subtract(diff, timer_ast_requested_time);
  564             if (!time_spec_leq(diff, timer_ast_max_time_missed))
  565                 timer_ast_max_time_missed = diff;
  566             timer_ast_calls++;
  567             time_spec_add(timer_ast_total_time_missed, diff);
  568             splx(s);
  569         }
  570 #endif
  571         
  572         for (clock = clock_list; clock; clock = clock->next)
  573             timer_service(clock);
  574 }
  575 
  576 
  577 /*
  578  ****************************************************************
  579  *                                                              *
  580  *      System clock                                            *
  581  *                                                              *
  582  ****************************************************************
  583  */
  584 mach_clock_t    sys_clock = 0;          /* machine-dependent code finds it */
  585 
  586 /*
  587  *      This update protocol, with a check value, allows
  588  *              do {
  589  *                      secs = mtime->seconds;
  590  *                      usecs = mtime->microseconds;
  591  *              } while (secs != mtime->check_seconds);
  592  *      to read the time correctly.  (On a multiprocessor this assumes
  593  *      that processors see each other's writes in the correct order.
  594  *      We may have to insert fence operations.)
  595  */
  596 
  597 mapped_time_value_t *mtime = 0;
  598 
  599 #define update_mapped_time(clock)                                       \
  600 MACRO_BEGIN                                                             \
  601         if (mtime != 0) {                                               \
  602             time_spec_t cur_time;                                       \
  603             clock_read(cur_time, (clock));                              \
  604             mtime->check_seconds = cur_time.seconds;                    \
  605             mtime->microseconds = cur_time.nanoseconds / 1000;          \
  606             mtime->seconds = cur_time.seconds;                          \
  607         }                                                               \
  608 MACRO_END
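/*
 * A minimal sketch of the reader side of the protocol described above:
 * code that has the time page mapped (see timemmap() below) re-reads until
 * seconds agrees with check_seconds, since update_mapped_time() stores
 * check_seconds first and seconds last.  The pointer argument is
 * hypothetical; it would reference the page set up by timeopen().
 */
void read_mapped_time(
        volatile mapped_time_value_t    *mapped,        /* hypothetical mapping */
        time_value_t                    *tv)            /* OUT */
{
        do {
            tv->seconds      = mapped->seconds;
            tv->microseconds = mapped->microseconds;
        } while (tv->seconds != mapped->check_seconds);
}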
  609 
  610 
  611 /*
  612  *      Handle clock interrupt for system clock.
  613  *
  614  *      This clock maintains the thread run times (if
  615  *      statistical timing is in use) and the quantum
  616  *      for the currently running thread, as well as
  617  *      the seconds/microseconds clock used by old code.
  618  *      It also runs the system timeout list.
  619  *
  620  *      If there are multiple CPUS, this interrupt handler
  621  *      must be called on each CPU at the same clock rate.
  622  */
  623 
  624 void sys_clock_interrupt(
  625         boolean_t       usermode)       /* executing user code */
  626 {
  627         register int            my_cpu = cpu_number();
  628         register thread_t       thread = current_thread();
  629 
  630         counter(c_clock_ticks++);
  631         counter(c_threads_total += c_threads_current);
  632         counter(c_stacks_total += c_stacks_current);
  633 
  634 #if     STAT_TIME
  635         /*
  636          *      Increment the thread time, if using
  637          *      statistical timing.
  638          */
  639         if (usermode) {
  640             timer_bump(&thread->user_timer, sys_clock->resolution);
  641         }
  642         else {
  643             timer_bump(&thread->system_timer, sys_clock->resolution);
  644         }
  645 #endif  /* STAT_TIME */
  646 
  647         /*
  648          *      Adjust the thread`s priority and check for
  649          *      quantum expiration.
  650          */
  651 
  652         /*
  653          *      We assume that the clock interrupts no more
   654          *      frequently than once per microsecond...
  655          */
  656 
  657         clock_quantum_update(thread, my_cpu, sys_clock->resolution >> 10);
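        /*
         * The shift by 10 is presumably a cheap nanoseconds-to-microseconds
         * conversion (dividing by 1024 rather than 1000): a 10 msec tick,
         * resolution = 10,000,000 nsec, becomes 9765, roughly 10,000 usec.
         */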
  658 
  659 #if     MACH_PCSAMPLE > 0
  660         /*
  661          * Take a sample of pc for the user if required.
  662          * This had better be MP safe.  It might be interesting
  663          * to keep track of cpu in the sample.
  664          */
  665         if (usermode)
  666             take_pc_sample_macro(thread, SAMPLED_PC_PERIODIC);
  667 
  668 #endif  /* MACH_PCSAMPLE > 0 */
  669 
  670         /*
  671          *      Time-of-day and time-out list are updated only
  672          *      on the master CPU.
  673          */
  674         if (my_cpu != master_cpu) {
  675             return;
  676         }
  677 
  678         /*
  679          *      Update the time and handle timeouts.
  680          */
  681         clock_interrupt(sys_clock);
  682 
  683         /*
  684          *      Set the old time-of-day clocks
  685          *      (seconds/microseconds)
  686          */
  687 
  688         update_mapped_time(sys_clock);
  689 }
  690 
  691 /*
  692  *      Allow clock interrupts on the current CPU.
  693  */
  694 void enable_clock_interrupts(void)
  695 {
  696         (*sys_clock->ops->enable_interrupts)(sys_clock);
  697 }
  698 
  699 /*
  700  * Read the time.
  701  */
  702 kern_return_t
  703 host_get_time(
  704         host_t          host,
  705         time_value_t    *current_time)  /* OUT */
  706 {
  707         time_spec_t temp;
  708 
  709         if (host == HOST_NULL)
  710             return KERN_INVALID_HOST;
  711 
  712         clock_read(temp, sys_clock);
  713 
  714         current_time->seconds = temp.seconds;
  715         current_time->microseconds = temp.nanoseconds / 1000;
  716 
  717         return KERN_SUCCESS;
  718 }
  719 
  720 /*
  721  * Set the time.  Only available to privileged users.
  722  */
  723 kern_return_t
  724 host_set_time(
  725         host_t          host,
  726         time_value_t    new_time)
  727 {
  728         time_spec_t     temp;
  729 
  730         if (host == HOST_NULL)
  731             return KERN_INVALID_HOST;
  732 
  733         temp.seconds = new_time.seconds;
  734         temp.nanoseconds = new_time.microseconds * 1000;
  735 
  736         (void) clock_setstat(sys_clock,
  737                              CLOCK_TIME,
  738                              (dev_status_t) &temp,
  739                              CLOCK_TIME_COUNT);
  740 
  741         return KERN_SUCCESS;
  742 }
  743 
  744 /*
  745  *      Adjust-time parameters
  746  */
  747 
  748 #if     HZ > 500
  749 int             tickadj = 1;            /* can adjust HZ usecs per second */
  750 #else
  751 int             tickadj = 500 / HZ;     /* can adjust 500 usecs per second */
  752 #endif
  753 int             bigadj = 1000000;       /* adjust 10*tickadj if adjustment
  754                                            > bigadj */
  755 
  756 /*
  757  * Adjust the time gradually.
  758  */
  759 kern_return_t
  760 host_adjust_time(
  761         host_t          host,
  762         time_value_t    new_adjustment,
  763         time_value_t    *old_adjustment)        /* OUT */
  764 {
  765         int             ndelta, odelta, tickdelta;
  766         clock_correction_data_t old_correction, new_correction;
  767         clock_resolution_data_t sys_clock_resolution;
  768         natural_t       count;
  769 
  770         if (host == HOST_NULL)
  771             return KERN_INVALID_HOST;
  772 
  773         count = CLOCK_RESOLUTION_COUNT;
  774         (void) clock_getstat(sys_clock,
  775                              CLOCK_RESOLUTION,
  776                              (dev_status_t) &sys_clock_resolution,
  777                              &count);
  778 
  779         ndelta = new_adjustment.seconds * 1000000
  780                 + new_adjustment.microseconds;
  781 
  782         if (ndelta > bigadj)
  783             tickdelta = 10 * tickadj;
  784         else
  785             tickdelta = tickadj;
  786 
  787         new_correction.delta = tickdelta * 1000;        /* usec -> nsec */
  788         new_correction.count = ndelta / sys_clock_resolution.resolution;
  789 
  790         count = CLOCK_CORRECTION_COUNT;
  791         (void) clock_getstat(sys_clock,
  792                              CLOCK_CORRECTION,
  793                              (dev_status_t) &old_correction,
  794                              &count);
  795         count = CLOCK_CORRECTION_COUNT;
  796         (void) clock_setstat(sys_clock,
  797                              CLOCK_CORRECTION,
  798                              (dev_status_t) &new_correction,
  799                              count);
  800 
  801         odelta = (old_correction.delta / 1000) * old_correction.count;
  802                         /* nanos -> micros */
  803         old_adjustment->seconds = odelta / 1000000;
  804         old_adjustment->microseconds = odelta % 1000000;
  805 
  806         return KERN_SUCCESS;
  807 }
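/*
 * A worked example of the arithmetic above, assuming HZ = 100 (so that
 * tickadj = 500 / HZ = 5 usec):
 *
 *      requested adjustment = +2 seconds  =>  ndelta = 2,000,000 usec
 *      ndelta > bigadj (1,000,000 usec)   =>  tickdelta = 10 * tickadj = 50 usec
 *      new_correction.delta = 50,000 nsec added to the clock at every tick
 *
 * At 100 ticks per second the clock is slewed by about 5 msec per second,
 * ten times the 500 usec-per-second rate used for adjustments below bigadj.
 */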
  808 
  809 int timeopen()
  810 {
  811         kern_return_t   kr;
  812         vm_offset_t     temp;
  813         kr = kmem_alloc_wired(kernel_map,
  814                              (vm_offset_t *) &temp,
  815                              PAGE_SIZE);
  816         if (kr != KERN_SUCCESS)
  817             return kr;
  818         bzero((void *) temp, PAGE_SIZE);
  819         mtime = (mapped_time_value_t *) temp;
  820         update_mapped_time(sys_clock);
  821 
  822         return 0;
  823 }
  824 
  825 int timeclose()
  826 {
  827         vm_offset_t     temp;
  828 
  829         temp = (vm_offset_t) mtime;
  830         mtime = 0;
  831 
  832         (void) kmem_free(kernel_map,
  833                          (vm_offset_t) temp,
  834                          PAGE_SIZE);
  835         return 0;
  836 }
  837 
  838 vm_offset_t timemmap(
  839         int     dev,
  840         vm_offset_t offset,
  841         vm_prot_t prot)
  842 {
  843         if ((prot & VM_PROT_WRITE) != 0 || offset != 0)
  844             return -1;          /* not valid */
  845 
  846         return pmap_extract(pmap_kernel(), (vm_offset_t) mtime);
  847 }
  848 
  849 /*
  850  *      Compatibility timeouts for device drivers.
  851  *      New code should use set_timeout/reset_timeout and private timers.
   852  *      This code can't use a zone to allocate timers, because
  853  *      it can be called from interrupt handlers.
  854  */
  855 
  856 #define NTIMERS         20
  857 
  858 timer_elt_data_t timeout_timers[NTIMERS];
  859 
  860 int             hz = HZ;                /* number of 'ticks' per second -
  861                                            only used for setting timeout
  862                                            interval for timeout() */
  863 
  864 /*
  865  *      Set timeout.
  866  *
  867  *      fcn:            function to call
  868  *      param:          parameter to pass to function
  869  *      interval:       timeout interval, in hz.
  870  */
  871 void timeout(
  872         void    (*fcn)(void * param),
  873         void *  param,
  874         int     interval)
  875 {
  876         spl_t   s;
  877         register timer_elt_t elt;
  878         time_spec_t     expire_time;
  879 
  880         s = splsched();
  881         clock_queue_lock(sys_clock);
  882         for (elt = &timeout_timers[0]; elt < &timeout_timers[NTIMERS]; elt++)
  883             if (elt->te_flags == TELT_UNSET)
  884                 break;
  885         if (elt == &timeout_timers[NTIMERS])
  886             panic("timeout");
  887 
  888         elt->te_fcn   = fcn;
  889         elt->te_param = param;
  890         elt->te_flags = TELT_ALLOC;
  891         elt->te_clock = sys_clock;
  892         clock_queue_unlock(sys_clock);
  893         splx(s);
  894 
  895         expire_time.seconds = interval / hz;
  896         expire_time.nanoseconds =
  897                          (interval % hz) * (NANOSEC_PER_SEC / hz);
  898 
  899         (void) timer_elt_enqueue(elt, expire_time, FALSE);
  900                         /* relative to clock */
  901 }
  902 
  903 /*
  904  * Returns a boolean indicating whether the timeout element was found
  905  * and removed.
  906  */
  907 boolean_t untimeout(
  908         register void   (*fcn)(void *),
  909         register void * param)
  910 {
  911         spl_t   s;
  912         register timer_elt_t elt;
  913 
  914         s = splsched();
  915         clock_queue_lock(sys_clock);
  916         queue_iterate(&sys_clock->head.chain, elt, timer_elt_t, te_chain) {
  917 
  918             if (fcn == elt->te_fcn && param == elt->te_param) {
  919                 /*
  920                  *      Found it.
  921                  */
  922                 remqueue(&sys_clock->head.chain, (queue_entry_t)elt);
  923                 elt->te_flags = TELT_UNSET;
  924 
  925                 clock_queue_unlock(sys_clock);
  926                 splx(s);
  927                 return TRUE;
  928             }
  929         }
  930         clock_queue_unlock(sys_clock);
  931         splx(s);
  932         return FALSE;
  933 }
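/*
 * A minimal, hypothetical driver-style use of the compatibility interface
 * above.  With hz = 100, an interval of 150 ticks becomes 1 second plus
 * 50 * (NANOSEC_PER_SEC / 100) nanoseconds, so the callout fires roughly
 * 1.5 seconds later.  The softc structure and handlers are invented for
 * illustration only.
 */
struct example_softc {
        int     watchdog_armed;
};

void example_watchdog(
        void    *param)
{
        struct example_softc *sc = (struct example_softc *) param;

        sc->watchdog_armed = 0;
        /* ... reset the (hypothetical) device here ... */
}

void example_start(
        struct example_softc *sc)
{
        timeout(example_watchdog, sc, 150);     /* about 1.5 seconds at hz = 100 */
        sc->watchdog_armed = 1;
}

void example_done(
        struct example_softc *sc)
{
        /* untimeout() returns FALSE if the callout has already run. */
        if (sc->watchdog_armed && untimeout(example_watchdog, sc))
            sc->watchdog_armed = 0;
}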

This page is part of the FreeBSD/Linux Kernel Cross-Reference, and was automatically generated using a modified version of the LXR engine.