FreeBSD/Linux Kernel Cross Reference
sys/norma/ipc_input.c


    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1991 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        ipc_input.c,v $
   29  * Revision 2.14  93/05/15  19:34:02  mrt
   30  *      machparam.h -> machspl.h
   31  * 
   32  * Revision 2.13  92/03/10  16:27:30  jsb
   33  *      Merged in norma branch changes as of NORMA_MK7.
   34  *      [92/03/09  12:49:08  jsb]
   35  * 
   36  * Revision 2.11.2.4  92/02/21  11:24:15  jsb
   37  *      Store msgh_id in global variable in norma_ipc_finish_receiving,
   38  *      for debugging purposes.
   39  *      [92/02/20  10:32:54  jsb]
   40  * 
   41  * Revision 2.11.2.3  92/01/21  21:51:09  jsb
   42  *      Removed global_msgh_id.
   43  *      [92/01/17  14:35:50  jsb]
   44  * 
   45  *      More de-linting.
   46  *      [92/01/17  11:39:24  jsb]
   47  * 
   48  *      More de-linting.
   49  *      [92/01/16  22:10:41  jsb]
   50  * 
   51  *      De-linted.
   52  *      [92/01/13  10:15:12  jsb]
   53  * 
   54  *      Fix from dlb to increment receiver->ip_seqno in thread_go case.
   55  *      [92/01/11  17:41:46  jsb]
   56  * 
   57  *      Moved netipc_ack status demultiplexing here.
   58  *      [92/01/11  17:08:19  jsb]
   59  * 
   60  * Revision 2.11.2.2  92/01/09  18:45:18  jsb
   61  *      Turned off copy object continuation debugging.
   62  *      [92/01/09  15:37:46  jsb]
   63  * 
   64  *      Added support for copy object continuations.
   65  *      [92/01/09  13:18:50  jsb]
   66  * 
   67  *      Replaced spls with netipc_thread_{lock,unlock}.
   68  *      [92/01/08  10:14:14  jsb]
   69  * 
   70  *      Made out-of-line ports work.
   71  *      [92/01/05  17:51:28  jsb]
   72  * 
   73  *      Parameter copy_npages replaced by page_last in norma_deliver_page.
   74  *      Removed continuation panic since continuations are coming soon.
   75  *      [92/01/05  15:58:34  jsb]
   76  * 
   77  * Revision 2.11.2.1  92/01/03  16:37:18  jsb
   78  *      Replaced norma_ipc_ack_failure with norma_ipc_ack_{dead,not_found}.
   79  *      [91/12/29  16:01:41  jsb]
   80  * 
   81  *      Added type parameter to norma_ipc_receive_migrating_dest.
   82  *      Added debugging code to remember msgh_id when creating proxies.
   83  *      [91/12/28  18:07:18  jsb]
   84  * 
   85  *      Pass remote node via kmsg->ikm_source_node to norma_ipc_receive_port
   86  *      on its way to norma_ipc_receive_rright. Now that we have a real
   87  *      ikm_source_node kmsg field, we can get rid of the ikm_remote hack.
   88  *      [91/12/27  21:37:36  jsb]
   89  * 
   90  *      Removed unused msgid (not msgh_id) parameters.
   91  *      [91/12/27  17:08:39  jsb]
   92  * 
   93  *      Queue migrated messages on atrium port.
   94  *      [91/12/26  20:37:49  jsb]
   95  * 
   96  *      Moved translation of local port to norma_receive_complex_ports.
   97  *      Moved norma_receive_complex_ports call to norma_ipc_finish_receiving.
   98  *      Added code for MACH_MSGH_BITS_MIGRATED, including call to new routine
   99  *      norma_ipc_receive_migrating_dest. 
  100  *      [91/12/25  16:54:50  jsb]
  101  * 
  102  *      Made large kmsgs work correctly. Corrected log.
  103  *      Added check for null local port in norma_deliver_kmsg.
  104  *      [91/12/24  14:33:18  jsb]
  105  * 
  106  * Revision 2.11  91/12/15  17:31:06  jsb
  107  *      Almost made large kmsgs work... now it leaks but does not crash.
  108  *      Changed debugging printfs.
  109  * 
  110  * Revision 2.10  91/12/15  10:47:09  jsb
  111  *      Added norma_ipc_finish_receiving to support large in-line msgs.
  112  *      Small clean-up of norma_deliver_page.
  113  * 
  114  * Revision 2.9  91/12/14  14:34:11  jsb
  115  *      Removed private assert definition.
  116  * 
  117  * Revision 2.8  91/12/13  13:55:01  jsb
  118  *      Fixed check for end of last copy object in norma_deliver_page.
  119  *      Moved norma_ipc_ack_xxx calls to safer places.
  120  * 
  121  * Revision 2.7  91/12/10  13:26:00  jsb
  122  *      Added support for moving receive rights.
  123  *      Use norma_ipc_ack_* upcalls (downcalls?) instead of return values
  124  *      from norma_deliver_kmsg and _page.
  125  *      Merged dlb check for continuation-needing copy objects in
  126  *      norma_deliver_page.
  127  *      Added (untested) support for multiple copy objects per message.
  128  *      [91/12/10  11:26:32  jsb]
  129  * 
  130  * Revision 2.6  91/11/19  09:40:50  rvb
  131  *      Added new_remote argument to norma_deliver_kmsg to support
  132  *      migrating receive rights.
  133  *      [91/11/00            jsb]
  134  * 
  135  * Revision 2.5  91/11/14  16:51:39  rpd
  136  *      Replaced norma_ipc_get_proxy with norma_ipc_receive_{port,dest}.
  137  *      Added check that destination port can accept message.
  138  *      Added checks on type of received rights.
  139  *      [91/09/19  13:51:21  jsb]
  140  * 
  141  * Revision 2.4  91/08/28  11:16:00  jsb
  142  *      Mark received pages as dirty and not busy.
  143  *      Initialize copy->cpy_cont and copy->cpy_cont_args.
  144  *      [91/08/16  10:44:19  jsb]
  145  * 
  146  *      Fixed reference to norma_ipc_kobject_send.
  147  *      [91/08/15  08:42:23  jsb]
  148  * 
  149  *      Renamed clport things to norma things.
  150  *      [91/08/14  21:34:13  jsb]
  151  * 
  152  *      Fixed norma_ipc_handoff code.
  153  *      Added splon/sploff redefinition hack.
  154  *      [91/08/14  19:11:07  jsb]
  155  * 
  156  * Revision 2.3  91/08/03  18:19:19  jsb
  157  *      Use MACH_MSGH_BITS_COMPLEX_DATA instead of null msgid to determine
  158  *      whether data follows kmsg.
  159  *      [91/08/01  21:57:37  jsb]
  160  * 
  161  *      Eliminated remaining old-style page list code.
  162  *      Cleaned up and corrected clport_deliver_page.
  163  *      [91/07/27  18:47:08  jsb]
  164  * 
  165  *      Moved MACH_MSGH_BITS_COMPLEX_{PORTS,DATA} to mach/message.h.
  166  *      [91/07/04  13:10:48  jsb]
  167  * 
  168  *      Use vm_map_copy_t's instead of old style page_lists.
  169  *      Still need to eliminate local conversion between formats.
  170  *      [91/07/04  10:18:11  jsb]
  171  * 
  172  * Revision 2.2  91/06/17  15:47:41  jsb
  173  *      Moved here from ipc/ipc_clinput.c.
  174  *      [91/06/17  11:02:28  jsb]
  175  * 
  176  * Revision 2.2  91/06/06  17:05:18  jsb
  177  *      Fixed copyright.
  178  *      [91/05/24  13:18:23  jsb]
  179  * 
  180  *      First checkin.
  181  *      [91/05/14  13:29:10  jsb]
  182  * 
  183  */
  184 /*
  185  *      File:   norma/ipc_input.c
  186  *      Author: Joseph S. Barrera III
  187  *      Date:   1991
  188  *
  189  *      Functions to support ipc between nodes in a single Mach cluster.
  190  */
  191 
  192 #include <machine/machspl.h>
  193 #include <vm/vm_kern.h>
  194 #include <vm/vm_page.h>
  195 #include <mach/vm_param.h>
  196 #include <mach/port.h>
  197 #include <mach/message.h>
  198 #include <kern/assert.h>
  199 #include <kern/host.h>
  200 #include <kern/sched_prim.h>
  201 #include <kern/ipc_sched.h>
  202 #include <kern/ipc_kobject.h>
  203 #include <kern/zalloc.h>
  204 #include <ipc/ipc_mqueue.h>
  205 #include <ipc/ipc_thread.h>
  206 #include <ipc/ipc_kmsg.h>
  207 #include <ipc/ipc_port.h>
  208 #include <ipc/ipc_pset.h>
  209 #include <ipc/ipc_space.h>
  210 #include <ipc/ipc_marequest.h>
  211 #include <norma/ipc_net.h>
  212 
  213 extern zone_t vm_map_copy_zone;
  214 
  215 extern vm_map_copy_t netipc_copy_grab();
  216 extern void norma_ipc_kobject_send();
  217 
  218 extern ipc_mqueue_t     norma_ipc_handoff_mqueue;
  219 extern ipc_kmsg_t       norma_ipc_handoff_msg;
  220 extern mach_msg_size_t  norma_ipc_handoff_max_size;
  221 extern mach_msg_size_t  norma_ipc_handoff_msg_size;
  222 
  223 extern ipc_port_t       norma_ipc_receive_port();
  224 
  225 ipc_kmsg_t norma_kmsg_complete;
  226 ipc_kmsg_t norma_kmsg_incomplete;
  227 
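      /*
       * Instrumentation: counters for the kmsg handoff fast path and the
       * netipc AST path (see norma_handoff_kmsg and netipc_ast below).
       */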
  228 int jc_handoff_fasthits = 0;
  229 int jc_handoff_hits = 0;
  230 int jc_handoff_misses = 0;
  231 int jc_handoff_m2 = 0;          /* XXX very rare (0.1 %) */
  232 int jc_handoff_m3 = 0;
  233 int jc_handoff_m4 = 0;
  234 int jc_netipc_ast = 0;
  235 
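      /*
       * COMPLEX_DATA marks a kmsg whose data arrived in separate pages and
       * must still be assembled into one contiguous kmsg; COMPLEX_PORTS
       * marks a kmsg carrying port rights that still need translation into
       * node-local ports.  Either bit means more work below.
       */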
  236 #define MACH_MSGH_BITS_COMPLEX_ANYTHING \
  237         (MACH_MSGH_BITS_COMPLEX_DATA | MACH_MSGH_BITS_COMPLEX_PORTS)
  238 
  239 /*
  240  * Called from a thread context, by the receiving thread.
  241  * May replace kmsg with new kmsg.
  242  *
  243  * (What if the message stays on the queue forever, hogging resources?)
  244  *
  245  * The only places Rich and I can think of where messages are received are:
  246  *      after calling ipc_mqueue_receive
  247  *      in exception handling path
  248  *      in kobject server
  249  */
  250 int input_msgh_id = 0;
  251 norma_ipc_finish_receiving(kmsgp)
  252         ipc_kmsg_t *kmsgp;
  253 {
  254         mach_msg_header_t *msgh;
  255         mach_msg_bits_t mbits;
  256 
  257         /*
  258          * Common case: not a norma message.
  259          */
  260         if ((*kmsgp)->ikm_size != IKM_SIZE_NORMA) {
  261                 return;
  262         }
  263 
  264         /*
  265          * Translate local port, if one exists.
  266          */
  267         msgh = &(*kmsgp)->ikm_header;
  268         input_msgh_id = msgh->msgh_id;
  269         mbits = msgh->msgh_bits;
  270         if (msgh->msgh_local_port) {
  271                 /*
  272                  * We could call the correct form directly,
  273                  * eliminating the need to pass ikm_source_node.
  274                  */
  275                 assert(MACH_MSGH_BITS_LOCAL(mbits) !=
  276                        MACH_MSG_TYPE_PORT_RECEIVE);
  277                 msgh->msgh_local_port = (mach_port_t)
  278                     norma_ipc_receive_port((unsigned long)
  279                                            msgh->msgh_local_port,
  280                                            MACH_MSGH_BITS_LOCAL(mbits),
  281                                            (*kmsgp)->ikm_source_node);
  282         }
  283 
  284         /*
  285          * Common case: nothing left to do.
  286          */
  287         if ((mbits & MACH_MSGH_BITS_COMPLEX_ANYTHING) == 0) {
  288                 return;
  289         }
  290 
  291         /*
  292          * Do we need to assemble a large message?
  293          */
  294         if (mbits & MACH_MSGH_BITS_COMPLEX_DATA) {
  295                 norma_ipc_receive_complex_data(kmsgp);
  296         }
  297 
  298         /*
  299          * Do we need to process some ports?
  300          *
  301          * XXX local port handling should always be done here
  302          */
  303         if (mbits & MACH_MSGH_BITS_COMPLEX_PORTS) {
  304                 norma_ipc_receive_complex_ports(*kmsgp);
  305         }
  306 }
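      /*
       * Usage sketch (not from this file; argument lists abbreviated):
       * per the comment above, a receiving thread invokes this after
       * ipc_mqueue_receive returns, so that a norma kmsg is contiguous
       * and its ports are node-local before the message is copied out:
       *
       *      mr = ipc_mqueue_receive(..., &kmsg, ...);
       *      if (mr == MACH_MSG_SUCCESS)
       *              norma_ipc_finish_receiving(&kmsg);
       */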
  307 
  308 /*
  309  * Replace fragmented kmsg with contiguous kmsg.
  310  */
  311 norma_ipc_receive_complex_data(kmsgp)
  312         ipc_kmsg_t *kmsgp;
  313 {
  314         ipc_kmsg_t old_kmsg = *kmsgp, kmsg;
  315         vm_map_copy_t copy;
  316         int i;
  317 
  318         /*
  319          * Assemble kmsg pages into one large kmsg.
  320          *
  321          * XXX
  322          * For now, we do so by copying the pages.
  323          * We could remap the kmsg instead.
  324          */
  325         kmsg = ikm_alloc(old_kmsg->ikm_header.msgh_size);
  326         if (kmsg == IKM_NULL) {
  327                 panic("norma_ipc_receive_complex_data: ikm_alloc\n");
  328                 return;
  329         }
  330 
  331         /*
  332          * Copy and deallocate the first page.
  333          */
  334         assert(old_kmsg->ikm_size == IKM_SIZE_NORMA);
  335         assert(old_kmsg->ikm_header.msgh_size + IKM_OVERHEAD > PAGE_SIZE);
  336         bcopy((char *) old_kmsg, (char *) kmsg, (int) PAGE_SIZE);
  337         norma_kmsg_put(old_kmsg);
  338         ikm_init(kmsg, kmsg->ikm_header.msgh_size);
  339         kmsg->ikm_header.msgh_bits &= ~MACH_MSGH_BITS_COMPLEX_DATA;
  340 
  341         /*
  342          * Copy the other pages.
  343          */
  344         copy = kmsg->ikm_copy;
  345         for (i = 0; i < copy->cpy_npages; i++) {
  346                 int length;
  347                 vm_page_t m;
  348                 char *page;
  349 
  350                 m = copy->cpy_page_list[i];
  351                 if (i == copy->cpy_npages - 1) {
  352                         length = copy->size - i * PAGE_SIZE;
  353                 } else {
  354                         length = PAGE_SIZE;
  355                 }
  356                 assert(length <= PAGE_SIZE);
  357                 assert((i+1) * PAGE_SIZE + length <=
  358                        ikm_plus_overhead(kmsg->ikm_header.msgh_size));
  359                 page = (char *) phystokv(m->phys_addr);
  360                 bcopy((char *) page, (char *) kmsg + (i+1) * PAGE_SIZE,
  361                       (int) length);
  362         }
  363 
  364         /*
  365          * Deallocate pages; release copy object.
  366          */
  367         netipc_thread_lock();
  368         for (i = 0; i < copy->cpy_npages; i++) {
  369                 netipc_page_put(copy->cpy_page_list[i]);
  370         }
  371         netipc_copy_ungrab(copy);
  372         netipc_thread_unlock();
  373 
  374         /*
  375          * Return kmsg.
  376          */
  377         *kmsgp = kmsg;
  378 }
  379 
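      /*
       * Flatten a page-list copy object into contiguous kalloc'd memory,
       * releasing the pages.  Used below for out-of-line port arrays,
       * which must be addressable in place so their entries can be
       * translated.
       */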
  380 vm_offset_t
  381 copy_to_kalloc(copy)
  382         vm_map_copy_t copy;
  383 {
  384         vm_offset_t k;
  385         int i;
  386 
  387         k = kalloc(copy->size);
  388         assert(k);
  389 
  390         /*
  391          * Copy the pages.
  392          */
  393         for (i = 0; i < copy->cpy_npages; i++) {
  394                 int length;
  395                 vm_page_t m;
  396                 char *page;
  397 
  398                 m = copy->cpy_page_list[i];
  399                 if (i == copy->cpy_npages - 1) {
  400                         length = copy->size - i * PAGE_SIZE;
  401                 } else {
  402                         length = PAGE_SIZE;
  403                 }
  404                 assert(length <= PAGE_SIZE);
  405                 page = (char *) phystokv(m->phys_addr);
  406                 bcopy((char *) page, (char *) k + i * PAGE_SIZE, (int) length);
  407         }
  408 
  409         /*
  410          * Deallocate pages; release copy object.
  411          */
  412         netipc_thread_lock();
  413         for (i = 0; i < copy->cpy_npages; i++) {
  414                 netipc_page_put(copy->cpy_page_list[i]);
  415         }
  416         netipc_copy_ungrab(copy);
  417         netipc_thread_unlock();
  418 
  419         return k;
  420 }
  421 
  422 /*
  423  * Translate ports. Don't do anything with data.
  424  */
  425 norma_ipc_receive_complex_ports(kmsg)
  426         ipc_kmsg_t kmsg;
  427 {
  428         mach_msg_header_t *msgh = &kmsg->ikm_header;
  429         vm_offset_t saddr = (vm_offset_t) (msgh + 1);
  430         vm_offset_t eaddr = (vm_offset_t) msgh + msgh->msgh_size;
  431 
  432         msgh->msgh_bits &= ~MACH_MSGH_BITS_COMPLEX_PORTS;
  433         while (saddr < eaddr) {
  434                 mach_msg_type_long_t *type;
  435                 mach_msg_type_size_t size;
  436                 mach_msg_type_number_t number;
  437                 boolean_t is_inline, longform;
  438                 mach_msg_type_name_t type_name;
  439                 vm_size_t length;
  440 
  441                 type = (mach_msg_type_long_t *) saddr;
  442                 is_inline = type->msgtl_header.msgt_inline;
  443                 longform = type->msgtl_header.msgt_longform;
  444                 if (longform) {
  445                         type_name = type->msgtl_name;
  446                         size = type->msgtl_size;
  447                         number = type->msgtl_number;
  448                         saddr += sizeof(mach_msg_type_long_t);
  449                 } else {
  450                         type_name = type->msgtl_header.msgt_name;
  451                         size = type->msgtl_header.msgt_size;
  452                         number = type->msgtl_header.msgt_number;
  453                         saddr += sizeof(mach_msg_type_t);
  454                 }
  455 
  456                 /* calculate length of data in bytes, rounding up */
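                      /* (msgt_size counts bits: e.g. 3 ports of 32 bits each
                          give (3 * 32 + 7) >> 3 = 12 bytes) */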
  457                 length = ((number * size) + 7) >> 3;
  458 
  459                 if (MACH_MSG_TYPE_PORT_ANY(type_name)) {
  460                         ipc_port_t *ports;
  461                         mach_msg_type_number_t i;
  462 
  463                         if (is_inline) {
  464                                 ports = (ipc_port_t *) saddr;
  465                         } else if (number > 0) {
  466                                 vm_map_copy_t copy = * (vm_map_copy_t *) saddr;
  467                                 * (vm_offset_t *) saddr = copy_to_kalloc(copy);
  468                                 ports = (ipc_port_t *) * (vm_offset_t *) saddr;
  469                         }
  470                         for (i = 0; i < number; i++) {
  471                                 if (type_name == MACH_MSG_TYPE_PORT_RECEIVE) {
  472                                         mumble("rright 0x%x\n", ports[i]);
  473                                 }
  474                                 ports[i] = (ipc_port_t)
  475                                     norma_ipc_receive_port((unsigned long)
  476                                                            ports[i],
  477                                                            type_name,
  478                                                            kmsg->
  479                                                            ikm_source_node);
  480                         }
  481                 }
  482 
  483                 if (is_inline) {
  484                         saddr += (length + 3) &~ 3;
  485                 } else {
  486                         saddr += sizeof(vm_offset_t);
  487                 }
  488         }
  489 }
  490 
  491 /*
  492  * Called in ast-mode, where it is safe to execute ipc code but not to block.
  493  * (This can actually be in an ast, or from an interrupt handler when the
  494  * processor was in the idle thread or spinning on norma_ipc_handoff_mqueue.)
  495  *
  496  * xxx verify port locking
  497  */
  498 norma_handoff_kmsg(kmsg)
  499         ipc_kmsg_t kmsg;
  500 {
  501         ipc_port_t port;
  502         ipc_mqueue_t mqueue;
  503         ipc_pset_t pset;
  504         ipc_thread_t receiver;
  505         ipc_thread_queue_t receivers;
  506         
  507         jc_handoff_fasthits++;
  508 
  509         /*
  510          * Change meaning of complex_data bits to mean a kmsg that
  511          * must be made contiguous.
  512          */
  513         if (kmsg->ikm_copy == VM_MAP_COPY_NULL) {
  514                 kmsg->ikm_header.msgh_bits &= ~MACH_MSGH_BITS_COMPLEX_DATA;
  515         } else {
  516                 kmsg->ikm_header.msgh_bits |= MACH_MSGH_BITS_COMPLEX_DATA;
  517         }
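              /*
               * norma_ipc_finish_receiving tests this bit later, in the
               * receiving thread's context, and assembles the pages there.
               */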
  518 
  519         /*
  520          * We must check to see whether this message is destined for a
  521          * kernel object. If it is, and if we were to call ipc_mqueue_send,
  522          * we would execute the kernel operation, possibly blocking,
  523          * which would be bad. Instead, we hand the kmsg off to a kserver
  524          * thread which does the delivery and associated kernel operation.
  525          */
  526         port = (ipc_port_t) kmsg->ikm_header.msgh_remote_port;
  527         assert(IP_VALID(port));
  528         if (port->ip_receiver == ipc_space_kernel) {
  529                 norma_ipc_kobject_send(kmsg);
  530                 return;
  531         }
  532 
  533         /*
  534          * If this is a migrating message, then just stick it
  535          * directly on the queue, and return.
  536          */
  537         if (kmsg->ikm_header.msgh_bits & MACH_MSGH_BITS_MIGRATED) {
  538                 port = port->ip_norma_atrium;
  539                 port->ip_msgcount++;
  540                 ipc_kmsg_enqueue_macro(&port->ip_messages.imq_messages, kmsg);
  541                 return;
  542         }
  543 
  544         /*
  545          * If there is no one spinning waiting for a message,
  546          * then queue this kmsg via the normal mqueue path.
  547          *
  548          * We don't have to check queue length here (or in mqueue_send)
  549          * because we already checked it in receive_dest_*.
  550          */
  551         if (norma_ipc_handoff_mqueue == IMQ_NULL) {
  552                 ipc_mqueue_send_always(kmsg);
  553                 return;
  554         }
  555 
  556         /*
  557          * Find the queue associated with this port.
  558          */
  559         ip_lock(port);
  560         port->ip_msgcount++;
  561         assert(port->ip_msgcount > 0);
  562         pset = port->ip_pset;
  563         if (pset == IPS_NULL) {
  564                 mqueue = &port->ip_messages;
  565         } else {
  566                 mqueue = &pset->ips_messages;
  567         }
  568 
  569         /*
  570          * If someone is spinning on this queue, we must release them.
  571          * However, if the message is too large for them to successfully
  572          * receive it, we continue below to find a receiver.
  573          */
  574         if (mqueue == norma_ipc_handoff_mqueue) {
  575                 norma_ipc_handoff_msg = kmsg;
  576                 if (kmsg->ikm_header.msgh_size <= norma_ipc_handoff_max_size) {
  577                         ip_unlock(port);
  578                         return;
  579                 }
  580                 norma_ipc_handoff_msg_size = kmsg->ikm_header.msgh_size;
  581         }
  582         
  583         imq_lock(mqueue);
  584         receivers = &mqueue->imq_threads;
  585         ip_unlock(port);
  586         
  587         for (;;) {
  588                 receiver = ipc_thread_queue_first(receivers);
  589                 if (receiver == ITH_NULL) {
  590                         /* no receivers; queue kmsg */
  591                         
  592                         ipc_kmsg_enqueue_macro(&mqueue->imq_messages, kmsg);
  593                         imq_unlock(mqueue);
  594                         return;
  595                 }
  596                 
  597                 ipc_thread_rmqueue_first_macro(receivers, receiver);
  598                 assert(ipc_kmsg_queue_empty(&mqueue->imq_messages));
  599                 
  600                 if (kmsg->ikm_header.msgh_size <= receiver->ith_msize) {
  601                         /* got a successful receiver */
  602                         
  603                         receiver->ith_state = MACH_MSG_SUCCESS;
  604                         receiver->ith_kmsg = kmsg;
  605                         receiver->ith_seqno = port->ip_seqno++;
  606                         imq_unlock(mqueue);
  607                         
  608                         thread_go(receiver);
  609                         return;
  610                 }
  611                 
  612                 receiver->ith_state = MACH_RCV_TOO_LARGE;
  613                 receiver->ith_msize = kmsg->ikm_header.msgh_size;
  614                 thread_go(receiver);
  615         }
  616 }
  617 
  618 /*
  619  * Called from a thread context where it's okay to lock but not to block.
  620  */
  621 void netipc_ast()
  622 {
  623         ipc_kmsg_t kmsg;
  624 
  625         netipc_thread_lock();
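              /*
               * Drain the kmsgs that norma_deliver_kmsg and norma_deliver_page
               * queued at interrupt level before posting this AST.
               */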
  626         while ((kmsg = norma_kmsg_complete) != IKM_NULL) {
  627                 norma_kmsg_complete = kmsg->ikm_next;
  628                 norma_handoff_kmsg(kmsg);
  629         }
  630         ast_off(cpu_number(), AST_NETIPC);
  631         netipc_thread_unlock();
  632 }
  633 
  634 norma_deliver_kmsg(kmsg, remote)
  635         ipc_kmsg_t kmsg;
  636         unsigned long remote;
  637 {
  638         register mach_msg_header_t *msgh;
  639         kern_return_t kr;
  640 
  641         assert(netipc_intr_locked());
  642 
  643         /*
  644          * Translate remote_port, and check that it can accept a message.
  645          */
  646         kmsg->ikm_copy = VM_MAP_COPY_NULL;
  647         kmsg->ikm_source_node = remote;
  648         msgh = (mach_msg_header_t *) &kmsg->ikm_header;
  649         if (msgh->msgh_bits & MACH_MSGH_BITS_MIGRATED) {
  650                 kr = norma_ipc_receive_migrating_dest(
  651                         (unsigned long) msgh->msgh_remote_port,
  652                         MACH_MSGH_BITS_REMOTE(msgh->msgh_bits),
  653                         (ipc_port_t *) &msgh->msgh_remote_port);
  654         } else {
  655                 kr = norma_ipc_receive_dest(
  656                         (unsigned long) msgh->msgh_remote_port,
  657                         MACH_MSGH_BITS_REMOTE(msgh->msgh_bits),
  658                         (ipc_port_t *) &msgh->msgh_remote_port);
  659         }
  660 
  661         /*
  662          * If failure, then acknowledge failure now.
  663          *
  664          * Should this work be done in receive_dest???
  665          */
  666         if (kr != KERN_SUCCESS) {
  667                 norma_ipc_ack(kr, (unsigned long) msgh->msgh_remote_port);
  668                 return;
  669         }
  670 
  671         /*
  672          * Mark kmsg as a norma kmsg so that it gets returned to the norma pool.
  673          * This is also used by norma_ipc_finish_receiving to detect that
  674          * it is a norma kmsg.
  675          */
  676         kmsg->ikm_size = IKM_SIZE_NORMA;
  677 
  678         /*
  679          * If the message is incomplete, put it on the incomplete list.
  680          */
  681         if (msgh->msgh_bits & MACH_MSGH_BITS_COMPLEX_DATA) {
  682                 kmsg->ikm_next = norma_kmsg_incomplete;
  683                 norma_kmsg_incomplete = kmsg;
  684                 norma_ipc_ack(KERN_SUCCESS, 0L);
  685                 return;
  686         }
  687         /*
  688          * The message is complete.
  689          * If it is safe to process it now, do so.
  690          */
  691         if (norma_ipc_handoff_mqueue) {
  692                 norma_ipc_ack(KERN_SUCCESS, 0L);
  693                 norma_handoff_kmsg(kmsg);
  694                 return;
  695         }
  696         /*
  697          * It is not safe now to process the complete message,
  698          * so place it on the list of completed messages,
  699          * and post an ast.
  700          * XXX
  701          * 1. should be conditionalized on whether we really
  702          *      are called at interrupt level
  703          * 2. should check flag set by *all* idle loops
  704          * 3. this comment applies as well to deliver_page
  705          */
  706         {
  707                 register ipc_kmsg_t *kmsgp;
  708                 
  709                 kmsg->ikm_next = IKM_NULL;
  710                 if (norma_kmsg_complete) {
  711                         for (kmsgp = &norma_kmsg_complete;
  712                              (*kmsgp)->ikm_next;
  713                              kmsgp = &(*kmsgp)->ikm_next) {
  714                                 continue;
  715                         }
  716                         (*kmsgp)->ikm_next = kmsg;
  717                 } else {
  718                         norma_kmsg_complete = kmsg;
  719                 }
  720         }
  721         jc_handoff_misses++;
  722         ast_on(cpu_number(), AST_NETIPC);
  723         norma_ipc_ack(KERN_SUCCESS, 0L);
  724 }
  725 
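      /*
       * Copy object continuation: called when a receiver consumes the first
       * VM_MAP_COPY_PAGE_LIST_MAX pages of a chained copy.  It simply hands
       * back the next copy object in the chain, which norma_deliver_page
       * stored in cpy_cont_args.  A null copy_result means the receive was
       * aborted.
       */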
  726 kern_return_t
  727 norma_deliver_page_continuation(cont_args, copy_result)
  728         char *cont_args;
  729         vm_map_copy_t *copy_result;
  730 {
  731         boolean_t abort;
  732 
  733         abort = (copy_result == (vm_map_copy_t *) 0);
  734         if (abort) {
  735                 /*
  736                  * XXX need to handle this
  737                  */
  738                 panic("norma_deliver_page_continuation: abort!\n");
  739                 return KERN_SUCCESS;
  740         } else {
  741                 *copy_result = (vm_map_copy_t) cont_args;
  742                 return KERN_SUCCESS;
  743         }
  744 }
  745 
  746 norma_deliver_page(page, copy_msgh_offset, remote, page_first, page_last,
  747                    copy_last, copy_size, copy_offset)
  748         vm_page_t page;
  749         unsigned long copy_msgh_offset;
  750         unsigned long remote;
  751         boolean_t page_first;
  752         boolean_t page_last;
  753         boolean_t copy_last;
  754         unsigned long copy_size;
  755         unsigned long copy_offset;
  756 {
  757         ipc_kmsg_t kmsg, *kmsgp;
  758         vm_map_copy_t copy, *copyp, new_copy;
  759 
  760         assert(netipc_intr_locked());
  761 
  762         /*
  763          * Find appropriate kmsg.
  764          * XXX consider making this an array?
  765          */
  766         for (kmsgp = &norma_kmsg_incomplete; ; kmsgp = &kmsg->ikm_next) {
  767                 if (! (kmsg = *kmsgp)) {
  768                         panic("norma_deliver_page: kmsg not found");
  769                         return;
  770                 }
  771                 if (kmsg->ikm_source_node == remote) {
  772                         break;
  773                 }
  774         }
  775 
  776         /*
  777          * Find the location of the copy within the kmsg.
  778          */
  779         if (copy_msgh_offset == 0) {
  780                 copyp = &kmsg->ikm_copy;
  781         } else {
  782                 copyp = (vm_map_copy_t *)
  783                     ((vm_offset_t) &kmsg->ikm_header + copy_msgh_offset);
  784         }
  785 
  786         /*
  787          * If this is the first page, create a copy object.
  788          */
  789         if (page_first) {
  790                 copy = netipc_copy_grab();
  791                 if (copy == VM_MAP_COPY_NULL) {
  792                         norma_ipc_drop();
  793                         return;
  794                 }
  795                 copy->type = VM_MAP_COPY_PAGE_LIST;
  796                 copy->cpy_npages = 1;
  797                 copy->offset = copy_offset;
  798                 copy->size = copy_size;
  799                 copy->cpy_page_list[0] = page;
  800                 copy->cpy_cont = ((kern_return_t (*)()) 0);
  801                 copy->cpy_cont_args = (char *) VM_MAP_COPYIN_ARGS_NULL;
  802                 *copyp = copy;
  803                 goto test_for_completion;
  804         }
  805 
  806         /*
  807          * There is a preexisting copy object.
  808          * If we are in the first page list, things are simple.
  809          */
  810         copy = *copyp;
  811         if (copy->cpy_npages < VM_MAP_COPY_PAGE_LIST_MAX) {
  812                 copy->cpy_page_list[copy->cpy_npages++] = page;
  813                 goto test_for_completion;
  814         }
  815 
  816         /*
  817          * We are beyond the first page list.
  818          * Chase list of copy objects until we are in the last one.
  819          */
  820         printf3("deliver_page: npages=%d\n", copy->cpy_npages);
  821         while (vm_map_copy_has_cont(copy)) {
  822                 copy = (vm_map_copy_t) copy->cpy_cont_args;
  823         }
  824 
  825         /*
  826          * Will we fit in this page list?
  827          * Note: this may still be the first page list,
  828          * but in that case the test will fail.
  829          */
  830         if (copy->cpy_npages < VM_MAP_COPY_PAGE_LIST_MAX) {
  831                 copy->cpy_page_list[copy->cpy_npages++] = page;
  832                 (*copyp)->cpy_npages++;
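                      /* the head copy object's cpy_npages holds the grand
                         total for the whole chain; it is trimmed back to
                         VM_MAP_COPY_PAGE_LIST_MAX once page_last arrives */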
  833                 goto test_for_completion;
  834         }
  835 
  836         /*
  837          * We won't fit; we have to create a continuation.
  838          */
  839         printf3("deliver_page: new cont, copy=0x%x\n", copy);
  840         assert(copy->cpy_cont_args == (char *) 0);
  841         if (copy != *copyp) {
  842                 /*
  843                  * Only first copy object has fake (grand total) npages.
  844          * Only first copy object has unaligned offset.
  845                  */
  846                 assert(copy->cpy_npages == VM_MAP_COPY_PAGE_LIST_MAX);
  847                 assert(copy->offset == 0);
  848         }
  849         new_copy = netipc_copy_grab();
  850         if (new_copy == VM_MAP_COPY_NULL) {
  851                 norma_ipc_drop();
  852                 return;
  853         }
  854         new_copy->cpy_page_list[0] = page;
  855         new_copy->cpy_npages = 1;
  856         new_copy->cpy_cont = ((kern_return_t (*)()) 0);
  857         new_copy->cpy_cont_args = (char *) VM_MAP_COPYIN_ARGS_NULL;
  858         new_copy->size = copy->size -
  859             (PAGE_SIZE * VM_MAP_COPY_PAGE_LIST_MAX - copy->offset);
  860         assert(trunc_page(copy->offset) == 0);
  861         new_copy->offset = 0;
  862         copy->cpy_cont = norma_deliver_page_continuation;
  863         copy->cpy_cont_args = (char *) new_copy;
  864         (*copyp)->cpy_npages++;
  865 
  866 test_for_completion:
  867         /*
  868          * Mark page dirty (why?) and not busy.
  869          */
  870         assert(! page->tabled);
  871         page->busy = FALSE;
  872         page->dirty = TRUE;
  873 
  874         /*
  875          * We were able to put the page in a page list somewhere.
  876          * We therefore know at this point that this call will succeed,
  877          * so acknowledge the page.
  878          */
  879         norma_ipc_ack(KERN_SUCCESS, 0L);
  880 
  881         /*
  882          * If this is the last page in the copy object, then
  883          * correct copy->cpy_npages. If this is not the last page,
  884          * then the message is not yet complete, so return now.
  885          */
  886         if (page_last) {
  887                 if ((*copyp)->cpy_npages > VM_MAP_COPY_PAGE_LIST_MAX) {
  888                         (*copyp)->cpy_npages = VM_MAP_COPY_PAGE_LIST_MAX;
  889                 }
  890         } else {
  891                 return;
  892         }
  893 
  894         /*
  895          * If this is not the last copy object, then the message is
  896          * not yet complete, so return.
  897          */
  898         if (! copy_last) {
  899                 return;
  900         }
  901 
  902         /*
  903          * The message is complete. Take it off the list.
  904          */
  905         *kmsgp = kmsg->ikm_next;
  906 
  907         /*
  908          * If it is safe for us to process the message, do so.
  909          * XXX
  910          * 1. should be conditionalized on whether we really
  911          *      are called at interrupt level
  912          * 2. should check flag set by *all* idle loops
  913          * 3. this comment applies as well to deliver_kmsg
  914          */
  915         if (norma_ipc_handoff_mqueue) {
  916                 norma_handoff_kmsg(kmsg);
  917                 return;
  918         }
  919 
  920         /*
  921          * It is not safe for us to process the message, so post an ast.
  922          * XXX
  923          * Should use queue macros
  924          */
  925         kmsg->ikm_next = IKM_NULL;
  926         if (norma_kmsg_complete) {
  927                 for (kmsgp = &norma_kmsg_complete;
  928                      (*kmsgp)->ikm_next;
  929                      kmsgp = &(*kmsgp)->ikm_next) {
  930                         continue;
  931                 }
  932                 (*kmsgp)->ikm_next = kmsg;
  933         } else {
  934                 norma_kmsg_complete = kmsg;
  935         }
  936         jc_handoff_misses++;
  937         ast_on(cpu_number(), AST_NETIPC);
  938 }
