
FreeBSD/Linux Kernel Cross Reference
sys/device/dev_pager.c


    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1993-1989 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        dev_pager.c,v $
   29  * Revision 2.31  93/05/17  20:03:40  rvb
   30  *      A device that can not be mapped has its mapping function == nomap, for
   31  *      type sanity reasons
   32  * 
   33  * Revision 2.30  93/05/10  21:18:19  rvb
   34  *      Removed depends on DEV_BSIZE.
   35  *      [93/05/06  11:08:38  af]
   36  * 
   37  * Revision 2.29  93/03/09  10:54:06  danner
   38  *      Non-mappable devices should use a zero as their d_mmap.
   39  *      [93/03/07            af]
   40  *      Fixed decl for block_io_mmap().  Mods to hash macro for GCC.
   41  *      [93/03/07            af]
   42  * 
   43  * Revision 2.28  93/02/04  07:49:10  danner
   44  *      Prototyped.
   45  *      [93/02/01            danner]
   46  * 
   47  *      Prototyped.
   48  *      [93/02/01            danner]
   49  * 
   50  * Revision 2.27  92/08/03  17:33:17  jfriedl
   51  *      removed silly prototypes
   52  *      [92/08/02            jfriedl]
   53  * 
   54  * Revision 2.26  92/07/09  22:53:19  rvb
   55  *      Dropped offset field from dev_pager structure.
   56  *      Not clear what it was there for, and the only
   57  *      time it got a non-zero value we got screwed.
   58  *      [92/06/01  14:24:19  af]
   59  * 
   60  * Revision 2.25  92/05/21  17:09:00  jfriedl
   61  *      Added void to fcns that still needed it.
   62  *      Removed unused variable 'result' from device_pager_setup()
   63  *      [92/05/16            jfriedl]
   64  * 
   65  * Revision 2.24  92/03/10  16:25:14  jsb
   66  *      Eliminated all NORMA_VM conditionals except for one in data_request.
   67  *      [92/03/10  10:27:59  jsb]
   68  * 
   69  *      Added missing memory_object_* stubs.
   70  *      [92/03/10  08:41:12  jsb]
   71  * 
   72  * Revision 2.23  92/02/23  19:49:28  elf
   73  *      Eliminate keep_wired argument from vm_map_copyin().
   74  *      [92/02/23            danner]
   75  * 
   76  * Revision 2.22  92/01/03  20:03:11  dbg
   77  *      Define device_pager_server_routine for new kernel stub calling
   78  *      sequence.
   79  *      [91/11/25            dbg]
   80  * 
   81  *      Call MiG stubs for memory_object functions, instead of the
   82  *      internal functions directly, so that the device pager will work
   83  *      remotely.
   84  *      [91/10/30            dbg]
   85  * 
   86  * Revision 2.21  91/09/12  16:37:07  bohman
   87  *      For device_map objects, don't setup the actual resident page mappings
   88  *      at initialization time.  Instead, do it in response to actual requests
   89  *      for the data.  I left in the old code for NORMA_VM.
   90  *      [91/09/11  17:05:20  bohman]
   91  * 
   92  * Revision 2.20  91/08/28  11:26:19  jsb
   93  *      For now, use older dev_pager code for NORMA_VM.
   94  *      I'll fix this soon.
   95  *      [91/08/28  11:24:45  jsb]
   96  * 
   97  * Revision 2.19  91/08/03  18:17:26  jsb
   98  *      Corrected declaration of xmm_vm_object_lookup.
   99  *      [91/07/17  13:54:47  jsb]
  100  * 
  101  * Revision 2.18  91/07/31  17:33:24  dbg
  102  *      Use vm_object_page_map to allocate pages for a device-map object.
  103  *      Its vm_page structures are allocated and freed by the VM system.
  104  *      [91/07/30  16:46:43  dbg]
  105  * 
  106  * Revision 2.17  91/06/25  11:06:33  rpd
  107  *      Fixed includes to avoid norma files unless they are really needed.
  108  *      [91/06/25            rpd]
  109  * 
  110  * Revision 2.16  91/06/18  20:49:55  jsb
  111  *      Removed extra include of norma_vm.h.
  112  *      [91/06/18  18:47:29  jsb]
  113  * 
  114  * Revision 2.15  91/06/17  15:43:50  jsb
  115  *      NORMA_VM: use xmm_vm_object_lookup, since we really need a vm_object_t;
  116  *      use xmm_add_exception to mark device pagers as non-interposable.
  117  *      [91/06/17  13:22:58  jsb]
  118  * 
  119  * Revision 2.14  91/05/18  14:29:38  rpd
  120  *      Added proper locking for vm_page_insert.
  121  *      [91/04/21            rpd]
  122  *      Changed vm_page_init.
  123  *      [91/03/24            rpd]
  124  * 
  125  * Revision 2.13  91/05/14  15:41:35  mrt
  126  *      Correcting copyright
  127  * 
  128  * Revision 2.12  91/02/05  17:08:40  mrt
  129  *      Changed to new Mach copyright
  130  *      [91/01/31  17:27:40  mrt]
  131  * 
  132  * Revision 2.11  90/11/05  14:26:46  rpd
  133  *      Fixed memory_object_terminate to return KERN_SUCCESS.
  134  *      [90/10/29            rpd]
  135  * 
  136  * Revision 2.10  90/09/09  14:31:20  rpd
  137  *      Use decl_simple_lock_data.
  138  *      [90/08/30            rpd]
  139  * 
  140  * Revision 2.9  90/06/02  14:47:19  rpd
  141  *      Converted to new IPC.
  142  *      Renamed functions to mainline conventions.
  143  *      Fixed private/fictitious bug in memory_object_init.
  144  *      Fixed port leak in memory_object_terminate.
  145  *      [90/03/26  21:49:57  rpd]
  146  * 
  147  * Revision 2.8  90/05/29  18:36:37  rwd
  148  *      From rpd: set private to specify that the page structure is
  149  *      ours, not fictitious.
  150  *      [90/05/14            rwd]
  151  *      From rpd: set private to specify that the page structure is
  152  *      ours, not fictitious.
  153  *      [90/05/14            rwd]
  154  * 
  155  * Revision 2.7  90/05/21  13:26:21  dbg
  156  *              From rpd: set private to specify that the page structure is
  157  *              ours, not fictitious.
  158  *              [90/05/14            rwd]
  159  *       
  160  * 
  161  * Revision 2.6  90/02/22  20:02:05  dbg
  162  *      Change PAGE_WAKEUP to PAGE_WAKEUP_DONE to reflect the fact that
  163  *      it clears the busy flag.
  164  *      [90/01/29            dbg]
  165  * 
  166  * Revision 2.5  90/01/11  11:41:53  dbg
  167  *      De-lint.
  168  *      [89/12/06            dbg]
  169  * 
  170  * Revision 2.4  89/09/08  11:23:24  dbg
  171  *      Rewrite to run in kernel task (off user thread or
  172  *      vm_pageout!)
  173  *      [89/08/24            dbg]
  174  * 
  175  * Revision 2.3  89/08/09  14:33:03  rwd
  176  *      Call round_page on incoming size to get to mach page.
  177  *      [89/08/09            rwd]
  178  * 
  179  * Revision 2.2  89/08/05  16:04:51  rwd
  180  *      Added char_pager code for frame buffer.
  181  *      [89/07/26            rwd]
  182  * 
  183  * 26-May-89  Randall Dean (rwd) at Carnegie-Mellon University
  184  *      If no error, zero pad residual and set to 0
  185  *
  186  *  3-Mar-89  David Golub (dbg) at Carnegie-Mellon University
  187  *      Created.
  188  *
  189  */
  190 /*
  191  *      Author: David B. Golub, Carnegie Mellon University
  192  *      Date:   3/89
  193  *
  194  *      Device pager.
  195  */
  196 #include <norma_vm.h>
  197 
  198 #include <mach/boolean.h>
  199 #include <mach/port.h>
  200 #include <mach/message.h>
  201 #include <mach/std_types.h>
  202 #include <mach/mach_types.h>
  203 
  204 #include <ipc/ipc_port.h>
  205 #include <ipc/ipc_space.h>
  206 
  207 #include <kern/queue.h>
  208 #include <kern/zalloc.h>
  209 #include <kern/kalloc.h>
  210 
  211 #include <vm/vm_page.h>
  212 #include <vm/vm_kern.h>
  213 
  214 #include <device/device_types.h>
  215 #include <device/ds_routines.h>
  216 #include <device/dev_hdr.h>
  217 #include <device/io_req.h>
  218 
  219 extern vm_offset_t      block_io_mmap();        /* dummy routine to allow
  220                                                    mmap for block devices */
  221 
  222 /*
  223  *      The device pager routines are called directly from the message
  224  *      system (via mach_msg), and thus run in the kernel-internal
  225  *      environment.  All ports are in internal form (ipc_port_t),
  226  *      and must be correctly reference-counted in order to be saved
  227  *      in other data structures.  Kernel routines may be called
  228  *      directly.  Kernel types are used for data objects (tasks,
  229  *      memory objects, ports).  The only IPC routines that may be
  230  *      called are ones that masquerade as the kernel task (via
  231  *      msg_send_from_kernel).
  232  *
  233  *      Port rights and references are maintained as follows:
  234  *              Memory object port:
  235  *                      The device_pager task has all rights.
  236  *              Memory object control port:
  237  *                      The device_pager task has only send rights.
  238  *              Memory object name port:
  239  *                      The device_pager task has only send rights.
  240  *                      The name port is not even recorded.
  241  *      Regardless of how the object is created, the control and name
  242  *      ports are created by the kernel and passed through the memory
  243  *      management interface.
  244  *
  245  *      The device_pager assumes that access to its memory objects
  246  *      will not be propagated to more than one host, and therefore
  247  *      provides no consistency guarantees beyond those made by the
  248  *      kernel.
  249  *
  250  *      In the event that more than one host attempts to use a device
  251  *      memory object, the device_pager will only record the last set
  252  *      of port names.  [This can happen with only one host if a new
  253  *      mapping is being established while termination of all previous
  254  *      mappings is taking place.]  Currently, the device_pager assumes
  255  *      that its clients adhere to the initialization and termination
  256  *      protocols in the memory management interface; otherwise, port
  257  *      rights or out-of-line memory from erroneous messages may be
  258  *      allowed to accumulate.
  259  *
  260  *      [The phrase "currently" has been used above to denote aspects of
  261  *      the implementation that could be altered without changing the rest
  262  *      of the basic documentation.]
  263  */
  264 
  265 /*
  266  * Basic device pager structure.
  267  */
  268 struct dev_pager {
  269         decl_simple_lock_data(, lock)   /* lock for reference count */
  270         int             ref_count;      /* reference count */
  271         int             client_count;   /* How many memory_object_create
  272                                          * calls have we received */
  273         ipc_port_t      pager;          /* pager port */
  274         ipc_port_t      pager_request;  /* Known request port */
  275         ipc_port_t      pager_name;     /* Known name port */
  276         device_t        device;         /* Device handle */
  277         int             type;           /* to distinguish */
  278 #define DEV_PAGER_TYPE  0
  279 #define CHAR_PAGER_TYPE 1
  280         /* char pager specifics */
  281         int             prot;
  282         vm_size_t       size;
  283 };
  284 typedef struct dev_pager *dev_pager_t;
  285 #define DEV_PAGER_NULL  ((dev_pager_t)0)
  286 
  287 
  288 zone_t          dev_pager_zone;
  289 
  290 void dev_pager_reference(register dev_pager_t   ds)
  291 {
  292         simple_lock(&ds->lock);
  293         ds->ref_count++;
  294         simple_unlock(&ds->lock);
  295 }
  296 
  297 void dev_pager_deallocate(register dev_pager_t  ds)
  298 {
  299         simple_lock(&ds->lock);
  300         if (--ds->ref_count > 0) {
  301             simple_unlock(&ds->lock);
  302             return;
  303         }
  304 
  305         simple_unlock(&ds->lock);
  306         zfree(dev_pager_zone, (vm_offset_t)ds);
  307 }
  308 
  309 /*
  310  * A hash table of ports for device_pager backed objects.
  311  */
  312 
  313 #define DEV_PAGER_HASH_COUNT            127
  314 
  315 struct dev_pager_entry {
  316         queue_chain_t   links;
  317         ipc_port_t      name;
  318         dev_pager_t     pager_rec;
  319 };
  320 typedef struct dev_pager_entry *dev_pager_entry_t;
  321 
  322 queue_head_t    dev_pager_hashtable[DEV_PAGER_HASH_COUNT];
  323 zone_t          dev_pager_hash_zone;
  324 decl_simple_lock_data(,
  325                 dev_pager_hash_lock)
  326 
  327 #define dev_pager_hash(name_port) \
  328                 (((natural_t)(name_port) & 0xffffff) % DEV_PAGER_HASH_COUNT)
  329 
  330 void dev_pager_hash_init(void)
  331 {
  332         register int    i;
  333         register vm_size_t      size;
  334 
  335         size = sizeof(struct dev_pager_entry);
  336         dev_pager_hash_zone = zinit(
  337                                 size,
  338                                 size * 1000,
  339                                 PAGE_SIZE,
  340                                 FALSE,
  341                                 "dev_pager port hash");
  342         for (i = 0; i < DEV_PAGER_HASH_COUNT; i++)
  343             queue_init(&dev_pager_hashtable[i]);
  344         simple_lock_init(&dev_pager_hash_lock);
  345 }
  346 
  347 void dev_pager_hash_insert(
  348         ipc_port_t      name_port,
  349         dev_pager_t     rec)
  350 {
  351         register dev_pager_entry_t new_entry;
  352 
  353         new_entry = (dev_pager_entry_t) zalloc(dev_pager_hash_zone);
  354         new_entry->name = name_port;
  355         new_entry->pager_rec = rec;
  356 
  357         simple_lock(&dev_pager_hash_lock);
  358         queue_enter(&dev_pager_hashtable[dev_pager_hash(name_port)],
  359                         new_entry, dev_pager_entry_t, links);
  360         simple_unlock(&dev_pager_hash_lock);
  361 }
  362 
  363 void dev_pager_hash_delete(ipc_port_t   name_port)
  364 {
  365         register queue_t        bucket;
  366         register dev_pager_entry_t      entry;
  367 
  368         bucket = &dev_pager_hashtable[dev_pager_hash(name_port)];
  369 
   370         simple_lock(&dev_pager_hash_lock);
   371         for (entry = (dev_pager_entry_t)queue_first(bucket);
   372              !queue_end(bucket, &entry->links);
   373              entry = (dev_pager_entry_t)queue_next(&entry->links)) {
   374             if (entry->name == name_port) {
   375                 queue_remove(bucket, entry, dev_pager_entry_t, links);
   376                 simple_unlock(&dev_pager_hash_lock);
   377                 zfree(dev_pager_hash_zone, (vm_offset_t)entry);
   378                 return;
   379             }
   380         }
   381         simple_unlock(&dev_pager_hash_lock);
   382 }
  383 
  384 dev_pager_t dev_pager_hash_lookup(ipc_port_t    name_port)
  385 {
  386         register queue_t        bucket;
  387         register dev_pager_entry_t      entry;
  388         register dev_pager_t    pager;
  389 
  390         bucket = &dev_pager_hashtable[dev_pager_hash(name_port)];
  391 
  392         simple_lock(&dev_pager_hash_lock);
  393         for (entry = (dev_pager_entry_t)queue_first(bucket);
  394              !queue_end(bucket, &entry->links);
  395              entry = (dev_pager_entry_t)queue_next(&entry->links)) {
  396             if (entry->name == name_port) {
  397                 pager = entry->pager_rec;
  398                 dev_pager_reference(pager);
  399                 simple_unlock(&dev_pager_hash_lock);
  400                 return (pager);
  401             }
  402         }
  403         simple_unlock(&dev_pager_hash_lock);
  404         return (DEV_PAGER_NULL);
  405 }
  406 
  407 kern_return_t   device_pager_setup(
  408         device_t        device,
  409         int             prot,
  410         vm_offset_t     offset,
  411         vm_size_t       size,
  412         mach_port_t     *pager)
  413 {
  414         register dev_pager_t    d;
  415 
  416         /*
  417          *      Verify the device is indeed mappable
  418          */
  419         if (!device->dev_ops->d_mmap || (device->dev_ops->d_mmap == nomap))
  420                 return (D_INVALID_OPERATION);
  421 
  422         /*
  423          *      Allocate a structure to hold the arguments
  424          *      and port to represent this object.
  425          */
  426 
  427         d = dev_pager_hash_lookup((ipc_port_t)device);  /* HACK */
  428         if (d != DEV_PAGER_NULL) {
  429                 *pager = (mach_port_t) ipc_port_make_send(d->pager);
  430                 dev_pager_deallocate(d);
  431                 return (D_SUCCESS);
  432         }
  433 
  434         d = (dev_pager_t) zalloc(dev_pager_zone);
  435         if (d == DEV_PAGER_NULL)
  436                 return (KERN_RESOURCE_SHORTAGE);
  437 
  438         simple_lock_init(&d->lock);
  439         d->ref_count = 1;
  440 
  441         /*
  442          * Allocate the pager port.
  443          */
  444         d->pager = ipc_port_alloc_kernel();
  445         if (d->pager == IP_NULL) {
  446                 dev_pager_deallocate(d);
  447                 return (KERN_RESOURCE_SHORTAGE);
  448         }
  449 
  450         d->client_count = 0;
  451         d->pager_request = IP_NULL;
  452         d->pager_name = IP_NULL;
  453         d->device = device;
  454         device_reference(device);
  455         d->prot = prot;
  456         d->size = round_page(size);
  457         if (device->dev_ops->d_mmap == block_io_mmap) {
  458                 d->type = DEV_PAGER_TYPE;
  459         } else {
  460                 d->type = CHAR_PAGER_TYPE;
  461         }
  462 
  463         dev_pager_hash_insert(d->pager, d);
  464         dev_pager_hash_insert((ipc_port_t)device, d);   /* HACK */
  465 
  466         *pager = (mach_port_t) ipc_port_make_send(d->pager);
  467         return (KERN_SUCCESS);
  468 }
  469 
  470 /*
  471  *      Routine:        device_pager_release
  472  *      Purpose:
  473  *              Relinquish any references or rights that were
  474  *              associated with the result of a call to
  475  *              device_pager_setup.
  476  */
  477 void    device_pager_release(memory_object_t    object)
  478 {
  479         if (MACH_PORT_VALID(object))
  480                 ipc_port_release_send((ipc_port_t) object);
  481 }
  482 
  483 /*
  484  * Rename all of the functions in the pager interface, to avoid
  485  * confusing them with the kernel interface.
  486  */
  487 
  488 #define memory_object_init              device_pager_init_pager
  489 #define memory_object_terminate         device_pager_terminate
  490 #define memory_object_copy              device_pager_copy
  491 #define memory_object_data_request      device_pager_data_request
  492 #define memory_object_data_unlock       device_pager_data_unlock
  493 #define memory_object_data_write        device_pager_data_write
  494 #define memory_object_lock_completed    device_pager_lock_completed
  495 #define memory_object_supply_completed  device_pager_supply_completed
  496 #define memory_object_data_return       device_pager_data_return
  497 #define memory_object_change_completed  device_pager_change_completed
  498 
  499 boolean_t       device_pager_debug = FALSE;
  500 
   501 boolean_t       device_pager_data_request_done(io_req_t);       /* forward */
   502 boolean_t       device_pager_data_write_done(io_req_t);         /* forward */
  503 
  504 
  505 kern_return_t   memory_object_data_request(
  506         ipc_port_t      pager,
  507         ipc_port_t      pager_request,
  508         vm_offset_t     offset,
  509         vm_size_t       length,
  510         vm_prot_t       protection_required)
  511 {
  512         register dev_pager_t    ds;
  513 
  514 #ifdef lint
  515         protection_required++;
   516 #endif  /* lint */
  517 
  518         if (device_pager_debug)
  519                 printf("(device_pager)data_request: pager=%d, offset=0x%x, length=0x%x\n",
  520                         pager, offset, length);
  521 
  522         ds = dev_pager_hash_lookup((ipc_port_t)pager);
  523         if (ds == DEV_PAGER_NULL)
  524                 panic("(device_pager)data_request: lookup failed");
  525 
  526         if (ds->pager_request != pager_request)
  527                 panic("(device_pager)data_request: bad pager_request");
  528 
  529         if (ds->type == CHAR_PAGER_TYPE) {
  530             register vm_object_t        object;
   531             vm_offset_t                 device_map_page(void *, vm_offset_t);
  532 
  533 #if     NORMA_VM
  534             object = vm_object_lookup(pager);
   535 #else   /* NORMA_VM */
  536             object = vm_object_lookup(pager_request);
   537 #endif  /* NORMA_VM */
  538             if (object == VM_OBJECT_NULL) {
  539                     (void) r_memory_object_data_error(pager_request,
  540                                                       offset, length,
  541                                                       KERN_FAILURE);
  542                     dev_pager_deallocate(ds);
  543                     return (KERN_SUCCESS);
  544             }
  545 
  546             vm_object_page_map(object,
  547                                offset, length,
  548                                device_map_page, (char *)ds);
  549 
  550             vm_object_deallocate(object);
  551         }
  552         else {
  553             register io_req_t   ior;
  554             register device_t   device;
  555             io_return_t         result;
  556 
  557             panic("(device_pager)data_request: dev pager");
  558             
  559             device = ds->device;
  560             device_reference(device);
  561             dev_pager_deallocate(ds);
  562             
  563             /*
  564              * Package the read for the device driver.
  565              */
  566             io_req_alloc(ior, 0);
  567             
  568             ior->io_device      = device;
  569             ior->io_unit        = device->dev_number;
  570             ior->io_op          = IO_READ | IO_CALL;
  571             ior->io_mode        = 0;
  572             ior->io_recnum      = offset / device->bsize;
  573             ior->io_data        = 0;            /* driver must allocate */
  574             ior->io_count       = length;
  575             ior->io_alloc_size  = 0;            /* no data allocated yet */
  576             ior->io_residual    = 0;
  577             ior->io_error       = 0;
  578             ior->io_done        = device_pager_data_request_done;
  579             ior->io_reply_port  = pager_request;
  580             ior->io_reply_port_type = MACH_MSG_TYPE_PORT_SEND;
  581             
  582             result = (*device->dev_ops->d_read)(device->dev_number, ior);
  583             if (result == D_IO_QUEUED)
  584                 return (KERN_SUCCESS);
  585             
  586             /*
  587              * Return by queuing IOR for io_done thread, to reply in
  588              * correct environment (kernel).
  589              */
  590             ior->io_error = result;
  591             iodone(ior);
  592         }
  593 
  594         dev_pager_deallocate(ds);
  595 
  596         return (KERN_SUCCESS);
  597 }
  598 
  599 /*
  600  * Always called by io_done thread.
  601  */
  602 boolean_t device_pager_data_request_done(register io_req_t      ior)
  603 {
  604         vm_offset_t     start_alloc, end_alloc;
  605         vm_size_t       size_read;
  606 
  607         if (ior->io_error == D_SUCCESS) {
  608             size_read = ior->io_count;
  609             if (ior->io_residual) {
  610                 if (device_pager_debug)
  611                     printf("(device_pager)data_request_done: r: 0x%x\n",ior->io_residual);
  612                 bzero( (char *) (&ior->io_data[ior->io_count - 
  613                                                ior->io_residual]),
  614                       (unsigned) ior->io_residual);
  615             }
  616         } else {
  617             size_read = ior->io_count - ior->io_residual;
  618         }
  619 
  620         start_alloc = trunc_page((vm_offset_t)ior->io_data);
  621         end_alloc   = start_alloc + round_page(ior->io_alloc_size);
  622 
  623         if (ior->io_error == D_SUCCESS) {
  624             vm_map_copy_t copy;
  625             kern_return_t kr;
  626 
  627             kr = vm_map_copyin(kernel_map, (vm_offset_t)ior->io_data,
  628                                 size_read, TRUE, &copy);
  629             if (kr != KERN_SUCCESS)
  630                 panic("device_pager_data_request_done");
  631 
  632             (void) r_memory_object_data_provided(
  633                                         ior->io_reply_port,
  634                                         ior->io_recnum * ior->io_device->bsize,
  635                                         (vm_offset_t)copy,
  636                                         size_read,
  637                                         VM_PROT_NONE);
  638         }
  639         else {
  640             (void) r_memory_object_data_error(
  641                                         ior->io_reply_port,
  642                                         ior->io_recnum * ior->io_device->bsize,
  643                                         (vm_size_t)ior->io_count,
  644                                         ior->io_error);
  645         }
  646 
  647         (void)vm_deallocate(kernel_map,
  648                             start_alloc,
  649                             end_alloc - start_alloc);
  650         device_deallocate(ior->io_device);
  651         return (TRUE);
  652 }
  653 
  654 kern_return_t memory_object_data_write(
  655         ipc_port_t              pager,
  656         ipc_port_t              pager_request,
  657         register vm_offset_t    offset,
  658         register pointer_t      addr,
  659         vm_size_t               data_count)
  660 {
  661         register dev_pager_t    ds;
  662         register device_t       device;
  663         register io_req_t       ior;
  664         kern_return_t           result;
  665 
  666         panic("(device_pager)data_write: called");
  667 
  668         ds = dev_pager_hash_lookup((ipc_port_t)pager);
  669         if (ds == DEV_PAGER_NULL)
  670                 panic("(device_pager)data_write: lookup failed");
  671 
  672         if (ds->pager_request != pager_request)
  673                 panic("(device_pager)data_write: bad pager_request");
  674 
  675         if (ds->type == CHAR_PAGER_TYPE)
  676                 panic("(device_pager)data_write: char pager");
  677 
  678         device = ds->device;
  679         device_reference(device);
  680         dev_pager_deallocate(ds);
  681 
  682         /*
  683          * Package the write request for the device driver.
  684          */
  685         io_req_alloc(ior, data_count);
  686 
  687         ior->io_device          = device;
  688         ior->io_unit            = device->dev_number;
  689         ior->io_op              = IO_WRITE | IO_CALL;
  690         ior->io_mode            = 0;
  691         ior->io_recnum          = offset / device->bsize;
  692         ior->io_data            = (io_buf_ptr_t)addr;
  693         ior->io_count           = data_count;
  694         ior->io_alloc_size      = data_count;   /* amount to deallocate */
  695         ior->io_residual        = 0;
  696         ior->io_error           = 0;
  697         ior->io_done            = device_pager_data_write_done;
  698         ior->io_reply_port      = IP_NULL;
  699 
  700         result = (*device->dev_ops->d_write)(device->dev_number, ior);
  701 
  702         if (result != D_IO_QUEUED) {
  703             device_write_dealloc(ior);
  704             io_req_free((vm_offset_t)ior);
  705             device_deallocate(device);
  706         }
  707 
  708         return (KERN_SUCCESS);
  709 }
  710 
   711 boolean_t device_pager_data_write_done(
   712         register io_req_t       ior)
   713 {
  714         device_write_dealloc(ior);
  715         device_deallocate(ior->io_device);
  716 
  717         return (TRUE);
  718 }
  719 
  720 kern_return_t memory_object_copy(
  721         ipc_port_t              pager,
  722         ipc_port_t              pager_request,
  723         register vm_offset_t    offset,
  724         register vm_size_t      length,
  725         ipc_port_t              new_pager)
  726 {
  727         panic("(device_pager)copy: called");
  728 }
  729 
  730 kern_return_t
  731 memory_object_supply_completed(
  732         ipc_port_t pager,
  733         ipc_port_t pager_request,
  734         vm_offset_t offset,
  735         vm_size_t length,
  736         kern_return_t result,
  737         vm_offset_t error_offset)
  738 {
  739         panic("(device_pager)supply_completed: called");
  740 }
  741 
  742 kern_return_t
  743 memory_object_data_return(
  744         ipc_port_t              pager,
  745         ipc_port_t              pager_request,
  746         vm_offset_t             offset,
  747         register pointer_t      addr,
  748         vm_size_t               data_cnt,
  749         boolean_t               dirty,
  750         boolean_t               kernel_copy)
  751 {
  752         panic("(device_pager)data_return: called");
  753 }
  754 
  755 kern_return_t
  756 memory_object_change_completed(
  757         ipc_port_t pager,
  758         boolean_t may_cache,
  759         memory_object_copy_strategy_t copy_strategy)
  760 {
  761         panic("(device_pager)change_completed: called");
  762 }
  763 
/*
 *	The mapping function takes a byte offset, but returns
 *	a machine-dependent page frame number.  We convert
 *	that into something that the pmap module will
 *	accept later.
 */
vm_offset_t device_map_page(
	void		*dsp,
	vm_offset_t	offset)
{
	register dev_pager_t	ds = (dev_pager_t) dsp;

	return pmap_phys_address(
		   (*(ds->device->dev_ops->d_mmap))
			(ds->device->dev_number, offset, ds->prot));
}

kern_return_t memory_object_init(
	ipc_port_t	pager,
	ipc_port_t	pager_request,
	ipc_port_t	pager_name,
	vm_size_t	pager_page_size)
{
	register dev_pager_t	ds;

	if (device_pager_debug)
		printf("(device_pager)init: pager=%d, request=%d, name=%d\n",
		       pager, pager_request, pager_name);

	assert(pager_page_size == PAGE_SIZE);
	assert(IP_VALID(pager_request));
	assert(IP_VALID(pager_name));

	ds = dev_pager_hash_lookup(pager);
	assert(ds != DEV_PAGER_NULL);

	assert(ds->client_count == 0);
	assert(ds->pager_request == IP_NULL);
	assert(ds->pager_name == IP_NULL);

	ds->client_count = 1;

	/*
	 * We save the send rights for the request and name ports.
	 */

	ds->pager_request = pager_request;
	ds->pager_name = pager_name;

	if (ds->type == CHAR_PAGER_TYPE) {
	    /*
	     * Reply that the object is ready.
	     */
	    (void) r_memory_object_set_attributes(pager_request,
						TRUE,	/* ready */
						FALSE,	/* do not cache */
						MEMORY_OBJECT_COPY_NONE);
	} else {
	    (void) r_memory_object_set_attributes(pager_request,
						TRUE,	/* ready */
						TRUE,	/* cache */
						MEMORY_OBJECT_COPY_DELAY);
	}

	dev_pager_deallocate(ds);
	return (KERN_SUCCESS);
}

kern_return_t memory_object_terminate(
	ipc_port_t	pager,
	ipc_port_t	pager_request,
	ipc_port_t	pager_name)
{
	register dev_pager_t	ds;

	assert(IP_VALID(pager_request));
	assert(IP_VALID(pager_name));

	ds = dev_pager_hash_lookup(pager);
	assert(ds != DEV_PAGER_NULL);

	assert(ds->client_count == 1);
	assert(ds->pager_request == pager_request);
	assert(ds->pager_name == pager_name);

	dev_pager_hash_delete(ds->pager);
	dev_pager_hash_delete((ipc_port_t)ds->device);	/* HACK */
	device_deallocate(ds->device);

	/* release the send rights we have saved from the init call */

	ipc_port_release_send(pager_request);
	ipc_port_release_send(pager_name);

	/* release the naked receive rights we just acquired */

	ipc_port_release_receive(pager_request);
	ipc_port_release_receive(pager_name);

	/* release the kernel's receive right for the pager port */

	ipc_port_dealloc_kernel(pager);

	/* once for ref from lookup, once to make it go away */
	dev_pager_deallocate(ds);
	dev_pager_deallocate(ds);

	return (KERN_SUCCESS);
}

kern_return_t memory_object_data_unlock(
	ipc_port_t memory_object,
	ipc_port_t memory_control_port,
	vm_offset_t offset,
	vm_size_t length,
	vm_prot_t desired_access)
{
#ifdef	lint
	memory_object++; memory_control_port++; offset++; length++; desired_access++;
#endif	/* lint */

	panic("(device_pager)data_unlock: called");
	return (KERN_FAILURE);
}

kern_return_t memory_object_lock_completed(
	ipc_port_t	memory_object,
	ipc_port_t	pager_request_port,
	vm_offset_t	offset,
	vm_size_t	length)
{
#ifdef	lint
	memory_object++; pager_request_port++; offset++; length++;
#endif	/* lint */

	panic("(device_pager)lock_completed: called");
	return (KERN_FAILURE);
}

/*
 * Include memory_object_server in this file to avoid name
 * conflicts with other possible pagers.
 */
#define memory_object_server		device_pager_server
#define memory_object_server_routine	device_pager_server_routine
#include <device/device_pager_server.c>

void device_pager_init(void)
{
	register vm_size_t	size;

	/*
	 * Initialize zone of paging structures.
	 */
	size = sizeof(struct dev_pager);
	dev_pager_zone = zinit(size,
				(vm_size_t) size * 1000,
				PAGE_SIZE,
				FALSE,
				"device pager structures");

	/*
	 *	Initialize the name port hashing stuff.
	 */
	dev_pager_hash_init();
}

This page is part of the FreeBSD/Linux Kernel Cross-Reference, and was automatically generated using a modified version of the LXR engine.