FreeBSD/Linux Kernel Cross Reference
sys/device/dev_pager.c


    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1989-1993 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        dev_pager.c,v $
   29  * Revision 2.32  93/11/17  16:31:52  dbg
   30  *      Changed mmap routines to return physical address instead of
   31  *      physical page number.
   32  *      [93/05/24            dbg]
   33  * 
   34  * Revision 2.31  93/05/17  20:03:40  rvb
   35  *      A device that can not be mapped has its mapping function == nomap, for
   36  *      type sanity reasons
   37  * 
   38  * Revision 2.30  93/05/10  21:18:19  rvb
   39  *      Removed depends on DEV_BSIZE.
   40  *      [93/05/06  11:08:38  af]
   41  * 
   42  * Revision 2.29  93/03/09  10:54:06  danner
   43  *      Non-mappable devices should use a zero as their d_mmap.
   44  *      [93/03/07            af]
   45  *      Fixed decl for block_io_mmap().  Mods to hash macro for GCC.
   46  *      [93/03/07            af]
   47  * 
   48  * Revision 2.28  93/02/04  07:49:10  danner
   49  *      Prototyped.
   50  *      [93/02/01            danner]
   51  * 
   52  * Revision 2.27  92/08/03  17:33:17  jfriedl
   53  *      removed silly prototypes
   54  *      [92/08/02            jfriedl]
   55  * 
   56  * Revision 2.26  92/07/09  22:53:19  rvb
   57  *      Dropped offset field from dev_pager structure.
   58  *      Not clear what it was there for, and the only
   59  *      time it got a non-zero value we got screwed.
   60  *      [92/06/01  14:24:19  af]
   61  * 
   62  * Revision 2.25  92/05/21  17:09:00  jfriedl
   63  *      Added void to fcns that still needed it.
   64  *      Removed unused variable 'result' from device_pager_setup()
   65  *      [92/05/16            jfriedl]
   66  * 
   67  * Revision 2.24  92/03/10  16:25:14  jsb
   68  *      Eliminated all NORMA_VM conditionals except for one in data_request.
   69  *      [92/03/10  10:27:59  jsb]
   70  * 
   71  *      Added missing memory_object_* stubs.
   72  *      [92/03/10  08:41:12  jsb]
   73  * 
   74  * Revision 2.23  92/02/23  19:49:28  elf
   75  *      Eliminate keep_wired argument from vm_map_copyin().
   76  *      [92/02/23            danner]
   77  * 
   78  * Revision 2.22  92/01/03  20:03:11  dbg
   79  *      Define device_pager_server_routine for new kernel stub calling
   80  *      sequence.
   81  *      [91/11/25            dbg]
   82  * 
   83  *      Call MiG stubs for memory_object functions, instead of the
   84  *      internal functions directly, so that the device pager will work
   85  *      remotely.
   86  *      [91/10/30            dbg]
   87  * 
   88  * Revision 2.21  91/09/12  16:37:07  bohman
   89  *      For device_map objects, don't setup the actual resident page mappings
   90  *      at initialization time.  Instead, do it in response to actual requests
   91  *      for the data.  I left in the old code for NORMA_VM.
   92  *      [91/09/11  17:05:20  bohman]
   93  * 
   94  * Revision 2.20  91/08/28  11:26:19  jsb
   95  *      For now, use older dev_pager code for NORMA_VM.
   96  *      I'll fix this soon.
   97  *      [91/08/28  11:24:45  jsb]
   98  * 
   99  * Revision 2.19  91/08/03  18:17:26  jsb
  100  *      Corrected declaration of xmm_vm_object_lookup.
  101  *      [91/07/17  13:54:47  jsb]
  102  * 
  103  * Revision 2.18  91/07/31  17:33:24  dbg
  104  *      Use vm_object_page_map to allocate pages for a device-map object.
  105  *      Its vm_page structures are allocated and freed by the VM system.
  106  *      [91/07/30  16:46:43  dbg]
  107  * 
  108  * Revision 2.17  91/06/25  11:06:33  rpd
  109  *      Fixed includes to avoid norma files unless they are really needed.
  110  *      [91/06/25            rpd]
  111  * 
  112  * Revision 2.16  91/06/18  20:49:55  jsb
  113  *      Removed extra include of norma_vm.h.
  114  *      [91/06/18  18:47:29  jsb]
  115  * 
  116  * Revision 2.15  91/06/17  15:43:50  jsb
  117  *      NORMA_VM: use xmm_vm_object_lookup, since we really need a vm_object_t;
  118  *      use xmm_add_exception to mark device pagers as non-interposable.
  119  *      [91/06/17  13:22:58  jsb]
  120  * 
  121  * Revision 2.14  91/05/18  14:29:38  rpd
  122  *      Added proper locking for vm_page_insert.
  123  *      [91/04/21            rpd]
  124  *      Changed vm_page_init.
  125  *      [91/03/24            rpd]
  126  * 
  127  * Revision 2.13  91/05/14  15:41:35  mrt
  128  *      Correcting copyright
  129  * 
  130  * Revision 2.12  91/02/05  17:08:40  mrt
  131  *      Changed to new Mach copyright
  132  *      [91/01/31  17:27:40  mrt]
  133  * 
  134  * Revision 2.11  90/11/05  14:26:46  rpd
  135  *      Fixed memory_object_terminate to return KERN_SUCCESS.
  136  *      [90/10/29            rpd]
  137  * 
  138  * Revision 2.10  90/09/09  14:31:20  rpd
  139  *      Use decl_simple_lock_data.
  140  *      [90/08/30            rpd]
  141  * 
  142  * Revision 2.9  90/06/02  14:47:19  rpd
  143  *      Converted to new IPC.
  144  *      Renamed functions to mainline conventions.
  145  *      Fixed private/fictitious bug in memory_object_init.
  146  *      Fixed port leak in memory_object_terminate.
  147  *      [90/03/26  21:49:57  rpd]
  148  * 
  149  * Revision 2.8  90/05/29  18:36:37  rwd
  150  *      From rpd: set private to specify that the page structure is
  151  *      ours, not fictitious.
  152  *      [90/05/14            rwd]
  153  * 
  154  * Revision 2.6  90/02/22  20:02:05  dbg
  155  *      Change PAGE_WAKEUP to PAGE_WAKEUP_DONE to reflect the fact that
  156  *      it clears the busy flag.
  157  *      [90/01/29            dbg]
  158  * 
  159  * Revision 2.5  90/01/11  11:41:53  dbg
  160  *      De-lint.
  161  *      [89/12/06            dbg]
  162  * 
  163  * Revision 2.4  89/09/08  11:23:24  dbg
  164  *      Rewrite to run in kernel task (off user thread or
  165  *      vm_pageout!)
  166  *      [89/08/24            dbg]
  167  * 
  168  * Revision 2.3  89/08/09  14:33:03  rwd
  169  *      Call round_page on incoming size to get to mach page.
  170  *      [89/08/09            rwd]
  171  * 
  172  * Revision 2.2  89/08/05  16:04:51  rwd
  173  *      Added char_pager code for frame buffer.
  174  *      [89/07/26            rwd]
  175  * 
  176  * 26-May-89  Randall Dean (rwd) at Carnegie-Mellon University
  177  *      If no error, zero pad residual and set to 0
  178  *
  179  *  3-Mar-89  David Golub (dbg) at Carnegie-Mellon University
  180  *      Created.
  181  *
  182  */
  183 /*
  184  *      Author: David B. Golub, Carnegie Mellon University
  185  *      Date:   3/89
  186  *
  187  *      Device pager.
  188  */
  189 #include <norma_vm.h>
  190 
  191 #include <mach/boolean.h>
  192 #include <mach/port.h>
  193 #include <mach/message.h>
  194 #include <mach/std_types.h>
  195 #include <mach/mach_types.h>
  196 
  197 #include <mach/mach_user_kernel.h>      /* our calls to kernel */
  198 
  199 #include <ipc/ipc_port.h>
  200 #include <ipc/ipc_space.h>
  201 
  202 #include <kern/kern_io.h>
  203 #include <kern/memory.h>
  204 #include <kern/queue.h>
  205 #include <kern/zalloc.h>
  206 #include <kern/kalloc.h>
  207 
  208 #include <vm/vm_page.h>
  209 #include <vm/vm_kern.h>
  210 #include <vm/vm_user.h>
  211 
  212 #include <device/device_types.h>
  213 #include <device/ds_routines.h>
  214 #include <device/dev_hdr.h>
  215 #include <device/io_req.h>
  216 
  217 extern vm_offset_t      block_io_mmap();        /* dummy routine to allow
  218                                                    mmap for block devices */
  219 
  220 /*
  221  *      The device pager routines are called directly from the message
  222  *      system (via mach_msg), and thus run in the kernel-internal
  223  *      environment.  All ports are in internal form (ipc_port_t),
  224  *      and must be correctly reference-counted in order to be saved
  225  *      in other data structures.  Kernel routines may be called
  226  *      directly.  Kernel types are used for data objects (tasks,
  227  *      memory objects, ports).  The only IPC routines that may be
  228  *      called are ones that masquerade as the kernel task (via
  229  *      msg_send_from_kernel).
  230  *
  231  *      Port rights and references are maintained as follows:
  232  *              Memory object port:
  233  *                      The device_pager task has all rights.
  234  *              Memory object control port:
  235  *                      The device_pager task has only send rights.
  236  *              Memory object name port:
  237  *                      The device_pager task has only send rights.
  238  *                      The name port is not even recorded.
   239  *      Regardless of how the object is created, the control and name
  240  *      ports are created by the kernel and passed through the memory
  241  *      management interface.
  242  *
  243  *      The device_pager assumes that access to its memory objects
   244  *      will not be propagated to more than one host, and therefore
  245  *      provides no consistency guarantees beyond those made by the
  246  *      kernel.
  247  *
  248  *      In the event that more than one host attempts to use a device
  249  *      memory object, the device_pager will only record the last set
  250  *      of port names.  [This can happen with only one host if a new
  251  *      mapping is being established while termination of all previous
  252  *      mappings is taking place.]  Currently, the device_pager assumes
  253  *      that its clients adhere to the initialization and termination
  254  *      protocols in the memory management interface; otherwise, port
  255  *      rights or out-of-line memory from erroneous messages may be
  256  *      allowed to accumulate.
  257  *
  258  *      [The phrase "currently" has been used above to denote aspects of
  259  *      the implementation that could be altered without changing the rest
  260  *      of the basic documentation.]
  261  */
  262 
  263 /*
  264  * Basic device pager structure.
  265  */
  266 struct dev_pager {
  267         decl_simple_lock_data(, lock)   /* lock for reference count */
  268         int             ref_count;      /* reference count */
  269         int             client_count;   /* How many memory_object_create
  270                                          * calls have we received */
  271         ipc_port_t      pager;          /* pager port */
  272         ipc_port_t      pager_request;  /* Known request port */
  273         ipc_port_t      pager_name;     /* Known name port */
  274         device_t        device;         /* Device handle */
  275         int             type;           /* to distinguish */
  276 #define DEV_PAGER_TYPE  0
  277 #define CHAR_PAGER_TYPE 1
  278         /* char pager specifics */
  279         int             prot;
  280         vm_size_t       size;
  281 };
  282 typedef struct dev_pager *dev_pager_t;
  283 #define DEV_PAGER_NULL  ((dev_pager_t)0)
  284 
  285 
  286 zone_t          dev_pager_zone;
  287 
  288 void dev_pager_reference(register dev_pager_t   ds)
  289 {
  290         simple_lock(&ds->lock);
  291         ds->ref_count++;
  292         simple_unlock(&ds->lock);
  293 }
  294 
  295 void dev_pager_deallocate(register dev_pager_t  ds)
  296 {
  297         simple_lock(&ds->lock);
  298         if (--ds->ref_count > 0) {
  299             simple_unlock(&ds->lock);
  300             return;
  301         }
  302 
  303         simple_unlock(&ds->lock);
  304         zfree(dev_pager_zone, (vm_offset_t)ds);
  305 }
  306 
  307 /*
  308  * A hash table of ports for device_pager backed objects.
  309  */
  310 
  311 #define DEV_PAGER_HASH_COUNT            127
  312 
  313 struct dev_pager_entry {
  314         queue_chain_t   links;
  315         ipc_port_t      name;
  316         dev_pager_t     pager_rec;
  317 };
  318 typedef struct dev_pager_entry *dev_pager_entry_t;
  319 
  320 queue_head_t    dev_pager_hashtable[DEV_PAGER_HASH_COUNT];
  321 zone_t          dev_pager_hash_zone;
  322 decl_simple_lock_data(,
  323                 dev_pager_hash_lock)
  324 
  325 #define dev_pager_hash(name_port) \
  326                 (((natural_t)(name_port) & 0xffffff) % DEV_PAGER_HASH_COUNT)
  327 
  328 void dev_pager_hash_init(void)
  329 {
  330         register int    i;
  331         register vm_size_t      size;
  332 
  333         size = sizeof(struct dev_pager_entry);
  334         dev_pager_hash_zone = zinit(
  335                                 size,
  336                                 size * 1000,
  337                                 PAGE_SIZE,
  338                                 FALSE,
  339                                 "dev_pager port hash");
  340         for (i = 0; i < DEV_PAGER_HASH_COUNT; i++)
  341             queue_init(&dev_pager_hashtable[i]);
  342         simple_lock_init(&dev_pager_hash_lock);
  343 }
  344 
  345 void dev_pager_hash_insert(
  346         ipc_port_t      name_port,
  347         dev_pager_t     rec)
  348 {
  349         register dev_pager_entry_t new_entry;
  350 
  351         new_entry = (dev_pager_entry_t) zalloc(dev_pager_hash_zone);
  352         new_entry->name = name_port;
  353         new_entry->pager_rec = rec;
  354 
  355         simple_lock(&dev_pager_hash_lock);
  356         queue_enter(&dev_pager_hashtable[dev_pager_hash(name_port)],
  357                         new_entry, dev_pager_entry_t, links);
  358         simple_unlock(&dev_pager_hash_lock);
  359 }
  360 
  361 void dev_pager_hash_delete(ipc_port_t   name_port)
  362 {
  363         register queue_t        bucket;
  364         register dev_pager_entry_t      entry;
  365 
  366         bucket = &dev_pager_hashtable[dev_pager_hash(name_port)];
  367 
  368         simple_lock(&dev_pager_hash_lock);
  369         for (entry = (dev_pager_entry_t)queue_first(bucket);
  370              !queue_end(bucket, &entry->links);
  371              entry = (dev_pager_entry_t)queue_next(&entry->links)) {
  372             if (entry->name == name_port) {
  373                 queue_remove(bucket, entry, dev_pager_entry_t, links);
  374                 break;
  375             }
  376         }
  377         simple_unlock(&dev_pager_hash_lock);
   378         if (!queue_end(bucket, &entry->links))  /* free only if found */
   379             zfree(dev_pager_hash_zone, (vm_offset_t)entry);
  380 }
  381 
  382 dev_pager_t dev_pager_hash_lookup(ipc_port_t    name_port)
  383 {
  384         register queue_t        bucket;
  385         register dev_pager_entry_t      entry;
  386         register dev_pager_t    pager;
  387 
  388         bucket = &dev_pager_hashtable[dev_pager_hash(name_port)];
  389 
  390         simple_lock(&dev_pager_hash_lock);
  391         for (entry = (dev_pager_entry_t)queue_first(bucket);
  392              !queue_end(bucket, &entry->links);
  393              entry = (dev_pager_entry_t)queue_next(&entry->links)) {
  394             if (entry->name == name_port) {
  395                 pager = entry->pager_rec;
  396                 dev_pager_reference(pager);
  397                 simple_unlock(&dev_pager_hash_lock);
  398                 return pager;
  399             }
  400         }
  401         simple_unlock(&dev_pager_hash_lock);
  402         return DEV_PAGER_NULL;
  403 }
  404 
  405 kern_return_t   device_pager_setup(
  406         device_t        device,
  407         int             prot,
  408         vm_offset_t     offset,
  409         vm_size_t       size,
  410         mach_port_t     *pager)
  411 {
  412         register dev_pager_t    d;
  413 
  414         /*
  415          *      Verify the device is indeed mappable
  416          */
  417         if (!device->dev_ops->d_mmap || (device->dev_ops->d_mmap == nomap))
  418                 return D_INVALID_OPERATION;
  419 
  420         /*
  421          *      Allocate a structure to hold the arguments
  422          *      and port to represent this object.
  423          */
  424 
  425         d = dev_pager_hash_lookup((ipc_port_t)device);  /* HACK */
  426         if (d != DEV_PAGER_NULL) {
  427                 *pager = (mach_port_t) ipc_port_make_send(d->pager);
  428                 dev_pager_deallocate(d);
  429                 return D_SUCCESS;
  430         }
  431 
  432         d = (dev_pager_t) zalloc(dev_pager_zone);
  433         if (d == DEV_PAGER_NULL)
  434                 return KERN_RESOURCE_SHORTAGE;
  435 
  436         simple_lock_init(&d->lock);
  437         d->ref_count = 1;
  438 
  439         /*
  440          * Allocate the pager port.
  441          */
  442         d->pager = ipc_port_alloc_kernel();
  443         if (d->pager == IP_NULL) {
  444                 dev_pager_deallocate(d);
  445                 return KERN_RESOURCE_SHORTAGE;
  446         }
  447 
  448         d->client_count = 0;
  449         d->pager_request = IP_NULL;
  450         d->pager_name = IP_NULL;
  451         d->device = device;
  452         device_reference(device);
  453         d->prot = prot;
  454         d->size = round_page(size);
  455         if (device->dev_ops->d_mmap == block_io_mmap) {
  456                 d->type = DEV_PAGER_TYPE;
  457         } else {
  458                 d->type = CHAR_PAGER_TYPE;
  459         }
  460 
  461         dev_pager_hash_insert(d->pager, d);
  462         dev_pager_hash_insert((ipc_port_t)device, d);   /* HACK */
  463 
  464         *pager = (mach_port_t) ipc_port_make_send(d->pager);
  465         return KERN_SUCCESS;
  466 }
  467 
  468 /*
  469  *      Routine:        device_pager_release
  470  *      Purpose:
  471  *              Relinquish any references or rights that were
  472  *              associated with the result of a call to
  473  *              device_pager_setup.
  474  */
  475 void    device_pager_release(memory_object_t    object)
  476 {
  477         if (MACH_PORT_VALID(object))
  478                 ipc_port_release_send((ipc_port_t) object);
  479 }
  480 
  481 /*
  482  * Rename all of the functions in the pager interface, to avoid
  483  * confusing them with the kernel interface.
  484  */
  485 
  486 #define memory_object_init              device_pager_init_pager
  487 #define memory_object_terminate         device_pager_terminate
  488 #define memory_object_copy              device_pager_copy
  489 #define memory_object_data_request      device_pager_data_request
  490 #define memory_object_data_unlock       device_pager_data_unlock
  491 #define memory_object_data_write        device_pager_data_write
  492 #define memory_object_lock_completed    device_pager_lock_completed
  493 #define memory_object_supply_completed  device_pager_supply_completed
  494 #define memory_object_data_return       device_pager_data_return
  495 #define memory_object_change_completed  device_pager_change_completed
  496 
  497 boolean_t       device_pager_debug = FALSE;
  498 
  499 boolean_t       device_pager_data_request_done(io_req_t);       /* forward */
  500 boolean_t       device_pager_data_write_done(io_req_t);         /* forward */
  501 
  502 
  503 kern_return_t   memory_object_data_request(
  504         ipc_port_t      pager,
  505         ipc_port_t      pager_request,
  506         vm_offset_t     offset,
  507         vm_size_t       length,
  508         vm_prot_t       protection_required)
  509 {
  510         register dev_pager_t    ds;
  511 
  512 #ifdef lint
  513         protection_required++;
  514 #endif /* lint */
  515 
  516         if (device_pager_debug)
  517             printf("(device_pager)data_request: pager=%d, offset=%#x, length=%#x\n",
  518                    (vm_offset_t)pager, offset, length);
  519 
  520         ds = dev_pager_hash_lookup((ipc_port_t)pager);
  521         if (ds == DEV_PAGER_NULL)
  522                 panic("(device_pager)data_request: lookup failed");
  523 
  524         if (ds->pager_request != pager_request)
  525                 panic("(device_pager)data_request: bad pager_request");
  526 
  527         if (ds->type == CHAR_PAGER_TYPE) {
  528             register vm_object_t        object;
  529             vm_offset_t                 device_map_page(void *,vm_offset_t);
  530 
  531 #if     NORMA_VM
  532             object = vm_object_lookup(pager);
  533 #else   /* NORMA_VM */
  534             object = vm_object_lookup(pager_request);
  535 #endif  /* NORMA_VM */
  536             if (object == VM_OBJECT_NULL) {
  537                     (void) r_memory_object_data_error(pager_request,
  538                                                       offset, length,
  539                                                       KERN_FAILURE);
  540                     dev_pager_deallocate(ds);
  541                     return KERN_SUCCESS;
  542             }
  543 
  544             vm_object_page_map(object,
  545                                offset, length,
  546                                device_map_page, (char *)ds);
  547 
  548             vm_object_deallocate(object);
  549         }
  550         else {
  551             register io_req_t   ior;
  552             register device_t   device;
  553             io_return_t         result;
  554 
  555             panic("(device_pager)data_request: dev pager");
  556             
  557             device = ds->device;
  558             device_reference(device);
  559             dev_pager_deallocate(ds);
  560             
  561             /*
  562              * Package the read for the device driver.
  563              */
  564             io_req_alloc(ior, 0);
  565             
  566             ior->io_device      = device;
  567             ior->io_unit        = device->dev_number;
  568             ior->io_op          = IO_READ | IO_CALL;
  569             ior->io_mode        = 0;
  570             ior->io_recnum      = offset / device->bsize;
  571             ior->io_data        = 0;            /* driver must allocate */
  572             ior->io_count       = length;
  573             ior->io_alloc_size  = 0;            /* no data allocated yet */
  574             ior->io_residual    = 0;
  575             ior->io_error       = 0;
  576             ior->io_done        = device_pager_data_request_done;
  577             ior->io_reply_port  = pager_request;
  578             ior->io_reply_port_type = MACH_MSG_TYPE_PORT_SEND;
  579             
  580             result = (*device->dev_ops->d_read)(device->dev_number, ior);
  581             if (result == D_IO_QUEUED)
  582                 return KERN_SUCCESS;
  583             
  584             /*
  585              * Return by queuing IOR for io_done thread, to reply in
  586              * correct environment (kernel).
  587              */
  588             ior->io_error = result;
  589             iodone(ior);
  590         }
  591 
  592         dev_pager_deallocate(ds);
  593 
  594         return KERN_SUCCESS;
  595 }
  596 
  597 /*
  598  * Always called by io_done thread.
  599  */
  600 boolean_t device_pager_data_request_done(register io_req_t      ior)
  601 {
  602         vm_offset_t     start_alloc, end_alloc;
  603         vm_size_t       size_read;
  604 
  605         if (ior->io_error == D_SUCCESS) {
  606             size_read = ior->io_count;
  607             if (ior->io_residual) {
  608                 if (device_pager_debug)
  609                     printf("(device_pager)data_request_done: r: 0x%x\n",ior->io_residual);
  610                 bzero( (char *) (&ior->io_data[ior->io_count - 
  611                                                ior->io_residual]),
  612                       (unsigned) ior->io_residual);
  613             }
  614         } else {
  615             size_read = ior->io_count - ior->io_residual;
  616         }
  617 
  618         start_alloc = trunc_page((vm_offset_t)ior->io_data);
  619         end_alloc   = start_alloc + round_page(ior->io_alloc_size);
  620 
  621         if (ior->io_error == D_SUCCESS) {
  622             vm_map_copy_t copy;
  623             kern_return_t kr;
  624 
  625             kr = vm_map_copyin(kernel_map, (vm_offset_t)ior->io_data,
  626                                 size_read, TRUE, &copy);
  627             if (kr != KERN_SUCCESS)
  628                 panic("device_pager_data_request_done");
  629 
  630             (void) r_memory_object_data_provided(
  631                                         ior->io_reply_port,
  632                                         ior->io_recnum * ior->io_device->bsize,
  633                                         (vm_offset_t)copy,
  634                                         size_read,
  635                                         VM_PROT_NONE);
  636         }
  637         else {
  638             (void) r_memory_object_data_error(
  639                                         ior->io_reply_port,
  640                                         ior->io_recnum * ior->io_device->bsize,
  641                                         (vm_size_t)ior->io_count,
  642                                         ior->io_error);
  643         }
  644 
  645         (void)vm_deallocate(kernel_map,
  646                             start_alloc,
  647                             end_alloc - start_alloc);
  648         device_deallocate(ior->io_device);
  649         return TRUE;
  650 }
  651 
  652 kern_return_t memory_object_data_write(
  653         ipc_port_t              pager,
  654         ipc_port_t              pager_request,
  655         register vm_offset_t    offset,
  656         register pointer_t      addr,
  657         vm_size_t               data_count)
  658 {
  659         register dev_pager_t    ds;
  660         register device_t       device;
  661         register io_req_t       ior;
  662         kern_return_t           result;
  663 
  664         panic("(device_pager)data_write: called");
  665 
  666         ds = dev_pager_hash_lookup((ipc_port_t)pager);
  667         if (ds == DEV_PAGER_NULL)
  668                 panic("(device_pager)data_write: lookup failed");
  669 
  670         if (ds->pager_request != pager_request)
  671                 panic("(device_pager)data_write: bad pager_request");
  672 
  673         if (ds->type == CHAR_PAGER_TYPE)
  674                 panic("(device_pager)data_write: char pager");
  675 
  676         device = ds->device;
  677         device_reference(device);
  678         dev_pager_deallocate(ds);
  679 
  680         /*
  681          * Package the write request for the device driver.
  682          */
  683         io_req_alloc(ior, data_count);
  684 
  685         ior->io_device          = device;
  686         ior->io_unit            = device->dev_number;
  687         ior->io_op              = IO_WRITE | IO_CALL;
  688         ior->io_mode            = 0;
  689         ior->io_recnum          = offset / device->bsize;
  690         ior->io_data            = (io_buf_ptr_t)addr;
  691         ior->io_count           = data_count;
  692         ior->io_alloc_size      = data_count;   /* amount to deallocate */
  693         ior->io_residual        = 0;
  694         ior->io_error           = 0;
  695         ior->io_done            = device_pager_data_write_done;
  696         ior->io_reply_port      = IP_NULL;
  697 
  698         result = (*device->dev_ops->d_write)(device->dev_number, ior);
  699 
  700         if (result != D_IO_QUEUED) {
  701             device_write_dealloc(ior);
  702             io_req_free((vm_offset_t)ior);
  703             device_deallocate(device);
  704         }
  705 
  706         return KERN_SUCCESS;
  707 }
  708 
  709 boolean_t device_pager_data_write_done(
  710         register io_req_t       ior)
  711 {
  712         device_write_dealloc(ior);
  713         device_deallocate(ior->io_device);
  714 
  715         return TRUE;
  716 }
  717 
  718 kern_return_t memory_object_copy(
  719         ipc_port_t              pager,
  720         ipc_port_t              pager_request,
  721         register vm_offset_t    offset,
  722         register vm_size_t      length,
  723         ipc_port_t              new_pager)
  724 {
  725         panic("(device_pager)copy: called");
  726         return KERN_FAILURE;
  727 }
  728 
  729 kern_return_t
  730 memory_object_supply_completed(
  731         ipc_port_t pager,
  732         ipc_port_t pager_request,
  733         vm_offset_t offset,
  734         vm_size_t length,
  735         kern_return_t result,
  736         vm_offset_t error_offset)
  737 {
  738         panic("(device_pager)supply_completed: called");
  739         return KERN_FAILURE;
  740 }
  741 
  742 kern_return_t
  743 memory_object_data_return(
  744         ipc_port_t              pager,
  745         ipc_port_t              pager_request,
  746         vm_offset_t             offset,
  747         register pointer_t      addr,
  748         vm_size_t               data_cnt,
  749         boolean_t               dirty,
  750         boolean_t               kernel_copy)
  751 {
  752         panic("(device_pager)data_return: called");
  753         return KERN_FAILURE;
  754 }
  755 
  756 kern_return_t
  757 memory_object_change_completed(
  758         ipc_port_t pager,
  759         boolean_t may_cache,
  760         memory_object_copy_strategy_t copy_strategy)
  761 {
  762         panic("(device_pager)change_completed: called");
  763         return KERN_FAILURE;
  764 }
  765 
  766 /*
  767  *      The mapping function takes a byte offset, but returns
  768  *      a machine-dependent page frame number.  We convert
  769  *      that into something that the pmap module will
  770  *      accept later.
  771  */
  772 vm_offset_t device_map_page(
  773         void            *dsp,
  774         vm_offset_t     offset)
  775 {
  776         register dev_pager_t    ds = (dev_pager_t) dsp;
  777 
  778         return (*ds->device->dev_ops->d_mmap)
  779                         (ds->device->dev_number, offset, ds->prot);
  780 }
  781 
  782 kern_return_t memory_object_init(
  783         ipc_port_t      pager,
  784         ipc_port_t      pager_request,
  785         ipc_port_t      pager_name,
  786         vm_size_t       pager_page_size)
  787 {
  788         register dev_pager_t    ds;
  789 
  790         if (device_pager_debug)
  791                 printf("(device_pager)init: pager=%#x, request=%#x, name=%#x\n",
  792                        (vm_offset_t) pager, (vm_offset_t) pager_request,
  793                        (vm_offset_t) pager_name);
  794 
  795         assert(pager_page_size == PAGE_SIZE);
  796         assert(IP_VALID(pager_request));
  797         assert(IP_VALID(pager_name));
  798 
  799         ds = dev_pager_hash_lookup(pager);
  800         assert(ds != DEV_PAGER_NULL);
  801 
  802         assert(ds->client_count == 0);
  803         assert(ds->pager_request == IP_NULL);
  804         assert(ds->pager_name == IP_NULL);
  805 
  806         ds->client_count = 1;
  807 
  808         /*
  809          * We save the send rights for the request and name ports.
  810          */
  811 
  812         ds->pager_request = pager_request;
  813         ds->pager_name = pager_name;
  814 
  815         if (ds->type == CHAR_PAGER_TYPE) {
  816             /*
  817              * Reply that the object is ready
  818              */
  819             (void) r_memory_object_set_attributes(pager_request,
  820                                                 TRUE,   /* ready */
  821                                                 FALSE,  /* do not cache */
  822                                                 MEMORY_OBJECT_COPY_NONE);
  823         } else {
  824             (void) r_memory_object_set_attributes(pager_request,
  825                                                 TRUE,   /* ready */
  826                                                 TRUE,   /* cache */
  827                                                 MEMORY_OBJECT_COPY_DELAY);
  828         }
  829 
  830         dev_pager_deallocate(ds);
  831         return KERN_SUCCESS;
  832 }
  833 
  834 kern_return_t memory_object_terminate(
  835         ipc_port_t      pager,
  836         ipc_port_t      pager_request,
  837         ipc_port_t      pager_name)
  838 {
  839         register dev_pager_t    ds;
  840 
  841         assert(IP_VALID(pager_request));
  842         assert(IP_VALID(pager_name));
  843 
  844         ds = dev_pager_hash_lookup(pager);
  845         assert(ds != DEV_PAGER_NULL);
  846 
  847         assert(ds->client_count == 1);
  848         assert(ds->pager_request == pager_request);
  849         assert(ds->pager_name == pager_name);
  850 
  851         dev_pager_hash_delete(ds->pager);
  852         dev_pager_hash_delete((ipc_port_t)ds->device);  /* HACK */
  853         device_deallocate(ds->device);
  854 
  855         /* release the send rights we have saved from the init call */
  856 
  857         ipc_port_release_send(pager_request);
  858         ipc_port_release_send(pager_name);
  859 
  860         /* release the naked receive rights we just acquired */
  861 
  862         ipc_port_release_receive(pager_request);
  863         ipc_port_release_receive(pager_name);
  864 
  865         /* release the kernel's receive right for the pager port */
  866 
  867         ipc_port_dealloc_kernel(pager);
  868 
  869         /* once for ref from lookup, once to make it go away */
  870         dev_pager_deallocate(ds);
  871         dev_pager_deallocate(ds);
  872 
  873         return KERN_SUCCESS;
  874 }
  875 
  876 kern_return_t memory_object_data_unlock(
  877         ipc_port_t memory_object,
  878         ipc_port_t memory_control_port,
  879         vm_offset_t offset,
  880         vm_size_t length,
  881         vm_prot_t desired_access)
  882 {
  883 #ifdef  lint
  884         memory_object++; memory_control_port++; offset++; length++; desired_access++;
  885 #endif  /* lint */
  886 
  887         panic("(device_pager)data_unlock: called");
  888         return KERN_FAILURE;
  889 }
  890 
  891 kern_return_t memory_object_lock_completed(
  892         ipc_port_t      memory_object,
  893         ipc_port_t      pager_request_port,
  894         vm_offset_t     offset,
  895         vm_size_t       length)
  896 {
  897 #ifdef  lint
  898         memory_object++; pager_request_port++; offset++; length++;
  899 #endif  /* lint */
  900 
  901         panic("(device_pager)lock_completed: called");
  902         return KERN_FAILURE;
  903 }
  904 
  905 /*
  906  * Include memory_object_server in this file to avoid name
  907  * conflicts with other possible pagers.
  908  */
  909 #define memory_object_server            device_pager_server
  910 #define memory_object_server_routine    device_pager_server_routine
  911 #include <device/device_pager_server.c>
  912 
  913 void device_pager_init(void)
  914 {
  915         register vm_size_t      size;
  916 
  917         /*
  918          * Initialize zone of paging structures.
  919          */
  920         size = sizeof(struct dev_pager);
  921         dev_pager_zone = zinit(size,
  922                                 (vm_size_t) size * 1000,
  923                                 PAGE_SIZE,
  924                                 FALSE,
  925                                 "device pager structures");
  926 
  927         /*
  928          *      Initialize the name port hashing stuff.
  929          */
  930         dev_pager_hash_init();
  931 }



This page is part of the FreeBSD/Linux Kernel Cross-Reference, and was automatically generated using a modified version of the LXR engine.