FreeBSD/Linux Kernel Cross Reference
sys/vm/vm_object.c


    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1987-1993 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        vm_object.c,v $
   29  * Revision 2.37  93/01/14  18:01:42  danner
   30  *      Added ANSI function prototypes.
   31  *      [92/12/30            dbg]
   32  *      64bit cleanup.
   33  *      [92/12/01            af]
   34  * 
   35  * Revision 2.36  92/08/03  18:01:47  jfriedl
   36  *      removed silly prototypes
   37  *      [92/08/02            jfriedl]
   38  * 
   39  * Revision 2.35  92/05/21  17:26:23  jfriedl
   40  *      Cleanup to quiet gcc warnings.
   41  *      [92/05/16            jfriedl]
   42  * 
   43  * Revision 2.34  92/04/01  19:37:41  rpd
   44  *      Fixed vm_object_pager_wakeup to use ip_active.  From dlb.
   45  *      [92/03/22            rpd]
   46  * 
   47  * Revision 2.33  92/03/10  16:30:12  jsb
   48  *      Fix to NORMA_VM pager_name merge.
   49  *      [92/03/10  13:45:01  jsb]
   50  * 
   51  *      Merged in NORMA_VM strategy for pager_name ports.
   52  *      Should one day reconcile the jsb and rpd schemes.
   53  *      [92/03/10  08:17:32  jsb]
   54  * 
   55  *      Added new NORMA_VM changes.
   56  *      [92/03/06  16:35:19  jsb]
   57  * 
   58  *      (Naively) handle temporary objects in vm_object_copy_strategically.
   59  *      This now happens because of new MEMORY_OBJECT_COPY_TEMPORARY option.
   60  *      [92/03/06  16:22:17  jsb]
   61  * 
   62  *      Use IKOT_PAGER_TERMINATING to solve terminate/init race.
   63  *      Includes fixes from David Black.
   64  *      [92/03/06  16:15:41  jsb]
   65  * 
   66  *      Changes for object->pager_request now being a pager_request_t.
   67  *      [92/03/06  15:20:07  jsb]
   68  * 
   69  *      Removed old NORMA_VM stuff.
   70  *      [92/03/06  14:37:19  jsb]
   71  * 
   72  * Revision 2.32  92/03/04  20:26:49  rpd
   73  *      Fixed the most blatant errors in vm_object_copy_call.
   74  *      [92/03/04  17:52:30  jsb]
   75  * 
   76  * Revision 2.31  92/03/01  15:16:01  rpd
   77  *      Fixed vm_object_copy_temporary to only set shadowed
   78  *      if symmetric copy-on-write is used.  From dlb.
   79  *      [92/03/01            rpd]
   80  * 
   81  * Revision 2.30  92/02/23  19:51:09  elf
   82  *      Remove dest_wired logic from vm_object_copy_slowly.
   83  *      [92/02/21  10:16:35  dlb]
   84  * 
   85  *      Maintain shadowed field in objects for use by new
   86  *      VM_INHERIT_SHARE code (vm_map_fork).  Restore old
   87  *      interface to vm_object_pmap_protect.
   88  *      [92/02/19  14:28:12  dlb]
   89  * 
   90  *      Use use_shared_copy instead of new copy strategy for
   91  *      temporary objects.
   92  *      [92/01/07  11:15:53  dlb]
   93  * 
   94  *      Add asynchronous copy-on-write logic for temporary objects.
   95  *      [92/01/06  16:24:59  dlb]
   96  * 
   97  * Revision 2.29  92/02/19  15:10:25  elf
   98  *      Fixed vm_object_collapse to unlock the object before calling
   99  *      ipc_port_dealloc_kernel (which might allocate memory).
  100  *      [92/02/16            rpd]
  101  * 
  102  *      Picked up dlb fix for vm_object_copy_slowly,
  103  *      for when vm_fault_page is interrupted.
  104  *      [92/02/14            rpd]
  105  * 
  106  * Revision 2.28  92/01/14  16:48:10  rpd
  107  *      Fixed vm_object_lookup to use IP_VALID, lock the port, and
  108  *      allow only IKOT_PAGING_REQUEST, not IKOT_PAGER.
  109  *      Added vm_object_lookup_name (which uses IKOT_PAGING_NAME).
  110  *      Removed vm_object_debug.
  111  *      [91/12/31            rpd]
  112  * 
  113  *      Changed vm_object_allocate, vm_object_enter, etc so that
  114  *      all objects have name ports.  Added IKOT_PAGING_NAME.
  115  *      Split vm_object_init into vm_object_bootstrap and vm_object_init.
  116  *      [91/12/28            rpd]
  117  * 
  118  * Revision 2.27  91/12/11  08:44:01  jsb
  119  *      Changed vm_object_coalesce to also check for paging references.
  120  *      This fixes a problem with page-list map-copy objects.
  121  *      [91/12/03            rpd]
  122  * 
  123  * Revision 2.26  91/12/10  13:27:07  jsb
  124  *      Comment out noisy printf. (Avoiding dealloc ...)
  125  *      [91/12/10  12:50:34  jsb]
  126  * 
  127  * Revision 2.25  91/11/12  11:52:09  rvb
  128  *      Added simple_lock_pause.
  129  *      [91/11/12            rpd]
  130  * 
  131  * Revision 2.24  91/08/28  11:18:37  jsb
  132  *      Handle dirty private pages correctly in vm_object_terminate().
  133  *      [91/07/30  14:18:54  dlb]
  134  * 
  135  *      single_use --> use_old_pageout in object initialization template.
  136  *      Support precious pages in vm_object_terminate().
  137  *      [91/07/03  14:18:07  dlb]
  138  * 
  139  * Revision 2.23  91/07/31  18:21:27  dbg
  140  *      Add vm_object_page_map, to attach a set of physical pages to an
  141  *      object.
  142  *      [91/07/30  17:26:58  dbg]
  143  * 
  144  * Revision 2.22  91/07/01  08:27:52  jsb
  145  *      Improved NORMA_VM support, including support for memory_object_create.
  146  *      [91/06/29  17:06:11  jsb]
  147  * 
  148  * Revision 2.21  91/06/25  11:07:02  rpd
  149  *      Fixed includes to avoid norma files unless they are really needed.
  150  *      [91/06/25            rpd]
  151  * 
  152  * Revision 2.20  91/06/25  10:34:11  rpd
  153  *      Changed memory_object_t to ipc_port_t where appropriate.
  154  *      Removed extraneous casts.
  155  *      [91/05/28            rpd]
  156  * 
  157  * Revision 2.19  91/06/17  15:49:25  jsb
  158  *      Added NORMA_VM support.
  159  *      [91/06/17  15:28:16  jsb]
  160  * 
  161  * Revision 2.18  91/05/18  14:41:22  rpd
  162  *      Changed vm_object_deactivate_pages to call vm_page_deactivate
  163  *      on inactive pages, because that is no longer a no-op.
  164  *      [91/04/20            rpd]
  165  * 
  166  *      Fixed vm_object_pmap_protect, vm_object_pmap_remove,
  167  *      vm_object_page_remove, vm_object_copy_call, vm_object_copy_delayed
  168  *      to check for fictitious pages.
  169  *      [91/04/10            rpd]
  170  *      Fixed vm_object_terminate to allow for busy/absent pages.
  171  *      [91/04/02            rpd]
  172  * 
  173  *      Added VM_FAULT_FICTITIOUS_SHORTAGE.
  174  *      [91/03/29            rpd]
  175  *      Added vm/memory_object.h.
  176  *      [91/03/22            rpd]
  177  * 
  178  * Revision 2.17  91/05/14  17:50:19  mrt
  179  *      Correcting copyright
  180  * 
  181  * Revision 2.16  91/05/08  13:34:59  dbg
  182  *      Rearrange initialization code in vm_object_enter to clearly
  183  *      separate 'internal' case (and to avoid a vax GCC bug).
  184  *      [91/04/17            dbg]
  185  * 
  186  * Revision 2.15  91/03/16  15:06:19  rpd
  187  *      Changed vm_object_deactivate_pages and vm_object_terminate to
  188  *      check for busy pages.  Changed how vm_object_terminate
  189  *      interacts with the pageout daemon.
  190  *      [91/03/12            rpd]
  191  *      Fixed vm_object_page_remove to be smart about small regions.
  192  *      [91/03/05            rpd]
  193  *      Added resume, continuation arguments to vm_fault_page.
  194  *      Added continuation argument to VM_PAGE_WAIT.
  195  *      [91/02/05            rpd]
  196  * 
  197  * Revision 2.14  91/02/05  17:59:16  mrt
  198  *      Changed to new Mach copyright
  199  *      [91/02/01  16:33:29  mrt]
  200  * 
  201  * Revision 2.13  91/01/08  16:45:32  rpd
  202  *      Added continuation argument to thread_block.
  203  *      [90/12/08            rpd]
  204  * 
  205  *      Fixed vm_object_terminate to give vm_pageout_page busy pages.
  206  *      [90/11/22            rpd]
  207  * 
  208  *      Changed VM_WAIT to VM_PAGE_WAIT.
  209  *      [90/11/13            rpd]
  210  * 
  211  * Revision 2.12  90/11/06  18:44:12  rpd
  212  *      From dlb@osf.org:
  213  *      If pager initialization is in progress (object->pager_created &&
  214  *      !object->pager_initialized), then vm_object_deallocate must wait
  215  *      for it to complete before terminating the object.  Because anything
  216  *      can happen in the interim, it must recheck its decision to terminate
  217  *      after the wait completes.
  218  * 
  219  * Revision 2.11  90/10/12  13:06:24  rpd
  220  *      In vm_object_copy_slowly, only activate pages returned from
  221  *      vm_fault_page if they aren't already on a pageout queue.
  222  *      [90/10/08            rpd]
  223  * 
  224  * Revision 2.10  90/09/09  14:34:20  rpd
  225  *      Fixed bug in vm_object_copy_slowly.  The pages in the new object
  226  *      should be marked as dirty.
  227  *      [90/09/08            rpd]
  228  * 
  229  * Revision 2.9  90/06/19  23:02:52  rpd
  230  *      Picked up vm_submap_object.
  231  *      [90/06/08            rpd]
  232  * 
  233  * Revision 2.8  90/06/02  15:11:31  rpd
  234  *      Changed vm_object_collapse_bypass_allowed back to TRUE.
  235  *      [90/04/22            rpd]
  236  *      Converted to new IPC.
  237  *      [90/03/26  23:16:54  rpd]
  238  * 
  239  * Revision 2.7  90/05/29  18:38:57  rwd
  240  *      Rfr change to reflect change in vm_pageout_page.
  241  *      [90/04/12  13:48:31  rwd]
  242  * 
  243  * Revision 2.6  90/05/03  15:53:08  dbg
  244  *      Make vm_object_pmap_protect follow shadow chains if removing all
  245  *      permissions.  Fix check for using pmap_protect on entire range.
  246  *      [90/04/18            dbg]
  247  * 
  248  *      Pass 'flush' argument to vm_pageout_page.
  249  *      [90/03/28            dbg]
  250  * 
  251  * Revision 2.5  90/02/22  20:06:24  dbg
  252  *      Add changes from mainline:
  253  * 
  254  *              Fix comment on vm_object_delayed.
  255  *              [89/12/18            dlb]
  256  *              Revised use of PAGE_WAKEUP macros.  Don't clear busy flag when
  257  *              restarting unlock requests.
  258  *              [89/12/13            dlb]
  259  *              Fix locking problems in vm_object_copy_slowly:
  260  *                  1.  Must hold lock on object containing page being copied
  261  *                      from (result_page) object when activating that page.
  262  *                  2.  Fix typo that tried to lock source object twice if
  263  *                      vm_fault_page returned two objects; this rare case
  264  *                      would hang a multiprocessor if it occurred.
  265  *              [89/12/11            dlb]
  266  * 
  267  *              Complete un-optimized overwriting implementation.  Move
  268  *              futuristic implementation here.  Clean up and document
  269  *              copy routines.
  270  *              [89/08/05  18:01:22  mwyoung]
  271  *              Add vm_object_pmap_protect() to perform pmap protection
  272  *              changes more efficiently.
  273  *              [89/07/07  14:06:34  mwyoung]
  274  *              Split up vm_object_copy into one part that does not unlock the
  275  *              object (i.e., does not sleep), and another that may.  Also, use
  276  *              separate routines for each copy strategy.
  277  *              [89/07/06  18:39:55  mwyoung]
  278  * 
  279  * Revision 2.4  90/01/11  11:47:58  dbg
  280  *      Use vm_fault_cleanup.
  281  *      [89/12/13            dbg]
  282  * 
  283  *      Fix locking bug in vm_object_copy_slowly.
  284  *      [89/12/13            dbg]
  285  * 
  286  *      Pick up changes from mainline:
  287  *              Remove size argument from vm_external_destroy.
  288  *              Add paging_offset to size used in vm_external_create.
  289  *              Account for existence_info movement in vm_object_collapse.
  290  *              [89/10/16  15:30:16  af]
  291  * 
  292  *              Remove XP conditionals.
  293  *              Document global variables.
  294  *              [89/10/10            mwyoung]
  295  * 
  296  *              Added initialization for lock_in_progress, lock_restart fields.
  297  *              [89/08/07            mwyoung]
  298  * 
  299  *              Beautify dirty bit handling in vm_object_terminate().
  300  *              Don't write back "error" pages in vm_object_terminate().
  301  *              [89/04/22            mwyoung]
  302  * 
  303  *              Removed vm_object_list, vm_object_count.
  304  *              [89/08/31  19:45:43  rpd]
  305  * 
  306  *              Optimization from NeXT: vm_object_deactivate_pages checks now
  307  *      that the page is inactive before calling vm_page_deactivate.  Also,
  308  *              initialize the last_alloc field in vm_object_template.
  309  *              [89/08/19  23:46:42  rpd]
  310  * 
  311  * Revision 2.3  89/10/16  15:22:22  rwd
  312  *      In vm_object_collapse: leave paging_offset zero if the backing
  313  *      object does not have a pager.
  314  *      [89/10/12            dbg]
  315  * 
  316  * Revision 2.2  89/09/08  11:28:45  dbg
  317  *      Add keep_wired parameter to vm_object_copy_slowly to wire new
  318  *      pages.
  319  *      [89/07/14            dbg]
  320  * 
  321  * 22-May-89  David Golub (dbg) at Carnegie-Mellon University
  322  *      Refuse vm_object_pager_create if no default memory manager.
  323  *
  324  * 28-Apr-89  David Golub (dbg) at Carnegie-Mellon University
  325  *      Changes for MACH_KERNEL:
  326  *      . Use port_alloc_internal to simplify port_alloc call.
  327  *      . Remove non-XP code.
  328  *
  329  * Revision 2.23  89/04/23  13:25:43  gm0w
  330  *      Changed assert(!p->busy) to assert(!p->busy || p->absent) in
  331  *      vm_object_collapse().  Change from jsb/mwyoung.
  332  *      [89/04/23            gm0w]
  333  * 
  334  * Revision 2.22  89/04/18  21:26:29  mwyoung
  335  *      Recent history [mwyoung]:
  336  *              Use hints about pages not in main memory.
  337  *              Get default memory manager port more carefully, as it
  338  *               may now be changed after initialization.
  339  *              Use new event mechanism.
  340  *              Correct vm_object_destroy() to actually abort requests.
  341  *              Correct handling of paging offset in vm_object_collapse().
  342  *      Condensed history:
  343  *              Add copy strategy handling, including a routine
  344  *               (vm_object_copy_slowly()) to perform an immediate copy
  345  *               of a region of an object. [mwyoung]
  346  *              Restructure the handling of the relationships among memory
  347  *               objects (the ports), the related fields in the vm_object
  348  *               structure, and routines that manipulate them [mwyoung].
  349  *              Simplify maintenance of the unreferenced-object cache. [mwyoung]
  350  *              Distinguish internal and temporary attributes. [mwyoung]
  351  *              Reorganized and documented maintenance of the
  352  *               vm_object-to-memory_object associations. [mwyoung]
  353  *              Improved external memory management interface. [mwyoung]
  354  *              Several reimplementations/fixes to the object cache
  355  *               and the port to object translation.  [mwyoung, avie, dbg]
  356  *              Create a cache of objects that have no references
  357  *               so that their pages remain in memory (inactive).  [avie]
  358  *              Collapse object tree when reference counts drop to one.
  359  *               Use "paging_in_progress" to prevent collapsing. [dbg]
  360  *              Split up paging system lock, eliminate SPL handling.
  361  *               The only uses of vm_object code at interrupt level
  362  *               uses objects that are only touched at interrupt level. [dbg]
  363  *              Use "copy object" rather than normal shadows for
  364  *               permanent objects.  [dbg]
  365  *              Accommodate external pagers [mwyoung, bolosky].
  366  *              Allow objects to be coalesced to avoid growing address
  367  *               maps during sequential allocations.  [dbg]
  368  *              Optimizations and fixes to copy-on-write [avie, mwyoung, dbg].
  369  *              Use only one object for all kernel data. [avie]
  370  * 
  371  */
  372 /*
  373  *      File:   vm/vm_object.c
  374  *      Author: Avadis Tevanian, Jr., Michael Wayne Young
  375  *
  376  *      Virtual memory object module.
  377  */
  378 
  379 #include <norma_vm.h>
  380 #include <mach_pagemap.h>
  381 
  382 #if     NORMA_VM
  383 #include <norma/xmm_server_rename.h>
  384 #endif  /* NORMA_VM */
  385 
  386 #include <mach/memory_object.h>
  387 #include <mach/memory_object_default.h>
  388 #include <mach/memory_object_user.h>
  389 #include <mach/vm_param.h>
  390 #include <ipc/ipc_port.h>
  391 #include <ipc/ipc_space.h>
  392 #include <kern/assert.h>
  393 #include <kern/lock.h>
  394 #include <kern/queue.h>
  395 #include <kern/xpr.h>
  396 #include <kern/zalloc.h>
  397 #include <vm/memory_object.h>
  398 #include <vm/vm_fault.h>
  399 #include <vm/vm_map.h>
  400 #include <vm/vm_object.h>
  401 #include <vm/vm_page.h>
  402 #include <vm/vm_pageout.h>
  403 
  404 
  405 void memory_object_release(
  406         ipc_port_t      pager,
  407         pager_request_t pager_request,
  408         ipc_port_t      pager_name); /* forward */
  409 
  410 void vm_object_deactivate_pages(vm_object_t);
  411 
  412 /*
  413  *      Virtual memory objects maintain the actual data
  414  *      associated with allocated virtual memory.  A given
  415  *      page of memory exists within exactly one object.
  416  *
  417  *      An object is only deallocated when all "references"
  418  *      are given up.  Only one "reference" to a given
  419  *      region of an object should be writeable.
  420  *
  421  *      Associated with each object is a list of all resident
  422  *      memory pages belonging to that object; this list is
  423  *      maintained by the "vm_page" module, but locked by the object's
  424  *      lock.
  425  *
  426  *      Each object also records the memory object port
  427  *      that is used by the kernel to request and write
  428  *      back data (the memory object port, field "pager"),
  429  *      and the ports provided to the memory manager, the server that
  430  *      manages that data, to return data and control its
  431  *      use (the memory object control port, field "pager_request")
  432  *      and for naming (the memory object name port, field "pager_name").
  433  *
  434  *      Virtual memory objects are allocated to provide
  435  *      zero-filled memory (vm_allocate) or map a user-defined
  436  *      memory object into a virtual address space (vm_map).
  437  *
  438  *      Virtual memory objects that refer to a user-defined
  439  *      memory object are called "permanent", because all changes
  440  *      made in virtual memory are reflected back to the
  441  *      memory manager, which may then store them permanently.
  442  *      Other virtual memory objects are called "temporary",
  443  *      meaning that changes need be written back only when
  444  *      necessary to reclaim pages, and that storage associated
  445  *      with the object can be discarded once it is no longer
  446  *      mapped.
  447  *
  448  *      A permanent memory object may be mapped into more
  449  *      than one virtual address space.  Moreover, two threads
  450  *      may attempt to make the first mapping of a memory
  451  *      object concurrently.  Only one thread is allowed to
  452  *      complete this mapping; all others wait for the
  453  *      "pager_initialized" field to be asserted, indicating
  454  *      that the first thread has initialized all of the
  455  *      necessary fields in the virtual memory object structure.
  456  *
  457  *      The kernel relies on a *default memory manager* to
  458  *      provide backing storage for the zero-filled virtual
  459  *      memory objects.  The memory object ports associated
  460  *      with these temporary virtual memory objects are only
  461  *      generated and passed to the default memory manager
  462  *      when it becomes necessary.  Virtual memory objects
  463  *      that depend on the default memory manager are called
  464  *      "internal".  The "pager_created" field is provided to
  465  *      indicate whether these ports have ever been allocated.
  466  *      
  467  *      The kernel may also create virtual memory objects to
  468  *      hold changed pages after a copy-on-write operation.
  469  *      In this case, the virtual memory object (and its
  470  *      backing storage -- its memory object) only contain
  471  *      those pages that have been changed.  The "shadow"
  472  *      field refers to the virtual memory object that contains
  473  *      the remainder of the contents.  The "shadow_offset"
  474  *      field indicates where in the "shadow" these contents begin.
  475  *      The "copy" field refers to a virtual memory object
  476  *      to which changed pages must be copied before changing
  477  *      this object, in order to implement another form
  478  *      of copy-on-write optimization.
  479  *
  480  *      The virtual memory object structure also records
  481  *      the attributes associated with its memory object.
  482  *      The "pager_ready", "can_persist" and "copy_strategy"
  483  *      fields represent those attributes.  The "cached_list"
  484  *      field is used in the implementation of the persistence
  485  *      attribute.
  486  *
  487  * ZZZ Continue this comment.
  488  */
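/*
 *      Editor's illustration (not part of the original source): the
 *      "shadow"/"shadow_offset" fields described above imply the
 *      lookup pattern sketched below.  This is a simplified view that
 *      omits the locking, busy-page, and pager interactions that the
 *      real fault path performs (see vm_fault_page in vm/vm_fault.c);
 *      shadow_chain_lookup is a hypothetical name.
 */
#if 0   /* sketch only */
vm_page_t shadow_chain_lookup(
        vm_object_t     object,
        vm_offset_t     offset)
{
        vm_page_t       m;

        while (object != VM_OBJECT_NULL) {
                /* look for a resident page at this offset */
                m = vm_page_lookup(object, offset);
                if (m != VM_PAGE_NULL)
                        return m;       /* found in this object */

                /* not resident here: translate into the shadow object */
                offset += object->shadow_offset;
                object = object->shadow;
        }
        return VM_PAGE_NULL;    /* must be zero-filled or paged in */
}
#endif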
  489 
  490 zone_t          vm_object_zone;         /* vm backing store zone */
  491 
  492 /*
  493  *      All wired-down kernel memory belongs to a single virtual
  494  *      memory object (kernel_object) to avoid wasting data structures.
  495  */
  496 vm_object_t     kernel_object;
  497 
  498 /*
  499  *      Virtual memory objects that are not referenced by
  500  *      any address maps, but that are allowed to persist
  501  *      (an attribute specified by the associated memory manager),
  502  *      are kept in a queue (vm_object_cached_list).
  503  *
  504  *      When an object from this queue is referenced again,
  505  *      for example to make another address space mapping,
  506  *      it must be removed from the queue.  That is, the
  507  *      queue contains *only* objects with zero references.
  508  *
  509  *      The kernel may choose to terminate objects from this
  510  *      queue in order to reclaim storage.  The current policy
  511  *      is to permit a fixed maximum number of unreferenced
  512  *      objects (vm_object_cached_max).
  513  *
  514  *      A simple lock (accessed by routines
  515  *      vm_object_cache_{lock,lock_try,unlock}) governs the
  516  *      object cache.  It must be held when objects are
  517  *      added to or removed from the cache (in vm_object_terminate).
  518  *      The routines that acquire a reference to a virtual
  519  *      memory object based on one of the memory object ports
  520  *      must also lock the cache.
  521  *
  522  *      Ideally, the object cache should be more isolated
  523  *      from the reference mechanism, so that the lock need
  524  *      not be held to make simple references.
  525  */
  526 queue_head_t    vm_object_cached_list;
  527 int             vm_object_cached_count;
  528 int             vm_object_cached_max = 100;     /* may be patched */
  529 
  530 decl_simple_lock_data(,vm_object_cached_lock_data)
  531 
  532 #define vm_object_cache_lock()          \
  533                 simple_lock(&vm_object_cached_lock_data)
  534 #define vm_object_cache_lock_try()      \
  535                 simple_lock_try(&vm_object_cached_lock_data)
  536 #define vm_object_cache_unlock()        \
  537                 simple_unlock(&vm_object_cached_lock_data)
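/*
 *      Usage sketch (editor's addition): per the locking rules stated
 *      above, any transition of an object onto or off of the cached
 *      list brackets the queue operation with the cache lock, as
 *      vm_object_deallocate and vm_object_terminate below do:
 */
#if 0   /* sketch only */
        vm_object_cache_lock();
        queue_enter(&vm_object_cached_list, object,
                    vm_object_t, cached_list);
        vm_object_cached_count++;
        vm_object_cache_unlock();
#endif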
  538 
  539 /*
  540  *      Virtual memory objects are initialized from
  541  *      a template (see vm_object_allocate).
  542  *
  543  *      When adding a new field to the virtual memory
  544  *      object structure, be sure to add initialization
  545  *      (see vm_object_init).
  546  */
  547 vm_object_t     vm_object_template;
  548 
  549 /*
  550  *      vm_object_allocate:
  551  *
  552  *      Returns a new object with the given size.
  553  */
  554 
  555 vm_object_t _vm_object_allocate(
  556         vm_size_t               size)
  557 {
  558         register vm_object_t object;
  559 
  560         object = (vm_object_t) zalloc(vm_object_zone);
  561 
  562         *object = *vm_object_template;
  563         queue_init(&object->memq);
  564         vm_object_lock_init(object);
  565         object->size = size;
  566 
  567         return object;
  568 }
  569 
  570 vm_object_t vm_object_allocate(
  571         vm_size_t       size)
  572 {
  573         register vm_object_t object;
  574         register ipc_port_t port;
  575 
  576         object = _vm_object_allocate(size);
  577 #if     !NORMA_VM
  578         port = ipc_port_alloc_kernel();
  579         if (port == IP_NULL)
  580                 panic("vm_object_allocate");
  581         object->pager_name = port;
  582         ipc_kobject_set(port, (ipc_kobject_t) object, IKOT_PAGING_NAME);
  583 #endif  /* !NORMA_VM */
  584 
  585         return object;
  586 }
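/*
 *      Usage sketch (editor's addition): a typical caller obtains a
 *      zero-fill object and later gives up its references; the object
 *      may be cached or terminated once the last reference is gone
 *      (see vm_object_deallocate below).  "size" is a hypothetical
 *      local here.
 */
#if 0   /* sketch only */
        vm_object_t     object;

        object = vm_object_allocate(round_page(size)); /* ref_count == 1 */
        vm_object_reference(object);                   /* ref_count == 2 */
        /* ... use the object ... */
        vm_object_deallocate(object);                  /* drop one reference */
        vm_object_deallocate(object);                  /* last ref: may terminate */
#endif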
  587 
  588 /*
  589  *      vm_object_bootstrap:
  590  *
  591  *      Initialize the VM objects module.
  592  */
  593 void vm_object_bootstrap(void)
  594 {
  595         vm_object_zone = zinit((vm_size_t) sizeof(struct vm_object),
  596                                 round_page(512*1024),
  597                                 round_page(12*1024),
  598                                 FALSE, "objects");
  599 
  600         queue_init(&vm_object_cached_list);
  601         simple_lock_init(&vm_object_cached_lock_data);
  602 
  603         /*
  604          *      Fill in a template object, for quick initialization
  605          */
  606 
  607         vm_object_template = (vm_object_t) zalloc(vm_object_zone);
  608         bzero((char *) vm_object_template, sizeof *vm_object_template);
  609 
  610         vm_object_template->ref_count = 1;
  611         vm_object_template->size = 0;
  612         vm_object_template->resident_page_count = 0;
  613         vm_object_template->copy = VM_OBJECT_NULL;
  614         vm_object_template->shadow = VM_OBJECT_NULL;
  615         vm_object_template->shadow_offset = (vm_offset_t) 0;
  616 
  617         vm_object_template->pager = IP_NULL;
  618         vm_object_template->paging_offset = 0;
  619         vm_object_template->pager_request = PAGER_REQUEST_NULL;
  620         vm_object_template->pager_name = IP_NULL;
  621 
  622         vm_object_template->pager_created = FALSE;
  623         vm_object_template->pager_initialized = FALSE;
  624         vm_object_template->pager_ready = FALSE;
  625 
  626         vm_object_template->copy_strategy = MEMORY_OBJECT_COPY_NONE;
  627                 /* ignored if temporary, will be reset before
  628                  * permanent object becomes ready */
  629         vm_object_template->use_shared_copy = FALSE;
  630         vm_object_template->shadowed = FALSE;
  631 
  632         vm_object_template->absent_count = 0;
  633         vm_object_template->all_wanted = 0; /* all bits FALSE */
  634 
  635         vm_object_template->paging_in_progress = 0;
  636         vm_object_template->can_persist = FALSE;
  637         vm_object_template->internal = TRUE;
  638         vm_object_template->temporary = TRUE;
  639         vm_object_template->alive = TRUE;
  640         vm_object_template->lock_in_progress = FALSE;
  641         vm_object_template->lock_restart = FALSE;
  642         vm_object_template->use_old_pageout = TRUE; /* XXX change later */
  643         vm_object_template->last_alloc = (vm_offset_t) 0;
  644 
  645 #if     MACH_PAGEMAP
  646         vm_object_template->existence_info = VM_EXTERNAL_NULL;
  647 #endif  /* MACH_PAGEMAP */
  648 
  649         /*
  650          *      Initialize the "kernel object"
  651          */
  652 
  653         kernel_object = _vm_object_allocate(
  654                 VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS);
  655 
  656         /*
  657          *      Initialize the "submap object".  Make it as large as the
  658          *      kernel object so that no limit is imposed on submap sizes.
  659          */
  660 
  661         vm_submap_object = _vm_object_allocate(
  662                 VM_MAX_KERNEL_ADDRESS - VM_MIN_KERNEL_ADDRESS);
  663 
  664 #if     MACH_PAGEMAP
  665         vm_external_module_initialize();
  666 #endif  /* MACH_PAGEMAP */
  667 }
  668 
  669 void vm_object_init(void)
  670 {
  671 #if     !NORMA_VM
  672         /*
  673          *      Finish initializing the kernel object.
  674          *      The submap object doesn't need a name port.
  675          */
  676 
  677         kernel_object->pager_name = ipc_port_alloc_kernel();
  678         ipc_kobject_set(kernel_object->pager_name,
  679                         (ipc_kobject_t) kernel_object,
  680                         IKOT_PAGING_NAME);
  681 #endif  /* !NORMA_VM */
  682 }
  683 
  684 /*
  685  *      vm_object_reference:
  686  *
  687  *      Gets another reference to the given object.
  688  */
  689 void vm_object_reference(
  690         register vm_object_t    object)
  691 {
  692         if (object == VM_OBJECT_NULL)
  693                 return;
  694 
  695         vm_object_lock(object);
  696         assert(object->ref_count > 0);
  697         object->ref_count++;
  698         vm_object_unlock(object);
  699 }
  700 
  701 /*
  702  *      vm_object_deallocate:
  703  *
  704  *      Release a reference to the specified object,
  705  *      gained either through a vm_object_allocate
  706  *      or a vm_object_reference call.  When all references
  707  *      are gone, storage associated with this object
  708  *      may be relinquished.
  709  *
  710  *      No object may be locked.
  711  */
  712 void vm_object_deallocate(
  713         register vm_object_t    object)
  714 {
  715         vm_object_t     temp;
  716 
  717         while (object != VM_OBJECT_NULL) {
  718 
  719                 /*
  720                  *      The cache holds a reference (uncounted) to
  721                  *      the object; we must lock it before removing
  722                  *      the object.
  723                  */
  724 
  725                 vm_object_cache_lock();
  726 
  727                 /*
  728                  *      Lose the reference
  729                  */
  730                 vm_object_lock(object);
  731                 if (--(object->ref_count) > 0) {
  732 
  733                         /*
  734                          *      If there are still references, then
  735                          *      we are done.
  736                          */
  737                         vm_object_unlock(object);
  738                         vm_object_cache_unlock();
  739                         return;
  740                 }
  741 
  742                 /*
  743                  *      See whether this object can persist.  If so, enter
  744                  *      it in the cache, then deactivate all of its
  745                  *      pages.
  746                  */
  747                 if (object->can_persist) {
  748                         boolean_t       overflow;
  749 
  750                         /*
  751                          *      Enter the object onto the queue
  752                          *      of "cached" objects.  Remember whether
  753                          *      we've caused the queue to overflow,
  754                          *      as a hint.
  755                          */
  756 
  757                         queue_enter(&vm_object_cached_list, object,
  758                                 vm_object_t, cached_list);
  759                         overflow = (++vm_object_cached_count > vm_object_cached_max);
  760                         vm_object_cache_unlock();
  761 
  762                         vm_object_deactivate_pages(object);
  763                         vm_object_unlock(object);
  764 
  765                         /*
  766                          *      If we didn't overflow, or if the queue has
  767                          *      been reduced back to below the specified
  768                          *      minimum, then quit.
  769                          */
  770                         if (!overflow)
  771                                 return;
  772 
  773                         while (TRUE) {
  774                                 vm_object_cache_lock();
  775                                 if (vm_object_cached_count <=
  776                                     vm_object_cached_max) {
  777                                         vm_object_cache_unlock();
  778                                         return;
  779                                 }
  780 
  781                                 /*
  782                                  *      If we must trim down the queue, take
  783                                  *      the first object, and proceed to
  784                                  *      terminate it instead of the original
  785                                  *      object.  Have to wait for pager init
  786                                  *      if it's in progress.
  787                                  */
  788                                 object = (vm_object_t)
  789                                     queue_first(&vm_object_cached_list);
  790                                 vm_object_lock(object);
  791 
  792                                 if (!(object->pager_created &&
  793                                     !object->pager_initialized)) {
  794 
  795                                         /*
  796                                          *  Ok to terminate, hang on to lock.
  797                                          */
  798                                         break;
  799                                 }
  800 
  801                                 vm_object_assert_wait(object,
  802                                         VM_OBJECT_EVENT_INITIALIZED, FALSE);
  803                                 vm_object_unlock(object);
  804                                 vm_object_cache_unlock();
  805                                 thread_block((void (*)()) 0);
  806 
  807                                 /*
  808                                  *  Continue loop to check if cache still
  809                                  *  needs to be trimmed.
  810                                  */
  811                         }
  812 
  813                         /*
  814                          *      Actually remove object from cache.
  815                          */
  816 
  817                         queue_remove(&vm_object_cached_list, object,
  818                                         vm_object_t, cached_list);
  819                         vm_object_cached_count--;
  820 
  821                         assert(object->ref_count == 0);
  822                 }
  823                 else {
  824                         if (object->pager_created &&
  825                             !object->pager_initialized) {
  826 
  827                                 /*
  828                                  *      Have to wait for initialization.
  829                                  *      Put reference back and retry
  830                                  *      when it's initialized.
  831                                  */
  832                                 object->ref_count++;
  833                                 vm_object_assert_wait(object,
  834                                         VM_OBJECT_EVENT_INITIALIZED, FALSE);
  835                                 vm_object_unlock(object);
  836                                 vm_object_cache_unlock();
  837                                 thread_block((void (*)()) 0);
  838                                 continue;
  839                         }
  840                 }
  841 
  842                 /*
  843                  *      Take the reference to the shadow object
  844                  *      out of the object to be destroyed.
  845                  */
  846 
  847                 temp = object->shadow;
  848 
  849                 /*
  850                  *      Destroy the object; the cache lock will
  851                  *      be released in the process.
  852                  */
  853 
  854                 vm_object_terminate(object);
  855 
  856                 /*
  857                  *      Deallocate the reference to the shadow
  858                  *      by continuing the loop with that object
  859                  *      in place of the original.
  860                  */
  861 
  862                 object = temp;
  863         }
  864 }
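/*
 *      Editor's note: the while loop above is an iterative form of
 *      tail recursion on the shadow chain -- terminating an object
 *      leaves the caller holding the shadow's reference, which is
 *      deallocated on the next pass.  A recursive rendering (which
 *      the real code avoids to bound kernel stack use, and which
 *      omits the caching and pager-initialization paths shown above)
 *      would read:
 */
#if 0   /* sketch only */
void vm_object_deallocate_sketch(
        vm_object_t     object)
{
        vm_object_t     shadow;

        if (object == VM_OBJECT_NULL)
                return;

        vm_object_cache_lock();
        vm_object_lock(object);
        if (--object->ref_count > 0) {
                vm_object_unlock(object);
                vm_object_cache_unlock();
                return;
        }

        shadow = object->shadow;
        vm_object_terminate(object);            /* unlocks object and cache */
        vm_object_deallocate_sketch(shadow);    /* continue down the chain */
}
#endif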
  865 
  866 boolean_t       vm_object_terminate_remove_all = FALSE;
  867 
  868 /*
  869  *      Routine:        vm_object_terminate
  870  *      Purpose:
  871  *              Free all resources associated with a vm_object.
  872  *      In/out conditions:
  873  *              Upon entry, the object and the cache must be locked,
  874  *              and the object must have no references.
  875  *
  876  *              The shadow object reference is left alone.
  877  *
  878  *              Upon exit, the cache will be unlocked, and the
  879  *              object will cease to exist.
  880  */
  881 void vm_object_terminate(
  882         register vm_object_t    object)
  883 {
  884         register vm_page_t      p;
  885         vm_object_t             shadow_object;
  886 
  887         /*
  888          *      Make sure the object isn't already being terminated
  889          */
  890 
  891         assert(object->alive);
  892         object->alive = FALSE;
  893 
  894         /*
  895          *      Make sure no one can look us up now.
  896          */
  897 
  898         vm_object_remove(object);
  899         vm_object_cache_unlock();
  900 
  901         /*
  902          *      Detach the object from its shadow if we are the shadow's
  903          *      copy.
  904          */
  905         if ((shadow_object = object->shadow) != VM_OBJECT_NULL) {
  906                 vm_object_lock(shadow_object);
  907                 assert((shadow_object->copy == object) ||
  908                        (shadow_object->copy == VM_OBJECT_NULL));
  909                 shadow_object->copy = VM_OBJECT_NULL;
  910                 vm_object_unlock(shadow_object);
  911         }
  912 
  913         /*
  914          *      The pageout daemon might be playing with our pages.
  915          *      Now that the object is dead, it won't touch any more
  916          *      pages, but some pages might already be on their way out.
  917          *      Hence, we wait until the active paging activities have ceased.
  918          */
  919 
  920         vm_object_paging_wait(object, FALSE);
  921 
  922         /*
  923          *      Clean or free the pages, as appropriate.
  924          *      It is possible for us to find busy/absent pages,
  925          *      if some faults on this object were aborted.
  926          */
  927 
  928         if ((object->temporary) || (object->pager == IP_NULL)) {
  929                 while (!queue_empty(&object->memq)) {
  930                         p = (vm_page_t) queue_first(&object->memq);
  931 
  932                         VM_PAGE_CHECK(p);
  933 
  934                         if (p->busy && !p->absent)
  935                                 panic("vm_object_terminate.2 0x%x 0x%x",
  936                                       object, p);
  937 
  938                         VM_PAGE_FREE(p);
  939                 }
  940         } else while (!queue_empty(&object->memq)) {
  941                 p = (vm_page_t) queue_first(&object->memq);
  942 
  943                 VM_PAGE_CHECK(p);
  944 
  945                 if (p->busy && !p->absent)
  946                         panic("vm_object_terminate.3 0x%x 0x%x", object, p);
  947 
  948                 vm_page_lock_queues();
  949                 VM_PAGE_QUEUES_REMOVE(p);
  950                 vm_page_unlock_queues();
  951 
  952                 if (p->absent || p->private) {
  953 
  954                         /*
  955                          *      For private pages, VM_PAGE_FREE just
  956                          *      leaves the page structure around for
  957                          *      its owner to clean up.  For absent
  958                          *      pages, the structure is returned to
  959                          *      the appropriate pool.
  960                          */
  961 
  962                         goto free_page;
  963                 }
  964 
  965                 if (p->fictitious)
  966                         panic("vm_object_terminate.4 0x%x 0x%x", object, p);
  967 
  968                 if (!p->dirty)
  969                         p->dirty = pmap_is_modified(p->phys_addr);
  970 
  971                 if (p->dirty || p->precious) {
  972                         p->busy = TRUE;
  973                         vm_pageout_page(p, FALSE, TRUE); /* flush page */
  974                 } else {
  975                     free_page:
  976                         VM_PAGE_FREE(p);
  977                 }
  978         }
  979 
  980         assert(object->ref_count == 0);
  981         assert(object->paging_in_progress == 0);
  982 
  983         /*
  984          *      Throw away port rights... note that they may
  985          *      already have been thrown away (by vm_object_destroy
  986          *      or memory_object_destroy).
  987          *
  988          *      Instead of destroying the control and name ports,
  989          *      we send all rights off to the memory manager instead,
  990          *      using memory_object_terminate.
  991          */
  992 
  993         vm_object_unlock(object);
  994 
  995         if (object->pager != IP_NULL) {
  996                 /* consumes our rights for pager, pager_request, pager_name */
  997                 memory_object_release(object->pager,
  998                                              object->pager_request,
  999                                              object->pager_name);
 1000         } else if (object->pager_name != IP_NULL) {
 1001                 /* consumes our right for pager_name */
 1002 #if     NORMA_VM
 1003                 ipc_port_release_send(object->pager_name);
 1004 #else   /* NORMA_VM */
 1005                 ipc_port_dealloc_kernel(object->pager_name);
 1006 #endif  /* NORMA_VM */
 1007         }
 1008 
 1009 #if     MACH_PAGEMAP
 1010         vm_external_destroy(object->existence_info);
 1011 #endif  /* MACH_PAGEMAP */
 1012 
 1013         /*
 1014          *      Free the space for the object.
 1015          */
 1016 
 1017         zfree(vm_object_zone, (vm_offset_t) object);
 1018 }
 1019 
 1020 /*
 1021  *      Routine:        vm_object_pager_wakeup
 1022  *      Purpose:        Wake up anyone waiting for IKOT_PAGER_TERMINATING
 1023  */
 1024 
 1025 void
 1026 vm_object_pager_wakeup(
 1027         ipc_port_t      pager)
 1028 {
 1029         boolean_t someone_waiting;
 1030 
 1031         /*
 1032          *      If anyone was waiting for the memory_object_terminate
 1033          *      to be queued, wake them up now.
 1034          */
 1035         vm_object_cache_lock();
 1036         assert(ip_kotype(pager) == IKOT_PAGER_TERMINATING);
 1037         someone_waiting = (pager->ip_kobject != IKO_NULL);
 1038         if (ip_active(pager))
 1039                 ipc_kobject_set(pager, IKO_NULL, IKOT_NONE);
 1040         vm_object_cache_unlock();
 1041         if (someone_waiting) {
 1042                 thread_wakeup((event_t) pager);
 1043         }
 1044 }
 1045 
 1046 /*
 1047  *      Routine:        memory_object_release
 1048  *      Purpose:        Terminate the pager and release port rights,
 1049  *                      just like memory_object_terminate, except
 1050  *                      that we wake up anyone blocked in vm_object_enter
 1051  *                      waiting for termination message to be queued
 1052  *                      before calling memory_object_init.
 1053  */
 1054 void memory_object_release(
 1055         ipc_port_t      pager,
 1056         pager_request_t pager_request,
 1057         ipc_port_t      pager_name)
 1058 {
 1059 
 1060         /*
 1061          *      Keep a reference to pager port;
 1062          *      the terminate might otherwise release all references.
 1063          */
 1064         ip_reference(pager);
 1065 
 1066         /*
 1067          *      Terminate the pager.
 1068          */
 1069         (void) memory_object_terminate(pager, pager_request, pager_name);
 1070 
 1071         /*
 1072          *      Wakeup anyone waiting for this terminate
 1073          */
 1074         vm_object_pager_wakeup(pager);
 1075 
 1076         /*
 1077          *      Release reference to pager port.
 1078          */
 1079         ip_release(pager);
 1080 }
 1081 
 1082 /*
 1083  *      Routine:        vm_object_abort_activity [internal use only]
 1084  *      Purpose:
 1085  *              Abort paging requests pending on this object.
 1086  *      In/out conditions:
 1087  *              The object is locked on entry and exit.
 1088  */
 1089 void vm_object_abort_activity(
 1090         vm_object_t     object)
 1091 {
 1092         register
 1093         vm_page_t       p;
 1094         vm_page_t       next;
 1095 
 1096         /*
 1097          *      Abort all activity that would be waiting
 1098          *      for a result on this memory object.
 1099          *
 1100          *      We could also choose to destroy all pages
 1101          *      that we have in memory for this object, but
 1102          *      we don't.
 1103          */
 1104 
 1105         p = (vm_page_t) queue_first(&object->memq);
 1106         while (!queue_end(&object->memq, (queue_entry_t) p)) {
 1107                 next = (vm_page_t) queue_next(&p->listq);
 1108 
 1109                 /*
 1110                  *      If it's being paged in, destroy it.
 1111                  *      If an unlock has been requested, start it again.
 1112                  */
 1113 
 1114                 if (p->busy && p->absent) {
 1115                         VM_PAGE_FREE(p);
 1116                 }
 1117                  else {
 1118                         if (p->unlock_request != VM_PROT_NONE)
 1119                                 p->unlock_request = VM_PROT_NONE;
 1120                         PAGE_WAKEUP(p);
 1121                 }
 1122                 
 1123                 p = next;
 1124         }
 1125 
 1126         /*
 1127          *      Wake up threads waiting for the memory object to
 1128          *      become ready.
 1129          */
 1130 
 1131         object->pager_ready = TRUE;
 1132         vm_object_wakeup(object, VM_OBJECT_EVENT_PAGER_READY);
 1133 }
 1134 
 1135 /*
 1136  *      Routine:        memory_object_destroy [user interface]
 1137  *      Purpose:
 1138  *              Shut down a memory object, despite the
 1139  *              presence of address map (or other) references
 1140  *              to the vm_object.
 1141  *      Note:
 1142  *              This routine may be called either from the user interface,
 1143  *              or from port destruction handling (via vm_object_destroy).
 1144  */
 1145 kern_return_t memory_object_destroy(
 1146         register
 1147         vm_object_t     object,
 1148         kern_return_t   reason)
 1149 {
 1150         ipc_port_t      old_object, old_name;
 1151         pager_request_t old_control;
 1152 
 1153 #ifdef  lint
 1154         reason++;
 1155 #endif  /* lint */
 1156 
 1157         if (object == VM_OBJECT_NULL)
 1158                 return KERN_SUCCESS;
 1159 
 1160         /*
 1161          *      Remove the port associations immediately.
 1162          *
 1163          *      This will prevent the memory manager from further
 1164          *      meddling.  [If it wanted to flush data or make
 1165          *      other changes, it should have done so before performing
 1166          *      the destroy call.]
 1167          */
 1168 
 1169         vm_object_cache_lock();
 1170         vm_object_lock(object);
 1171         vm_object_remove(object);
 1172         object->can_persist = FALSE;
 1173         vm_object_cache_unlock();
 1174 
 1175         /*
 1176          *      Rip out the ports from the vm_object now... this
 1177          *      will prevent new memory_object calls from succeeding.
 1178          */
 1179 
 1180         old_object = object->pager;
 1181         object->pager = IP_NULL;
 1182         
 1183         old_control = object->pager_request;
 1184         object->pager_request = PAGER_REQUEST_NULL;
 1185 
 1186         old_name = object->pager_name;
 1187         object->pager_name = IP_NULL;
 1188 
 1189 
 1190         /*
 1191          *      Wait for existing paging activity (that might
 1192          *      have the old ports) to subside.
 1193          */
 1194 
 1195         vm_object_paging_wait(object, FALSE);
 1196         vm_object_unlock(object);
 1197 
 1198         /*
 1199          *      Shut down the ports now.
 1200          *
 1201          *      [Paging operations may be proceeding concurrently --
 1202          *      they'll get the null values established above.]
 1203          */
 1204 
 1205         if (old_object != IP_NULL) {
 1206                 /* consumes our rights for object, control, name */
 1207                 memory_object_release(old_object, old_control,
 1208                                              old_name);
 1209         } else if (old_name != IP_NULL) {
 1210                 /* consumes our right for name */
 1211 #if     NORMA_VM
 1212                 ipc_port_release_send(old_name);
 1213 #else   /* NORMA_VM */
 1214                 ipc_port_dealloc_kernel(old_name);
 1215 #endif  /* NORMA_VM */
 1216         }
 1217 
 1218         /*
 1219          *      Lose the reference that was donated for this routine
 1220          */
 1221 
 1222         vm_object_deallocate(object);
 1223 
 1224         return KERN_SUCCESS;
 1225 }
 1226 
 1227 /*
 1228  *      vm_object_deactivate_pages
 1229  *
 1230  *      Deactivate all pages in the specified object.  (Keep its pages
 1231  *      in memory even though it is no longer referenced.)
 1232  *
 1233  *      The object must be locked.
 1234  */
 1235 void vm_object_deactivate_pages(
 1236         register vm_object_t    object)
 1237 {
 1238         register vm_page_t      p;
 1239 
 1240         queue_iterate(&object->memq, p, vm_page_t, listq) {
 1241                 vm_page_lock_queues();
 1242                 if (!p->busy)
 1243                         vm_page_deactivate(p);
 1244                 vm_page_unlock_queues();
 1245         }
 1246 }
 1247 
 1248 
 1249 /*
 1250  *      Routine:        vm_object_pmap_protect
 1251  *
 1252  *      Purpose:
 1253  *              Reduces the permission for all physical
 1254  *              pages in the specified object range.
 1255  *
 1256  *              If removing write permission only, it is
 1257  *              sufficient to protect only the pages in
 1258  *              the top-level object; only those pages may
 1259  *              have write permission.
 1260  *
 1261  *              If removing all access, we must follow the
 1262  *              shadow chain from the top-level object to
 1263  *              remove access to all pages in shadowed objects.
 1264  *
 1265  *              The object must *not* be locked.  The object must
 1266  *              be temporary/internal.  
 1267  *
 1268  *              If pmap is not NULL, this routine assumes that
 1269  *              the only mappings for the pages are in that
 1270  *              pmap.
 1271  */
 1272 boolean_t vm_object_pmap_protect_by_page = FALSE;
 1273 
 1274 void vm_object_pmap_protect(
 1275         register vm_object_t    object,
 1276         register vm_offset_t    offset,
 1277         vm_offset_t             size,
 1278         pmap_t                  pmap,
 1279         vm_offset_t             pmap_start,
 1280         vm_prot_t               prot)
 1281 {
 1282         if (object == VM_OBJECT_NULL)
 1283             return;
 1284 
 1285         vm_object_lock(object);
 1286 
 1287         assert(object->temporary && object->internal);
 1288 
 1289         while (TRUE) {
 1290             if (object->resident_page_count > atop(size) / 2 &&
 1291                     pmap != PMAP_NULL) {
 1292                 vm_object_unlock(object);
 1293                 pmap_protect(pmap, pmap_start, pmap_start + size, prot);
 1294                 return;
 1295             }
 1296 
 1297             {
 1298                 register vm_page_t      p;
 1299                 register vm_offset_t    end;
 1300 
 1301                 end = offset + size;
 1302 
 1303                 queue_iterate(&object->memq, p, vm_page_t, listq) {
 1304                     if (!p->fictitious &&
 1305                         (offset <= p->offset) &&
 1306                         (p->offset < end)) {
 1307                         if ((pmap == PMAP_NULL) ||
 1308                             vm_object_pmap_protect_by_page) {
 1309                             pmap_page_protect(p->phys_addr,
 1310                                               prot & ~p->page_lock);
 1311                         } else {
 1312                             vm_offset_t start =
 1313                                         pmap_start +
 1314                                         (p->offset - offset);
 1315 
 1316                             pmap_protect(pmap,
 1317                                          start,
 1318                                          start + PAGE_SIZE,
 1319                                          prot);
 1320                         }
 1321                     }
 1322                 }
 1323             }
 1324 
 1325             if (prot == VM_PROT_NONE) {
 1326                 /*
 1327                  * Must follow shadow chain to remove access
 1328                  * to pages in shadowed objects.
 1329                  */
 1330                 register vm_object_t    next_object;
 1331 
 1332                 next_object = object->shadow;
 1333                 if (next_object != VM_OBJECT_NULL) {
 1334                     offset += object->shadow_offset;
 1335                     vm_object_lock(next_object);
 1336                     vm_object_unlock(object);
 1337                     object = next_object;
 1338                 }
 1339                 else {
 1340                     /*
 1341                      * End of chain - we are done.
 1342                      */
 1343                     break;
 1344                 }
 1345             }
 1346             else {
 1347                 /*
 1348                  * Pages in shadowed objects may never have
 1349                  * write permission - we may stop here.
 1350                  */
 1351                 break;
 1352             }
 1353         }
 1354 
 1355         vm_object_unlock(object);
 1356 }
 1357 
 1358 /*
 1359  *      vm_object_pmap_remove:
 1360  *
 1361  *      Removes all physical pages in the specified
 1362  *      object range from all physical maps.
 1363  *
 1364  *      The object must *not* be locked.
 1365  */
 1366 void vm_object_pmap_remove(
 1367         register vm_object_t    object,
 1368         register vm_offset_t    start,
 1369         register vm_offset_t    end)
 1370 {
 1371         register vm_page_t      p;
 1372 
 1373         if (object == VM_OBJECT_NULL)
 1374                 return;
 1375 
 1376         vm_object_lock(object);
 1377         queue_iterate(&object->memq, p, vm_page_t, listq) {
 1378                 if (!p->fictitious &&
 1379                     (start <= p->offset) &&
 1380                     (p->offset < end))
 1381                         pmap_page_protect(p->phys_addr, VM_PROT_NONE);
 1382         }
 1383         vm_object_unlock(object);
 1384 }
 1385 
 1386 /*
 1387  *      Routine:        vm_object_copy_slowly
 1388  *
 1389  *      Description:
 1390  *              Copy the specified range of the source
 1391  *              virtual memory object without using
 1392  *              protection-based optimizations (such
 1393  *              as copy-on-write).  The pages in the
 1394  *              region are actually copied.
 1395  *
 1396  *      In/out conditions:
 1397  *              The caller must hold a reference and a lock
 1398  *              for the source virtual memory object.  The source
 1399  *              object will be returned *unlocked*.
 1400  *
 1401  *      Results:
 1402  *              If the copy is completed successfully, KERN_SUCCESS is
 1403  *              returned.  If the caller asserted the interruptible
 1404  *              argument, and an interruption occurred while waiting
 1405  *              for a user-generated event, MACH_SEND_INTERRUPTED is
 1406  *              returned.  Other values may be returned to indicate
 1407  *              hard errors during the copy operation.
 1408  *
 1409  *              A new virtual memory object is returned in a
 1410  *              parameter (_result_object).  The contents of this
 1411  *              new object, starting at a zero offset, are a copy
 1412  *              of the source memory region.  In the event of
 1413  *              an error, this parameter will contain the value
 1414  *              VM_OBJECT_NULL.
 1415  */
 1416 kern_return_t vm_object_copy_slowly(
 1417         register
 1418         vm_object_t     src_object,
 1419         vm_offset_t     src_offset,
 1420         vm_size_t       size,
 1421         boolean_t       interruptible,
 1422         vm_object_t     *_result_object)        /* OUT */
 1423 {
 1424         vm_object_t     new_object;
 1425         vm_offset_t     new_offset;
 1426 
 1427         if (size == 0) {
 1428                 vm_object_unlock(src_object);
 1429                 *_result_object = VM_OBJECT_NULL;
 1430                 return KERN_INVALID_ARGUMENT;
 1431         }
 1432 
 1433         /*
 1434          *      Prevent destruction of the source object while we copy.
 1435          */
 1436 
 1437         assert(src_object->ref_count > 0);
 1438         src_object->ref_count++;
 1439         vm_object_unlock(src_object);
 1440 
 1441         /*
 1442          *      Create a new object to hold the copied pages.
 1443          *      A few notes:
 1444          *              We fill the new object starting at offset 0,
 1445          *               regardless of the input offset.
 1446          *              We don't bother to lock the new object within
 1447          *               this routine, since we have the only reference.
 1448          */
 1449 
 1450         new_object = vm_object_allocate(size);
 1451         new_offset = 0;
 1452 
 1453         assert(size == trunc_page(size));       /* Will the loop terminate? */
 1454 
 1455         for ( ;
 1456             size != 0 ;
 1457             src_offset += PAGE_SIZE, new_offset += PAGE_SIZE, size -= PAGE_SIZE
 1458             ) {
 1459                 vm_page_t       new_page;
 1460                 vm_fault_return_t result;
 1461 
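                      /*
                       *      Allocate the destination page up front,
                       *      sleeping in VM_PAGE_WAIT until a free page
                       *      is available.
                       */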
 1462                 while ((new_page = vm_page_alloc(new_object, new_offset))
 1463                                 == VM_PAGE_NULL) {
 1464                         VM_PAGE_WAIT((void (*)()) 0);
 1465                 }
 1466 
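                      /*
                       *      Fault in the source page and copy it.
                       *      Transient results (retry, page shortages)
                       *      just loop; interruption and hard errors
                       *      free the new page and unwind.
                       */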
 1467                 do {
 1468                         vm_prot_t       prot = VM_PROT_READ;
 1469                         vm_page_t       _result_page;
 1470                         vm_page_t       top_page;
 1471                         register
 1472                         vm_page_t       result_page;
 1473 
 1474                         vm_object_lock(src_object);
 1475                         src_object->paging_in_progress++;
 1476 
 1477                         result = vm_fault_page(src_object, src_offset,
 1478                                 VM_PROT_READ, FALSE, interruptible,
 1479                                 &prot, &_result_page, &top_page,
 1480                                 FALSE, (void (*)()) 0);
 1481 
 1482                         switch(result) {
 1483                                 case VM_FAULT_SUCCESS:
 1484                                         result_page = _result_page;
 1485 
 1486                                         /*
 1487                                          *      We don't need to hold the object
 1488                                          *      lock -- the busy page will be enough.
 1489                                          *      [We don't care about picking up any
 1490                                          *      new modifications.]
 1491                                          *
 1492                                          *      Copy the page to the new object.
 1493                                          *
 1494                                          *      POLICY DECISION:
 1495                                          *              If result_page is clean,
 1496                                          *              we could steal it instead
 1497                                          *              of copying.
 1498                                          */
 1499 
 1500                                         vm_object_unlock(result_page->object);
 1501                                         vm_page_copy(result_page, new_page);
 1502 
 1503                                         /*
 1504                                          *      Let go of both pages (make them
 1505                                          *      not busy, perform wakeup, activate).
 1506                                          */
 1507 
 1508                                         new_page->busy = FALSE;
 1509                                         new_page->dirty = TRUE;
 1510                                         vm_object_lock(result_page->object);
 1511                                         PAGE_WAKEUP_DONE(result_page);
 1512 
 1513                                         vm_page_lock_queues();
 1514                                         if (!result_page->active &&
 1515                                             !result_page->inactive)
 1516                                                 vm_page_activate(result_page);
 1517                                         vm_page_activate(new_page);
 1518                                         vm_page_unlock_queues();
 1519 
 1520                                         /*
 1521                                          *      Release paging references and
 1522                                          *      top-level placeholder page, if any.
 1523                                          */
 1524 
 1525                                         vm_fault_cleanup(result_page->object,
 1526                                                         top_page);
 1527 
 1528                                         break;
 1529                                 
 1530                                 case VM_FAULT_RETRY:
 1531                                         break;
 1532 
 1533                                 case VM_FAULT_MEMORY_SHORTAGE:
 1534                                         VM_PAGE_WAIT((void (*)()) 0);
 1535                                         break;
 1536 
 1537                                 case VM_FAULT_FICTITIOUS_SHORTAGE:
 1538                                         vm_page_more_fictitious();
 1539                                         break;
 1540 
 1541                                 case VM_FAULT_INTERRUPTED:
 1542                                         vm_page_free(new_page);
 1543                                         vm_object_deallocate(new_object);
 1544                                         vm_object_deallocate(src_object);
 1545                                         *_result_object = VM_OBJECT_NULL;
 1546                                         return MACH_SEND_INTERRUPTED;
 1547 
 1548                                 case VM_FAULT_MEMORY_ERROR:
 1549                                         /*
 1550                                          * A policy choice:
 1551                                          *      (a) ignore pages that we can't
 1552                                          *          copy
 1553                                          *      (b) return the null object if
 1554                                          *          any page fails [chosen]
 1555                                          */
 1556 
 1557                                         vm_page_free(new_page);
 1558                                         vm_object_deallocate(new_object);
 1559                                         vm_object_deallocate(src_object);
 1560                                         *_result_object = VM_OBJECT_NULL;
 1561                                         return KERN_MEMORY_ERROR;
 1562                         }
 1563                 } while (result != VM_FAULT_SUCCESS);
 1564         }
 1565 
 1566         /*
 1567          *      Lose the extra reference, and return our object.
 1568          */
 1569 
 1570         vm_object_deallocate(src_object);
 1571         *_result_object = new_object;
 1572         return KERN_SUCCESS;
 1573 }
 1574 
 1575 /*
 1576  *      Routine:        vm_object_copy_temporary
 1577  *
 1578  *      Purpose:
 1579  *              Copy the specified range of the source virtual
 1580  *              memory object, if it can be done without blocking.
 1581  *
 1582  *      Results:
 1583  *              If the copy is successful, the copy is returned in
 1584  *              the arguments; otherwise, the arguments are not
 1585  *              affected.
 1586  *
 1587  *      In/out conditions:
 1588  *              The object should be unlocked on entry and exit.
 1589  */
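      /*
       *      (When *_src_needs_copy/*_dst_needs_copy come back TRUE,
       *      callers are expected to mark the corresponding map entries
       *      needs_copy, so that the first write through either mapping
       *      creates a shadow.)
       */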
 1590 
 1591 vm_object_t     vm_object_copy_delayed();       /* forward declaration */
 1592 
 1593 boolean_t vm_object_copy_temporary(
 1594         vm_object_t     *_object,               /* INOUT */
 1595         vm_offset_t     *_offset,               /* INOUT */
 1596         boolean_t       *_src_needs_copy,       /* OUT */
 1597         boolean_t       *_dst_needs_copy)       /* OUT */
 1598 {
 1599         vm_object_t     object = *_object;
 1600 
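              /*
               *      The lint-only statement below merely touches *_offset
               *      so lint does not flag an unused parameter.
               */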
 1601 #ifdef  lint
 1602         ++*_offset;
 1603 #endif  /* lint */
 1604 
 1605         if (object == VM_OBJECT_NULL) {
 1606                 *_src_needs_copy = FALSE;
 1607                 *_dst_needs_copy = FALSE;
 1608                 return TRUE;
 1609         }
 1610 
 1611         /*
 1612          *      If the object is temporary, we can perform
 1613          *      a symmetric copy-on-write without asking.
 1614          */
 1615 
 1616         vm_object_lock(object);
 1617         if (object->temporary) {
 1618 
 1619                 /*
 1620                  *      Shared objects use delayed copy
 1621                  */
 1622                 if (object->use_shared_copy) {
 1623 
 1624                         /*
 1625                          *      Asymmetric copy strategy.  Destination
 1626                          *      must be copied (to allow copy object reuse).
 1627                          *      Source is unaffected.
 1628                          */
 1629                         vm_object_unlock(object);
 1630                         object = vm_object_copy_delayed(object);
 1631                         *_object = object;
 1632                         *_src_needs_copy = FALSE;
 1633                         *_dst_needs_copy = TRUE;
 1634                         return TRUE;
 1635                 }
 1636 
 1637                 /*
 1638                  *      Make another reference to the object.
 1639                  *
 1640                  *      Leave object/offset unchanged.
 1641                  */
 1642 
 1643                 assert(object->ref_count > 0);
 1644                 object->ref_count++;
 1645                 object->shadowed = TRUE;
 1646                 vm_object_unlock(object);
 1647 
 1648                 /*
 1649                  *      Both source and destination must make
 1650                  *      shadows, and the source must be made
 1651                  *      read-only if not already.
 1652                  */
 1653 
 1654                 *_src_needs_copy = TRUE;
 1655                 *_dst_needs_copy = TRUE;
 1656                 return TRUE;
 1657         }
 1658 
 1659         if (object->pager_ready &&
 1660             (object->copy_strategy == MEMORY_OBJECT_COPY_DELAY)) {
 1661                 /* XXX Do something intelligent (see temporary code above) */
 1662         }
 1663         vm_object_unlock(object);
 1664 
 1665         return FALSE;
 1666 }
 1667 
 1668 /*
 1669  *      Routine:        vm_object_copy_call [internal]
 1670  *
 1671  *      Description:
 1672  *              Copy the specified (src_offset, size) portion
 1673  *              of the source object (src_object), using the
 1674  *              user-managed copy algorithm.
 1675  *
 1676  *      In/out conditions:
 1677  *              The source object must be locked on entry.  It
 1678  *              will be *unlocked* on exit.
 1679  *
 1680  *      Results:
 1681  *              If the copy is successful, KERN_SUCCESS is returned.
 1682  *              This routine is interruptible; if a wait for
 1683  *              a user-generated event is interrupted, MACH_SEND_INTERRUPTED
 1684  *              is returned.  Other return values indicate hard errors
 1685  *              in creating the user-managed memory object for the copy.
 1686  *
 1687  *              A new object that represents the copied virtual
 1688  *              memory is returned in a parameter (*_result_object).
 1689  *              If the return value indicates an error, this parameter
 1690  *              is not valid.
 1691  */
 1692 kern_return_t vm_object_copy_call(
 1693         vm_object_t     src_object,
 1694         vm_offset_t     src_offset,
 1695         vm_size_t       size,
 1696         vm_object_t     *_result_object)        /* OUT */
 1697 {
 1698         vm_offset_t     src_end = src_offset + size;
 1699         ipc_port_t      new_memory_object;
 1700         vm_object_t     new_object;
 1701         vm_page_t       p;
 1702 
 1703         /*
 1704          *      Set the backing object for the new
 1705          *      temporary object.
 1706          */
 1707 
 1708         assert(src_object->ref_count > 0);
 1709         src_object->ref_count++;
 1710         vm_object_paging_begin(src_object);
 1711         vm_object_unlock(src_object);
 1712 
 1713         /*
 1714          *      Create a memory object port to be associated
 1715          *      with this new vm_object.
 1716          *
 1717          *      Since the kernel has the only rights to this
 1718          *      port, we need not hold the cache lock.
 1719          *
 1720          *      Since we have the only object reference, we
 1721          *      need not be worried about collapse operations.
 1722          *
 1723          */
 1724 
 1725         new_memory_object = ipc_port_alloc_kernel();
 1726         if (new_memory_object == IP_NULL) {
 1727                 panic("vm_object_copy_call: allocate memory object port");
 1728                 /* XXX Shouldn't panic here. */
 1729         }
 1730 
 1731         /* we hold a naked receive right for new_memory_object */
 1732         (void) ipc_port_make_send(new_memory_object);
 1733         /* now we also hold a naked send right for new_memory_object */
 1734 
 1735         /*
 1736          *      Let the memory manager know that a copy operation
 1737          *      is in progress.  Note that we're using the old
 1738          *      memory object's ports (for which we're holding
 1739          *      a paging reference)... the memory manager cannot
 1740          *      yet affect the new memory object.
 1741          */
 1742 
 1743         (void) memory_object_copy(src_object->pager,
 1744                                 src_object->pager_request,
 1745                                 src_offset, size,
 1746                                 new_memory_object);
 1747         /* no longer hold the naked receive right for new_memory_object */
 1748 
 1749         vm_object_lock(src_object);
 1750         vm_object_paging_end(src_object);
 1751 
 1752         /*
 1753          *      Remove write access from all of the pages of
 1754          *      the old memory object that we can.
 1755          */
 1756 
 1757         queue_iterate(&src_object->memq, p, vm_page_t, listq) {
 1758             if (!p->fictitious &&
 1759                 (src_offset <= p->offset) &&
 1760                 (p->offset < src_end) &&
 1761                 !(p->page_lock & VM_PROT_WRITE)) {
 1762                 p->page_lock |= VM_PROT_WRITE;
 1763                 pmap_page_protect(p->phys_addr, VM_PROT_ALL & ~p->page_lock);
 1764             }
 1765         }
 1766 
 1767         vm_object_unlock(src_object);
 1768                 
 1769         /*
 1770          *      Initialize the rest of the paging stuff
 1771          */
 1772 
 1773         new_object = vm_object_enter(new_memory_object, size, FALSE);
 1774         new_object->shadow = src_object;
 1775         new_object->shadow_offset = src_offset;
 1776 
 1777         /*
 1778          *      Drop the reference for new_memory_object taken above.
 1779          */
 1780 
 1781         ipc_port_release_send(new_memory_object);
 1782         /* no longer hold the naked send right for new_memory_object */
 1783 
 1784         *_result_object = new_object;
 1785         return KERN_SUCCESS;
 1786 }
 1787 
 1788 /*
 1789  *      Routine:        vm_object_copy_delayed [internal]
 1790  *
 1791  *      Description:
 1792  *              Copy the specified virtual memory object, using
 1793  *              the asymmetric copy-on-write algorithm.
 1794  *
 1795  *      In/out conditions:
 1796  *              The object must be unlocked on entry.
 1797  *
 1798  *              This routine will not block waiting for user-generated
 1799  *              events.  It is not interruptible.
 1800  */
 1801 vm_object_t vm_object_copy_delayed(
 1802         vm_object_t     src_object)
 1803 {
 1804         vm_object_t     new_copy;
 1805         vm_object_t     old_copy;
 1806         vm_page_t       p;
 1807 
 1808         /*
 1809          *      The user-level memory manager wants to see
 1810          *      all of the changes to this object, but it
 1811          *      has promised not to make any changes on its own.
 1812          *
 1813          *      Perform an asymmetric copy-on-write, as follows:
 1814          *              Create a new object, called a "copy object"
 1815          *               to hold pages modified by the new mapping
 1816          *               (i.e., the copy, not the original mapping).
 1817          *              Record the original object as the backing
 1818          *               object for the copy object.  If the
 1819          *               original mapping does not change a page,
 1820          *               it may be used read-only by the copy.
 1821          *              Record the copy object in the original
 1822          *               object.  When the original mapping causes
 1823          *               a page to be modified, it must be copied
 1824          *               to a new page that is "pushed" to the
 1825          *               copy object.
 1826          *              Mark the new mapping (the copy object)
 1827          *               copy-on-write.  This makes the copy
 1828          *               object itself read-only, allowing it
 1829          *               to be reused if the original mapping
 1830          *               makes no changes, and simplifying the
 1831          *               synchronization required in the "push"
 1832          *               operation described above.
 1833          *
 1834          *      The copy-on-write is said to be asymmetric because
 1835          *      the original object is *not* marked copy-on-write.
 1836          *      A copied page is pushed to the copy object, regardless of
 1837          *      which party attempted to modify the page.
 1838          *
 1839          *      Repeated asymmetric copy operations may be done.
 1840          *      If the original object has not been changed since
 1841          *      the last copy, its copy object can be reused.
 1842          *      Otherwise, a new copy object can be inserted
 1843          *      between the original object and its previous
 1844          *      copy object.  Since any copy object is read-only,
 1845          *      this cannot affect the contents of the previous copy
 1846          *      object.
 1847          *
 1848          *      Note that a copy object is higher in the object
 1849          *      tree than the original object; therefore, use of
 1850          *      the copy object recorded in the original object
 1851          *      must be done carefully, to avoid deadlock.
 1852          */
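              /*
               *      Concretely (an illustrative sequence): copying object
               *      O yields copy object C with C->shadow == O.  The first
               *      modification of a page of O pushes the old contents to
               *      C; a read through C of an untouched page falls through
               *      to O.  If O is copied again while still unmodified, C
               *      is simply reused; otherwise a fresh copy object is
               *      chained in above O, and C is made to shadow it.
               */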
 1853 
 1854         /*
 1855          *      Allocate a new copy object before locking, even
 1856          *      though we may not need it later.
 1857          */
 1858 
 1859         new_copy = vm_object_allocate(src_object->size);
 1860 
 1861         vm_object_lock(src_object);
 1862 
 1863         /*
 1864          *      See whether we can reuse the result of a previous
 1865          *      copy operation.
 1866          */
 1867  Retry:
 1868         old_copy = src_object->copy;
 1869         if (old_copy != VM_OBJECT_NULL) {
 1870                 /*
 1871                  *      Try to get the locks (out of order)
 1872                  */
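                      /*
                       *      The copy object sits above the source in the
                       *      object tree, so the safe lock order is copy
                       *      first.  Since we already hold the source lock,
                       *      only a try-lock on the copy avoids deadlock;
                       *      on failure, back off and retry.
                       */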
 1873                 if (!vm_object_lock_try(old_copy)) {
 1874                         vm_object_unlock(src_object);
 1875 
 1876                         simple_lock_pause();    /* wait a bit */
 1877 
 1878                         vm_object_lock(src_object);
 1879                         goto Retry;
 1880                 }
 1881 
 1882                 /*
 1883                  *      Determine whether the old copy object has
 1884                  *      been modified.
 1885                  */
 1886 
 1887                 if (old_copy->resident_page_count == 0 &&
 1888                     !old_copy->pager_created) {
 1889                         /*
 1890                          *      It has not been modified.
 1891                          *
 1892                          *      Return another reference to
 1893                          *      the existing copy-object.
 1894                          */
 1895                         assert(old_copy->ref_count > 0);
 1896                         old_copy->ref_count++;
 1897                         vm_object_unlock(old_copy);
 1898                         vm_object_unlock(src_object);
 1899 
 1900                         vm_object_deallocate(new_copy);
 1901 
 1902                         return old_copy;
 1903                 }
 1904 
 1905                 /*
 1906                  *      The copy-object is always made large enough to
 1907                  *      completely shadow the original object, since
 1908                  *      it may have several users who want to shadow
 1909                  *      the original object at different points.
 1910                  */
 1911 
 1912                 assert((old_copy->shadow == src_object) &&
 1913                     (old_copy->shadow_offset == (vm_offset_t) 0));
 1914 
 1915                 /*
 1916                  *      Make the old copy-object shadow the new one.
 1917                  *      It will receive no more pages from the original
 1918                  *      object.
 1919                  */
 1920 
 1921                 src_object->ref_count--;        /* remove ref. from old_copy */
 1922                 assert(src_object->ref_count > 0);
 1923                 old_copy->shadow = new_copy;
 1924                 assert(new_copy->ref_count > 0);
 1925                 new_copy->ref_count++;
 1926                 vm_object_unlock(old_copy);     /* done with old_copy */
 1927         }
 1928 
 1929         /*
 1930          *      Point the new copy at the existing object.
 1931          */
 1932 
 1933         new_copy->shadow = src_object;
 1934         new_copy->shadow_offset = 0;
 1935         new_copy->shadowed = TRUE;      /* caller must set needs_copy */
 1936         assert(src_object->ref_count > 0);
 1937         src_object->ref_count++;
 1938         src_object->copy = new_copy;
 1939 
 1940         /*
 1941          *      Mark all pages of the existing object copy-on-write.
 1942          *      This object may have a shadow chain below it, but
 1943          *      those pages will already be marked copy-on-write.
 1944          */
 1945 
 1946         queue_iterate(&src_object->memq, p, vm_page_t, listq) {
 1947             if (!p->fictitious)
 1948                 pmap_page_protect(p->phys_addr, 
 1949                                   (VM_PROT_ALL & ~VM_PROT_WRITE &
 1950                                    ~p->page_lock));
 1951         }
 1952 
 1953         vm_object_unlock(src_object);
 1954         
 1955         return new_copy;
 1956 }
 1957 
 1958 /*
 1959  *      Routine:        vm_object_copy_strategically
 1960  *
 1961  *      Purpose:
 1962  *              Perform a copy according to the source object's
 1963  *              declared strategy.  This operation may block,
 1964  *              and may be interrupted.
 1965  */
 1966 kern_return_t   vm_object_copy_strategically(
 1967         register
 1968         vm_object_t     src_object,
 1969         vm_offset_t     src_offset,
 1970         vm_size_t       size,
 1971         vm_object_t     *dst_object,    /* OUT */
 1972         vm_offset_t     *dst_offset,    /* OUT */
 1973         boolean_t       *dst_needs_copy) /* OUT */
 1974 {
 1975         kern_return_t   result = KERN_SUCCESS;  /* to quiet gcc warnings */
 1976         boolean_t       interruptible = TRUE; /* XXX */
 1977 
 1978         assert(src_object != VM_OBJECT_NULL);
 1979 
 1980         vm_object_lock(src_object);
 1981 
 1982         /* XXX assert(!src_object->temporary);  JSB FIXME */
 1983 
 1984         /*
 1985          *      The copy strategy is only valid if the memory manager
 1986          *      is "ready".
 1987          */
 1988 
 1989         while (!src_object->pager_ready) {
 1990                 vm_object_wait( src_object,
 1991                                 VM_OBJECT_EVENT_PAGER_READY,
 1992                                 interruptible);
 1993                 if (interruptible &&
 1994                     (current_thread()->wait_result != THREAD_AWAKENED)) {
 1995                         *dst_object = VM_OBJECT_NULL;
 1996                         *dst_offset = 0;
 1997                         *dst_needs_copy = FALSE;
 1998                         return MACH_SEND_INTERRUPTED;
 1999                 }
 2000                 vm_object_lock(src_object);
 2001         }
 2002 
 2003         /*
 2004          *      The object may be temporary (even though it is external).
 2005          *      If so, it should get a symmetric copy (but see the XXX below).
 2006          */
 2007 
 2008         if (src_object->temporary) {
 2009                 /*
 2010                  *      XXX
 2011                  *      This does not count as intelligent!
 2012                  *      This buys us the object->temporary optimizations,
 2013                  *      but we aren't using a symmetric copy,
 2014                  *      which may confuse the vm code. The correct thing
 2015                  *      to do here is to figure out what to call to get
 2016                  *      a temporary shadowing set up.
 2017                  */
 2018                 src_object->copy_strategy = MEMORY_OBJECT_COPY_DELAY;
 2019         }
 2020 
 2021         /*
 2022          *      Use the appropriate copy strategy (temporary objects were just forced to COPY_DELAY).
 2023          */
 2024 
 2025         switch (src_object->copy_strategy) {
 2026             case MEMORY_OBJECT_COPY_NONE:
 2027                 if ((result = vm_object_copy_slowly(
 2028                                         src_object,
 2029                                         src_offset,
 2030                                         size,
 2031                                         interruptible,
 2032                                         dst_object))
 2033                     == KERN_SUCCESS) {
 2034                         *dst_offset = 0;
 2035                         *dst_needs_copy = FALSE;
 2036                 }
 2037                 break;
 2038 
 2039             case MEMORY_OBJECT_COPY_CALL:
 2040                 if ((result = vm_object_copy_call(      
 2041                                 src_object,
 2042                                 src_offset,
 2043                                 size,
 2044                                 dst_object))
 2045                     == KERN_SUCCESS) {
 2046                         *dst_offset = 0;
 2047                         *dst_needs_copy = FALSE;
 2048                 }
 2049                 break;
 2050 
 2051             case MEMORY_OBJECT_COPY_DELAY:
 2052                 vm_object_unlock(src_object);
 2053                 *dst_object = vm_object_copy_delayed(src_object);
 2054                 *dst_offset = src_offset;
 2055                 *dst_needs_copy = TRUE;
 2056 
 2057                 result = KERN_SUCCESS;
 2058                 break;
 2059         }
 2060 
 2061         return result;
 2062 }
 2063 
 2064 /*
 2065  *      vm_object_shadow:
 2066  *
 2067  *      Create a new object which is backed by the
 2068  *      specified existing object range.  The source
 2069  *      object reference is deallocated.
 2070  *
 2071  *      The new object and offset into that object
 2072  *      are returned in the source parameters.
 2073  */
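      /*
       *      Illustrative use (hypothetical map-entry names): giving a
       *      copy-on-write entry a private object at fault time:
       *
       *              vm_object_shadow(&entry->object.vm_object,
       *                               &entry->offset,
       *                               (vm_size_t) (entry->vme_end -
       *                                            entry->vme_start));
       */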
 2074 
 2075 void vm_object_shadow(
 2076         vm_object_t     *object,        /* IN/OUT */
 2077         vm_offset_t     *offset,        /* IN/OUT */
 2078         vm_size_t       length)
 2079 {
 2080         register vm_object_t    source;
 2081         register vm_object_t    result;
 2082 
 2083         source = *object;
 2084 
 2085         /*
 2086          *      Allocate a new object with the given length
 2087          */
 2088 
 2089         if ((result = vm_object_allocate(length)) == VM_OBJECT_NULL)
 2090                 panic("vm_object_shadow: no object for shadowing");
 2091 
 2092         /*
 2093          *      The new object shadows the source object, adding
 2094          *      a reference to it.  Our caller changes his reference
 2095          *      to point to the new object, removing a reference to
 2096          *      the source object.  Net result: no change of reference
 2097          *      count.
 2098          */
 2099         result->shadow = source;
 2100         
 2101         /*
 2102          *      Store the offset into the source object,
 2103          *      and fix up the offset into the new object.
 2104          */
 2105 
 2106         result->shadow_offset = *offset;
 2107 
 2108         /*
 2109          *      Return the new object and offset.
 2110          */
 2111 
 2112         *offset = 0;
 2113         *object = result;
 2114 }
 2115 
 2116 /*
 2117  *      The relationship between vm_object structures and
 2118  *      the memory_object ports requires careful synchronization.
 2119  *
 2120  *      All associations are created by vm_object_enter.  All three
 2121  *      port fields are filled in, as follows:
 2122  *              pager:  the memory_object port itself, supplied by
 2123  *                      the user requesting a mapping (or the kernel,
 2124  *                      when initializing internal objects); the
 2125  *                      kernel simulates holding send rights by keeping
 2126  *                      a port reference;
 2127  *              pager_request:
 2128  *              pager_name:
 2129  *                      the memory object control and name ports,
 2130  *                      created by the kernel; the kernel holds
 2131  *                      receive (and ownership) rights to these
 2132  *                      ports, but no other references.
 2133  *      All of the ports are referenced by their global names.
 2134  *
 2135  *      When initialization is complete, the "initialized" field
 2136  *      is asserted.  Other mappings using a particular memory object,
 2137  *      and any references to the vm_object gained through the
 2138  *      port association must wait for this initialization to occur.
 2139  *
 2140  *      In order to allow the memory manager to set attributes before
 2141  *      requests (notably virtual copy operations, but also data or
 2142  *      unlock requests) are made, a "ready" attribute is made available.
 2143  *      Only the memory manager may affect the value of this attribute.
 2144  *      Its value does not affect critical kernel functions, such as
 2145  *      internal object initialization or destruction.  [Furthermore,
 2146  *      memory objects created by the kernel are assumed to be ready
 2147  *      immediately; the default memory manager need not explicitly
 2148  *      set the "ready" attribute.]
 2149  *
 2150  *      [Both the "initialized" and "ready" attribute wait conditions
 2151  *      use the "pager" field as the wait event.]
 2152  *
 2153  *      The port associations can be broken down by any of the
 2154  *      following routines:
 2155  *              vm_object_terminate:
 2156  *                      No references to the vm_object remain, and
 2157  *                      the object cannot (or will not) be cached.
 2158  *                      This is the normal case, and is done even
 2159  *                      if one of the other cases has already been
 2160  *                      done first.
 2161  *              vm_object_destroy:
 2162  *                      The memory_object port has been destroyed,
 2163  *                      meaning that the kernel cannot flush dirty
 2164  *                      pages or request new data or unlock existing
 2165  *                      data.
 2166  *              memory_object_destroy:
 2167  *                      The memory manager has requested that the
 2168  *                      kernel relinquish rights to the memory object
 2169  *                      port.  [The memory manager may not want to
 2170  *                      destroy the port, but may wish to refuse new
 2171  *                      mappings or tear down existing ones.]
 2172  *      Each routine that breaks an association must break all of
 2173  *      them at once.  At some later time, that routine must clear
 2174  *      the vm_object port fields and release the port rights.
 2175  *      [Furthermore, each routine must cope with the simultaneous
 2176  *      or previous operations of the others.]
 2177  *
 2178  *      In addition to the lock on the object, the vm_object_cache_lock
 2179  *      governs the port associations.  References gained through the
 2180  *      port association require use of the cache lock.
 2181  *
 2182  *      Because the port fields may be cleared spontaneously, they
 2183  *      cannot be used to determine whether a memory object has
 2184  *      ever been associated with a particular vm_object.  [This
 2185  *      knowledge is important to the shadow object mechanism.]
 2186  *      For this reason, an additional "created" attribute is
 2187  *      provided.
 2188  *
 2189  *      During various paging operations, the port values found in the
 2190  *      vm_object must be valid.  To prevent these port rights from being
 2191  *      released, and to prevent the port associations from changing
 2192  *      (other than being removed, i.e., made null), routines may use
 2193  *      the vm_object_paging_begin/end routines [actually, macros].
 2194  *      The implementation uses the "paging_in_progress" and "wanted" fields.
 2195  *      [Operations that alter the validity of the port values include the
 2196  *      termination routines and vm_object_collapse.]
 2197  */
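      /*
       *      Sketch of the paging-reference discipline described above
       *      (the pattern vm_object_copy_call uses):
       *
       *              vm_object_lock(object);
       *              vm_object_paging_begin(object);
       *              vm_object_unlock(object);
       *              ...  object->pager et al. remain valid  ...
       *              vm_object_lock(object);
       *              vm_object_paging_end(object);
       *              vm_object_unlock(object);
       */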
 2198 
 2199 vm_object_t vm_object_lookup(
 2200         ipc_port_t      port)
 2201 {
 2202         vm_object_t     object = VM_OBJECT_NULL;
 2203 
 2204         if (IP_VALID(port)) {
 2205                 ip_lock(port);
 2206                 if (ip_active(port) &&
 2207 #if     NORMA_VM
 2208                     (ip_kotype(port) == IKOT_PAGER)) {
 2209 #else   /* NORMA_VM */
 2210                     (ip_kotype(port) == IKOT_PAGING_REQUEST)) {
 2211 #endif  /* NORMA_VM */
 2212                         vm_object_cache_lock();
 2213                         object = (vm_object_t) port->ip_kobject;
 2214                         vm_object_lock(object);
 2215 
 2216                         assert(object->alive);
 2217 
 2218                         if (object->ref_count == 0) {
 2219                                 queue_remove(&vm_object_cached_list, object,
 2220                                              vm_object_t, cached_list);
 2221                                 vm_object_cached_count--;
 2222                         }
 2223 
 2224                         object->ref_count++;
 2225                         vm_object_unlock(object);
 2226                         vm_object_cache_unlock();
 2227                 }
 2228                 ip_unlock(port);
 2229         }
 2230 
 2231         return object;
 2232 }
 2233 
 2234 vm_object_t vm_object_lookup_name(
 2235         ipc_port_t      port)
 2236 {
 2237         vm_object_t     object = VM_OBJECT_NULL;
 2238 
 2239         if (IP_VALID(port)) {
 2240                 ip_lock(port);
 2241                 if (ip_active(port) &&
 2242                     (ip_kotype(port) == IKOT_PAGING_NAME)) {
 2243                         vm_object_cache_lock();
 2244                         object = (vm_object_t) port->ip_kobject;
 2245                         vm_object_lock(object);
 2246 
 2247                         assert(object->alive);
 2248 
 2249                         if (object->ref_count == 0) {
 2250                                 queue_remove(&vm_object_cached_list, object,
 2251                                              vm_object_t, cached_list);
 2252                                 vm_object_cached_count--;
 2253                         }
 2254 
 2255                         object->ref_count++;
 2256                         vm_object_unlock(object);
 2257                         vm_object_cache_unlock();
 2258                 }
 2259                 ip_unlock(port);
 2260         }
 2261 
 2262         return object;
 2263 }
 2264 
 2265 void vm_object_destroy(
 2266         ipc_port_t      pager)
 2267 {
 2268         vm_object_t     object;
 2269         pager_request_t old_request;
 2270         ipc_port_t      old_name;
 2271 
 2272         /*
 2273          *      Perform essentially the same operations as in vm_object_lookup,
 2274          *      except that this time we look up based on the memory_object
 2275          *      port, not the control port.
 2276          */
 2277         vm_object_cache_lock();
 2278         if (ip_kotype(pager) != IKOT_PAGER) {
 2279                 vm_object_cache_unlock();
 2280                 return;
 2281         }
 2282 
 2283         object = (vm_object_t) pager->ip_kobject;
 2284         vm_object_lock(object);
 2285         if (object->ref_count == 0) {
 2286                 queue_remove(&vm_object_cached_list, object,
 2287                                 vm_object_t, cached_list);
 2288                 vm_object_cached_count--;
 2289         }
 2290         object->ref_count++;
 2291 
 2292         object->can_persist = FALSE;
 2293 
 2294         assert(object->pager == pager);
 2295 
 2296         /*
 2297          *      Remove the port associations.
 2298          *
 2299          *      Note that the memory_object itself is dead, so
 2300          *      we don't bother with it.
 2301          */
 2302 
 2303         object->pager = IP_NULL;
 2304         vm_object_remove(object);
 2305 
 2306         old_request = object->pager_request;
 2307         object->pager_request = PAGER_REQUEST_NULL;
 2308 
 2309         old_name = object->pager_name;
 2310         object->pager_name = IP_NULL;
 2311 
 2312         vm_object_unlock(object);
 2313         vm_object_cache_unlock();
 2314 
 2315         /*
 2316          *      Clean up the port references.  Note that there's no
 2317          *      point in trying the memory_object_terminate call
 2318          *      because the memory_object itself is dead.
 2319          */
 2320 
 2321         ipc_port_release_send(pager);
 2322 #if     !NORMA_VM
 2323         if (old_request != IP_NULL)
 2324                 ipc_port_dealloc_kernel(old_request);
 2325 #endif  /* !NORMA_VM */
 2326         if (old_name != IP_NULL)
 2327 #if     NORMA_VM
 2328                 ipc_port_release_send(old_name);
 2329 #else   /* NORMA_VM */
 2330                 ipc_port_dealloc_kernel(old_name);
 2331 #endif  /* NORMA_VM */
 2332 
 2333         /*
 2334          *      Restart pending page requests
 2335          */
 2336 
 2337         vm_object_abort_activity(object);
 2338 
 2339         /*
 2340          *      Lose the object reference.
 2341          */
 2342 
 2343         vm_object_deallocate(object);
 2344 }
 2345 
 2346 boolean_t       vm_object_accept_old_init_protocol = FALSE;
 2347 
 2348 /*
 2349  *      Routine:        vm_object_enter
 2350  *      Purpose:
 2351  *              Find a VM object corresponding to the given
 2352  *              pager; if no such object exists, create one,
 2353  *              and initialize the pager.
 2354  */
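      /*
       *      Concurrency note (summarizing the loop below): the cache
       *      lock must be dropped to allocate a fresh object, so the
       *      port lookup is retried afterwards.  Whichever thread
       *      installs its object on the pager port first wins; a loser
       *      simply deallocates its spare object.
       */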
 2355 vm_object_t vm_object_enter(
 2356         ipc_port_t      pager,
 2357         vm_size_t       size,
 2358         boolean_t       internal)
 2359 {
 2360         register
 2361         vm_object_t     object;
 2362         vm_object_t     new_object;
 2363         boolean_t       must_init;
 2364         ipc_kobject_type_t po;
 2365 
 2366 restart:
 2367         if (!IP_VALID(pager))
 2368                 return vm_object_allocate(size);
 2369 
 2370         new_object = VM_OBJECT_NULL;
 2371         must_init = FALSE;
 2372 
 2373         /*
 2374          *      Look for an object associated with this port.
 2375          */
 2376 
 2377         vm_object_cache_lock();
 2378         for (;;) {
 2379                 po = ip_kotype(pager);
 2380 
 2381                 /*
 2382                  *      If a previous object is being terminated,
 2383                  *      we must wait for the termination message
 2384                  *      to be queued.
 2385                  *
 2386                  *      We set kobject to a non-null value to let the
 2387                  *      terminator know that someone is waiting.
 2388                  *      Among the possibilities is that the port
 2389                  *      could die while we're waiting.  Must restart
 2390                  *      instead of continuing the loop.
 2391                  */
 2392 
 2393                 if (po == IKOT_PAGER_TERMINATING) {
 2394                         pager->ip_kobject = (ipc_kobject_t) pager;
 2395                         assert_wait((event_t) pager, FALSE);
 2396                         vm_object_cache_unlock();
 2397                         thread_block((void (*)()) 0);
 2398                         goto restart;
 2399                 }
 2400 
 2401                 /*
 2402                  *      Bail if there is already a kobject associated
 2403                  *      with the pager port.
 2404                  */
 2405                 if (po != IKOT_NONE) {
 2406                         break;
 2407                 }
 2408 
 2409                 /*
 2410                  *      We must unlock to create a new object;
 2411                  *      if we do so, we must try the lookup again.
 2412                  */
 2413 
 2414                 if (new_object == VM_OBJECT_NULL) {
 2415                         vm_object_cache_unlock();
 2416                         new_object = vm_object_allocate(size);
 2417                         vm_object_cache_lock();
 2418                 } else {
 2419                         /*
 2420                          *      Lookup failed twice, and we have something
 2421                          *      to insert; set the object.
 2422                          */
 2423 
 2424                         ipc_kobject_set(pager,
 2425                                         (ipc_kobject_t) new_object,
 2426                                         IKOT_PAGER);
 2427                         new_object = VM_OBJECT_NULL;
 2428                         must_init = TRUE;
 2429                 }
 2430         }
 2431 
 2432         if (internal)
 2433                 must_init = TRUE;
 2434 
 2435         /*
 2436          *      It's only good if it's a VM object!
 2437          */
 2438 
 2439         object = (po == IKOT_PAGER) ? (vm_object_t) pager->ip_kobject
 2440                                     : VM_OBJECT_NULL;
 2441 
 2442         if ((object != VM_OBJECT_NULL) && !must_init) {
 2443                 vm_object_lock(object);
 2444                 if (object->ref_count == 0) {
 2445                         queue_remove(&vm_object_cached_list, object,
 2446                                         vm_object_t, cached_list);
 2447                         vm_object_cached_count--;
 2448                 }
 2449                 object->ref_count++;
 2450                 vm_object_unlock(object);
 2451 
 2452                 vm_stat.hits++;
 2453         }
 2454         assert((object == VM_OBJECT_NULL) || (object->ref_count > 0) ||
 2455                 ((object->paging_in_progress != 0) && internal));
 2456 
 2457         vm_stat.lookups++;
 2458 
 2459         vm_object_cache_unlock();
 2460 
 2461         /*
 2462          *      If we raced to create a vm_object but lost, let's
 2463          *      throw away ours.
 2464          */
 2465 
 2466         if (new_object != VM_OBJECT_NULL)
 2467                 vm_object_deallocate(new_object);
 2468 
 2469         if (object == VM_OBJECT_NULL)
 2470                 return(object);
 2471 
 2472         if (must_init) {
 2473                 /*
 2474                  *      Copy the naked send right we were given.
 2475                  */
 2476 
 2477                 pager = ipc_port_copy_send(pager);
 2478                 if (!IP_VALID(pager))
 2479                         panic("vm_object_enter: port died"); /* XXX */
 2480 
 2481                 object->pager_created = TRUE;
 2482                 object->pager = pager;
 2483 
 2484 #if     NORMA_VM
 2485 
 2486                 /*
 2487                  *      Let the xmm system know that we want to use the pager.
 2488                  *
 2489                  *      Name port will be provided by the xmm system
 2490                  *      when set_attributes_common is called.
 2491                  */
 2492 
 2493                 object->internal = internal;
 2494                 object->pager_ready = internal;
 2495                 if (internal) {
 2496                         assert(object->temporary);
 2497                 } else {
 2498                         object->temporary = FALSE;
 2499                 }
 2500                 object->pager_name = IP_NULL;
 2501 
 2502                 (void) xmm_memory_object_init(object);
 2503 #else   /* NORMA_VM */
 2504 
 2505                 /*
 2506                  *      Allocate request port.
 2507                  */
 2508 
 2509                 object->pager_request = ipc_port_alloc_kernel();
 2510                 if (object->pager_request == IP_NULL)
 2511                         panic("vm_object_enter: pager request alloc");
 2512 
 2513                 ipc_kobject_set(object->pager_request,
 2514                                 (ipc_kobject_t) object,
 2515                                 IKOT_PAGING_REQUEST);
 2516 
 2517                 /*
 2518                  *      Let the pager know we're using it.
 2519                  */
 2520 
 2521                 if (internal) {
 2522                         /* acquire a naked send right for the DMM */
 2523                         ipc_port_t DMM = memory_manager_default_reference();
 2524 
 2525                         /* mark the object internal */
 2526                         object->internal = TRUE;
 2527                         assert(object->temporary);
 2528 
 2529                         /* default-pager objects are ready immediately */
 2530                         object->pager_ready = TRUE;
 2531 
 2532                         /* consumes the naked send right for DMM */
 2533                         (void) memory_object_create(DMM,
 2534                                 pager,
 2535                                 object->size,
 2536                                 object->pager_request,
 2537                                 object->pager_name,
 2538                                 PAGE_SIZE);
 2539                 } else {
 2540                         /* the object is external and not temporary */
 2541                         object->internal = FALSE;
 2542                         object->temporary = FALSE;
 2543 
 2544                         /* user pager objects are not ready until marked so */
 2545                         object->pager_ready = FALSE;
 2546 
 2547                         (void) memory_object_init(pager,
 2548                                 object->pager_request,
 2549                                 object->pager_name,
 2550                                 PAGE_SIZE);
 2551 
 2552                 }
 2553 #endif  /* NORMA_VM */
 2554 
 2555                 vm_object_lock(object);
 2556                 object->pager_initialized = TRUE;
 2557 
 2558                 if (vm_object_accept_old_init_protocol)
 2559                         object->pager_ready = TRUE;
 2560 
 2561                 vm_object_wakeup(object, VM_OBJECT_EVENT_INITIALIZED);
 2562         } else {
 2563                 vm_object_lock(object);
 2564         }
 2565         /*
 2566          *      [At this point, the object must be locked]
 2567          */
 2568 
 2569         /*
 2570          *      Wait for the work above to be done by the first
 2571          *      thread to map this object.
 2572          */
 2573 
 2574         while (!object->pager_initialized) {
 2575                 vm_object_wait( object,
 2576                                 VM_OBJECT_EVENT_INITIALIZED,
 2577                                 FALSE);
 2578                 vm_object_lock(object);
 2579         }
 2580         vm_object_unlock(object);
 2581 
 2582         return object;
 2583 }
 2584 
 2585 /*
 2586  *      Routine:        vm_object_pager_create
 2587  *      Purpose:
 2588  *              Create a memory object for an internal object.
 2589  *      In/out conditions:
 2590  *              The object is locked on entry and exit;
 2591  *              it may be unlocked within this call.
 2592  *      Limitations:
 2593  *              Only one thread may be performing a
 2594  *              vm_object_pager_create on an object at
 2595  *              a time.  Presumably, only the pageout
 2596  *              daemon will be using this routine.
 2597  */
 2598 void vm_object_pager_create(
 2599         register
 2600         vm_object_t     object)
 2601 {
 2602         ipc_port_t      pager;
 2603 
 2604         if (object->pager_created) {
 2605                 /*
 2606                  *      Someone else got to it first...
 2607                  *      wait for them to finish initializing
 2608                  */
 2609 
 2610                 while (!object->pager_initialized) {
 2611                         vm_object_wait( object,
 2612                                         VM_OBJECT_EVENT_PAGER_READY,
 2613                                         FALSE);
 2614                         vm_object_lock(object);
 2615                 }
 2616                 return;
 2617         }
 2618 
 2619         /*
 2620          *      Indicate that a memory object has been assigned
 2621          *      before dropping the lock, to prevent a race.
 2622          */
 2623 
 2624         object->pager_created = TRUE;
 2625                 
 2626         /*
 2627          *      Prevent collapse or termination by
 2628          *      holding a paging reference
 2629          */
 2630 
 2631         vm_object_paging_begin(object);
 2632         vm_object_unlock(object);
 2633 
 2634 #if     MACH_PAGEMAP
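              /*
               *      Create an existence map recording which pages have
               *      been written to the pager, so needless pageins can
               *      be avoided later.
               */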
 2635         object->existence_info = vm_external_create(
 2636                                         object->size +
 2637                                         object->paging_offset);
 2638         assert((object->size + object->paging_offset) >=
 2639                 object->size);
 2640 #endif  /* MACH_PAGEMAP */
 2641 
 2642         /*
 2643          *      Create the pager, and associate with it
 2644          *      this object.
 2645          *
 2646          *      Note that we only make the port association
 2647          *      so that vm_object_enter can properly look up
 2648          *      the object to complete the initialization...
 2649          *      we do not expect any user to ever map this
 2650          *      object.
 2651          *
 2652          *      Since the kernel has the only rights to the
 2653          *      port, it's safe to install the association
 2654          *      without holding the cache lock.
 2655          */
 2656 
 2657         pager = ipc_port_alloc_kernel();
 2658         if (pager == IP_NULL)
 2659                 panic("vm_object_pager_create: allocate pager port");
 2660 
 2661         (void) ipc_port_make_send(pager);
 2662         ipc_kobject_set(pager, (ipc_kobject_t) object, IKOT_PAGER);
 2663 
 2664         /*
 2665          *      Initialize the rest of the paging stuff
 2666          */
 2667 
 2668         if (vm_object_enter(pager, object->size, TRUE) != object)
 2669                 panic("vm_object_pager_create: mismatch");
 2670 
 2671         /*
 2672          *      Drop the naked send right taken above.
 2673          */
 2674 
 2675         ipc_port_release_send(pager);
 2676 
 2677         /*
 2678          *      Release the paging reference
 2679          */
 2680 
 2681         vm_object_lock(object);
 2682         vm_object_paging_end(object);
 2683 }
 2684 
 2685 /*
 2686  *      Routine:        vm_object_remove
 2687  *      Purpose:
 2688  *              Eliminate the pager/object association
 2689  *              for this pager.
 2690  *      Conditions:
 2691  *              The object cache must be locked.
 2692  */
 2693 void vm_object_remove(
 2694         vm_object_t     object)
 2695 {
 2696         ipc_port_t port;
 2697 
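              /*
               * Editor's note (inference, not original commentary): the
               * pager port is retyped to IKOT_PAGER_TERMINATING rather
               * than cleared to IKOT_NONE, so a concurrent lookup can
               * tell a pager being torn down apart from no pager at
               * all; the request and name ports below have no such
               * consumers and drop straight to IKOT_NONE.
               */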
 2698         if ((port = object->pager) != IP_NULL) {
 2699                 if (ip_kotype(port) == IKOT_PAGER)
 2700                         ipc_kobject_set(port, IKO_NULL,
 2701                                         IKOT_PAGER_TERMINATING);
 2702                 else if (ip_kotype(port) != IKOT_NONE)
 2703                         panic("vm_object_remove: bad object port");
 2704         }
 2705 #if     !NORMA_VM
 2706         if ((port = object->pager_request) != IP_NULL) {
 2707                 if (ip_kotype(port) == IKOT_PAGING_REQUEST)
 2708                         ipc_kobject_set(port, IKO_NULL, IKOT_NONE);
 2709                 else if (ip_kotype(port) != IKOT_NONE)
 2710                         panic("vm_object_remove: bad request port");
 2711         }
 2712         if ((port = object->pager_name) != IP_NULL) {
 2713                 if (ip_kotype(port) == IKOT_PAGING_NAME)
 2714                         ipc_kobject_set(port, IKO_NULL, IKOT_NONE);
 2715                 else if (ip_kotype(port) != IKOT_NONE)
 2716                         panic("vm_object_remove: bad name port");
 2717         }
 2718 #endif  /* !NORMA_VM */
 2719 }
 2720 
 2721 /*
 2722  *      Global variables for vm_object_collapse():
 2723  *
 2724  *              Counts for normal collapses and bypasses.
 2725  *              Debugging variables, to watch or disable collapse.
 2726  */
 2727 long    object_collapses = 0;
 2728 long    object_bypasses  = 0;
 2729 
 2730 int             vm_object_collapse_debug = 0;
 2731 boolean_t       vm_object_collapse_allowed = TRUE;
 2732 boolean_t       vm_object_collapse_bypass_allowed = TRUE;
 2733 
 2734 /*
 2735  *      vm_object_collapse:
 2736  *
 2737  *      Collapse an object with the object backing it.
 2738  *      Pages in the backing object are moved into the
 2739  *      parent, and the backing object is deallocated.
 2740  *
 2741  *      Requires that the object be locked and the page
 2742  *      queues be unlocked.  May unlock/relock the object,
 2743  *      so the caller should hold a reference for the object.
 2744  */
 2745 void vm_object_collapse(
 2746         register vm_object_t    object)
 2747 {
 2748         register vm_object_t    backing_object;
 2749         register vm_offset_t    backing_offset;
 2750         register vm_size_t      size;
 2751         register vm_offset_t    new_offset;
 2752         register vm_page_t      p, pp;
 2753         ipc_port_t old_name_port;
 2754 
 2755         if (!vm_object_collapse_allowed)
 2756                 return;
 2757 
 2758         while (TRUE) {
 2759                 /*
 2760                  *      Verify that the conditions are right for collapse:
 2761                  *
 2762                  *      The object exists and no pages in it are currently
 2763                  *      being paged out (or have ever been paged out).
 2764                  *
 2765                  *      This check is probably overkill -- if a memory
 2766                  *      object has not been created, the fault handler
 2767                  *      shouldn't release the object lock while paging
 2768                  *      is in progress or absent pages exist.
 2769                  */
 2770                 if (object == VM_OBJECT_NULL ||
 2771                     object->pager_created ||
 2772                     object->paging_in_progress != 0 ||
 2773                     object->absent_count != 0)
 2774                         return;
 2775 
 2776                 /*
 2777                  *              There is a backing object, and
 2778                  */
 2779         
 2780                 if ((backing_object = object->shadow) == VM_OBJECT_NULL)
 2781                         return;
 2782         
 2783                 vm_object_lock(backing_object);
 2784                 /*
 2785                  *      ...
 2786                  *              The backing object is not read_only,
 2787                  *              and no pages in the backing object are
 2788                  *              currently being paged out.
 2789                  *              The backing object is internal.
 2790                  *
 2791                  *      XXX It may be sufficient for the backing
 2792                  *      XXX object to be temporary.
 2793                  */
 2794         
 2795                 if (!backing_object->internal ||
 2796                     backing_object->paging_in_progress != 0) {
 2797                         vm_object_unlock(backing_object);
 2798                         return;
 2799                 }
 2800         
 2801                 /*
 2802                  *      The backing object can't be a copy-object:
 2803                  *      the shadow_offset for the copy-object must stay
 2804                  *      as 0.  Furthermore (for the 'we have all the
 2805                  *      pages' case), if we bypass backing_object and
 2806                  *      just shadow the next object in the chain, old
 2807                  *      pages from that object would then have to be copied
 2808                  *      BOTH into the (former) backing_object and into the
 2809                  *      parent object.
 2810                  */
 2811                 if (backing_object->shadow != VM_OBJECT_NULL &&
 2812                     backing_object->shadow->copy != VM_OBJECT_NULL) {
 2813                         vm_object_unlock(backing_object);
 2814                         return;
 2815                 }
 2816 
 2817                 /*
 2818                  *      We know that we can either collapse the backing
 2819                  *      object (if the parent is the only reference to
 2820                  *      it) or (perhaps) remove the parent's reference
 2821                  *      to it.
 2822                  */
 2823 
 2824                 backing_offset = object->shadow_offset;
 2825                 size = object->size;
 2826 
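                      /*
                       * Editor's note: a worked example of the offset
                       * arithmetic below, with hypothetical values.  If
                       * shadow_offset is 0x2000 and size is 0x4000, a
                       * backing page at offset 0x3000 lands at parent
                       * offset 0x3000 - 0x2000 = 0x1000; backing pages
                       * below offset 0x2000, or mapping at or beyond
                       * 0x4000, fall outside the parent and are freed.
                       */
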
 2827                 /*
 2828                  *      If there is exactly one reference to the backing
 2829                  *      object, we can collapse it into the parent.
 2830                  */
 2831         
 2832                 if (backing_object->ref_count == 1) {
 2833                         if (!vm_object_cache_lock_try()) {
 2834                                 vm_object_unlock(backing_object);
 2835                                 return;
 2836                         }
 2837 
 2838                         /*
 2839                          *      We can collapse the backing object.
 2840                          *
 2841                          *      Move all in-memory pages from backing_object
 2842                          *      to the parent.  Pages that have been paged out
 2843                          *      will be overwritten by any of the parent's
 2844                          *      pages that shadow them.
 2845                          */
 2846 
 2847                         while (!queue_empty(&backing_object->memq)) {
 2848 
 2849                                 p = (vm_page_t)
 2850                                         queue_first(&backing_object->memq);
 2851 
 2852                                 new_offset = (p->offset - backing_offset);
 2853 
 2854                                 assert(!p->busy || p->absent);
 2855 
 2856                                 /*
 2857                                  *      If the parent has a page here, or if
 2858                                  *      this page falls outside the parent,
 2859                                  *      dispose of it.
 2860                                  *
 2861                                  *      Otherwise, move it as planned.
 2862                                  */
 2863 
 2864                                 if (p->offset < backing_offset ||
 2865                                     new_offset >= size) {
 2866                                         vm_page_lock_queues();
 2867                                         vm_page_free(p);
 2868                                         vm_page_unlock_queues();
 2869                                 } else {
 2870                                     pp = vm_page_lookup(object, new_offset);
 2871                                     if (pp != VM_PAGE_NULL && !pp->absent) {
 2872                                         /*
 2873                                          *      Parent object has a real page.
 2874                                          *      Throw away the backing object's
 2875                                          *      page.
 2876                                          */
 2877                                         vm_page_lock_queues();
 2878                                         vm_page_free(p);
 2879                                         vm_page_unlock_queues();
 2880                                     }
 2881                                     else {
 2882                                         if (pp != VM_PAGE_NULL) {
 2883                                             /*
 2884                                              *  Parent has an absent page...
 2885                                              *  it's not being paged in, so
 2886                                              *  it must really be missing from
 2887                                              *  the parent.
 2888                                              *
 2889                                              *  Throw out the absent page...
 2890                                              *  any faults looking for that
 2891                                              *  page will restart with the new
 2892                                              *  one.
 2893                                              */
 2894 
 2895                                             /*
 2896                                              *  This should never happen -- the
 2897                                              *  parent cannot have ever had an
 2898                                              *  external memory object, and thus
 2899                                              *  cannot have absent pages.
 2900                                              */
 2901                                             panic("vm_object_collapse: bad case");
 2902 
 2903                                             vm_page_lock_queues();
 2904                                             vm_page_free(pp);
 2905                                             vm_page_unlock_queues();
 2906 
 2907                                             /*
 2908                                              *  Fall through to move the backing
 2909                                              *  object's page up.
 2910                                              */
 2911                                         }
 2912                                         /*
 2913                                          *      Parent now has no page.
 2914                                          *      Move the backing object's page up.
 2915                                          */
 2916                                         vm_page_rename(p, object, new_offset);
 2917                                     }
 2918                                 }
 2919                         }
 2920 
 2921                         /*
 2922                          *      Move the pager from backing_object to object.
 2923                          *
 2924                          *      XXX We're only using part of the paging space
 2925                          *      for keeps now... we ought to discard the
 2926                          *      unused portion.
 2927                          */
 2928 
 2929                         switch (vm_object_collapse_debug) {
 2930                             case 0:
 2931                                 break;
 2932                             case 1:
 2933                                 if ((backing_object->pager == IP_NULL) &&
 2934                                     (backing_object->pager_request ==
 2935                                      PAGER_REQUEST_NULL))
 2936                                     break;
 2937                                 /* Fall through to... */
 2938 
 2939                             default:
 2940                                 printf("vm_object_collapse: %#x (pager %#x, request %#x) up to %#x\n",
 2941                                         backing_object, backing_object->pager, backing_object->pager_request,
 2942                                         object);
 2943                                 if (vm_object_collapse_debug > 2)
 2944                                     Debugger("vm_object_collapse");
 2945                         }
 2946 
 2947                         object->pager = backing_object->pager;
 2948                         if (object->pager != IP_NULL)
 2949                                 ipc_kobject_set(object->pager,
 2950                                                 (ipc_kobject_t) object,
 2951                                                 IKOT_PAGER);
 2952                         object->pager_initialized = backing_object->pager_initialized;
 2953                         object->pager_ready = backing_object->pager_ready;
 2954                         object->pager_created = backing_object->pager_created;
 2955 
 2956                         object->pager_request = backing_object->pager_request;
 2957 #if     NORMA_VM
 2958                         old_name_port = object->pager_name;
 2959                         object->pager_name = backing_object->pager_name;
 2960 #else   /* NORMA_VM */
 2961                         if (object->pager_request != IP_NULL)
 2962                                 ipc_kobject_set(object->pager_request,
 2963                                                 (ipc_kobject_t) object,
 2964                                                 IKOT_PAGING_REQUEST);
 2965                         old_name_port = object->pager_name;
 2966                         if (old_name_port != IP_NULL)
 2967                                 ipc_kobject_set(old_name_port,
 2968                                                 IKO_NULL, IKOT_NONE);
 2969                         object->pager_name = backing_object->pager_name;
 2970                         if (object->pager_name != IP_NULL)
 2971                                 ipc_kobject_set(object->pager_name,
 2972                                                 (ipc_kobject_t) object,
 2973                                                 IKOT_PAGING_NAME);
 2974 #endif  /* NORMA_VM */
 2975 
 2976                         vm_object_cache_unlock();
 2977 
 2978                         /*
 2979                          * If there is no pager, leave paging-offset alone.
 2980                          */
 2981                         if (object->pager != IP_NULL)
 2982                                 object->paging_offset =
 2983                                         backing_object->paging_offset +
 2984                                                 backing_offset;
 2985 
 2986 #if     MACH_PAGEMAP
 2987                         assert(object->existence_info == VM_EXTERNAL_NULL);
 2988                         object->existence_info = backing_object->existence_info;
 2989 #endif  /* MACH_PAGEMAP */
 2990 
 2991                         /*
 2992                          *      Object now shadows whatever backing_object did.
 2993                          *      Note that the reference to backing_object->shadow
 2994                          *      moves from within backing_object to within object.
 2995                          */
 2996 
 2997                         object->shadow = backing_object->shadow;
 2998                         object->shadow_offset += backing_object->shadow_offset;
 2999                         if (object->shadow != VM_OBJECT_NULL &&
 3000                             object->shadow->copy != VM_OBJECT_NULL) {
 3001                                 panic("vm_object_collapse: we collapsed a copy-object!");
 3002                         }
 3003                         /*
 3004                          *      Discard backing_object.
 3005                          *
 3006                          *      Since the backing object has no pages, no
 3007                          *      pager left, and no object references within it,
 3008                          *      all that is necessary is to dispose of it.
 3009                          */
 3010 
 3011                         assert(
 3012                                 (backing_object->ref_count == 1) &&
 3013                                 (backing_object->resident_page_count == 0) &&
 3014                                 (backing_object->paging_in_progress == 0)
 3015                         );
 3016 
 3017                         assert(backing_object->alive);
 3018                         backing_object->alive = FALSE;
 3019                         vm_object_unlock(backing_object);
 3020 
 3021                         vm_object_unlock(object);
 3022                         if (old_name_port != IP_NULL)
 3023 #if     NORMA_VM
 3024                                 ipc_port_release_send(old_name_port);
 3025 #else   /* NORMA_VM */
 3026                                 ipc_port_dealloc_kernel(old_name_port);
 3027 #endif  /* NORMA_VM */
 3028                         zfree(vm_object_zone, (vm_offset_t) backing_object);
 3029                         vm_object_lock(object);
 3030 
 3031                         object_collapses++;
 3032                 }
 3033                 else {
 3034                         if (!vm_object_collapse_bypass_allowed) {
 3035                                 vm_object_unlock(backing_object);
 3036                                 return;
 3037                         }
 3038 
 3039                         /*
 3040                          *      If all of the pages in the backing object are
 3041                          *      shadowed by the parent object, the parent
 3042                          *      object no longer has to shadow the backing
 3043                          *      object; it can shadow the next one in the
 3044                          *      chain.
 3045                          *
 3046                          *      The backing object must not be paged out - we'd
 3047                          *      have to check all of the paged-out pages, as
 3048                          *      well.
 3049                          */
 3050 
 3051                         if (backing_object->pager_created) {
 3052                                 vm_object_unlock(backing_object);
 3053                                 return;
 3054                         }
 3055 
 3056                         /*
 3057                          *      Should have a check for a 'small' number
 3058                          *      of pages here.
 3059                          */
 3060 
 3061                         queue_iterate(&backing_object->memq, p,
 3062                                       vm_page_t, listq)
 3063                         {
 3064                                 new_offset = (p->offset - backing_offset);
 3065 
 3066                                 /*
 3067                                  *      If the parent has a page here, or if
 3068                                  *      this page falls outside the parent,
 3069                                  *      keep going.
 3070                                  *
 3071                                  *      Otherwise, the backing_object must be
 3072                                  *      left in the chain.
 3073                                  */
 3074 
 3075                                 if (p->offset >= backing_offset &&
 3076                                     new_offset < size &&
 3077                                     (pp = vm_page_lookup(object, new_offset))
 3078                                       == VM_PAGE_NULL) {
 3079                                         /*
 3080                                          *      Page still needed.
 3081                                          *      Can't go any further.
 3082                                          */
 3083                                         vm_object_unlock(backing_object);
 3084                                         return;
 3085                                 }
 3086                         }
 3087 
 3088                         /*
 3089                          *      Make the parent shadow the next object
 3090                          *      in the chain.  Deallocating backing_object
 3091                          *      will not remove it, since its reference
 3092                          *      count is at least 2.
 3093                          */
 3094 
 3095                         vm_object_reference(object->shadow = backing_object->shadow);
 3096                         object->shadow_offset += backing_object->shadow_offset;
 3097 
 3098                         /*
 3099                          *      Backing object might have had a copy pointer
 3100                          *      to us.  If it did, clear it. 
 3101                          */
 3102                         if (backing_object->copy == object)
 3103                                 backing_object->copy = VM_OBJECT_NULL;
 3104 
 3105                         /*
 3106                          *      Drop the reference count on backing_object.
 3107                          *      Since its ref_count was at least 2, it
 3108                          *      will not vanish; so we don't need to call
 3109                          *      vm_object_deallocate.
 3110                          */
 3111                         backing_object->ref_count--;
 3112                         assert(backing_object->ref_count > 0);
 3113                         vm_object_unlock(backing_object);
 3114 
 3115                         object_bypasses++;
 3116 
 3117                 }
 3118 
 3119                 /*
 3120                  *      Try again with this object's new backing object.
 3121                  */
 3122         }
 3123 }
 3124 
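      /*
       * Editor's note: the two transformations above, summarized.
       * Starting from the chain
       *
       *      object --shadow--> backing_object --shadow--> next
       *
       * a collapse (backing_object->ref_count == 1) moves the backing
       * object's pages and pager into object, frees backing_object,
       * and leaves object shadowing next with the two shadow offsets
       * summed.  A bypass (ref_count > 1, every relevant backing page
       * already shadowed by the parent) re-points object->shadow at
       * next and drops one of backing_object's references.  The
       * while (TRUE) loop then retries against the new backing object.
       */
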
 3125 /*
 3126  *      Routine:        vm_object_page_remove: [internal]
 3127  *      Purpose:
 3128  *              Removes all physical pages in the specified
 3129  *              object range from the object's list of pages.
 3130  *
 3131  *      In/out conditions:
 3132  *              The object must be locked.
 3133  */
 3134 unsigned int vm_object_page_remove_lookup = 0;
 3135 unsigned int vm_object_page_remove_iterate = 0;
 3136 
 3137 void vm_object_page_remove(
 3138         register vm_object_t    object,
 3139         register vm_offset_t    start,
 3140         register vm_offset_t    end)
 3141 {
 3142         register vm_page_t      p, next;
 3143 
 3144         /*
 3145          *      One- and two-page removals are the most common case.
 3146          *      The factor of 16 here is somewhat arbitrary; it
 3147          *      balances the cost of vm_page_lookup against iteration.
 3148          */
 3149 
 3150         if (atop(end - start) < (unsigned)object->resident_page_count/16) {
 3151                 vm_object_page_remove_lookup++;
 3152 
 3153                 for (; start < end; start += PAGE_SIZE) {
 3154                         p = vm_page_lookup(object, start);
 3155                         if (p != VM_PAGE_NULL) {
 3156                                 if (!p->fictitious)
 3157                                         pmap_page_protect(p->phys_addr,
 3158                                                           VM_PROT_NONE);
 3159                                 vm_page_lock_queues();
 3160                                 vm_page_free(p);
 3161                                 vm_page_unlock_queues();
 3162                         }
 3163                 }
 3164         } else {
 3165                 vm_object_page_remove_iterate++;
 3166 
 3167                 p = (vm_page_t) queue_first(&object->memq);
 3168                 while (!queue_end(&object->memq, (queue_entry_t) p)) {
 3169                         next = (vm_page_t) queue_next(&p->listq);
 3170                         if ((start <= p->offset) && (p->offset < end)) {
 3171                                 if (!p->fictitious)
 3172                                     pmap_page_protect(p->phys_addr,
 3173                                                       VM_PROT_NONE);
 3174                                 vm_page_lock_queues();
 3175                                 vm_page_free(p);
 3176                                 vm_page_unlock_queues();
 3177                         }
 3178                         p = next;
 3179                 }
 3180         }
 3181 }
 3182 
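      /*
       * Editor's note: the heuristic above in concrete numbers, which
       * are hypothetical.  With resident_page_count == 256, ranges of
       * fewer than 256/16 == 16 pages take the per-page vm_page_lookup
       * path; larger ranges walk the whole memq once instead.  A toy
       * model of the dispatch (use_lookup_path is not kernel code):
       */
      #if 0   /* illustrative sketch only, never compiled */
      #include <stdbool.h>

      static bool
      use_lookup_path(unsigned range_pages, unsigned resident_pages)
      {
              /* A lookup costs roughly one hash probe per page in the
               * range; iteration costs one queue step per resident
               * page.  The factor of 16 biases toward iteration, a
               * queue step being much cheaper than a probe. */
              return range_pages < resident_pages / 16;
      }
      #endif
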
 3183 /*
 3184  *      Routine:        vm_object_coalesce
 3185  *      Function:       Coalesces two objects backing up adjoining
 3186  *                      regions of memory into a single object.
 3187  *
 3188  *      returns TRUE if objects were combined.
 3189  *
 3190  *      NOTE:   Only works at the moment if the second object is NULL -
 3191  *              if it's not, which object do we lock first?
 3192  *
 3193  *      Parameters:
 3194  *              prev_object     First object to coalesce
 3195  *              prev_offset     Offset into prev_object
 3196  *              next_object     Second object to coalesce
 3197  *              next_offset     Offset into next_object
 3198  *
 3199  *              prev_size       Size of reference to prev_object
 3200  *              next_size       Size of reference to next_object
 3201  *
 3202  *      Conditions:
 3203  *      The object must *not* be locked.
 3204  */
 3205 
 3206 boolean_t vm_object_coalesce(
 3207         register vm_object_t prev_object,
 3208         vm_object_t     next_object,
 3209         vm_offset_t     prev_offset,
 3210         vm_offset_t     next_offset,
 3211         vm_size_t       prev_size,
 3212         vm_size_t       next_size)
 3213 {
 3214         vm_size_t       newsize;
 3215 
 3216 #ifdef  lint
 3217         next_offset++;
 3218 #endif  /* lint */
 3219 
 3220         if (next_object != VM_OBJECT_NULL) {
 3221                 return FALSE;
 3222         }
 3223 
 3224         if (prev_object == VM_OBJECT_NULL) {
 3225                 return TRUE;
 3226         }
 3227 
 3228         vm_object_lock(prev_object);
 3229 
 3230         /*
 3231          *      Try to collapse the object first
 3232          */
 3233         vm_object_collapse(prev_object);
 3234 
 3235         /*
 3236          *      Can't coalesce if pages not mapped to
 3237          *      prev_entry may be in use anyway:
 3238          *      . more than one reference
 3239          *      . paged out
 3240          *      . shadows another object
 3241          *      . has a copy elsewhere
 3242          *      . paging references (pages might be in page-list)
 3243          */
 3244 
 3245         if ((prev_object->ref_count > 1) ||
 3246             prev_object->pager_created ||
 3247             (prev_object->shadow != VM_OBJECT_NULL) ||
 3248             (prev_object->copy != VM_OBJECT_NULL) ||
 3249             (prev_object->paging_in_progress != 0)) {
 3250                 vm_object_unlock(prev_object);
 3251                 return FALSE;
 3252         }
 3253 
 3254         /*
 3255          *      Remove any pages that may still be in the object from
 3256          *      a previous deallocation.
 3257          */
 3258 
 3259         vm_object_page_remove(prev_object,
 3260                         prev_offset + prev_size,
 3261                         prev_offset + prev_size + next_size);
 3262 
 3263         /*
 3264          *      Extend the object if necessary.
 3265          */
 3266         newsize = prev_offset + prev_size + next_size;
 3267         if (newsize > prev_object->size)
 3268                 prev_object->size = newsize;
 3269 
 3270         vm_object_unlock(prev_object);
 3271         return TRUE;
 3272 }
 3273 
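      /*
       * Editor's note: a hypothetical caller, in the spirit of a VM
       * map growing an anonymous mapping in place.  grow_mapping and
       * its parameters are illustrative assumptions, not this file's
       * API; only the vm_object_coalesce call itself is real.
       */
      #if 0   /* illustrative sketch only, never compiled */
      static boolean_t
      grow_mapping(vm_object_t obj, vm_offset_t offset,
                   vm_size_t cur_size, vm_size_t extra)
      {
              /*
               * next_object is VM_OBJECT_NULL: the new range has no
               * object yet, so the existing one may simply be extended
               * to cover [offset, offset + cur_size + extra).
               */
              return vm_object_coalesce(obj, VM_OBJECT_NULL,
                                        offset, (vm_offset_t) 0,
                                        cur_size, extra);
      }
      #endif
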
 3274 vm_object_t     vm_object_request_object(
 3275         ipc_port_t      p)
 3276 {
 3277         return vm_object_lookup(p);
 3278 }
 3279 
 3280 /*
 3281  *      Routine:        vm_object_name
 3282  *      Purpose:
 3283  *              Returns a naked send right to the "name" port associated
 3284  *              with this object.
 3285  */
 3286 ipc_port_t      vm_object_name(
 3287         vm_object_t     object)
 3288 {
 3289         ipc_port_t      p;
 3290 
 3291         if (object == VM_OBJECT_NULL)
 3292                 return IP_NULL;
 3293 
 3294         vm_object_lock(object);
 3295 
 3296         while (object->shadow != VM_OBJECT_NULL) {
 3297                 vm_object_t     new_object = object->shadow;
 3298                 vm_object_lock(new_object);
 3299                 vm_object_unlock(object);
 3300                 object = new_object;
 3301         }
 3302 
 3303         p = object->pager_name;
 3304         if (p != IP_NULL)
 3305 #if     NORMA_VM
 3306                 p = ipc_port_copy_send(p);
 3307 #else   /* NORMA_VM */
 3308                 p = ipc_port_make_send(p);
 3309 #endif  /* NORMA_VM */
 3310         vm_object_unlock(object);
 3311 
 3312         return p;
 3313 }
 3314 
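      /*
       * Editor's note: the shadow-chain walk in vm_object_name above
       * is hand-over-hand locking: each shadow object is locked before
       * the current one is unlocked, so a lock somewhere on the chain
       * is always held and no intermediate object can be collapsed or
       * deallocated out from under the walk.
       */
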
 3315 /*
 3316  *      Attach a set of physical pages to an object, so that they can
 3317  *      be mapped by mapping the object.  Typically used to map IO memory.
 3318  *
 3319  *      The mapping function and its private data are used to obtain the
 3320  *      physical addresses for each page to be mapped.
 3321  */
 3322 void
 3323 vm_object_page_map(
 3324         vm_object_t     object,
 3325         vm_offset_t     offset,
 3326         vm_size_t       size,
 3327         vm_offset_t     (*map_fn)(void *, vm_offset_t),
 3328         void *          map_fn_data)    /* private to map_fn */
 3329 {
 3330         int     num_pages;
 3331         int     i;
 3332         vm_page_t       m;
 3333         vm_page_t       old_page;
 3334         vm_offset_t     addr;
 3335 
 3336         num_pages = atop(size);
 3337 
 3338         for (i = 0; i < num_pages; i++, offset += PAGE_SIZE) {
 3339 
 3340             addr = (*map_fn)(map_fn_data, offset);
 3341 
 3342             while ((m = vm_page_grab_fictitious()) == VM_PAGE_NULL)
 3343                 vm_page_more_fictitious();
 3344 
 3345             vm_object_lock(object);
 3346             if ((old_page = vm_page_lookup(object, offset))
 3347                         != VM_PAGE_NULL)
 3348             {
 3349                 vm_page_lock_queues();
 3350                 vm_page_free(old_page);
 3351                 vm_page_unlock_queues();
 3352             }
 3353 
 3354             vm_page_init(m, addr);
 3355             m->private = TRUE;          /* don't free page */
 3356             m->wire_count = 1;
 3357             vm_page_lock_queues();
 3358             vm_page_insert(m, object, offset);
 3359             vm_page_unlock_queues();
 3360 
 3361             PAGE_WAKEUP_DONE(m);
 3362             vm_object_unlock(object);
 3363         }
 3364 }
 3365 
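      /*
       * Editor's note: a hypothetical map_fn for a physically
       * contiguous device aperture.  map_fn_data carries the physical
       * base address; vm_object_page_map calls map_fn once per page
       * with the running object offset.  device_page_addr and
       * phys_base are illustrative names, not part of this file.
       */
      #if 0   /* illustrative sketch only, never compiled */
      static vm_offset_t
      device_page_addr(void *data, vm_offset_t offset)
      {
              vm_offset_t phys_base = *(vm_offset_t *) data;

              return phys_base + offset;      /* one page's physical addr */
      }

      /* Typical use: attach a 64KB register window at object offset 0:
       *      vm_object_page_map(object, 0, 64*1024,
       *                         device_page_addr, &phys_base);
       */
      #endif
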
 3366 #include <mach_kdb.h>
 3367 
 3368 
 3369 #if     MACH_KDB
 3370 #define printf  kdbprintf
 3371 
 3372 boolean_t       vm_object_print_pages = FALSE;
 3373 
 3374 /*
 3375  *      vm_object_print:        [ debug ]
 3376  */
 3377 void vm_object_print(
 3378         vm_object_t     object)
 3379 {
 3380         register vm_page_t      p;
 3381         extern int indent;
 3382 
 3383         register int count;
 3384 
 3385         if (object == VM_OBJECT_NULL)
 3386                 return;
 3387 
 3388         iprintf("Object 0x%X: size=0x%X",
 3389                 (vm_offset_t) object, (vm_offset_t) object->size);
 3390          printf(", %d references, %d resident pages,", object->ref_count,
 3391                 object->resident_page_count);
 3392          printf(" %d absent pages,", object->absent_count);
 3393          printf(" %d paging ops\n", object->paging_in_progress);
 3394         indent += 2;
 3395         iprintf("memory object=0x%X (offset=0x%X),",
 3396                  (vm_offset_t) object->pager, (vm_offset_t) object->paging_offset);
 3397          printf("control=0x%X, name=0x%X\n",
 3398                 (vm_offset_t) object->pager_request, (vm_offset_t) object->pager_name);
 3399         iprintf("%s%s",
 3400                 object->pager_ready ? " ready" : "",
 3401                 object->pager_created ? " created" : "");
 3402          printf("%s,%s ",
 3403                 object->pager_initialized ? "" : "uninitialized",
 3404                 object->temporary ? "temporary" : "permanent");
 3405          printf("%s%s,",
 3406                 object->internal ? "internal" : "external",
 3407                 object->can_persist ? " cacheable" : "");
 3408          printf("copy_strategy=%d\n", (vm_offset_t)object->copy_strategy);
 3409         iprintf("shadow=0x%X (offset=0x%X),",
 3410                 (vm_offset_t) object->shadow, (vm_offset_t) object->shadow_offset);
 3411          printf("copy=0x%X\n", (vm_offset_t) object->copy);
 3412 
 3413         indent += 2;
 3414 
 3415         if (vm_object_print_pages) {
 3416                 count = 0;
 3417                 p = (vm_page_t) queue_first(&object->memq);
 3418                 while (!queue_end(&object->memq, (queue_entry_t) p)) {
 3419                         if (count == 0) iprintf("memory:=");
 3420                         else if (count == 4) {printf("\n"); iprintf(" ..."); count = 0;}
 3421                         else printf(",");
 3422                         count++;
 3423 
 3424                         printf("(off=0x%X,page=0x%X)", p->offset, (vm_offset_t) p);
 3425                         p = (vm_page_t) queue_next(&p->listq);
 3426                 }
 3427                 if (count != 0)
 3428                         printf("\n");
 3429         }
 3430         indent -= 4;
 3431 }
 3432 
 3433 #endif  /* MACH_KDB */
