FreeBSD/Linux Kernel Cross Reference
sys/vm/memory_object.c

    1 /* 
    2  * Mach Operating System
    3  * Copyright (c) 1993,1992,1991,1990,1989,1988,1987 Carnegie Mellon University
    4  * All Rights Reserved.
    5  * 
    6  * Permission to use, copy, modify and distribute this software and its
    7  * documentation is hereby granted, provided that both the copyright
    8  * notice and this permission notice appear in all copies of the
    9  * software, derivative works or modified versions, and any portions
   10  * thereof, and that both notices appear in supporting documentation.
   11  * 
   12  * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
   13  * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
   14  * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
   15  * 
   16  * Carnegie Mellon requests users of this software to return to
   17  * 
   18  *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
   19  *  School of Computer Science
   20  *  Carnegie Mellon University
   21  *  Pittsburgh PA 15213-3890
   22  * 
   23  * any improvements or extensions that they make and grant Carnegie Mellon
   24  * the rights to redistribute these changes.
   25  */
   26 /*
   27  * HISTORY
   28  * $Log:        memory_object.c,v $
   29  * Revision 2.32  93/11/17  18:52:14  dbg
   30  *      Moved code from PAGEOUT_PAGES macro into a function
   31  *      called by the macro.  Added ANSI prototypes.
   32  *      [93/09/21  22:09:18  dbg]
   33  * 
   34  * Revision 2.31  93/01/14  18:00:42  danner
   35  *      Removed unneeded cast in argument to thread_wakeup/assert_wait.
   36  *      [92/12/30            dbg]
   37  * 
   38  *      Unlock object around call to vm_map_copy_invoke_cont in
   39  *      memory_object_data_supply.
   40  *      [92/09/22            dbg]
   41  *      64bit cleanup.
   42  *      [92/12/01            af]
   43  * 
   44  *      Unlock object around call to vm_map_copy_invoke_cont in
   45  *      memory_object_data_supply.
   46  *      [92/09/22            dbg]
   47  * 
   48  * Revision 2.30  92/08/03  17:59:55  jfriedl
   49  *      removed silly prototypes
   50  *      [92/08/02            jfriedl]
   51  * 
   52  * Revision 2.29  92/05/21  17:25:17  jfriedl
   53  *      Added stuff to quiet gcc warnings.
   54  *      [92/05/16            jfriedl]
   55  * 
   56  * Revision 2.28  92/03/10  16:29:58  jsb
   57  *      Add MEMORY_OBJECT_COPY_TEMPORARY case to
   58  *      memory_object_set_attributes_common.
   59  *      [92/03/06  16:57:38  jsb]
   60  * 
   61  *      NORMA_VM: don't define cover and backwards compatibility routines.
   62  *      [92/03/06  16:54:53  jsb]
   63  * 
   64  *      [David L. Black 92/02/22  17:05:17  dlb@osf.org]
   65  *      Implement no change of page lock functionality in
   66  *      memory_object_lock_request.
   67  * 
   68  * Revision 2.27  92/02/23  19:50:36  elf
   69  *      Bug fix to avoid duplicate vm_map_copy_t deallocate in
   70  *      error case of memory_object_data_supply.
   71  *      [92/02/19  17:38:17  dlb]
   72  * 
   73  * Revision 2.25.2.2  92/02/18  19:20:01  jeffreyh
   74  *      Cleaned up incorrect comment
   75  *      [92/02/18            jeffreyh]
   76  * 
   77  * Revision 2.25.2.1  92/01/21  21:55:10  jsb
   78  *      Created memory_object_set_attributes_common which provides
   79  *      functionality common to memory_object_set_attributes,
   80  *      memory_object_ready, and memory_object_change_attributes.
   81  *      Fixed memory_object_change_attributes to set use_old_pageout
   82  *      false instead of true.
   83  *      [92/01/21  18:40:20  jsb]
   84  * 
   85  * Revision 2.26  92/01/23  15:21:33  rpd
   86  *      Fixed memory_object_change_attributes.
   87  *      Created memory_object_set_attributes_common.
   88  *      [92/01/18            rpd]
   89  * 
   90  * Revision 2.25  91/10/09  16:19:18  af
   91  *      Fixed assertion in memory_object_data_supply.
   92  *      [91/10/06            rpd]
   93  * 
   94  *      Added vm_page_deactivate_hint.
   95  *      [91/09/29            rpd]
   96  * 
   97  * Revision 2.24  91/08/28  11:17:50  jsb
   98  *      Continuation bug fix.
   99  *      [91/08/05  17:42:57  dlb]
  100  * 
  101  *      Add vm_map_copy continuation support to memory_object_data_supply.
  102  *      [91/07/30  14:13:59  dlb]
  103  * 
  104  *      Turn on page lists by default: gut and remove body of memory_object_
  105  *      data_provided -- it's now a wrapper for memory_object_data_supply.
  106  *      Precious page support:
   107  *          Extensive modifications to memory_object_lock_{request,page}
  108  *          Implement old version wrapper xxx_memory_object_lock_request
  109  *          Maintain use_old_pageout field in vm objects.
  110  *          Add memory_object_ready, memory_object_change_attributes.
  111  *      [91/07/03  14:11:35  dlb]
  112  * 
  113  * Revision 2.23  91/08/03  18:19:51  jsb
  114  *      For now, use memory_object_data_supply iff NORMA_IPC.
  115  *      [91/07/04  13:14:50  jsb]
  116  * 
  117  * Revision 2.22  91/07/31  18:20:56  dbg
  118  *      Removed explicit data_dealloc argument to
  119  *      memory_object_data_supply.  MiG now handles a user-specified
  120  *      dealloc flag.
  121  *      [91/07/29            dbg]
  122  * 
  123  * Revision 2.21  91/07/01  09:17:25  jsb
  124  *      Fixed remaining MACH_PORT references.
  125  * 
  126  * Revision 2.20  91/07/01  08:31:54  jsb
  127  *      Changed mach_port_t to ipc_port_t in memory_object_data_supply.
  128  * 
  129  * Revision 2.19  91/07/01  08:26:53  jsb
  130  *      21-Jun-91 David L. Black (dlb) at Open Software Foundation
  131  *      Add memory_object_data_supply.
  132  *      [91/06/29  16:36:43  jsb]
  133  * 
  134  * Revision 2.18  91/06/25  11:06:45  rpd
  135  *      Fixed includes to avoid norma files unless they are really needed.
  136  *      [91/06/25            rpd]
  137  * 
  138  * Revision 2.17  91/06/25  10:33:04  rpd
  139  *      Changed memory_object_t to ipc_port_t where appropriate.
  140  *      [91/05/28            rpd]
  141  * 
  142  * Revision 2.16  91/06/17  15:48:55  jsb
  143  *      NORMA_VM: include xmm_server_rename.h, for interposition.
  144  *      [91/06/17  11:09:52  jsb]
  145  * 
  146  * Revision 2.15  91/05/18  14:39:33  rpd
  147  *      Fixed memory_object_lock_page to handle fictitious pages.
  148  *      [91/04/06            rpd]
  149  *      Changed memory_object_data_provided, etc,
  150  *      to allow for fictitious pages.
  151  *      [91/03/29            rpd]
  152  *      Added vm/memory_object.h.
  153  *      [91/03/22            rpd]
  154  * 
  155  * Revision 2.14  91/05/14  17:47:54  mrt
  156  *      Correcting copyright
  157  * 
  158  * Revision 2.13  91/03/16  15:04:30  rpd
  159  *      Removed the old version of memory_object_data_provided.
  160  *      [91/03/11            rpd]
  161  *      Fixed memory_object_data_provided to return success
  162  *      iff it consumes the copy object.
  163  *      [91/02/10            rpd]
  164  * 
  165  * Revision 2.12  91/02/05  17:57:25  mrt
  166  *      Changed to new Mach copyright
  167  *      [91/02/01  16:30:46  mrt]
  168  * 
  169  * Revision 2.11  91/01/08  16:44:11  rpd
  170  *      Added continuation argument to thread_block.
  171  *      [90/12/08            rpd]
  172  * 
  173  * Revision 2.10  90/10/25  14:49:30  rwd
   174  *      Pages that a lock request cleans but does not flush now get
   175  *      moved to the inactive queue.
  176  *      [90/10/24            rwd]
  177  * 
  178  * Revision 2.9  90/08/27  22:15:49  dbg
  179  *      Fix error in initial assumptions: vm_pageout_setup must take a
  180  *      BUSY page, to prevent the page from being scrambled by pagein.
  181  *      [90/07/26            dbg]
  182  * 
  183  * Revision 2.8  90/08/06  15:08:16  rwd
  184  *      Fix locking problems in memory_object_lock_request.
  185  *      [90/07/12            rwd]
  186  *      Fix memory_object_lock_request to only send contiguous pages as
   187  *      one message.  If dirty pages were separated by absent pages,
  188  *      then the wrong thing was done.
  189  *      [90/07/11            rwd]
  190  * 
  191  * Revision 2.7  90/06/19  23:01:38  rpd
  192  *      Bring old single_page version of memory_object_data_provided up
  193  *      to date.
  194  *      [90/06/05            dbg]
  195  * 
  196  *      Correct object locking in memory_object_lock_request.
  197  *      [90/06/05            dbg]
  198  * 
  199  * Revision 2.6  90/06/02  15:10:14  rpd
  200  *      Changed memory_object_lock_request/memory_object_lock_completed calls
  201  *      to allow both send and send-once right reply-to ports.
  202  *      [90/05/31            rpd]
  203  * 
  204  *      Added memory_manager_default_port.
  205  *      [90/04/29            rpd]
  206  *      Converted to new IPC.  Purged MACH_XP_FPD.
  207  *      [90/03/26  23:11:14  rpd]
  208  * 
  209  * Revision 2.5  90/05/29  18:38:29  rwd
  210  *      New memory_object_lock_request from dbg.
  211  *      [90/05/18  13:04:36  rwd]
  212  * 
  213  *      Picked up rfr MACH_PAGEMAP changes.
  214  *      [90/04/12  13:45:43  rwd]
  215  * 
  216  * Revision 2.4  90/05/03  15:58:23  dbg
  217  *      Pass should_flush to vm_pageout_page: don't flush page if not
  218  *      requested.
  219  *      [90/03/28            dbg]
  220  * 
  221  * Revision 2.3  90/02/22  20:05:10  dbg
  222  *      Pick up changes from mainline:
  223  * 
  224  *              Fix placeholder page handling in memory_object_data_provided.
  225  *              Old code was calling zalloc while holding a lock.
  226  *              [89/12/13  19:58:28  dlb]
  227  * 
  228  *              Don't clear busy flags on any pages in memory_object_lock_page
  229  *              (from memory_object_lock_request)!!  Implemented by changing
  230  *              PAGE_WAKEUP to not clear busy flag and using PAGE_WAKEUP_DONE
  231  *              when it must be cleared.  See vm/vm_page.h.  With dbg's help.
  232  *              [89/12/13            dlb]
  233  * 
  234  *              Don't activate fictitious pages after freeing them in
  235  *              memory_object_data_{unavailable,error}.  Add missing lock and
  236  *              unlock of page queues when activating pages in same routines.
  237  *              [89/12/11            dlb]
  238  *              Retry lookup after using CLEAN_DIRTY_PAGES in
  239  *              memory_object_lock_request().  Also delete old version of
  240  *              memory_object_data_provided().  From mwyoung.
  241  *              [89/11/17            dlb]
  242  * 
  243  *              Save all page-cleaning operations until it becomes necessary
  244  *              to block in memory_object_lock_request().
  245  *              [89/09/30  18:07:16  mwyoung]
  246  * 
  247  *              Split out a per-page routine for lock requests.
  248  *              [89/08/20  19:47:42  mwyoung]
  249  * 
  250  *              Verify that the user memory used in
  251  *              memory_object_data_provided() is valid, even if it won't
  252  *              be used to fill a page request.
  253  *              [89/08/01  14:58:21  mwyoung]
  254  * 
  255  *              Make memory_object_data_provided atomic, interruptible,
  256  *              and serializable when handling multiple pages.  Add
  257  *              optimization for single-page operations.
  258  *              [89/05/12  16:06:13  mwyoung]
  259  * 
   260  *              Simplify lock/clean/flush sequences in memory_object_lock_request.
  261  *              Correct error in call to pmap_page_protect() there.
  262  *              Make error/absent pages eligible for pageout.
  263  *              [89/04/22            mwyoung]
  264  * 
  265  * Revision 2.2  89/09/08  11:28:10  dbg
  266  *      Pass keep_wired argument to vm_move.  Disabled
  267  *      host_set_memory_object_default.
  268  *      [89/07/14            dbg]
  269  * 
  270  * 28-Apr-89  David Golub (dbg) at Carnegie-Mellon University
  271  *      Clean up fast_pager_data option.  Remove pager_data_provided_inline.
  272  *
  273  * Revision 2.18  89/04/23  13:25:30  gm0w
  274  *      Fixed typo to pmap_page_protect in memory_object_lock_request().
  275  *      [89/04/23            gm0w]
  276  * 
  277  * Revision 2.17  89/04/22  15:35:09  gm0w
  278  *      Commented out check/uprintf if memory_object_data_unavailable
  279  *      was called on a permanent object.
  280  *      [89/04/14            gm0w]
  281  * 
  282  * Revision 2.16  89/04/18  21:24:24  mwyoung
  283  *      Recent history:
  284  *              Add vm_set_default_memory_manager(),
  285  *               memory_object_get_attributes().
  286  *              Whenever asked to clean a page, use pmap_is_modified, even
  287  *               if not flushing the data.
  288  *              Handle fictitious pages when accepting data (or error or
  289  *               unavailable).
  290  *              Avoid waiting in memory_object_data_error().
  291  * 
  292  *      Previous history has been integrated into the documentation below.
  293  *      [89/04/18            mwyoung]
  294  * 
  295  */
  296 /*
  297  *      File:   vm/memory_object.c
  298  *      Author: Michael Wayne Young
  299  *
  300  *      External memory management interface control functions.
  301  */
  302 
  303 /*
  304  *      Interface dependencies:
  305  */
  306 
  307 #include <mach/std_types.h>     /* For pointer_t */
  308 #include <mach/mach_types.h>
  309 
  310 #include <mach/kern_return.h>
  311 #include <vm/vm_object.h>
  312 #include <mach/memory_object.h>
  313 #include <mach/memory_object_user.h>
  314 #include <mach/memory_object_default.h>
  315 #include <mach/boolean.h>
  316 #include <mach/vm_prot.h>
  317 #include <mach/message.h>
  318 
  319 /*
  320  *      Implementation dependencies:
  321  */
  322 #include <vm/memory_object.h>
  323 #include <vm/vm_page.h>
  324 #include <vm/vm_pageout.h>
  325 #include <vm/pmap.h>            /* For copy_to_phys, pmap_clear_modify */
  326 #include <kern/thread.h>                /* For current_thread() */
  327 #include <kern/host.h>
  328 #include <vm/vm_kern.h>         /* For kernel_map, vm_move */
  329 #include <vm/vm_map.h>          /* For vm_map_pageable */
  330 #include <ipc/ipc_port.h>
  331 
  332 #include <norma_vm.h>
  333 #include <norma_ipc.h>
  334 #if     NORMA_VM
  335 #include <norma/xmm_server_rename.h>
  336 #endif  /* NORMA_VM */
  337 #include <mach_pagemap.h>
  338 #if     MACH_PAGEMAP
  339 #include <vm/vm_external.h>
  340 #endif  /* MACH_PAGEMAP */
  341 
  342 typedef int             memory_object_lock_result_t; /* moved from below */
  343 
  344 
  345 ipc_port_t      memory_manager_default = IP_NULL;
  346 decl_simple_lock_data(,memory_manager_default_lock)
  347 
  348 /*
  349  *      Important note:
  350  *              All of these routines gain a reference to the
  351  *              object (first argument) as part of the automatic
  352  *              argument conversion. Explicit deallocation is necessary.
  353  */
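/*
 *      Illustrative sketch (hypothetical routine, not part of this
 *      file): because of the implicit reference noted above, every
 *      entry point must eventually call vm_object_deallocate on its
 *      object argument; only the null-object case returns without one,
 *      since there is no reference to drop.  The name
 *      memory_object_example is a placeholder:
 *
 *              kern_return_t memory_object_example(vm_object_t object)
 *              {
 *                      if (object == VM_OBJECT_NULL)
 *                              return KERN_INVALID_ARGUMENT;
 *                      vm_object_lock(object);
 *                      (operate on the object)
 *                      vm_object_unlock(object);
 *                      vm_object_deallocate(object);
 *                      return KERN_SUCCESS;
 *              }
 */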
  354 
  355 
  356 kern_return_t memory_object_data_supply(
  357         register
  358         vm_object_t             object,
  359         register
  360         vm_offset_t             offset,
  361         vm_map_copy_t           data_copy,
  362         natural_t               data_cnt,
  363         vm_prot_t               lock_value,
  364         boolean_t               precious,
  365         ipc_port_t              reply_to,
  366         mach_msg_type_name_t    reply_to_type)
  367 {
  368         kern_return_t   result = KERN_SUCCESS;
  369         vm_offset_t     error_offset = 0;
  370         register
  371         vm_page_t       m;
  372         register
  373         vm_page_t       data_m;
  374         vm_size_t       original_length;
  375         vm_offset_t     original_offset;
  376         vm_page_t       *page_list;
  377         boolean_t       was_absent;
  378         vm_map_copy_t   orig_copy = data_copy;
  379 
  380         /*
  381          *      Look for bogus arguments
  382          */
  383 
  384         if (object == VM_OBJECT_NULL) {
  385                 return KERN_INVALID_ARGUMENT;
  386         }
  387 
  388         if (lock_value & ~VM_PROT_ALL) {
  389                 vm_object_deallocate(object);
  390                 return KERN_INVALID_ARGUMENT;
  391         }
  392 
  393         if ((data_cnt % PAGE_SIZE) != 0) {
  394             vm_object_deallocate(object);
  395             return KERN_INVALID_ARGUMENT;
  396         }
  397 
  398         /*
  399          *      Adjust the offset from the memory object to the offset
  400          *      within the vm_object.
  401          */
  402 
  403         original_length = data_cnt;
  404         original_offset = offset;
  405 
  406         assert(data_copy->type == VM_MAP_COPY_PAGE_LIST);
  407         page_list = &data_copy->cpy_page_list[0];
  408 
  409         vm_object_lock(object);
  410         vm_object_paging_begin(object);
  411         offset -= object->paging_offset;
  412 
  413         /*
  414          *      Loop over copy stealing pages for pagein.
  415          */
  416 
  417         for (; data_cnt > 0 ; data_cnt -= PAGE_SIZE, offset += PAGE_SIZE) {
  418 
  419                 assert(data_copy->cpy_npages > 0);
  420                 data_m = *page_list;
  421 
  422                 if (data_m == VM_PAGE_NULL || data_m->tabled ||
  423                     data_m->error || data_m->absent || data_m->fictitious) {
  424 
  425                         panic("Data_supply: bad page");
  426                 }
  427 
  428                 /*
  429                  *      Look up target page and check its state.
  430                  */
  431 
  432 retry_lookup:
   433                 m = vm_page_lookup(object, offset);
  434                 if (m == VM_PAGE_NULL) {
  435                     was_absent = FALSE;
  436                 }
  437                 else {
  438                     if (m->absent && m->busy) {
  439 
  440                         /*
  441                          *      Page was requested.  Free the busy
  442                          *      page waiting for it.  Insertion
  443                          *      of new page happens below.
  444                          */
  445 
  446                         VM_PAGE_FREE(m);
  447                         was_absent = TRUE;
  448                     }
  449                     else {
  450 
  451                         /*
  452                          *      Have to wait for page that is busy and
  453                          *      not absent.  This is probably going to
  454                          *      be an error, but go back and check.
  455                          */
  456                         if (m->busy) {
  457                                 PAGE_ASSERT_WAIT(m, FALSE);
  458                                 vm_object_unlock(object);
  459                                 thread_block(CONTINUE_NULL);
  460                                 vm_object_lock(object);
  461                                 goto retry_lookup;
  462                         }
  463 
  464                         /*
  465                          *      Page already present; error.
  466                          *      This is an error if data is precious.
  467                          */
  468                         result = KERN_MEMORY_PRESENT;
  469                         error_offset = offset + object->paging_offset;
  470 
  471                         break;
  472                     }
  473                 }
  474 
  475                 /*
  476                  *      Ok to pagein page.  Target object now has no page
  477                  *      at offset.  Set the page parameters, then drop
  478                  *      in new page and set up pageout state.  Object is
  479                  *      still locked here.
  480                  *
  481                  *      Must clear busy bit in page before inserting it.
  482                  *      Ok to skip wakeup logic because nobody else
  483                  *      can possibly know about this page.
  484                  */
  485 
  486                 data_m->busy = FALSE;
  487                 data_m->dirty = FALSE;
  488                 pmap_clear_modify(data_m->phys_addr);
  489 
  490                 data_m->page_lock = lock_value;
  491                 data_m->unlock_request = VM_PROT_NONE;
  492                 data_m->precious = precious;
  493 
  494                 vm_page_lock_queues();
  495                 vm_page_insert(data_m, object, offset);
  496 
  497                 if (was_absent)
  498                         vm_page_activate(data_m);
  499                 else
  500                         vm_page_deactivate(data_m);
  501 
  502                 vm_page_unlock_queues();
  503 
  504                 /*
  505                  *      Null out this page list entry, and advance to next
  506                  *      page.
  507                  */
  508 
  509                 *page_list++ = VM_PAGE_NULL;
  510 
  511                 if (--(data_copy->cpy_npages) == 0 &&
  512                     vm_map_copy_has_cont(data_copy)) {
  513                         vm_map_copy_t   new_copy;
  514 
  515                         vm_object_unlock(object);
  516 
  517                         vm_map_copy_invoke_cont(data_copy, &new_copy, &result);
  518 
  519                         if (result == KERN_SUCCESS) {
  520 
  521                             /*
  522                              *  Consume on success requires that
  523                              *  we keep the original vm_map_copy
  524                              *  around in case something fails.
  525                              *  Free the old copy if it's not the original
  526                              */
  527                             if (data_copy != orig_copy) {
  528                                 vm_map_copy_discard(data_copy);
  529                             }
  530 
  531                             if ((data_copy = new_copy) != VM_MAP_COPY_NULL)
  532                                 page_list = &data_copy->cpy_page_list[0];
  533 
  534                             vm_object_lock(object);
  535                         }
  536                         else {
  537                             vm_object_lock(object);
  538                             error_offset = offset + object->paging_offset +
  539                                                 PAGE_SIZE;
  540                             break;
  541                         }
  542                 }
  543         }
  544 
  545         /*
  546          *      Send reply if one was requested.
  547          */
  548         vm_object_paging_end(object);
  549         vm_object_unlock(object);
  550 
  551         if (vm_map_copy_has_cont(data_copy))
  552                 vm_map_copy_abort_cont(data_copy);
  553 
  554         if (IP_VALID(reply_to)) {
  555                 memory_object_supply_completed(
  556                                 reply_to, reply_to_type,
  557                                 object->pager_request,
  558                                 original_offset,
  559                                 original_length,
  560                                 result,
  561                                 error_offset);
  562         }
  563 
  564         vm_object_deallocate(object);
  565 
  566         /*
   567          *      Consume on success:  The final data copy must be
   568          *      discarded if it is not the original.  The original
  569          *      gets discarded only if this routine succeeds.
  570          */
  571         if (data_copy != orig_copy)
  572                 vm_map_copy_discard(data_copy);
  573         if (result == KERN_SUCCESS)
  574                 vm_map_copy_discard(orig_copy);
  575 
  576 
  577         return result;
  578 }
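/*
 *      Usage sketch (hypothetical caller, for illustration only): a
 *      memory manager answering a page fault supplies data with a
 *      page-list copy object.  Under the consume-on-success rule
 *      implemented above, the original copy still belongs to the
 *      caller if the call fails, so discarding it is then the
 *      caller's responsibility:
 *
 *              kern_return_t kr;
 *
 *              kr = memory_object_data_supply(object, offset, copy,
 *                              PAGE_SIZE, VM_PROT_NONE, FALSE,
 *                              IP_NULL, 0);
 *              if (kr != KERN_SUCCESS)
 *                      vm_map_copy_discard(copy);
 */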
  579 
  580 #if     !NORMA_VM
  581 /*
  582  *      [ obsolete ]
  583  *      Old version of memory_object_data_supply.
  584  *      Does not allow precious pages or reply port.
  585  *
  586  *      If successful, destroys the map copy object.
  587  */
  588 kern_return_t memory_object_data_provided(
  589         vm_object_t     object,
  590         vm_offset_t     offset,
  591         pointer_t       data,
  592         natural_t       data_cnt,
  593         vm_prot_t       lock_value)
  594 {
  595         return memory_object_data_supply(object, offset, (vm_map_copy_t) data,
  596                                          data_cnt, lock_value, FALSE, IP_NULL,
  597                                          0);
  598 }
  599 #endif  /* !NORMA_VM */
  600 
  601 kern_return_t memory_object_data_error(
  602         vm_object_t     object,
  603         vm_offset_t     offset,
  604         vm_size_t       size,
  605         kern_return_t   error_value)
  606 {
  607         if (object == VM_OBJECT_NULL)
  608                 return KERN_INVALID_ARGUMENT;
  609 
  610         if (size != round_page(size))
  611                 return KERN_INVALID_ARGUMENT;
  612 
  613 #ifdef  lint
  614         /* Error value is ignored at this time */
  615         error_value++;
  616 #endif
  617 
  618         vm_object_lock(object);
  619         offset -= object->paging_offset;
  620 
  621         while (size != 0) {
  622                 register vm_page_t m;
  623 
  624                 m = vm_page_lookup(object, offset);
  625                 if ((m != VM_PAGE_NULL) && m->busy && m->absent) {
  626                         m->error = TRUE;
  627                         m->absent = FALSE;
  628                         vm_object_absent_release(object);
  629 
  630                         PAGE_WAKEUP_DONE(m);
  631 
  632                         vm_page_lock_queues();
  633                         vm_page_activate(m);
  634                         vm_page_unlock_queues();
  635                 }
  636 
  637                 size -= PAGE_SIZE;
  638                 offset += PAGE_SIZE;
  639          }
  640         vm_object_unlock(object);
  641 
  642         vm_object_deallocate(object);
  643         return KERN_SUCCESS;
  644 }
  645 
  646 kern_return_t memory_object_data_unavailable(
  647         vm_object_t     object,
  648         vm_offset_t     offset,
  649         vm_size_t       size)
  650 {
  651 #if     MACH_PAGEMAP
  652         vm_external_t   existence_info = VM_EXTERNAL_NULL;
  653 #endif  /* MACH_PAGEMAP */
  654 
  655         if (object == VM_OBJECT_NULL)
  656                 return KERN_INVALID_ARGUMENT;
  657 
  658         if (size != round_page(size))
  659                 return KERN_INVALID_ARGUMENT;
  660 
  661 #if     MACH_PAGEMAP
  662         if ((offset == 0) && (size > VM_EXTERNAL_LARGE_SIZE) && 
  663             (object->existence_info == VM_EXTERNAL_NULL)) {
  664                 existence_info = vm_external_create(VM_EXTERNAL_SMALL_SIZE);
  665         }
  666 #endif  /* MACH_PAGEMAP */
  667 
  668         vm_object_lock(object);
  669 #if     MACH_PAGEMAP
  670         if (existence_info != VM_EXTERNAL_NULL) {
  671                 object->existence_info = existence_info;
  672         }
  673         if ((offset == 0) && (size > VM_EXTERNAL_LARGE_SIZE)) {
  674                 vm_object_unlock(object);
  675                 vm_object_deallocate(object);
  676                 return KERN_SUCCESS;
  677         }
  678 #endif  /* MACH_PAGEMAP */
  679         offset -= object->paging_offset;
  680 
  681         while (size != 0) {
  682                 register vm_page_t m;
  683 
  684                 /*
  685                  *      We're looking for pages that are both busy and
  686                  *      absent (waiting to be filled), converting them
  687                  *      to just absent.
  688                  *
  689                  *      Pages that are just busy can be ignored entirely.
  690                  */
  691 
  692                 m = vm_page_lookup(object, offset);
  693                 if ((m != VM_PAGE_NULL) && m->busy && m->absent) {
  694                         PAGE_WAKEUP_DONE(m);
  695 
  696                         vm_page_lock_queues();
  697                         vm_page_activate(m);
  698                         vm_page_unlock_queues();
  699                 }
  700                 size -= PAGE_SIZE;
  701                 offset += PAGE_SIZE;
  702         }
  703 
  704         vm_object_unlock(object);
  705 
  706         vm_object_deallocate(object);
  707         return KERN_SUCCESS;
  708 }
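/*
 *      Pager-side sketch (illustrative, not from this file): when a
 *      memory manager cannot deliver requested data, it chooses
 *      between the two routines above.  data_unavailable leaves the
 *      waiting pages absent so the fault handler supplies zero-filled
 *      memory, while data_error marks them as being in error:
 *
 *              if (range was never written to backing store)
 *                      memory_object_data_unavailable(object, offset, size);
 *              else
 *                      memory_object_data_error(object, offset, size,
 *                                               KERN_FAILURE);
 */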
  709 
  710 /*
  711  *      Routine:        memory_object_lock_page
  712  *
  713  *      Description:
  714  *              Perform the appropriate lock operations on the
  715  *              given page.  See the description of
  716  *              "memory_object_lock_request" for the meanings
  717  *              of the arguments.
  718  *
  719  *              Returns an indication that the operation
  720  *              completed, blocked, or that the page must
  721  *              be cleaned.
  722  */
  723 
  724 #define MEMORY_OBJECT_LOCK_RESULT_DONE          0
  725 #define MEMORY_OBJECT_LOCK_RESULT_MUST_BLOCK    1
  726 #define MEMORY_OBJECT_LOCK_RESULT_MUST_CLEAN    2
  727 #define MEMORY_OBJECT_LOCK_RESULT_MUST_RETURN   3
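/*
 *      Schematic use of the result codes (see the switch statement in
 *      memory_object_lock_request below for the real consumer):
 *
 *              switch (memory_object_lock_page(m, should_return,
 *                                              should_flush, prot)) {
 *              case MEMORY_OBJECT_LOCK_RESULT_DONE:
 *                      (nothing further; move on to the next page)
 *              case MEMORY_OBJECT_LOCK_RESULT_MUST_BLOCK:
 *                      (wait for the busy page, then retry the lookup)
 *              case MEMORY_OBJECT_LOCK_RESULT_MUST_CLEAN:
 *              case MEMORY_OBJECT_LOCK_RESULT_MUST_RETURN:
 *                      (gather the page so it can be sent to the manager)
 *              }
 */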
  728 
  729 memory_object_lock_result_t memory_object_lock_page(
  730         vm_page_t               m,
  731         memory_object_return_t  should_return,
  732         boolean_t               should_flush,
  733         vm_prot_t               prot)
  734 {
  735         /*
  736          *      Don't worry about pages for which the kernel
  737          *      does not have any data.
  738          */
  739 
  740         if (m->absent)
  741                 return MEMORY_OBJECT_LOCK_RESULT_DONE;
  742 
  743         /*
  744          *      If we cannot change access to the page,
  745          *      either because a mapping is in progress
  746          *      (busy page) or because a mapping has been
  747          *      wired, then give up.
  748          */
  749 
  750         if (m->busy)
  751                 return MEMORY_OBJECT_LOCK_RESULT_MUST_BLOCK;
  752 
  753         assert(!m->fictitious);
  754 
  755         if (m->wire_count != 0) {
  756                 /*
  757                  *      If no change would take place
  758                  *      anyway, return successfully.
  759                  *
  760                  *      No change means:
  761                  *              Not flushing AND
  762                  *              No change to page lock [2 checks]  AND
  763                  *              Don't need to send page to manager
  764                  *
  765                  *      Don't need to send page to manager means:
  766                  *              No clean or return request OR (
  767                  *                  Page is not dirty [2 checks] AND (
  768                  *                      Page is not precious OR
  769                  *                      No request to return precious pages ))
  770                  *                    
  771                  *      Now isn't that straightforward and obvious ?? ;-)
  772                  *
  773                  * XXX  This doesn't handle sending a copy of a wired
  774                  * XXX  page to the pager, but that will require some
  775                  * XXX  significant surgery.
  776                  */
  777 
  778                 if (!should_flush &&
  779                     ((m->page_lock == prot) || (prot == VM_PROT_NO_CHANGE)) &&
  780                     ((should_return == MEMORY_OBJECT_RETURN_NONE) ||
  781                      (!m->dirty && !pmap_is_modified(m->phys_addr) &&
  782                       (!m->precious ||
  783                        should_return != MEMORY_OBJECT_RETURN_ALL)))) {
  784                         /*
  785                          *      Restart page unlock requests,
  786                          *      even though no change took place.
  787                          *      [Memory managers may be expecting
  788                          *      to see new requests.]
  789                          */
  790                         m->unlock_request = VM_PROT_NONE;
  791                         PAGE_WAKEUP(m);
  792 
  793                         return MEMORY_OBJECT_LOCK_RESULT_DONE;
  794                 }
  795 
  796                 return MEMORY_OBJECT_LOCK_RESULT_MUST_BLOCK;
  797         }
  798 
  799         /*
  800          *      If the page is to be flushed, allow
  801          *      that to be done as part of the protection.
  802          */
  803 
  804         if (should_flush)
  805                 prot = VM_PROT_ALL;
  806 
  807         /*
  808          *      Set the page lock.
  809          *
  810          *      If we are decreasing permission, do it now;
  811          *      let the fault handler take care of increases
  812          *      (pmap_page_protect may not increase protection).
  813          */
  814 
  815         if (prot != VM_PROT_NO_CHANGE) {
  816                 if ((m->page_lock ^ prot) & prot) {
  817                         pmap_page_protect(m->phys_addr, VM_PROT_ALL & ~prot);
  818                 }
  819                 m->page_lock = prot;
  820 
  821                 /*
  822                  *      Restart any past unlock requests, even if no
  823                  *      change resulted.  If the manager explicitly
  824                  *      requested no protection change, then it is assumed
  825                  *      to be remembering past requests.
  826                  */
  827 
  828                 m->unlock_request = VM_PROT_NONE;
  829                 PAGE_WAKEUP(m);
  830         }
  831 
  832         /*
  833          *      Handle cleaning.
  834          */
  835 
  836         if (should_return != MEMORY_OBJECT_RETURN_NONE) {
  837                 /*
  838                  *      Check whether the page is dirty.  If
  839                  *      write permission has not been removed,
  840                  *      this may have unpredictable results.
  841                  */
  842 
  843                 if (!m->dirty)
  844                         m->dirty = pmap_is_modified(m->phys_addr);
  845 
  846                 if (m->dirty || (m->precious &&
  847                                  should_return == MEMORY_OBJECT_RETURN_ALL)) {
  848                         /*
  849                          *      If we weren't planning
  850                          *      to flush the page anyway,
  851                          *      we may need to remove the
  852                          *      page from the pageout
  853                          *      system and from physical
  854                          *      maps now.
  855                          */
  856 
  857                         vm_page_lock_queues();
  858                         VM_PAGE_QUEUES_REMOVE(m);
  859                         vm_page_unlock_queues();
  860 
  861                         if (!should_flush)
  862                                 pmap_page_protect(m->phys_addr,
  863                                                 VM_PROT_NONE);
  864 
  865                         /*
  866                          *      Cleaning a page will cause
  867                          *      it to be flushed.
  868                          */
  869 
  870                         if (m->dirty)
  871                                 return MEMORY_OBJECT_LOCK_RESULT_MUST_CLEAN;
  872                         else
  873                                 return MEMORY_OBJECT_LOCK_RESULT_MUST_RETURN;
  874                 }
  875         }
  876 
  877         /*
  878          *      Handle flushing
  879          */
  880 
  881         if (should_flush) {
  882                 VM_PAGE_FREE(m);
  883         } else {
  884                 extern boolean_t vm_page_deactivate_hint;
  885 
  886                 /*
  887                  *      XXX Make clean but not flush a paging hint,
  888                  *      and deactivate the pages.  This is a hack
  889                  *      because it overloads flush/clean with
  890                  *      implementation-dependent meaning.  This only
  891                  *      happens to pages that are already clean.
  892                  */
  893 
  894                 if (vm_page_deactivate_hint &&
  895                     (should_return != MEMORY_OBJECT_RETURN_NONE)) {
  896                         vm_page_lock_queues();
  897                         vm_page_deactivate(m);
  898                         vm_page_unlock_queues();
  899                 }
  900         }
  901 
  902         return MEMORY_OBJECT_LOCK_RESULT_DONE;
  903 }
  904 
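/*
 *      Routine:        pageout_pages
 *      Purpose:
 *              Helper for memory_object_lock_request (invoked through
 *              the PAGEOUT_PAGES macro below): wrap the pages that were
 *              accumulated in new_object into a vm_map_copy and send
 *              them to the memory manager, using the old data_write
 *              protocol or the newer data_return protocol as the object
 *              requires, then free any holding pages left behind by
 *              vm_pageout_setup.
 */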
  905 void pageout_pages(
  906         vm_object_t     object,
  907         vm_offset_t     paging_offset,
  908         vm_object_t     new_object,
  909         vm_offset_t     new_offset,
  910         memory_object_lock_result_t pageout_action,
  911         boolean_t       should_flush,
  912         vm_page_t       holding_pages[])
  913 {
  914         vm_map_copy_t           copy;
  915         register int            i;
  916         register vm_page_t      hp;
  917 
  918         vm_object_unlock(object);
  919 
  920         (void) vm_map_copyin_object(new_object, 0, new_offset, &copy);
  921 
  922         if (object->use_old_pageout) {
  923             assert(pageout_action == MEMORY_OBJECT_LOCK_RESULT_MUST_CLEAN);
  924                 (void) memory_object_data_write(
  925                         object->pager,
  926                         object->pager_request,
  927                         paging_offset,
  928                         (pointer_t) copy,
  929                         new_offset);
  930         }
  931         else {
  932                 (void) memory_object_data_return(
  933                         object->pager,
  934                         object->pager_request,
  935                         paging_offset,
  936                         (pointer_t) copy,
  937                         new_offset,
   938                         (pageout_action == MEMORY_OBJECT_LOCK_RESULT_MUST_CLEAN),
  939                         !should_flush);
  940         }
  941 
  942         vm_object_lock(object);
  943 
  944         for (i = 0; i < atop(new_offset); i++) {
  945             hp = holding_pages[i];
  946             if (hp != VM_PAGE_NULL)
  947                 VM_PAGE_FREE(hp);
  948         }
  949 }
  950 
  951 /*
  952  *      Routine:        memory_object_lock_request [user interface]
  953  *
  954  *      Description:
  955  *              Control use of the data associated with the given
  956  *              memory object.  For each page in the given range,
  957  *              perform the following operations, in order:
  958  *                      1)  restrict access to the page (disallow
  959  *                          forms specified by "prot");
  960  *                      2)  return data to the manager (if "should_return"
  961  *                          is RETURN_DIRTY and the page is dirty, or
  962  *                          "should_return" is RETURN_ALL and the page
  963  *                          is either dirty or precious); and,
  964  *                      3)  flush the cached copy (if "should_flush"
  965  *                          is asserted).
  966  *              The set of pages is defined by a starting offset
  967  *              ("offset") and size ("size").  Only pages with the
  968  *              same page alignment as the starting offset are
  969  *              considered.
  970  *
  971  *              A single acknowledgement is sent (to the "reply_to"
  972  *              port) when these actions are complete.  If successful,
  973  *              the naked send right for reply_to is consumed.
  974  */
  975 
  976 kern_return_t
  977 memory_object_lock_request(
  978         register vm_object_t    object,
  979         register vm_offset_t    offset,
  980         register vm_size_t      size,
  981         memory_object_return_t  should_return,
  982         boolean_t               should_flush,
  983         vm_prot_t               prot,
  984         ipc_port_t              reply_to,
  985         mach_msg_type_name_t    reply_to_type)
  986 {
  987         register vm_page_t      m;
  988         vm_offset_t             original_offset = offset;
  989         vm_size_t               original_size = size;
  990         vm_offset_t             paging_offset = 0;
  991         vm_object_t             new_object = VM_OBJECT_NULL;
  992         vm_offset_t             new_offset = 0;
  993         vm_offset_t             last_offset = offset;
  994         memory_object_lock_result_t page_lock_result;
  995         memory_object_lock_result_t pageout_action = 0; /* '=0' to quiet lint */
  996 
  997 #define DATA_WRITE_MAX  32
  998         vm_page_t               holding_pages[DATA_WRITE_MAX];
  999 
 1000         /*
 1001          *      Check for bogus arguments.
 1002          */
 1003         if (object == VM_OBJECT_NULL ||
 1004                 ((prot & ~VM_PROT_ALL) != 0 && prot != VM_PROT_NO_CHANGE))
 1005             return KERN_INVALID_ARGUMENT;
 1006 
 1007         size = round_page(size);
 1008 
 1009         /*
 1010          *      Lock the object, and acquire a paging reference to
 1011          *      prevent the memory_object and control ports from
 1012          *      being destroyed.
 1013          */
 1014 
 1015         vm_object_lock(object);
 1016         vm_object_paging_begin(object);
 1017         offset -= object->paging_offset;
 1018 
 1019         /*
 1020          *      To avoid blocking while scanning for pages, save
 1021          *      dirty pages to be cleaned all at once.
 1022          *
 1023          *      XXXO A similar strategy could be used to limit the
 1024          *      number of times that a scan must be restarted for
 1025          *      other reasons.  Those pages that would require blocking
 1026          *      could be temporarily collected in another list, or
 1027          *      their offsets could be recorded in a small array.
 1028          */
 1029 
 1030         /*
 1031          * XXX  NOTE: May want to consider converting this to a page list
 1032          * XXX  vm_map_copy interface.  Need to understand object
 1033          * XXX  coalescing implications before doing so.
 1034          */
 1035 
 1036 #define PAGEOUT_PAGES                                                   \
 1037 MACRO_BEGIN                                                             \
 1038         pageout_pages(object, paging_offset, new_object, new_offset,    \
 1039                       pageout_action, should_flush, holding_pages);     \
 1040         new_object = VM_OBJECT_NULL;                                    \
 1041 MACRO_END
 1042 
 1043         for (;
 1044              size != 0;
 1045              size -= PAGE_SIZE, offset += PAGE_SIZE)
 1046         {
 1047             /*
 1048              *  Limit the number of pages to be cleaned at once.
 1049              */
 1050             if (new_object != VM_OBJECT_NULL &&
 1051                     new_offset >= PAGE_SIZE * DATA_WRITE_MAX)
 1052             {
 1053                 PAGEOUT_PAGES;
 1054             }
 1055 
 1056             while ((m = vm_page_lookup(object, offset)) != VM_PAGE_NULL) {
 1057                 switch ((page_lock_result = memory_object_lock_page(m,
 1058                                         should_return,
 1059                                         should_flush,
 1060                                         prot)))
 1061                 {
 1062                     case MEMORY_OBJECT_LOCK_RESULT_DONE:
 1063                         /*
 1064                          *      End of a cluster of dirty pages.
 1065                          */
 1066                         if (new_object != VM_OBJECT_NULL) {
 1067                             PAGEOUT_PAGES;
 1068                             continue;
 1069                         }
 1070                         break;
 1071 
 1072                     case MEMORY_OBJECT_LOCK_RESULT_MUST_BLOCK:
 1073                         /*
 1074                          *      Since it is necessary to block,
 1075                          *      clean any dirty pages now.
 1076                          */
 1077                         if (new_object != VM_OBJECT_NULL) {
 1078                             PAGEOUT_PAGES;
 1079                             continue;
 1080                         }
 1081 
 1082                         PAGE_ASSERT_WAIT(m, FALSE);
 1083                         vm_object_unlock(object);
 1084                         thread_block(CONTINUE_NULL);
 1085                         vm_object_lock(object);
 1086                         continue;
 1087 
 1088                     case MEMORY_OBJECT_LOCK_RESULT_MUST_CLEAN:
 1089                     case MEMORY_OBJECT_LOCK_RESULT_MUST_RETURN:
 1090                         /*
 1091                          * The clean and return cases are similar.
 1092                          *
 1093                          * Mark the page busy since we unlock the
 1094                          * object below.
 1095                          */
 1096                         m->busy = TRUE;
 1097 
 1098                         /*
 1099                          * if this would form a discontiguous block,
 1100                          * clean the old pages and start anew.
 1101                          *
 1102                          * NOTE: The first time through here, new_object
 1103                          * is null, hiding the fact that pageout_action
 1104                          * is not initialized.
 1105                          */
 1106                         if (new_object != VM_OBJECT_NULL &&
 1107                             (last_offset != offset ||
 1108                              pageout_action != page_lock_result)) {
 1109                                 PAGEOUT_PAGES;
 1110                         }
 1111 
 1112                         vm_object_unlock(object);
 1113 
 1114                         /*
 1115                          *      If we have not already allocated an object
 1116                          *      for a range of pages to be written, do so
 1117                          *      now.
 1118                          */
 1119                         if (new_object == VM_OBJECT_NULL) {
 1120                             new_object = vm_object_allocate(original_size);
 1121                             new_offset = 0;
 1122                             paging_offset = m->offset +
 1123                                         object->paging_offset;
 1124                             pageout_action = page_lock_result;
 1125                         }
 1126 
 1127                         /*
 1128                          *      Move or copy the dirty page into the
 1129                          *      new object.
 1130                          */
 1131                         m = vm_pageout_setup(m,
 1132                                         m->offset + object->paging_offset,
 1133                                         new_object,
 1134                                         new_offset,
 1135                                         should_flush);
 1136 
 1137                         /*
 1138                          *      Save the holding page if there is one.
 1139                          */
 1140                         holding_pages[atop(new_offset)] = m;
 1141                         new_offset += PAGE_SIZE;
 1142                         last_offset = offset + PAGE_SIZE;
 1143 
 1144                         vm_object_lock(object);
 1145                         break;
 1146                 }
 1147                 break;
 1148             }
 1149         }
 1150 
 1151         /*
 1152          *      We have completed the scan for applicable pages.
 1153          *      Clean any pages that have been saved.
 1154          */
 1155         if (new_object != VM_OBJECT_NULL) {
 1156             PAGEOUT_PAGES;
 1157         }
 1158 
 1159         if (IP_VALID(reply_to)) {
 1160                 vm_object_unlock(object);
 1161 
 1162                 /* consumes our naked send-once/send right for reply_to */
 1163                 (void) memory_object_lock_completed(reply_to, reply_to_type,
 1164                         object->pager_request, original_offset, original_size);
 1165 
 1166                 vm_object_lock(object);
 1167         }
 1168 
 1169         vm_object_paging_end(object);
 1170         vm_object_unlock(object);
 1171         vm_object_deallocate(object);
 1172 
 1173         return KERN_SUCCESS;
 1174 }
 1175 
 1176 #if     !NORMA_VM
 1177 /*
 1178  *      Old version of memory_object_lock_request.  
 1179  */
 1180 kern_return_t
 1181 xxx_memory_object_lock_request(
 1182         register vm_object_t    object,
 1183         register vm_offset_t    offset,
 1184         register vm_size_t      size,
 1185         boolean_t               should_clean,
 1186         boolean_t               should_flush,
 1187         vm_prot_t               prot,
 1188         ipc_port_t              reply_to,
 1189         mach_msg_type_name_t    reply_to_type)
 1190 {
 1191         register int            should_return;
 1192 
 1193         if (should_clean)
 1194                 should_return = MEMORY_OBJECT_RETURN_DIRTY;
 1195         else
 1196                 should_return = MEMORY_OBJECT_RETURN_NONE;
 1197 
 1198         return memory_object_lock_request(object,offset,size,
 1199                       should_return, should_flush, prot,
 1200                       reply_to, reply_to_type);
 1201 }
 1202 #endif  /* !NORMA_VM */
  1203
 1204 kern_return_t
 1205 memory_object_set_attributes_common(
 1206         vm_object_t     object,
 1207         boolean_t       object_ready,
 1208         boolean_t       may_cache,
 1209         memory_object_copy_strategy_t copy_strategy,
 1210         boolean_t use_old_pageout)
 1211 {
 1212         if (object == VM_OBJECT_NULL)
 1213                 return KERN_INVALID_ARGUMENT;
 1214 
 1215         /*
 1216          *      Verify the attributes of importance
 1217          */
 1218 
 1219         switch(copy_strategy) {
 1220                 case MEMORY_OBJECT_COPY_NONE:
 1221                 case MEMORY_OBJECT_COPY_CALL:
 1222                 case MEMORY_OBJECT_COPY_DELAY:
 1223                 case MEMORY_OBJECT_COPY_TEMPORARY:
 1224                         break;
 1225                 default:
 1226                         vm_object_deallocate(object);
 1227                         return KERN_INVALID_ARGUMENT;
 1228         }
 1229 
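        /* Canonicalize the boolean arguments to exactly TRUE or FALSE. */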
 1230         if (object_ready)
 1231                 object_ready = TRUE;
 1232         if (may_cache)
 1233                 may_cache = TRUE;
 1234 
 1235         vm_object_lock(object);
 1236 
 1237         /*
 1238          *      Wake up anyone waiting for the ready attribute
 1239          *      to become asserted.
 1240          */
 1241 
 1242         if (object_ready && !object->pager_ready) {
 1243                 object->use_old_pageout = use_old_pageout;
 1244                 vm_object_wakeup(object, VM_OBJECT_EVENT_PAGER_READY);
 1245         }
 1246 
 1247         /*
 1248          *      Copy the attributes
 1249          */
 1250 
 1251         object->can_persist = may_cache;
 1252         object->pager_ready = object_ready;
 1253         if (copy_strategy == MEMORY_OBJECT_COPY_TEMPORARY) {
 1254                 object->temporary = TRUE;
 1255         } else {
 1256                 object->copy_strategy = copy_strategy;
 1257         }
 1258 
 1259         vm_object_unlock(object);
 1260 
 1261         vm_object_deallocate(object);
 1262 
 1263         return KERN_SUCCESS;
 1264 }
 1265 
 1266 #if     !NORMA_VM
 1267 
 1268 /*
 1269  * XXX  rpd claims that reply_to could be obviated in favor of a client
 1270  * XXX  stub that made change_attributes an RPC.  Need investigation.
 1271  */
 1272 
 1273 kern_return_t   memory_object_change_attributes(
 1274         vm_object_t     object,
 1275         boolean_t       may_cache,
 1276         memory_object_copy_strategy_t copy_strategy,
 1277         ipc_port_t              reply_to,
 1278         mach_msg_type_name_t    reply_to_type)
 1279 {
 1280         kern_return_t   result;
 1281 
 1282         /*
 1283          *      Do the work and throw away our object reference.  It
 1284          *      is important that the object reference be deallocated
 1285          *      BEFORE sending the reply.  The whole point of the reply
 1286          *      is that it shows up after the terminate message that
 1287          *      may be generated by setting the object uncacheable.
 1288          *
 1289          * XXX  may_cache may become a tri-valued variable to handle
 1290          * XXX  uncache if not in use.
 1291          */
 1292         result = memory_object_set_attributes_common(object, TRUE,
 1293                                                      may_cache, copy_strategy,
 1294                                                      FALSE);
 1295 
 1296         if (IP_VALID(reply_to)) {
 1297 
 1298                 /* consumes our naked send-once/send right for reply_to */
 1299                 (void) memory_object_change_completed(reply_to, reply_to_type,
 1300                         may_cache, copy_strategy);
 1301 
 1302         }
 1303 
 1304         return result;
 1305 }
 1306 
 1307 kern_return_t
 1308 memory_object_set_attributes(
 1309         vm_object_t     object,
 1310         boolean_t       object_ready,
 1311         boolean_t       may_cache,
 1312         memory_object_copy_strategy_t copy_strategy)
 1313 {
 1314         return memory_object_set_attributes_common(object, object_ready,
 1315                                                    may_cache, copy_strategy,
 1316                                                    TRUE);
 1317 }
 1318 
 1319 kern_return_t   memory_object_ready(
 1320         vm_object_t     object,
 1321         boolean_t       may_cache,
 1322         memory_object_copy_strategy_t copy_strategy)
 1323 {
 1324         return memory_object_set_attributes_common(object, TRUE,
 1325                                                    may_cache, copy_strategy,
 1326                                                    FALSE);
 1327 }
 1328 #endif  /* !NORMA_VM */
 1329 
 1330 kern_return_t   memory_object_get_attributes(
 1331         vm_object_t     object,
 1332         boolean_t       *object_ready,
 1333         boolean_t       *may_cache,
 1334         memory_object_copy_strategy_t *copy_strategy)
 1335 {
 1336         if (object == VM_OBJECT_NULL)
 1337                 return KERN_INVALID_ARGUMENT;
 1338 
 1339         vm_object_lock(object);
 1340         *may_cache = object->can_persist;
 1341         *object_ready = object->pager_ready;
 1342         *copy_strategy = object->copy_strategy;
 1343         vm_object_unlock(object);
 1344 
 1345         vm_object_deallocate(object);
 1346 
 1347         return KERN_SUCCESS;
 1348 }
 1349 
 1350 /*
 1351  *      If successful, consumes the supplied naked send right.
 1352  */
 1353 kern_return_t   vm_set_default_memory_manager(
 1354         host_t          host,
 1355         ipc_port_t      *default_manager)
 1356 {
 1357         ipc_port_t current_manager;
 1358         ipc_port_t new_manager;
 1359         ipc_port_t returned_manager;
 1360 
 1361         if (host == HOST_NULL)
 1362                 return KERN_INVALID_HOST;
 1363 
 1364         new_manager = *default_manager;
 1365         simple_lock(&memory_manager_default_lock);
 1366         current_manager = memory_manager_default;
 1367 
 1368         if (new_manager == IP_NULL) {
 1369                 /*
 1370                  *      Retrieve the current value.
 1371                  */
 1372 
 1373                 returned_manager = ipc_port_copy_send(current_manager);
 1374         } else {
 1375                 /*
 1376                  *      Retrieve the current value,
 1377                  *      and replace it with the supplied value.
 1378                  *      We consume the supplied naked send right.
 1379                  */
 1380 
 1381                 returned_manager = current_manager;
 1382                 memory_manager_default = new_manager;
 1383 
 1384                 /*
 1385                  *      In case anyone's been waiting for a memory
 1386                  *      manager to be established, wake them up.
 1387                  */
 1388 
 1389                 thread_wakeup((event_t) &memory_manager_default);
 1390         }
 1391 
 1392         simple_unlock(&memory_manager_default_lock);
 1393 
 1394         *default_manager = returned_manager;
 1395         return KERN_SUCCESS;
 1396 }
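/*
 *      Usage sketch (illustrative; "host" stands for the caller's host
 *      argument): passing IP_NULL queries the current default without
 *      changing it, while passing a send right installs a new default
 *      and hands the previous value back through the same pointer:
 *
 *              ipc_port_t manager = IP_NULL;
 *
 *              (void) vm_set_default_memory_manager(host, &manager);
 *              (manager now holds a send right for the current default
 *               memory manager, or IP_NULL if none is established)
 */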
 1397 
 1398 /*
 1399  *      Routine:        memory_manager_default_reference
 1400  *      Purpose:
 1401  *              Returns a naked send right for the default
 1402  *              memory manager.  The returned right is always
 1403  *              valid (not IP_NULL or IP_DEAD).
 1404  */
 1405 
 1406 ipc_port_t      memory_manager_default_reference(void)
 1407 {
 1408         ipc_port_t current_manager;
 1409 
 1410         simple_lock(&memory_manager_default_lock);
 1411 
 1412         while (current_manager = ipc_port_copy_send(memory_manager_default),
 1413                !IP_VALID(current_manager)) {
 1414                 thread_sleep((event_t) &memory_manager_default,
 1415                              simple_lock_addr(memory_manager_default_lock),
 1416                              FALSE);
 1417                 simple_lock(&memory_manager_default_lock);
 1418         }
 1419 
 1420         simple_unlock(&memory_manager_default_lock);
 1421 
 1422         return current_manager;
 1423 }
 1424 
 1425 /*
 1426  *      Routine:        memory_manager_default_port
 1427  *      Purpose:
 1428  *              Returns true if the receiver for the port
 1429  *              is the default memory manager.
 1430  *
 1431  *              This is a hack to let ds_read_done
 1432  *              know when it should keep memory wired.
 1433  */
 1434 
 1435 boolean_t       memory_manager_default_port(
 1436         ipc_port_t port)
 1437 {
 1438         ipc_port_t current;
 1439         boolean_t result;
 1440 
 1441         simple_lock(&memory_manager_default_lock);
 1442         current = memory_manager_default;
 1443         if (IP_VALID(current)) {
 1444                 /*
 1445                  *      There is no point in bothering to lock
 1446                  *      both ports, which would be painful to do.
 1447                  *      If the receive rights are moving around,
 1448                  *      we might be inaccurate.
 1449                  */
 1450 
 1451                 result = port->ip_receiver == current->ip_receiver;
 1452         } else
 1453                 result = FALSE;
 1454         simple_unlock(&memory_manager_default_lock);
 1455 
 1456         return result;
 1457 }
 1458 
 1459 void            memory_manager_default_init(void)
 1460 {
 1461         memory_manager_default = IP_NULL;
 1462         simple_lock_init(&memory_manager_default_lock);
 1463 }
