FreeBSD/Linux Kernel Cross Reference
sys/vm/vm_user.c

/*
 * Mach Operating System
 * Copyright (c) 1991,1990,1989,1988 Carnegie Mellon University
 * All Rights Reserved.
 *
 * Permission to use, copy, modify and distribute this software and its
 * documentation is hereby granted, provided that both the copyright
 * notice and this permission notice appear in all copies of the
 * software, derivative works or modified versions, and any portions
 * thereof, and that both notices appear in supporting documentation.
 *
 * CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
 * CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND FOR
 * ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
 *
 * Carnegie Mellon requests users of this software to return to
 *
 *  Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
 *  School of Computer Science
 *  Carnegie Mellon University
 *  Pittsburgh PA 15213-3890
 *
 * any improvements or extensions that they make and grant Carnegie Mellon
 * the rights to redistribute these changes.
 */
/*
 * HISTORY
 * $Log:        vm_user.c,v $
 * Revision 2.23  93/08/10  15:13:57  mrt
 *      Included calls to projected_buffer_in_range to deny user direct
 *      manipulation of protection, inheritance, machine attributes or
 *      wiring of projected buffers.  These can be altered only by other
 *      code inside the kernel, presumably the device driver that created
 *      the projected buffer.
 *      [93/02/16  09:48:40  jcb]
 *
 * Revision 2.22  92/08/03  18:02:30  jfriedl
 *      removed silly prototypes
 *      [92/08/02            jfriedl]
 *
 * Revision 2.21  92/05/21  17:27:02  jfriedl
 *      tried prototypes.
 *      [92/05/20            jfriedl]
 *
 * Revision 2.20  92/03/10  16:30:43  jsb
 *      Removed NORMA_VM workaround.
 *      [92/02/11  17:42:41  jsb]
 *      Add checks for protection and inheritance arguments.
 *      [92/02/22  17:07:18  dlb@osf.org]
 *
 * Revision 2.19  92/02/23  19:51:23  elf
 *      Eliminate keep_wired argument from vm_map_copyin().
 *      [92/02/21  10:16:59  dlb]
 *
 * Revision 2.18  91/12/11  08:44:21  jsb
 *      Fixed vm_write and vm_copy to check for a null map.
 *      Fixed vm_write and vm_copy to not check for misalignment.
 *      Fixed vm_copy to discard the copy if the overwrite fails.
 *      [91/12/09            rpd]
 *
 * Revision 2.17  91/12/10  13:27:17  jsb
 *      Apply temporary NORMA_VM workaround to XMM problem.
 *      This leaks objects if vm_map() fails.
 *      [91/12/10  12:55:27  jsb]
 *
 * Revision 2.16  91/08/28  11:19:07  jsb
 *      Fixed vm_map to check memory_object with IP_VALID.
 *      Changed vm_wire to use KERN_INVALID_{HOST,TASK,VALUE}
 *      instead of a generic KERN_INVALID_ARGUMENT return code.
 *      [91/07/12            rpd]
 *
 * Revision 2.15  91/07/31  18:22:40  dbg
 *      Change vm_pageable to vm_wire.  Require host_priv port to gain
 *      wiring privileges.
 *      [91/07/30  17:28:22  dbg]
 *
 * Revision 2.14  91/05/14  17:51:35  mrt
 *      Correcting copyright
 *
 * Revision 2.13  91/03/16  15:07:13  rpd
 *      Removed temporary extra stats.
 *      [91/02/10            rpd]
 *
 * Revision 2.12  91/02/05  18:00:35  mrt
 *      Changed to new Mach copyright
 *      [91/02/01  16:35:00  mrt]
 *
 * Revision 2.11  90/08/06  15:08:59  rwd
 *      Vm_read should check that the map is non null.
 *      [90/07/26            rwd]
 *
 * Revision 2.10  90/06/02  15:12:07  rpd
 *      Moved trap versions of syscalls to kern/ipc_mig.c.
 *      Removed syscall_vm_allocate_with_pager.
 *      [90/05/31            rpd]
 *
 *      Purged vm_allocate_with_pager.
 *      [90/04/09            rpd]
 *      Purged MACH_XP_FPD.  Use vm_map_pageable_user for vm_pageable.
 *      Converted to new IPC kernel call semantics.
 *      [90/03/26  23:21:55  rpd]
 *
 * Revision 2.9  90/05/29  18:39:57  rwd
 *      New trap versions of exported vm calls from rfr.
 *      [90/04/20            rwd]
 *
 * Revision 2.8  90/05/03  15:53:30  dbg
 *      Set current protection to VM_PROT_DEFAULT in
 *      vm_allocate_with_pager.
 *      [90/04/12            dbg]
 *
 * Revision 2.7  90/03/14  21:11:49  rwd
 *      Get rfr bug fix.
 *      [90/03/07            rwd]
 *
 * Revision 2.6  90/02/22  20:07:02  dbg
 *      Use new vm_object_copy routines.  Use new vm_map_copy
 *      technology.  vm_read() no longer requires page alignment.
 *      Change PAGE_WAKEUP to PAGE_WAKEUP_DONE to reflect the fact
 *      that it clears the busy flag.
 *      [90/01/25            dbg]
 *
 * Revision 2.5  90/01/24  14:08:30  af
 *      Fixed bug in optimized vm_write: now that we relaxed the restriction
 *      on the page-alignment of the size arg we must be able to cope with
 *      e.g. one-and-a-half pages as well.
 *      Also, by simple measures on my pmax turns out that mapping is a win
 *      versus copyin even for a single page. IF you can map.
 *      [90/01/24  11:37:35  af]
 *
 * Revision 2.4  90/01/22  23:09:42  af
 *      Go through the map module for machine attributes.
 *      [90/01/20  17:23:35  af]
 *
 *      Added vm_machine_attribute(), which only invokes the
 *      corresponding pmap operation, for now.  Just a first
 *      shot at it, lacks proper locking and keeping the info
 *      around, someplace.
 *      [89/12/08            af]
 *
 * Revision 2.3  90/01/19  14:36:22  rwd
 *      Disable vm_write optimization on mips since it doesn't appear to
 *      work.
 *      [90/01/19            rwd]
 *
 *      Get version that works on multiprocessor from rfr
 *      [90/01/10            rwd]
 *      Get new user copyout code from rfr.
 *      [90/01/05            rwd]
 *
 * Revision 2.2  89/09/08  11:29:05  dbg
 *      Pass keep_wired parameter to vm_map_move.
 *      [89/07/14            dbg]
 *
 * 28-Apr-89  David Golub (dbg) at Carnegie-Mellon University
 *      Changes for MACH_KERNEL:
 *      . Removed non-MACH include files and all conditionals.
 *      . Added vm_pageable, for privileged tasks only.
 *      . vm_read now uses vm_map_move to consolidate map operations.
 *      . If using FAST_PAGER_DATA, vm_write expects data to be in
 *        current task's address space.
 *
 * Revision 2.12  89/04/18  21:30:56  mwyoung
 *      All relevant history has been integrated into the documentation below.
 *
 */
/*
 *      File:   vm/vm_user.c
 *      Author: Avadis Tevanian, Jr., Michael Wayne Young
 *
 *      User-exported virtual memory functions.
 */

#include <mach/boolean.h>
#include <mach/kern_return.h>
#include <mach/mach_types.h>    /* to get vm_address_t */
#include <mach/memory_object.h>
#include <mach/std_types.h>     /* to get pointer_t */
#include <mach/vm_attributes.h>
#include <mach/vm_param.h>
#include <mach/vm_statistics.h>
#include <kern/host.h>
#include <kern/task.h>
#include <vm/vm_fault.h>
#include <vm/vm_map.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>

vm_statistics_data_t    vm_stat;

/*
 *      vm_allocate allocates "zero fill" memory in the specified
 *      map.
 */
kern_return_t vm_allocate(map, addr, size, anywhere)
        register vm_map_t       map;
        register vm_offset_t    *addr;
        register vm_size_t      size;
        boolean_t               anywhere;
{
        kern_return_t   result;

        if (map == VM_MAP_NULL)
                return(KERN_INVALID_ARGUMENT);
        if (size == 0) {
                *addr = 0;
                return(KERN_SUCCESS);
        }

        if (anywhere)
                *addr = vm_map_min(map);
        else
                *addr = trunc_page(*addr);
        size = round_page(size);

        result = vm_map_enter(
                        map,
                        addr,
                        size,
                        (vm_offset_t)0,
                        anywhere,
                        VM_OBJECT_NULL,
                        (vm_offset_t)0,
                        FALSE,
                        VM_PROT_DEFAULT,
                        VM_PROT_ALL,
                        VM_INHERIT_DEFAULT);

        return(result);
}
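
/*
 * Illustrative sketch (not from the original source): pairing
 * vm_allocate with vm_deallocate from kernel-side code.  "map" stands
 * for some valid vm_map_t; the names and sizes are hypothetical.
 */
#if 0
        vm_offset_t     addr = 0;
        vm_size_t       len = 2 * PAGE_SIZE;

        if (vm_allocate(map, &addr, len, TRUE) == KERN_SUCCESS) {
                /* addr now names len bytes of zero-filled memory */
                (void) vm_deallocate(map, addr, len);
        }
#endif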

/*
 *      vm_deallocate deallocates the specified range of addresses in the
 *      specified address map.
 */
kern_return_t vm_deallocate(map, start, size)
        register vm_map_t       map;
        vm_offset_t             start;
        vm_size_t               size;
{
        if (map == VM_MAP_NULL)
                return(KERN_INVALID_ARGUMENT);

        if (size == (vm_offset_t) 0)
                return(KERN_SUCCESS);

        return(vm_map_remove(map, trunc_page(start), round_page(start+size)));
}
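
/*
 * Illustrative sketch: because vm_deallocate truncates the start and
 * rounds up the end, an unaligned request removes every page the range
 * touches.  "start" is hypothetical and assumed page-aligned here.
 */
#if 0
        /* Removes the two whole pages [start, start + 2*PAGE_SIZE). */
        (void) vm_deallocate(map, start + 10, PAGE_SIZE + 1);
#endif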

/*
 *      vm_inherit sets the inheritance of the specified range in the
 *      specified map.
 */
kern_return_t vm_inherit(map, start, size, new_inheritance)
        register vm_map_t       map;
        vm_offset_t             start;
        vm_size_t               size;
        vm_inherit_t            new_inheritance;
{
        if (map == VM_MAP_NULL)
                return(KERN_INVALID_ARGUMENT);

        switch (new_inheritance) {
        case VM_INHERIT_NONE:
        case VM_INHERIT_COPY:
        case VM_INHERIT_SHARE:
                break;
        default:
                return(KERN_INVALID_ARGUMENT);
        }

        /*
         * Check if the range includes a projected buffer; the user is
         * not allowed direct manipulation in that case.
         */
        if (projected_buffer_in_range(map, start, start+size))
                return(KERN_INVALID_ARGUMENT);

        return(vm_map_inherit(map,
                              trunc_page(start),
                              round_page(start+size),
                              new_inheritance));
}
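
/*
 * Illustrative sketch: marking a range so that child tasks share it
 * rather than receive a copy.  "map", "start" and "size" are
 * hypothetical.
 */
#if 0
        kern_return_t   kr;

        kr = vm_inherit(map, start, size, VM_INHERIT_SHARE);
#endif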

/*
 *      vm_protect sets the protection of the specified range in the
 *      specified map.
 */
kern_return_t vm_protect(map, start, size, set_maximum, new_protection)
        register vm_map_t       map;
        vm_offset_t             start;
        vm_size_t               size;
        boolean_t               set_maximum;
        vm_prot_t               new_protection;
{
        if ((map == VM_MAP_NULL) || (new_protection & ~VM_PROT_ALL))
                return(KERN_INVALID_ARGUMENT);

        /*
         * Check if the range includes a projected buffer; the user is
         * not allowed direct manipulation in that case.
         */
        if (projected_buffer_in_range(map, start, start+size))
                return(KERN_INVALID_ARGUMENT);

        return(vm_map_protect(map,
                              trunc_page(start),
                              round_page(start+size),
                              new_protection,
                              set_maximum));
}
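
/*
 * Illustrative sketch: dropping a range to read-only access.  Passing
 * set_maximum as TRUE would instead lower the maximum protection,
 * which cannot be raised again later.  Names are hypothetical.
 */
#if 0
        kern_return_t   kr;

        kr = vm_protect(map, start, size, FALSE, VM_PROT_READ);
#endif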

kern_return_t vm_statistics(map, stat)
        vm_map_t                map;
        vm_statistics_data_t    *stat;
{
        if (map == VM_MAP_NULL)
                return(KERN_INVALID_ARGUMENT);

        *stat = vm_stat;

        stat->pagesize = PAGE_SIZE;
        stat->free_count = vm_page_free_count;
        stat->active_count = vm_page_active_count;
        stat->inactive_count = vm_page_inactive_count;
        stat->wire_count = vm_page_wire_count;

        return(KERN_SUCCESS);
}
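
/*
 * Illustrative sketch: turning the page counts that vm_statistics
 * reports into byte figures.  "map" is hypothetical.
 */
#if 0
        vm_statistics_data_t    stat;

        if (vm_statistics(map, &stat) == KERN_SUCCESS) {
                vm_size_t       free_bytes;

                free_bytes = stat.free_count * stat.pagesize;
                /* ... */
        }
#endif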
  328 
  329 /*
  330  * Handle machine-specific attributes for a mapping, such
  331  * as cachability, migrability, etc.
  332  */
  333 kern_return_t vm_machine_attribute(map, address, size, attribute, value)
  334         vm_map_t        map;
  335         vm_address_t    address;
  336         vm_size_t       size;
  337         vm_machine_attribute_t  attribute;
  338         vm_machine_attribute_val_t* value;              /* IN/OUT */
  339 {
  340         extern kern_return_t    vm_map_machine_attribute();
  341 
  342         if (map == VM_MAP_NULL)
  343                 return(KERN_INVALID_ARGUMENT);
  344 
  345         /*Check if range includes projected buffer;
  346           user is not allowed direct manipulation in that case*/
  347         if (projected_buffer_in_range(map, address, address+size))
  348                 return(KERN_INVALID_ARGUMENT);
  349 
  350         return vm_map_machine_attribute(map, address, size, attribute, value);
  351 }
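
/*
 * Illustrative sketch: querying and then disabling caching for a
 * range.  MATTR_CACHE and the MATTR_VAL_* values come from
 * <mach/vm_attributes.h>; whether a given pmap honors them is
 * machine-dependent, and the other names are hypothetical.
 */
#if 0
        vm_machine_attribute_val_t      val;

        val = MATTR_VAL_GET;            /* read the current setting */
        (void) vm_machine_attribute(map, address, size, MATTR_CACHE, &val);

        val = MATTR_VAL_OFF;            /* turn caching off */
        (void) vm_machine_attribute(map, address, size, MATTR_CACHE, &val);
#endif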

kern_return_t vm_read(map, address, size, data, data_size)
        vm_map_t        map;
        vm_address_t    address;
        vm_size_t       size;
        pointer_t       *data;
        vm_size_t       *data_size;
{
        kern_return_t   error;
        vm_map_copy_t   ipc_address;

        if (map == VM_MAP_NULL)
                return(KERN_INVALID_ARGUMENT);

        if ((error = vm_map_copyin(map,
                                address,
                                size,
                                FALSE,  /* src_destroy */
                                &ipc_address)) == KERN_SUCCESS) {
                *data = (pointer_t) ipc_address;
                *data_size = size;
        }
        return(error);
}

kern_return_t vm_write(map, address, data, size)
        vm_map_t        map;
        vm_address_t    address;
        pointer_t       data;
        vm_size_t       size;
{
        if (map == VM_MAP_NULL)
                return KERN_INVALID_ARGUMENT;

        return vm_map_copy_overwrite(map, address, (vm_map_copy_t) data,
                                     FALSE /* interruptible XXX */);
}

kern_return_t vm_copy(map, source_address, size, dest_address)
        vm_map_t        map;
        vm_address_t    source_address;
        vm_size_t       size;
        vm_address_t    dest_address;
{
        vm_map_copy_t   copy;
        kern_return_t   kr;

        if (map == VM_MAP_NULL)
                return KERN_INVALID_ARGUMENT;

        kr = vm_map_copyin(map, source_address, size,
                           FALSE, &copy);
        if (kr != KERN_SUCCESS)
                return kr;

        kr = vm_map_copy_overwrite(map, dest_address, copy,
                                   FALSE /* interruptible XXX */);
        if (kr != KERN_SUCCESS) {
                vm_map_copy_discard(copy);
                return kr;
        }

        return KERN_SUCCESS;
}
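
/*
 * Illustrative sketch: vm_copy behaves like a vm_read immediately
 * followed by a vm_write within one map, and it discards the
 * intermediate copy itself if the overwrite fails.  All names are
 * hypothetical.
 */
#if 0
        kern_return_t   kr;

        kr = vm_copy(map, src_addr, len, dst_addr);
        if (kr != KERN_SUCCESS)
                return kr;      /* the intermediate copy was discarded */
#endif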

/*
 *      Routine:        vm_map
 */
kern_return_t vm_map(
                target_map,
                address, size, mask, anywhere,
                memory_object, offset,
                copy,
                cur_protection, max_protection, inheritance)
        vm_map_t        target_map;
        vm_offset_t     *address;
        vm_size_t       size;
        vm_offset_t     mask;
        boolean_t       anywhere;
        ipc_port_t      memory_object;
        vm_offset_t     offset;
        boolean_t       copy;
        vm_prot_t       cur_protection;
        vm_prot_t       max_protection;
        vm_inherit_t    inheritance;
{
        register
        vm_object_t     object;
        register
        kern_return_t   result;

        if ((target_map == VM_MAP_NULL) ||
            (cur_protection & ~VM_PROT_ALL) ||
            (max_protection & ~VM_PROT_ALL))
                return(KERN_INVALID_ARGUMENT);

        switch (inheritance) {
        case VM_INHERIT_NONE:
        case VM_INHERIT_COPY:
        case VM_INHERIT_SHARE:
                break;
        default:
                return(KERN_INVALID_ARGUMENT);
        }

        *address = trunc_page(*address);
        size = round_page(size);

        if (!IP_VALID(memory_object)) {
                object = VM_OBJECT_NULL;
                offset = 0;
                copy = FALSE;
        } else if ((object = vm_object_enter(memory_object, size, FALSE))
                        == VM_OBJECT_NULL)
                return(KERN_INVALID_ARGUMENT);

        /*
         *      Perform the copy if requested
         */

        if (copy) {
                vm_object_t     new_object;
                vm_offset_t     new_offset;

                result = vm_object_copy_strategically(object, offset, size,
                                &new_object, &new_offset,
                                &copy);

                /*
                 *      Throw away the reference to the
                 *      original object, as it won't be mapped.
                 */

                vm_object_deallocate(object);

                if (result != KERN_SUCCESS)
                        return (result);

                object = new_object;
                offset = new_offset;
        }

        if ((result = vm_map_enter(target_map,
                                address, size, mask, anywhere,
                                object, offset,
                                copy,
                                cur_protection, max_protection, inheritance
                                )) != KERN_SUCCESS)
                vm_object_deallocate(object);
        return(result);
}
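
/*
 * Illustrative sketch: mapping a memory object at any convenient
 * address with default protections.  "memory_object" is assumed to be
 * a valid port naming a memory manager; the other names are
 * hypothetical.
 */
#if 0
        vm_offset_t     addr = 0;
        kern_return_t   kr;

        kr = vm_map(target_map, &addr, size, (vm_offset_t) 0, TRUE,
                    memory_object, (vm_offset_t) 0, FALSE,
                    VM_PROT_DEFAULT, VM_PROT_ALL, VM_INHERIT_DEFAULT);
#endif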

/*
 *      Specify that the range of the virtual address space
 *      of the target task must not cause page faults for
 *      the indicated accesses.
 *
 *      [ To unwire the pages, specify VM_PROT_NONE. ]
 */
kern_return_t vm_wire(host, map, start, size, access)
        host_t                  host;
        register vm_map_t       map;
        vm_offset_t             start;
        vm_size_t               size;
        vm_prot_t               access;
{
        if (host == HOST_NULL)
                return KERN_INVALID_HOST;

        if (map == VM_MAP_NULL)
                return KERN_INVALID_TASK;

        if (access & ~VM_PROT_ALL)
                return KERN_INVALID_ARGUMENT;

        /*
         * Check if the range includes a projected buffer; the user is
         * not allowed direct manipulation in that case.
         */
        if (projected_buffer_in_range(map, start, start+size))
                return(KERN_INVALID_ARGUMENT);

        return vm_map_pageable_user(map,
                                    trunc_page(start),
                                    round_page(start+size),
                                    access);
}
