FreeBSD/Linux Kernel Cross Reference
sys/contrib/openzfs/include/os/linux/spl/sys/vmem.h


/*
 *  Copyright (C) 2007-2010 Lawrence Livermore National Security, LLC.
 *  Copyright (C) 2007 The Regents of the University of California.
 *  Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
 *  Written by Brian Behlendorf <behlendorf1@llnl.gov>.
 *  UCRL-CODE-235197
 *
 *  This file is part of the SPL, Solaris Porting Layer.
 *
 *  The SPL is free software; you can redistribute it and/or modify it
 *  under the terms of the GNU General Public License as published by the
 *  Free Software Foundation; either version 2 of the License, or (at your
 *  option) any later version.
 *
 *  The SPL is distributed in the hope that it will be useful, but WITHOUT
 *  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 *  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 *  for more details.
 *
 *  You should have received a copy of the GNU General Public License along
 *  with the SPL.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef _SPL_VMEM_H
#define _SPL_VMEM_H

#include <sys/kmem.h>
#include <linux/sched.h>
#include <linux/vmalloc.h>

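/*
 * Opaque stand-in for the Illumos vmem arena type.  No real arenas are
 * implemented by the Linux SPL (see the design comment below); vmem_t
 * exists only so that code written against the Illumos interface
 * continues to compile.
 */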
typedef struct vmem { } vmem_t;

/*
 * Memory allocation interfaces
 */
#define VMEM_ALLOC      0x01
#define VMEM_FREE       0x02

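/*
 * Total size, in bytes, of the kernel's vmalloc() virtual address range,
 * derived from the VMALLOC_START/VMALLOC_END bounds when the kernel does
 * not already provide a definition.
 */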
#ifndef VMALLOC_TOTAL
#define VMALLOC_TOTAL   (VMALLOC_END - VMALLOC_START)
#endif

/*
 * vmem_* is an interface to a low-level, arena-based memory allocator on
 * Illumos that is used to allocate virtual address space.  The kmem SLAB
 * allocator allocates slabs from it, and the generic allocation functions
 * kmem_{alloc,zalloc,free}() are layered on top of those SLAB allocators.
 *
 * On Linux, the primary means of doing allocations is via kmalloc(), which
 * is similarly layered on top of the buddy allocator.  The buddy allocator
 * is not available to kernel modules; it uses physical memory addresses
 * rather than virtual memory addresses and is prone to fragmentation.
 *
 * Linux sets aside a relatively small address space for in-kernel virtual
 * memory from which allocations can be done using vmalloc().  It might seem
 * like a good idea to use vmalloc() to implement something similar to
 * Illumos' allocator.  However, this has the following problems:
 *
 * 1. Page directory table allocations are hard-coded to use GFP_KERNEL.
 *    Consequently, any KM_PUSHPAGE or KM_NOSLEEP allocations done using
 *    vmalloc() will not have the proper semantics.
 *
 * 2. Address space exhaustion is a real issue on 32-bit platforms, where
 *    only a few hundred MB are available.  The kernel handles exhaustion
 *    by spinning when it runs out of address space.
 *
 * 3. All vmalloc() allocations and frees are protected by a single global
 *    lock which serializes all allocations.
 *
 * 4. Accessing /proc/meminfo and /proc/vmallocinfo will iterate the entire
 *    list of vmalloc allocations.  The former sums the allocations while
 *    the latter prints them to user space in a way that allows user space
 *    to keep the lock held indefinitely.  When the total number of mapped
 *    allocations is large (several hundred thousand), a large amount of
 *    time will be spent waiting on locks.
 *
 * 5. Linux has a wait_on_bit() locking primitive that assumes physical
 *    memory is used; it simply does not work on virtual memory.  Certain
 *    Linux structures (e.g. the superblock) use it and might be embedded
 *    into a structure from Illumos.  This makes using Linux virtual memory
 *    unsafe in certain situations.
 *
 * It follows that we cannot obtain identical semantics to those on Illumos.
 * Consequently, we implement the kmem_{alloc,zalloc,free}() functions in
 * such a way that they can be used as drop-in replacements for small vmem_*
 * allocations (8MB in size or smaller) and map vmem_{alloc,zalloc,free}()
 * to them.
 */

#define vmem_alloc(sz, fl)      spl_vmem_alloc((sz), (fl), __func__, __LINE__)
#define vmem_zalloc(sz, fl)     spl_vmem_zalloc((sz), (fl), __func__, __LINE__)
#define vmem_free(ptr, sz)      spl_vmem_free((ptr), (sz))

extern void *spl_vmem_alloc(size_t sz, int fl, const char *func, int line);
extern void *spl_vmem_zalloc(size_t sz, int fl, const char *func, int line);
extern void spl_vmem_free(const void *ptr, size_t sz);

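/*
 * Illustrative usage sketch (not part of the upstream header): how a caller
 * might exercise the vmem_* wrappers declared above.  KM_SLEEP comes from
 * <sys/kmem.h>; the function name and buffer size below are hypothetical.
 */
#if 0   /* illustrative only, never compiled */
static void
example_vmem_usage(void)
{
        size_t size = 1024 * 1024;      /* well under the 8MB guideline above */
        void *buf;

        /* Expands to spl_vmem_zalloc(size, KM_SLEEP, __func__, __LINE__). */
        buf = vmem_zalloc(size, KM_SLEEP);
        if (buf == NULL)                /* defensive; exact failure semantics depend on flags */
                return;

        /* ... use the zeroed buffer ... */

        /* Unlike kfree(), the caller must pass the original size on free. */
        vmem_free(buf, size);
}
#endif
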
int spl_vmem_init(void);
void spl_vmem_fini(void);
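
/*
 * Illustrative sketch (not part of the upstream header): spl_vmem_init()
 * and spl_vmem_fini() are the set-up and tear-down hooks for this
 * subsystem, and a loader would be expected to pair them roughly as
 * below.  The surrounding function names are hypothetical.
 */
#if 0   /* illustrative only, never compiled */
static int
example_spl_setup(void)
{
        int rc = spl_vmem_init();

        if (rc != 0)
                return (rc);
        /* ... initialize other SPL subsystems ... */
        return (0);
}

static void
example_spl_teardown(void)
{
        /* ... tear down other SPL subsystems ... */
        spl_vmem_fini();
}
#endif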

#endif  /* _SPL_VMEM_H */
