FreeBSD/Linux Kernel Cross Reference
sys/kern/subr_blist.c


    1 /*      $NetBSD: subr_blist.c,v 1.15 2022/05/31 08:43:16 andvar Exp $   */
    2 
    3 /*-
    4  * Copyright (c) 1998 Matthew Dillon.  All Rights Reserved.
    5  * Redistribution and use in source and binary forms, with or without
    6  * modification, are permitted provided that the following conditions
    7  * are met:
    8  * 1. Redistributions of source code must retain the above copyright
    9  *    notice, this list of conditions and the following disclaimer.
   10  * 2. Redistributions in binary form must reproduce the above copyright
   11  *    notice, this list of conditions and the following disclaimer in the
   12  *    documentation and/or other materials provided with the distribution.
   13  * 4. Neither the name of the University nor the names of its contributors
   14  *    may be used to endorse or promote products derived from this software
   15  *    without specific prior written permission.
   16  *
   17  * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS
   18  * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
   19  * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
   20  * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
   21  * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
   22  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
   23  * GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
   24  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
   25  * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
   26  * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
   27  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
   28  */
   29 /*
   30  * BLIST.C -    Bitmap allocator/deallocator, using a radix tree with hinting
   31  *
   32  *      This module implements a general bitmap allocator/deallocator.  The
   33  *      allocator eats around 2 bits per 'block'.  The module does not 
   34  *      try to interpret the meaning of a 'block' other than to return 
   35  *      BLIST_NONE on an allocation failure.
   36  *
   37  *      A radix tree is used to maintain the bitmap.  Two radix constants are
   38  *      involved:  One for the bitmaps contained in the leaf nodes (typically
   39  *      32), and one for the meta nodes (typically 16).  Both meta and leaf
   40  *      nodes have a hint field.  This field gives us a hint as to the largest
   41  *      free contiguous range of blocks under the node.  It may contain a
   42  *      value that is too high, but will never contain a value that is too 
   43  *      low.  When the radix tree is searched, allocation failures in subtrees
   44  *      update the hint. 
   45  *
   46  *      The radix tree also implements two collapsed states for meta nodes:
   47  *      the ALL-ALLOCATED state and the ALL-FREE state.  If a meta node is
   48  *      in either of these two states, all information contained underneath
   49  *      the node is considered stale.  These states are used to optimize
   50  *      allocation and freeing operations.
   51  *
   52  *      The hinting greatly increases code efficiency for allocations while
   53  *      the general radix structure optimizes both allocations and frees.  The
   54  *      radix tree should be able to operate well no matter how much 
   55  *      fragmentation there is and no matter how large a bitmap is used.
   56  *
   57  *      Unlike the rlist code, the blist code wires all necessary memory at
   58  *      creation time.  Neither allocations nor frees require interaction with
   59  *      the memory subsystem.  In contrast, the rlist code may allocate memory 
   60  *      on an rlist_free() call.  The non-blocking features of the blist code
   61  *      are used to great advantage in the swap code (vm/nswap_pager.c).  The
   62  *      rlist code uses a little less overall memory than the blist code (but
   63  *      due to swap interleaving not all that much less), but the blist code 
   64  *      scales much, much better.
   65  *
   66  *      LAYOUT: The radix tree is laid out recursively using a
   67  *      linear array.  Each meta node is immediately followed (laid out
   68  *      sequentially in memory) by BLIST_META_RADIX lower level nodes.  This
   69  *      is a recursive structure but one that can be easily scanned through
   70  *      a very simple 'skip' calculation.  In order to support large radixes, 
   71  *      portions of the tree may reside outside our memory allocation.  We 
   72  *      handle this with an early-termination optimization (when bighint is 
   73  *      set to -1) on the scan.  The memory allocation is only large enough 
   74  *      to cover the number of blocks requested at creation time even if it
    75  *      must be encompassed in a larger root-node radix.
   76  *
   77  *      NOTE: the allocator cannot currently allocate more than 
   78  *      BLIST_BMAP_RADIX blocks per call.  It will panic with 'allocation too 
   79  *      large' if you try.  This is an area that could use improvement.  The 
    80  *      radix is large enough that this restriction does not affect the swap
    81  *      system, though.  Currently only the allocation code is affected by
   82  *      this algorithmic unfeature.  The freeing code can handle arbitrary
   83  *      ranges.
   84  *
   85  *      This code can be compiled stand-alone for debugging.
   86  */
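
       /*
        * Illustrative usage sketch (it mirrors the stand-alone main() at the
        * bottom of this file; the 1024-block size is arbitrary).  A freshly
        * created blist has every block marked allocated, so the caller frees
        * the managed range before the first allocation:
        *
        *      blist_t bl = blist_create(1024);
        *      blist_free(bl, 0, 1024);          -- mark the whole range free
        *      blist_blkno_t blk = blist_alloc(bl, 16);
        *      if (blk != BLIST_NONE)
        *              blist_free(bl, blk, 16);  -- give the 16 blocks back
        *      blist_destroy(bl);
        */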
   87 
   88 #include <sys/cdefs.h>
   89 __KERNEL_RCSID(0, "$NetBSD: subr_blist.c,v 1.15 2022/05/31 08:43:16 andvar Exp $");
   90 #if 0
   91 __FBSDID("$FreeBSD: src/sys/kern/subr_blist.c,v 1.17 2004/06/04 04:03:25 alc Exp $");
   92 #endif
   93 
   94 #ifdef _KERNEL
   95 
   96 #include <sys/param.h>
   97 #include <sys/systm.h>
   98 #include <sys/blist.h>
   99 #include <sys/kmem.h>
  100 
  101 #else
  102 
  103 #ifndef BLIST_NO_DEBUG
  104 #define BLIST_DEBUG
  105 #endif
  106 
  107 #include <sys/types.h>
  108 #include <stdio.h>
  109 #include <string.h>
  110 #include <stdlib.h>
  111 #include <stdarg.h>
  112 #include <inttypes.h>
  113 
  114 #define KM_SLEEP 1
  115 #define kmem_zalloc(a,b) calloc(1, (a))
  116 #define kmem_alloc(a,b) malloc(a)
  117 #define kmem_free(a,b) free(a)
  118 
  119 #include "../sys/blist.h"
  120 
  121 void panic(const char *ctl, ...) __printflike(1, 2);
  122 
  123 #endif
  124 
  125 /*
   126  * blmeta and blist_bitmap_t MUST be a power of 2 in size.
  127  */
  128 
  129 typedef struct blmeta {
  130         union {
  131                 blist_blkno_t   bmu_avail; /* space available under us  */
  132                 blist_bitmap_t  bmu_bitmap; /* bitmap if we are a leaf  */
  133         } u;
  134         blist_blkno_t   bm_bighint;     /* biggest contiguous block hint*/
  135 } blmeta_t;
  136 
  137 struct blist {
  138         blist_blkno_t           bl_blocks;      /* area of coverage             */
  139         blist_blkno_t           bl_radix;       /* coverage radix               */
  140         blist_blkno_t           bl_skip;        /* starting skip                */
  141         blist_blkno_t           bl_free;        /* number of free blocks        */
  142         blmeta_t        *bl_root;       /* root of radix tree           */
  143         blist_blkno_t           bl_rootblks;    /* blks allocated for tree */
  144 };
  145 
  146 #define BLIST_META_RADIX        16
  147 
  148 /*
  149  * static support functions
  150  */
  151 
  152 static blist_blkno_t blst_leaf_alloc(blmeta_t *scan, blist_blkno_t blk,
  153     int count);
  154 static blist_blkno_t blst_meta_alloc(blmeta_t *scan, blist_blkno_t blk, 
  155     blist_blkno_t count, blist_blkno_t radix, blist_blkno_t skip);
  156 static void blst_leaf_free(blmeta_t *scan, blist_blkno_t relblk, int count);
  157 static void blst_meta_free(blmeta_t *scan, blist_blkno_t freeBlk,
  158     blist_blkno_t count, blist_blkno_t radix, blist_blkno_t skip,
  159     blist_blkno_t blk);
  160 static void blst_copy(blmeta_t *scan, blist_blkno_t blk, blist_blkno_t radix, 
  161     blist_blkno_t skip, blist_t dest, blist_blkno_t count);
  162 static int blst_leaf_fill(blmeta_t *scan, blist_blkno_t blk, int count);
  163 static blist_blkno_t blst_meta_fill(blmeta_t *scan, blist_blkno_t allocBlk,
  164     blist_blkno_t count, blist_blkno_t radix, blist_blkno_t skip,
  165     blist_blkno_t blk);
  166 static blist_blkno_t blst_radix_init(blmeta_t *scan, blist_blkno_t radix, 
  167     blist_blkno_t skip, blist_blkno_t count);
  168 #ifndef _KERNEL
  169 static void blst_radix_print(blmeta_t *scan, blist_blkno_t blk,
  170     blist_blkno_t radix, blist_blkno_t skip, int tab);
  171 #endif
  172 
  173 /*
  174  * blist_create() - create a blist capable of handling up to the specified
  175  *                  number of blocks
  176  *
  177  *      blocks must be greater than 0
  178  *
  179  *      The smallest blist consists of a single leaf node capable of 
  180  *      managing BLIST_BMAP_RADIX blocks.
  181  */
  182 
  183 blist_t 
  184 blist_create(blist_blkno_t blocks)
  185 {
  186         blist_t bl;
  187         blist_blkno_t radix;
  188         blist_blkno_t skip = 0;
  189 
  190         /*
  191          * Calculate radix and skip field used for scanning.
  192          *
  193          * XXX check overflow
  194          */
  195         radix = BLIST_BMAP_RADIX;
  196 
  197         while (radix < blocks) {
  198                 radix *= BLIST_META_RADIX;
  199                 skip = (skip + 1) * BLIST_META_RADIX;
  200         }
  201 
  202         bl = kmem_zalloc(sizeof(struct blist), KM_SLEEP);
  203 
  204         bl->bl_blocks = blocks;
  205         bl->bl_radix = radix;
  206         bl->bl_skip = skip;
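               /*
                * The first blst_radix_init() pass (NULL scan) only computes
                * how many nodes are needed; the second pass below actually
                * initializes them.
                */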
  207         bl->bl_rootblks = 1 +
  208             blst_radix_init(NULL, bl->bl_radix, bl->bl_skip, blocks);
  209         bl->bl_root = kmem_alloc(sizeof(blmeta_t) * bl->bl_rootblks, KM_SLEEP);
  210 
  211 #if defined(BLIST_DEBUG)
  212         printf(
  213                 "BLIST representing %" PRIu64 " blocks (%" PRIu64 " MB of swap)"
  214                 ", requiring %" PRIu64 "K of ram\n",
  215                 (uint64_t)bl->bl_blocks,
  216                 (uint64_t)bl->bl_blocks * 4 / 1024,
  217                 ((uint64_t)bl->bl_rootblks * sizeof(blmeta_t) + 1023) / 1024
  218         );
  219         printf("BLIST raw radix tree contains %" PRIu64 " records\n",
  220             (uint64_t)bl->bl_rootblks);
  221 #endif
  222         blst_radix_init(bl->bl_root, bl->bl_radix, bl->bl_skip, blocks);
  223 
  224         return(bl);
  225 }
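
       /*
        * Worked example of the radix/skip sizing above, assuming
        * BLIST_BMAP_RADIX == 32 (the typical 32-bit bitmap) and a
        * hypothetical blist_create(100000):
        *
        *      radix:  32 -> 512 -> 8192 -> 131072   (root covers up to 131072 blocks)
        *      skip:    0 ->  16 ->  272 ->   4368
        *
        * A full tree would use 1 + bl_skip = 4369 array slots, but
        * blst_radix_init() trims the allocation (bl_rootblks) to just the
        * nodes needed to cover the requested 100000 blocks.
        */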
  226 
  227 void 
  228 blist_destroy(blist_t bl)
  229 {
  230 
  231         kmem_free(bl->bl_root, sizeof(blmeta_t) * bl->bl_rootblks);
  232         kmem_free(bl, sizeof(struct blist));
  233 }
  234 
  235 /*
  236  * blist_alloc() - reserve space in the block bitmap.  Return the base
  237  *                   of a contiguous region or BLIST_NONE if space could
  238  *                   not be allocated.
  239  */
  240 
  241 blist_blkno_t 
  242 blist_alloc(blist_t bl, blist_blkno_t count)
  243 {
  244         blist_blkno_t blk = BLIST_NONE;
  245 
  246         if (bl) {
  247                 if (bl->bl_radix == BLIST_BMAP_RADIX)
  248                         blk = blst_leaf_alloc(bl->bl_root, 0, count);
  249                 else
  250                         blk = blst_meta_alloc(bl->bl_root, 0, count, bl->bl_radix, bl->bl_skip);
  251                 if (blk != BLIST_NONE)
  252                         bl->bl_free -= count;
  253         }
  254         return(blk);
  255 }
  256 
  257 /*
   258  * blist_free() -       free up space in the block bitmap.  Panic if an
   259  *                      inconsistency is found (e.g. freeing blocks that
   260  *                      are already free).
  261  */
  262 
  263 void 
  264 blist_free(blist_t bl, blist_blkno_t blkno, blist_blkno_t count)
  265 {
  266         if (bl) {
  267                 if (bl->bl_radix == BLIST_BMAP_RADIX)
  268                         blst_leaf_free(bl->bl_root, blkno, count);
  269                 else
  270                         blst_meta_free(bl->bl_root, blkno, count, bl->bl_radix, bl->bl_skip, 0);
  271                 bl->bl_free += count;
  272         }
  273 }
  274 
  275 /*
  276  * blist_fill() -       mark a region in the block bitmap as off-limits
  277  *                      to the allocator (i.e. allocate it), ignoring any
  278  *                      existing allocations.  Return the number of blocks
  279  *                      actually filled that were free before the call.
  280  */
  281 
  282 blist_blkno_t
  283 blist_fill(blist_t bl, blist_blkno_t blkno, blist_blkno_t count)
  284 {
  285         blist_blkno_t filled;
  286 
  287         if (bl) {
  288                 if (bl->bl_radix == BLIST_BMAP_RADIX)
  289                         filled = blst_leaf_fill(bl->bl_root, blkno, count);
  290                 else
  291                         filled = blst_meta_fill(bl->bl_root, blkno, count,
  292                             bl->bl_radix, bl->bl_skip, 0);
  293                 bl->bl_free -= filled;
  294                 return filled;
  295         } else
  296                 return 0;
  297 }
  298 
  299 /*
  300  * blist_resize() -     resize an existing radix tree to handle the
  301  *                      specified number of blocks.  This will reallocate
  302  *                      the tree and transfer the previous bitmap to the new
  303  *                      one.  When extending the tree you can specify whether
  304  *                      the new blocks are to left allocated or freed.
   305  *                      the new blocks are to be left allocated or freed.
  306 
  307 void
  308 blist_resize(blist_t *pbl, blist_blkno_t count, int freenew)
  309 {
  310     blist_t newbl = blist_create(count);
  311     blist_t save = *pbl;
  312 
  313     *pbl = newbl;
  314     if (count > save->bl_blocks)
  315             count = save->bl_blocks;
  316     blst_copy(save->bl_root, 0, save->bl_radix, save->bl_skip, newbl, count);
  317 
  318     /*
  319      * If resizing upwards, should we free the new space or not?
  320      */
  321     if (freenew && count < newbl->bl_blocks) {
  322             blist_free(newbl, count, newbl->bl_blocks - count);
  323     }
  324     blist_destroy(save);
  325 }
  326 
  327 #ifdef BLIST_DEBUG
  328 
  329 /*
  330  * blist_print()    - dump radix tree
  331  */
  332 
  333 void
  334 blist_print(blist_t bl)
  335 {
  336         printf("BLIST {\n");
  337         blst_radix_print(bl->bl_root, 0, bl->bl_radix, bl->bl_skip, 4);
  338         printf("}\n");
  339 }
  340 
  341 #endif
  342 
  343 /************************************************************************
  344  *                        ALLOCATION SUPPORT FUNCTIONS                  *
  345  ************************************************************************
  346  *
  347  *      These support functions do all the actual work.  They may seem 
  348  *      rather longish, but that's because I've commented them up.  The
   349  *      actual code is straightforward.
  350  *
  351  */
  352 
  353 /*
   354  * blst_leaf_alloc() - allocate at a leaf in the radix tree (a bitmap).
  355  *
  356  *      This is the core of the allocator and is optimized for the 1 block
  357  *      and the BLIST_BMAP_RADIX block allocation cases.  Other cases are
   358  *      somewhat slower.  The 1 block allocation case is a log2 binary
   359  *      search over the bitmap and extremely quick.
  360  */
  361 
  362 static blist_blkno_t
  363 blst_leaf_alloc(
  364         blmeta_t *scan,
  365         blist_blkno_t blk,
  366         int count
  367 ) {
  368         blist_bitmap_t orig = scan->u.bmu_bitmap;
  369 
  370         if (orig == 0) {
  371                 /*
  372                  * Optimize bitmap all-allocated case.  Also, count = 1
  373                  * case assumes at least 1 bit is free in the bitmap, so
  374                  * we have to take care of this case here.
  375                  */
  376                 scan->bm_bighint = 0;
  377                 return(BLIST_NONE);
  378         }
  379         if (count == 1) {
  380                 /*
  381                  * Optimized code to allocate one bit out of the bitmap
  382                  */
  383                 blist_bitmap_t mask;
  384                 int j = BLIST_BMAP_RADIX/2;
  385                 int r = 0;
  386 
  387                 mask = (blist_bitmap_t)-1 >> (BLIST_BMAP_RADIX/2);
  388 
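                       /*
                        * Binary search for the lowest free (set) bit:
                        * whenever the low half of the current window has no
                        * free bit, skip past it (advance r and shift orig
                        * down), then halve the window.
                        */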
  389                 while (j) {
  390                         if ((orig & mask) == 0) {
  391                             r += j;
  392                             orig >>= j;
  393                         }
  394                         j >>= 1;
  395                         mask >>= j;
  396                 }
  397                 scan->u.bmu_bitmap &= ~((blist_bitmap_t)1 << r);
  398                 return(blk + r);
  399         }
  400         if (count <= BLIST_BMAP_RADIX) {
  401                 /*
  402                  * non-optimized code to allocate N bits out of the bitmap.
  403                  * The more bits, the faster the code runs.  It will run
  404                  * the slowest allocating 2 bits, but since there aren't any
  405                  * memory ops in the core loop (or shouldn't be, anyway),
  406                  * you probably won't notice the difference.
  407                  */
  408                 int j;
  409                 int n = BLIST_BMAP_RADIX - count;
  410                 blist_bitmap_t mask;
  411 
  412                 mask = (blist_bitmap_t)-1 >> n;
  413 
  414                 for (j = 0; j <= n; ++j) {
  415                         if ((orig & mask) == mask) {
  416                                 scan->u.bmu_bitmap &= ~mask;
  417                                 return(blk + j);
  418                         }
  419                         mask = (mask << 1);
  420                 }
  421         }
  422         /*
  423          * We couldn't allocate count in this subtree, update bighint.
  424          */
  425         scan->bm_bighint = count - 1;
  426         return(BLIST_NONE);
  427 }
  428 
  429 /*
   430  * blst_meta_alloc() - allocate at a meta node in the radix tree.
  431  *
  432  *      Attempt to allocate at a meta node.  If we can't, we update
   433  *      bighint and return a failure.  Updating bighint optimizes future
  434  *      calls that hit this node.  We have to check for our collapse cases
  435  *      and we have a few optimizations strewn in as well.
  436  */
  437 
  438 static blist_blkno_t
  439 blst_meta_alloc(
  440         blmeta_t *scan, 
  441         blist_blkno_t blk,
  442         blist_blkno_t count,
  443         blist_blkno_t radix, 
  444         blist_blkno_t skip
  445 ) {
  446         blist_blkno_t i;
  447         blist_blkno_t next_skip = (skip / BLIST_META_RADIX);
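               /*
                * Each of the BLIST_META_RADIX children of this meta node
                * occupies next_skip consecutive array slots, so child k sits
                * at scan[1 + k * next_skip].
                */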
  448 
  449         if (scan->u.bmu_avail == 0)  {
  450                 /*
  451                  * ALL-ALLOCATED special case
  452                  */
  453                 scan->bm_bighint = count;
  454                 return(BLIST_NONE);
  455         }
  456 
  457         if (scan->u.bmu_avail == radix) {
  458                 radix /= BLIST_META_RADIX;
  459 
  460                 /*
   461                  * ALL-FREE special case, initialize the uninitialized
  462                  * sublevel.
  463                  */
  464                 for (i = 1; i <= skip; i += next_skip) {
  465                         if (scan[i].bm_bighint == (blist_blkno_t)-1)
  466                                 break;
  467                         if (next_skip == 1) {
  468                                 scan[i].u.bmu_bitmap = (blist_bitmap_t)-1;
  469                                 scan[i].bm_bighint = BLIST_BMAP_RADIX;
  470                         } else {
  471                                 scan[i].bm_bighint = radix;
  472                                 scan[i].u.bmu_avail = radix;
  473                         }
  474                 }
  475         } else {
  476                 radix /= BLIST_META_RADIX;
  477         }
  478 
  479         for (i = 1; i <= skip; i += next_skip) {
  480                 if (scan[i].bm_bighint == (blist_blkno_t)-1) {
  481                         /*
  482                          * Terminator
  483                          */
  484                         break;
  485                 } else if (count <= scan[i].bm_bighint) {
  486                         /*
  487                          * count fits in object
  488                          */
  489                         blist_blkno_t r;
  490                         if (next_skip == 1) {
  491                                 r = blst_leaf_alloc(&scan[i], blk, count);
  492                         } else {
  493                                 r = blst_meta_alloc(&scan[i], blk, count, radix, next_skip - 1);
  494                         }
  495                         if (r != BLIST_NONE) {
  496                                 scan->u.bmu_avail -= count;
  497                                 if (scan->bm_bighint > scan->u.bmu_avail)
  498                                         scan->bm_bighint = scan->u.bmu_avail;
  499                                 return(r);
  500                         }
  501                 } else if (count > radix) {
  502                         /*
  503                          * count does not fit in object even if it were
   504                          * completely free.
  505                          */
  506                         panic("blist_meta_alloc: allocation too large");
  507                 }
  508                 blk += radix;
  509         }
  510 
  511         /*
  512          * We couldn't allocate count in this subtree, update bighint.
  513          */
  514         if (scan->bm_bighint >= count)
  515                 scan->bm_bighint = count - 1;
  516         return(BLIST_NONE);
  517 }
  518 
  519 /*
  520  * BLST_LEAF_FREE() -   free allocated block from leaf bitmap
  521  *
  522  */
  523 
  524 static void
  525 blst_leaf_free(
  526         blmeta_t *scan,
  527         blist_blkno_t blk,
  528         int count
  529 ) {
  530         /*
  531          * free some data in this bitmap
  532          *
  533          * e.g.
  534          *      0000111111111110000
  535          *          \_________/\__/
  536          *              v        n
  537          */
  538         int n = blk & (BLIST_BMAP_RADIX - 1);
  539         blist_bitmap_t mask;
  540 
  541         mask = ((blist_bitmap_t)-1 << n) &
  542             ((blist_bitmap_t)-1 >> (BLIST_BMAP_RADIX - count - n));
  543 
  544         if (scan->u.bmu_bitmap & mask)
  545                 panic("blst_radix_free: freeing free block");
  546         scan->u.bmu_bitmap |= mask;
  547 
  548         /*
  549          * We could probably do a better job here.  We are required to make
  550          * bighint at least as large as the biggest contiguous block of 
  551          * data.  If we just shoehorn it, a little extra overhead will
   552          * be incurred on the next allocation (but only that one typically).
  553          */
  554         scan->bm_bighint = BLIST_BMAP_RADIX;
  555 }
  556 
  557 /*
  558  * BLST_META_FREE() - free allocated blocks from radix tree meta info
  559  *
  560  *      This support routine frees a range of blocks from the bitmap.
  561  *      The range must be entirely enclosed by this radix node.  If a
  562  *      meta node, we break the range down recursively to free blocks
  563  *      in subnodes (which means that this code can free an arbitrary
  564  *      range whereas the allocation code cannot allocate an arbitrary
  565  *      range).
  566  */
  567 
  568 static void 
  569 blst_meta_free(
  570         blmeta_t *scan, 
  571         blist_blkno_t freeBlk,
  572         blist_blkno_t count,
  573         blist_blkno_t radix, 
  574         blist_blkno_t skip,
  575         blist_blkno_t blk
  576 ) {
  577         blist_blkno_t i;
  578         blist_blkno_t next_skip = (skip / BLIST_META_RADIX);
  579 
  580 #if 0
  581         printf("FREE (%" PRIx64 ",%" PRIu64
  582             ") FROM (%" PRIx64 ",%" PRIu64 ")\n",
  583             (uint64_t)freeBlk, (uint64_t)count,
  584             (uint64_t)blk, (uint64_t)radix
  585         );
  586 #endif
  587 
  588         if (scan->u.bmu_avail == 0) {
  589                 /*
  590                  * ALL-ALLOCATED special case, with possible
  591                  * shortcut to ALL-FREE special case.
  592                  */
  593                 scan->u.bmu_avail = count;
  594                 scan->bm_bighint = count;
  595 
  596                 if (count != radix)  {
  597                         for (i = 1; i <= skip; i += next_skip) {
  598                                 if (scan[i].bm_bighint == (blist_blkno_t)-1)
  599                                         break;
  600                                 scan[i].bm_bighint = 0;
  601                                 if (next_skip == 1) {
  602                                         scan[i].u.bmu_bitmap = 0;
  603                                 } else {
  604                                         scan[i].u.bmu_avail = 0;
  605                                 }
  606                         }
  607                         /* fall through */
  608                 }
  609         } else {
  610                 scan->u.bmu_avail += count;
  611                 /* scan->bm_bighint = radix; */
  612         }
  613 
  614         /*
  615          * ALL-FREE special case.
  616          */
  617 
  618         if (scan->u.bmu_avail == radix)
  619                 return;
  620         if (scan->u.bmu_avail > radix)
  621                 panic("blst_meta_free: freeing already free blocks (%"
  622                     PRIu64 ") %" PRIu64 "/%" PRIu64,
  623                     (uint64_t)count,
  624                     (uint64_t)scan->u.bmu_avail,
  625                     (uint64_t)radix);
  626 
  627         /*
  628          * Break the free down into its components
  629          */
  630 
  631         radix /= BLIST_META_RADIX;
  632 
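               /*
                * Locate the child whose range contains freeBlk: the child
                * index is (freeBlk - blk) / radix, and that child's node sits
                * at array offset (index * next_skip + 1) from this meta node.
                */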
  633         i = (freeBlk - blk) / radix;
  634         blk += i * radix;
  635         i = i * next_skip + 1;
  636 
  637         while (i <= skip && blk < freeBlk + count) {
  638                 blist_blkno_t v;
  639 
  640                 v = blk + radix - freeBlk;
  641                 if (v > count)
  642                         v = count;
  643 
  644                 if (scan->bm_bighint == (blist_blkno_t)-1)
  645                         panic("blst_meta_free: freeing unexpected range");
  646 
  647                 if (next_skip == 1) {
  648                         blst_leaf_free(&scan[i], freeBlk, v);
  649                 } else {
  650                         blst_meta_free(&scan[i], freeBlk, v, radix, next_skip - 1, blk);
  651                 }
  652                 if (scan->bm_bighint < scan[i].bm_bighint)
  653                     scan->bm_bighint = scan[i].bm_bighint;
  654                 count -= v;
  655                 freeBlk += v;
  656                 blk += radix;
  657                 i += next_skip;
  658         }
  659 }
  660 
  661 /*
   662  * BLST_COPY() - copy one radix tree to another
  663  *
  664  *      Locates free space in the source tree and frees it in the destination
   665  *      tree.  The space must not already be free in the destination.
  666  */
  667 
  668 static void blst_copy(
  669         blmeta_t *scan, 
  670         blist_blkno_t blk,
  671         blist_blkno_t radix, 
  672         blist_blkno_t skip, 
  673         blist_t dest,
  674         blist_blkno_t count
  675 ) {
  676         blist_blkno_t next_skip;
  677         blist_blkno_t i;
  678 
  679         /*
  680          * Leaf node
  681          */
  682 
  683         if (radix == BLIST_BMAP_RADIX) {
  684                 blist_bitmap_t v = scan->u.bmu_bitmap;
  685 
  686                 if (v == (blist_bitmap_t)-1) {
  687                         blist_free(dest, blk, count);
  688                 } else if (v != 0) {
  689                         int j;
  690 
  691                         for (j = 0; j < BLIST_BMAP_RADIX && j < count; ++j) {
   692                                 if (v & ((blist_bitmap_t)1 << j))
  693                                         blist_free(dest, blk + j, 1);
  694                         }
  695                 }
  696                 return;
  697         }
  698 
  699         /*
  700          * Meta node
  701          */
  702 
  703         if (scan->u.bmu_avail == 0) {
  704                 /*
  705                  * Source all allocated, leave dest allocated
  706                  */
  707                 return;
  708         } 
  709         if (scan->u.bmu_avail == radix) {
  710                 /*
  711                  * Source all free, free entire dest
  712                  */
  713                 if (count < radix)
  714                         blist_free(dest, blk, count);
  715                 else
  716                         blist_free(dest, blk, radix);
  717                 return;
  718         }
  719 
  720 
  721         radix /= BLIST_META_RADIX;
  722         next_skip = (skip / BLIST_META_RADIX);
  723 
  724         for (i = 1; count && i <= skip; i += next_skip) {
  725                 if (scan[i].bm_bighint == (blist_blkno_t)-1)
  726                         break;
  727 
  728                 if (count >= radix) {
  729                         blst_copy(
  730                             &scan[i],
  731                             blk,
  732                             radix,
  733                             next_skip - 1,
  734                             dest,
  735                             radix
  736                         );
  737                         count -= radix;
  738                 } else {
  739                         if (count) {
  740                                 blst_copy(
  741                                     &scan[i],
  742                                     blk,
  743                                     radix,
  744                                     next_skip - 1,
  745                                     dest,
  746                                     count
  747                                 );
  748                         }
  749                         count = 0;
  750                 }
  751                 blk += radix;
  752         }
  753 }
  754 
  755 /*
  756  * BLST_LEAF_FILL() -   allocate specific blocks in leaf bitmap
  757  *
  758  *      This routine allocates all blocks in the specified range
  759  *      regardless of any existing allocations in that range.  Returns
  760  *      the number of blocks allocated by the call.
  761  */
  762 
  763 static int
  764 blst_leaf_fill(blmeta_t *scan, blist_blkno_t blk, int count)
  765 {
  766         int n = blk & (BLIST_BMAP_RADIX - 1);
  767         int nblks;
  768         blist_bitmap_t mask, bitmap;
  769 
  770         mask = ((blist_bitmap_t)-1 << n) &
  771             ((blist_bitmap_t)-1 >> (BLIST_BMAP_RADIX - count - n));
  772 
  773         /* Count the number of blocks we're about to allocate */
  774         bitmap = scan->u.bmu_bitmap & mask;
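               /*
                * Classic popcount loop: each AND with (bitmap - 1) clears the
                * lowest set bit, so nblks counts how many blocks were still
                * free in the range being filled.
                */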
  775         for (nblks = 0; bitmap != 0; nblks++)
  776                 bitmap &= bitmap - 1;
  777 
  778         scan->u.bmu_bitmap &= ~mask;
  779         return nblks;
  780 }
  781 
  782 /*
   783  * BLST_META_FILL() -   allocate specific blocks at a meta node
  784  *
  785  *      This routine allocates the specified range of blocks,
  786  *      regardless of any existing allocations in the range.  The
  787  *      range must be within the extent of this node.  Returns the
  788  *      number of blocks allocated by the call.
  789  */
  790 static blist_blkno_t
  791 blst_meta_fill(
  792         blmeta_t *scan,
  793         blist_blkno_t allocBlk,
  794         blist_blkno_t count,
  795         blist_blkno_t radix, 
  796         blist_blkno_t skip,
  797         blist_blkno_t blk
  798 ) {
  799         blist_blkno_t i;
  800         blist_blkno_t next_skip = (skip / BLIST_META_RADIX);
  801         blist_blkno_t nblks = 0;
  802 
  803         if (count == radix || scan->u.bmu_avail == 0)  {
  804                 /*
  805                  * ALL-ALLOCATED special case
  806                  */
  807                 nblks = scan->u.bmu_avail;
  808                 scan->u.bmu_avail = 0;
  809                 scan->bm_bighint = count;
  810                 return nblks;
  811         }
  812 
  813         if (count > radix)
  814                 panic("blist_meta_fill: allocation too large");
  815 
  816         if (scan->u.bmu_avail == radix) {
  817                 radix /= BLIST_META_RADIX;
  818 
  819                 /*
  820                  * ALL-FREE special case, initialize sublevel
  821                  */
  822                 for (i = 1; i <= skip; i += next_skip) {
  823                         if (scan[i].bm_bighint == (blist_blkno_t)-1)
  824                                 break;
  825                         if (next_skip == 1) {
  826                                 scan[i].u.bmu_bitmap = (blist_bitmap_t)-1;
  827                                 scan[i].bm_bighint = BLIST_BMAP_RADIX;
  828                         } else {
  829                                 scan[i].bm_bighint = radix;
  830                                 scan[i].u.bmu_avail = radix;
  831                         }
  832                 }
  833         } else {
  834                 radix /= BLIST_META_RADIX;
  835         }
  836 
  837         i = (allocBlk - blk) / radix;
  838         blk += i * radix;
  839         i = i * next_skip + 1;
  840 
  841         while (i <= skip && blk < allocBlk + count) {
  842                 blist_blkno_t v;
  843 
  844                 v = blk + radix - allocBlk;
  845                 if (v > count)
  846                         v = count;
  847 
  848                 if (scan->bm_bighint == (blist_blkno_t)-1)
  849                         panic("blst_meta_fill: filling unexpected range");
  850 
  851                 if (next_skip == 1) {
  852                         nblks += blst_leaf_fill(&scan[i], allocBlk, v);
  853                 } else {
  854                         nblks += blst_meta_fill(&scan[i], allocBlk, v,
  855                             radix, next_skip - 1, blk);
  856                 }
  857                 count -= v;
  858                 allocBlk += v;
  859                 blk += radix;
  860                 i += next_skip;
  861         }
  862         scan->u.bmu_avail -= nblks;
  863         return nblks;
  864 }
  865 
  866 /*
  867  * BLST_RADIX_INIT() - initialize radix tree
  868  *
  869  *      Initialize our meta structures and bitmaps and calculate the exact
  870  *      amount of space required to manage 'count' blocks - this space may
  871  *      be considerably less than the calculated radix due to the large
  872  *      RADIX values we use.
  873  */
  874 
  875 static blist_blkno_t    
  876 blst_radix_init(blmeta_t *scan, blist_blkno_t radix, blist_blkno_t skip,
  877     blist_blkno_t count)
  878 {
  879         blist_blkno_t i;
  880         blist_blkno_t next_skip;
  881         blist_blkno_t memindex = 0;
  882 
  883         /*
  884          * Leaf node
  885          */
  886 
  887         if (radix == BLIST_BMAP_RADIX) {
  888                 if (scan) {
  889                         scan->bm_bighint = 0;
  890                         scan->u.bmu_bitmap = 0;
  891                 }
  892                 return(memindex);
  893         }
  894 
  895         /*
  896          * Meta node.  If allocating the entire object we can special
  897          * case it.  However, we need to figure out how much memory
  898          * is required to manage 'count' blocks, so we continue on anyway.
  899          */
  900 
  901         if (scan) {
  902                 scan->bm_bighint = 0;
  903                 scan->u.bmu_avail = 0;
  904         }
  905 
  906         radix /= BLIST_META_RADIX;
  907         next_skip = (skip / BLIST_META_RADIX);
  908 
  909         for (i = 1; i <= skip; i += next_skip) {
  910                 if (count >= radix) {
  911                         /*
  912                          * Allocate the entire object
  913                          */
  914                         memindex = i + blst_radix_init(
  915                             ((scan) ? &scan[i] : NULL),
  916                             radix,
  917                             next_skip - 1,
  918                             radix
  919                         );
  920                         count -= radix;
  921                 } else if (count > 0) {
  922                         /*
  923                          * Allocate a partial object
  924                          */
  925                         memindex = i + blst_radix_init(
  926                             ((scan) ? &scan[i] : NULL),
  927                             radix,
  928                             next_skip - 1,
  929                             count
  930                         );
  931                         count = 0;
  932                 } else {
  933                         /*
  934                          * Add terminator and break out
  935                          */
  936                         if (scan)
  937                                 scan[i].bm_bighint = (blist_blkno_t)-1;
  938                         break;
  939                 }
  940         }
  941         if (memindex < i)
  942                 memindex = i;
  943         return(memindex);
  944 }
  945 
  946 #ifdef BLIST_DEBUG
  947 
  948 static void     
  949 blst_radix_print(blmeta_t *scan, blist_blkno_t blk, blist_blkno_t radix,
  950     blist_blkno_t skip, int tab)
  951 {
  952         blist_blkno_t i;
  953         blist_blkno_t next_skip;
  954         int lastState = 0;
  955 
  956         if (radix == BLIST_BMAP_RADIX) {
  957                 printf(
  958                     "%*.*s(%0*" PRIx64 ",%" PRIu64
  959                     "): bitmap %0*" PRIx64 " big=%" PRIu64 "\n", 
  960                     tab, tab, "",
   961                     (int)(sizeof(blk) * 2),
   962                     (uint64_t)blk,
   963                     (uint64_t)radix,
   964                     (int)(sizeof(scan->u.bmu_bitmap) * 2),
  965                     (uint64_t)scan->u.bmu_bitmap,
  966                     (uint64_t)scan->bm_bighint
  967                 );
  968                 return;
  969         }
  970 
  971         if (scan->u.bmu_avail == 0) {
  972                 printf(
  973                     "%*.*s(%0*" PRIx64 ",%" PRIu64") ALL ALLOCATED\n",
  974                     tab, tab, "",
   975                     (int)(sizeof(blk) * 2),
  976                     (uint64_t)blk,
  977                     (uint64_t)radix
  978                 );
  979                 return;
  980         }
  981         if (scan->u.bmu_avail == radix) {
  982                 printf(
  983                     "%*.*s(%0*" PRIx64 ",%" PRIu64 ") ALL FREE\n",
  984                     tab, tab, "",
   985                     (int)(sizeof(blk) * 2),
  986                     (uint64_t)blk,
  987                     (uint64_t)radix
  988                 );
  989                 return;
  990         }
  991 
  992         printf(
  993             "%*.*s(%0*" PRIx64 ",%" PRIu64 "): subtree (%" PRIu64 "/%"
  994             PRIu64 ") big=%" PRIu64 " {\n",
  995             tab, tab, "",
   996             (int)(sizeof(blk) * 2),
  997             (uint64_t)blk,
  998             (uint64_t)radix,
  999             (uint64_t)scan->u.bmu_avail,
 1000             (uint64_t)radix,
 1001             (uint64_t)scan->bm_bighint
 1002         );
 1003 
 1004         radix /= BLIST_META_RADIX;
 1005         next_skip = (skip / BLIST_META_RADIX);
 1006         tab += 4;
 1007 
 1008         for (i = 1; i <= skip; i += next_skip) {
 1009                 if (scan[i].bm_bighint == (blist_blkno_t)-1) {
 1010                         printf(
 1011                             "%*.*s(%0*" PRIx64 ",%" PRIu64 "): Terminator\n",
 1012                             tab, tab, "",
  1013                             (int)(sizeof(blk) * 2),
 1014                             (uint64_t)blk,
 1015                             (uint64_t)radix
 1016                         );
 1017                         lastState = 0;
 1018                         break;
 1019                 }
 1020                 blst_radix_print(
 1021                     &scan[i],
 1022                     blk,
 1023                     radix,
 1024                     next_skip - 1,
 1025                     tab
 1026                 );
 1027                 blk += radix;
 1028         }
 1029         tab -= 4;
 1030 
 1031         printf(
 1032             "%*.*s}\n",
 1033             tab, tab, ""
 1034         );
 1035 }
 1036 
 1037 #endif
 1038 
 1039 #ifdef BLIST_DEBUG
 1040 
 1041 int
 1042 main(int ac, char **av)
 1043 {
 1044         blist_blkno_t size = 1024;
 1045         int i;
 1046         blist_t bl;
 1047 
 1048         for (i = 1; i < ac; ++i) {
 1049                 const char *ptr = av[i];
 1050                 if (*ptr != '-') {
 1051                         size = strtol(ptr, NULL, 0);
 1052                         continue;
 1053                 }
 1054                 ptr += 2;
 1055                 fprintf(stderr, "Bad option: %s\n", ptr - 2);
 1056                 exit(1);
 1057         }
 1058         bl = blist_create(size);
 1059         blist_free(bl, 0, size);
 1060 
 1061         for (;;) {
 1062                 char buf[1024];
 1063                 uint64_t da = 0;
 1064                 uint64_t count = 0;
 1065 
 1066                 printf("%" PRIu64 "/%" PRIu64 "/%" PRIu64 "> ",
 1067                     (uint64_t)bl->bl_free,
 1068                     (uint64_t)size,
 1069                     (uint64_t)bl->bl_radix);
 1070                 fflush(stdout);
 1071                 if (fgets(buf, sizeof(buf), stdin) == NULL)
 1072                         break;
 1073                 switch(buf[0]) {
 1074                 case 'r':
 1075                         if (sscanf(buf + 1, "%" SCNu64, &count) == 1) {
 1076                                 blist_resize(&bl, count, 1);
 1077                         } else {
 1078                                 printf("?\n");
 1079                         }
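                               /* FALLTHROUGH: no break, so a resize also prints the new tree */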
 1080                 case 'p':
 1081                         blist_print(bl);
 1082                         break;
 1083                 case 'a':
 1084                         if (sscanf(buf + 1, "%" SCNu64, &count) == 1) {
 1085                                 blist_blkno_t blk = blist_alloc(bl, count);
 1086                                 printf("    R=%0*" PRIx64 "\n",
  1087                                     (int)(sizeof(blk) * 2),
 1088                                     (uint64_t)blk);
 1089                         } else {
 1090                                 printf("?\n");
 1091                         }
 1092                         break;
 1093                 case 'f':
 1094                         if (sscanf(buf + 1, "%" SCNx64 " %" SCNu64,
 1095                             &da, &count) == 2) {
 1096                                 blist_free(bl, da, count);
 1097                         } else {
 1098                                 printf("?\n");
 1099                         }
 1100                         break;
 1101                 case 'l':
 1102                         if (sscanf(buf + 1, "%" SCNx64 " %" SCNu64,
 1103                             &da, &count) == 2) {
 1104                                 printf("    n=%" PRIu64 "\n",
 1105                                     (uint64_t)blist_fill(bl, da, count));
 1106                         } else {
 1107                                 printf("?\n");
 1108                         }
 1109                         break;
 1110                 case '?':
 1111                 case 'h':
 1112                         puts(
 1113                             "p          -print\n"
 1114                             "a %d       -allocate\n"
 1115                             "f %x %d    -free\n"
 1116                             "l %x %d    -fill\n"
 1117                             "r %d       -resize\n"
 1118                             "h/?        -help"
 1119                         );
 1120                         break;
 1121                 default:
 1122                         printf("?\n");
 1123                         break;
 1124                 }
 1125         }
 1126         return(0);
 1127 }
 1128 
 1129 void
 1130 panic(const char *ctl, ...)
 1131 {
 1132         va_list va;
 1133 
 1134         va_start(va, ctl);
 1135         vfprintf(stderr, ctl, va);
 1136         fprintf(stderr, "\n");
 1137         va_end(va);
 1138         exit(1);
 1139 }
 1140 
 1141 #endif
 1142 
