                     Dynamic DMA mapping Guide
                     =========================

                 David S. Miller <davem@redhat.com>
                 Richard Henderson <rth@cygnus.com>
                  Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

Most 64-bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses.  This is similar to
how page tables and/or a TLB translate virtual addresses to physical
addresses on a CPU.  This is needed so that e.g. PCI devices can
access any page in the 64-bit physical address space with a Single
Address Cycle (32-bit DMA address).  Previously in Linux those 64-bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the static virt_to_bus() scheme would work (the DMA
address translation tables were simply filled on bootup to map each
bus address to the physical page __pa(bus_to_virt())).

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: namely, it has to take into account that DMA addresses should
be mapped only for the time they are actually used and unmapped after
the DMA transfer.

The following API will, of course, work even on platforms where no
such hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than
the bus-specific DMA API (e.g. pci_dma_*).

First of all, you should make sure

#include <linux/dma-mapping.h>

is in your driver.  This header provides the definition of dma_addr_t,
a type that can hold any valid DMA address for the platform and that
should be used everywhere you hold a DMA (bus) address returned from
the DMA mapping functions.
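
For example, a driver will typically keep such an address in its
private state next to the CPU pointer it was mapped from.  A minimal
sketch (the structure and field names here are hypothetical, not part
of the API):

        struct my_rx_slot {
                void *cpu_addr;         /* CPU's view of the buffer */
                dma_addr_t dma_addr;    /* device's (bus) view, from the DMA API */
                size_t len;             /* length of the mapping */
        };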

                         What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in the data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different from the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

                        DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low-order 24 bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32 bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues with respect
to your device.

The query is performed via a call to dma_set_mask():

        int dma_set_mask(struct device *dev, u64 mask);

The query for consistent allocations is performed via a call to
dma_set_coherent_mask():

        int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  These calls return zero if your card can perform DMA
properly on the machine given the address mask you provided.  In
general, the device struct of your device is embedded in the
bus-specific device struct of your device.  For example, a pointer to
the device struct of your PCI device is pdev->dev (pdev is a pointer
to the PCI device struct of your device).

If a call returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a KERN_WARNING message when
either #2 or #3 is chosen.  In this manner, if a user of your driver
reports that performance is bad or that the device is not even
detected, you can ask them for the kernel messages to find out
exactly why.

A standard 32-bit addressing device would do something like this:

        if (dma_set_mask(dev, DMA_BIT_MASK(32))) {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                goto ignore_this_device;
        }

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but to back down to a 32-bit mask
that should not fail.  The kernel may fail the 64-bit mask not because
the platform is incapable of 64-bit addressing.  Rather, it may fail
in this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64 bits when accessing streaming DMA:

        int using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
        } else {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                goto ignore_this_device;
        }

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

        int using_dac, consistent_using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
                consistent_using_dac = 1;
                dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
                consistent_using_dac = 0;
                dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
        } else {
                printk(KERN_WARNING
                       "mydev: No suitable DMA available.\n");
                goto ignore_this_device;
        }

dma_set_coherent_mask() will always be able to set the same or a
smaller mask than dma_set_mask().  However, for the rare case that a
device driver only uses consistent allocations, one would have to
check the return value of dma_set_coherent_mask().

Finally, if your device can only drive the low 24 bits of address,
you might do something like:

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                printk(KERN_WARNING
                       "mydev: 24-bit DMA addressing not available.\n");
                goto ignore_this_device;
        }

When dma_set_mask() is successful and returns zero, the kernel saves
away this mask you have provided.  The kernel will use this
information later when you make DMA mappings.

One more case is worth mentioning here.  If your device supports
multiple functions (for example a sound card that provides playback
and record functions) and the various different functions have
_different_ DMA addressing limitations, you may wish to probe each
mask and only provide the functionality which the machine can handle.
It is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
                       card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
                       card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

                        Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings, which are usually mapped at driver
  initialization, unmapped at the end, and for which the hardware
  should guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the bus space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may reorder stores to
             normal memory.  Example: if it is important for the
             device to see the first word of a descriptor updated
             before the second, you must do something like:

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU
             write buffers in much the same way as it needs to flush
             write buffers found in PCI bridges (such as by reading a
             register's value after writing it); a sketch of this
             read-back technique follows this list.

- Streaming DMA mappings, which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below),
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
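
As referenced in the note above, here is a minimal sketch of flushing
posted MMIO writes by reading a register back after writing it.  The
register offset and device structure are hypothetical; the point is
only that the read forces preceding writes out of any intervening
write buffers:

        /* Hypothetical doorbell: tell the device a descriptor is ready. */
        static void my_kick_device(struct my_card *cp)
        {
                writel(1, cp->regs + MY_REG_DOORBELL);  /* posted write */
                readl(cp->regs + MY_REG_DOORBELL);      /* read back to flush it */
        }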


                 Using Consistent DMA mappings

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via the DMA mask) that it may address the upper
32 bits, consistent allocation will only return > 32-bit addresses for
DMA if the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N-byte boundaries.

Create a dma_pool like this:

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, alloc);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for alloc; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a dma_pool like this:

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling:

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.
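
Putting those calls together, a driver's setup/teardown path might
look roughly like this (a minimal sketch; the pool name, block size,
and alignment are hypothetical):

        struct dma_pool *pool;
        dma_addr_t dma_handle;
        void *cpu_addr;

        /* One pool of 64-byte, 16-byte-aligned blocks for this device. */
        pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
        if (!pool)
                goto fail;

        cpu_addr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
        if (!cpu_addr)
                goto fail_pool;

        /* ... hand dma_handle to the device, use cpu_addr from the CPU ... */

        dma_pool_free(pool, cpu_addr, dma_handle);
        dma_pool_destroy(pool);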

                        DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

 DMA_BIDIRECTIONAL
 DMA_TO_DEVICE
 DMA_FROM_DEVICE
 DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device";
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging.  You can hold it in a
data structure before you know the precise direction, and this will
help catch cases where your direction tracking logic has failed to
set things up properly.
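
For instance, a driver might initialize a mapping's direction to
DMA_NONE and check it before use (a hypothetical sketch; the state
structure and helpers are not part of the API):

        struct my_mapping_state {
                dma_addr_t dma_addr;
                size_t len;
                enum dma_data_direction dir;
        };

        static void my_state_init(struct my_mapping_state *st)
        {
                st->dir = DMA_NONE;     /* direction not yet known */
        }

        static void my_state_map(struct device *dev,
                                 struct my_mapping_state *st, void *buf)
        {
                /* Catch paths that forgot to set a real direction. */
                if (WARN_ON(st->dir == DMA_NONE))
                        return;
                st->dma_addr = dma_map_single(dev, buf, st->len, st->dir);
        }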

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite: map/unmap them
with the DMA_FROM_DEVICE direction specifier.
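
A hedged sketch of both cases (the buffer bookkeeping around the calls
is simplified and hypothetical):

        dma_addr_t tx_dma, rx_dma;

        /* Transmit: data flows from main memory to the device. */
        tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

        /* Receive: data flows from the device to main memory. */
        rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);

        /* ... and the matching unmaps once each DMA transfer is done: */
        dma_unmap_single(dev, tx_dma, skb->len, DMA_TO_DEVICE);
        dma_unmap_single(dev, rx_dma, rx_len, DMA_FROM_DEVICE);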

                  Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap: one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it:

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() because dma_map_single() could
fail and return an error.  Not all DMA implementations support the
dma_mapping_error() interface, but it is good practice to call it
anyway: it invokes the generic mapping-error check, so your code will
work correctly on all DMA implementations without depending on the
specifics of the underlying one.  Using the returned address without
checking for errors could result in failures ranging from panics to
silent data corruption.  A couple of examples of incorrect ways to
check for errors, which make assumptions about the underlying DMA
implementation, follow; these apply to dma_map_page() as well.

Incorrect example 1:
        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
                goto map_error;
        }

Incorrect example 2:
        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_handle == DMA_ERROR_CODE) {
                goto map_error;
        }

You should call dma_unmap_single() when the DMA activity is finished,
e.g. from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() because dma_map_page() could fail
and return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished,
e.g. from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a
huge advantage for cards which either cannot do scatter-gather or have
a very limited number of scatter-gather entries) and returns the actual
number of sg entries it mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
              the _same_ one you passed into the dma_map_sg call;
              it should _NOT_ be the 'count' value _returned_ from the
              dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per BUS, so fewer devices contend for the
same bus address space) and you could render the machine unusable by
eating up all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer, call one of the DMA unmap routines
dma_unmap_{single,sg}().  If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data.  But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Drivers converted fully to this interface should not use virt_to_bus()
any longer, nor should they use bus_to_virt().  Some drivers have to be
changed a little bit, because there is no longer an equivalent to
bus_to_virt() in the dynamic DMA mapping scheme - you have to always
store the DMA addresses returned by the dma_alloc_coherent(),
dma_pool_alloc(), and dma_map_single() calls (dma_map_sg() stores them
in the scatterlist itself if the platform supports dynamic DMA mapping
in hardware) in your driver structures and/or in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

                        Handling Errors

DMA address space is limited on some architectures, and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned by dma_map_single() and
  dma_map_page() by using dma_mapping_error():

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple-page mapping attempt.  These examples
  apply to dma_map_page() as well.

Example 1:
        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
                dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers
            when a mapping error is detected in the middle)

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.
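
A minimal sketch of that transmit-hook pattern (the private structure
and the device pointer inside it are hypothetical):

        static netdev_tx_t my_start_xmit(struct sk_buff *skb,
                                         struct net_device *ndev)
        {
                struct my_priv *priv = netdev_priv(ndev);
                dma_addr_t mapping;

                mapping = dma_map_single(priv->dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(priv->dev, mapping)) {
                        /* Drop the packet; do not return an error code. */
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
                }

                /* ... queue the descriptor and kick the hardware ... */
                return NETDEV_TX_OK;
        }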

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.
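
A comparable hedged sketch for a queuecommand hook, here using the
scsi_dma_map() helper to map the command's scatterlist (the
surrounding driver logic is hypothetical):

        static int my_queuecommand(struct Scsi_Host *host,
                                   struct scsi_cmnd *cmd)
        {
                int nents;

                nents = scsi_dma_map(cmd);      /* maps the command's sg list */
                if (nents < 0)
                        /* Ask the midlayer to retry the command later. */
                        return SCSI_MLQUEUE_HOST_BUSY;

                /* ... program the hardware using the mapped sg entries ... */
                return 0;
        }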

                Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API), the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after:

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

        ringp->mapping = FOO;
        ringp->len = BAR;

   after:

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after:

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

                        Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent an architecture-specific struct scatterlist; just use
   <asm-generic/scatterlist.h>.  You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share a cache line with
   others.  See arch/arm/include/asm/cache.h as an example, and the
   sketch after this list.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h.  It's a library
   to support the DMA API with multiple types of IOMMUs.  Lots of
   architectures (x86, powerpc, sh, alpha, ia64, microblaze and sparc)
   use it.  Choose one to see how it can be used.  If you need to
   support multiple types of IOMMUs in a single system, the example of
   x86 or powerpc helps.
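
As referenced in item 2 above, a minimal sketch of what an
architecture's cache header might define (modeled loosely on the ARM
example; the shift value is illustrative and varies per architecture):

        /* arch/<arch>/include/asm/cache.h (illustrative) */
        #define L1_CACHE_SHIFT          5
        #define L1_CACHE_BYTES          (1 << L1_CACHE_SHIFT)

        /*
         * kmalloc() must return DMA-safe buffers on non-coherent
         * hardware, so never pack other data into their cache lines.
         */
        #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES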

                           Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>
