FreeBSD/Linux Kernel Cross Reference
sys/dev/ic/rtl8169.c


/*      $NetBSD: rtl8169.c,v 1.72.2.11 2009/08/18 09:46:51 bouyer Exp $ */

/*
 * Copyright (c) 1997, 1998-2003
 *      Bill Paul <wpaul@windriver.com>.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *      This product includes software developed by Bill Paul.
 * 4. Neither the name of the author nor the names of any co-contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
 * THE POSSIBILITY OF SUCH DAMAGE.
 */

#include <sys/cdefs.h>
/* $FreeBSD: /repoman/r/ncvs/src/sys/dev/re/if_re.c,v 1.20 2004/04/11 20:34:08 ru Exp $ */

/*
 * RealTek 8139C+/8169/8169S/8110S PCI NIC driver
 *
 * Written by Bill Paul <wpaul@windriver.com>
 * Senior Networking Software Engineer
 * Wind River Systems
 */

/*
 * This driver is designed to support RealTek's next generation of
 * 10/100 and 10/100/1000 PCI ethernet controllers. There are currently
 * four devices in this family: the RTL8139C+, the RTL8169, the RTL8169S
 * and the RTL8110S.
 *
 * The 8139C+ is a 10/100 ethernet chip. It is backwards compatible
 * with the older 8139 family, however it also supports a special
 * C+ mode of operation that provides several new performance enhancing
 * features. These include:
 *
 *      o Descriptor based DMA mechanism. Each descriptor represents
 *        a single packet fragment. Data buffers may be aligned on
 *        any byte boundary.
 *
 *      o 64-bit DMA
 *
 *      o TCP/IP checksum offload for both RX and TX
 *
 *      o High and normal priority transmit DMA rings
 *
 *      o VLAN tag insertion and extraction
 *
 *      o TCP large send (segmentation offload)
 *
 * Like the 8139, the 8139C+ also has a built-in 10/100 PHY. The C+
 * programming API is fairly straightforward. The RX filtering, EEPROM
 * access and PHY access is the same as it is on the older 8139 series
 * chips.
 *
 * The 8169 is a 64-bit 10/100/1000 gigabit ethernet MAC. It has almost the
 * same programming API and feature set as the 8139C+ with the following
 * differences and additions:
 *
 *      o 1000Mbps mode
 *
 *      o Jumbo frames
 *
 *      o GMII and TBI ports/registers for interfacing with copper
 *        or fiber PHYs
 *
 *      o RX and TX DMA rings can have up to 1024 descriptors
 *        (the 8139C+ allows a maximum of 64)
 *
 *      o Slight differences in register layout from the 8139C+
 *
 * The TX start and timer interrupt registers are at different locations
 * on the 8169 than they are on the 8139C+. Also, the status word in the
 * RX descriptor has a slightly different bit layout. The 8169 does not
 * have a built-in PHY. Most reference boards use a Marvell 88E1000 'Alaska'
 * copper gigE PHY.
 *
 * The 8169S/8110S 10/100/1000 devices have built-in copper gigE PHYs
 * (the 'S' stands for 'single-chip'). These devices have the same
 * programming API as the older 8169, but also have some vendor-specific
 * registers for the on-board PHY. The 8110S is a LAN-on-motherboard
 * part designed to be pin-compatible with the RealTek 8100 10/100 chip.
 *
 * This driver takes advantage of the RX and TX checksum offload and
 * VLAN tag insertion/extraction features. It also implements TX
 * interrupt moderation using the timer interrupt registers, which
 * significantly reduces TX interrupt load. There is also support
 * for jumbo frames, however the 8169/8169S/8110S cannot transmit
 * jumbo frames larger than 7.5K, so the max MTU possible with this
 * driver is 7500 bytes.
 */

#include "bpfilter.h"
#include "vlan.h"

#include <sys/param.h>
#include <sys/endian.h>
#include <sys/systm.h>
#include <sys/sockio.h>
#include <sys/mbuf.h>
#include <sys/malloc.h>
#include <sys/kernel.h>
#include <sys/socket.h>
#include <sys/device.h>

#include <net/if.h>
#include <net/if_arp.h>
#include <net/if_dl.h>
#include <net/if_ether.h>
#include <net/if_media.h>
#include <net/if_vlanvar.h>

#include <netinet/in_systm.h>   /* XXX for IP_MAXPACKET */
#include <netinet/in.h>         /* XXX for IP_MAXPACKET */
#include <netinet/ip.h>         /* XXX for IP_MAXPACKET */

#if NBPFILTER > 0
#include <net/bpf.h>
#endif

#include <machine/bus.h>

#include <dev/mii/mii.h>
#include <dev/mii/miivar.h>

#include <dev/pci/pcireg.h>
#include <dev/pci/pcivar.h>
#include <dev/pci/pcidevs.h>

#include <dev/ic/rtl81x9reg.h>
#include <dev/ic/rtl81x9var.h>

#include <dev/ic/rtl8169var.h>

static inline void re_set_bufaddr(struct re_desc *, bus_addr_t);

static int re_newbuf(struct rtk_softc *, int, struct mbuf *);
static int re_rx_list_init(struct rtk_softc *);
static int re_tx_list_init(struct rtk_softc *);
static void re_rxeof(struct rtk_softc *);
static void re_txeof(struct rtk_softc *);
static void re_tick(void *);
static void re_start(struct ifnet *);
static int re_ioctl(struct ifnet *, u_long, caddr_t);
static int re_init(struct ifnet *);
static void re_stop(struct ifnet *, int);
static void re_watchdog(struct ifnet *);

static void re_shutdown(void *);
static int re_enable(struct rtk_softc *);
static void re_disable(struct rtk_softc *);
static void re_power(int, void *);

static int re_ifmedia_upd(struct ifnet *);
static void re_ifmedia_sts(struct ifnet *, struct ifmediareq *);

static int re_gmii_readreg(struct device *, int, int);
static void re_gmii_writereg(struct device *, int, int, int);

static int re_miibus_readreg(struct device *, int, int);
static void re_miibus_writereg(struct device *, int, int, int);
static void re_miibus_statchg(struct device *);

static void re_reset(struct rtk_softc *);

static inline void
re_set_bufaddr(struct re_desc *d, bus_addr_t addr)
{

        d->re_bufaddr_lo = htole32((uint32_t)addr);
        if (sizeof(bus_addr_t) == sizeof(uint64_t))
                d->re_bufaddr_hi = htole32((uint64_t)addr >> 32);
        else
                d->re_bufaddr_hi = 0;
}

static int
re_gmii_readreg(struct device *self, int phy, int reg)
{
        struct rtk_softc        *sc = (void *)self;
        uint32_t                rval;
        int                     i;

        if (phy != 7)
                return 0;

        /* Let the rgephy driver read the GMEDIASTAT register */

        if (reg == RTK_GMEDIASTAT) {
                rval = CSR_READ_1(sc, RTK_GMEDIASTAT);
                return rval;
        }

        CSR_WRITE_4(sc, RTK_PHYAR, reg << 16);
        DELAY(1000);

        for (i = 0; i < RTK_TIMEOUT; i++) {
                rval = CSR_READ_4(sc, RTK_PHYAR);
                if (rval & RTK_PHYAR_BUSY)
                        break;
                DELAY(100);
        }

        if (i == RTK_TIMEOUT) {
                aprint_error("%s: PHY read failed\n", sc->sc_dev.dv_xname);
                return 0;
        }

        return rval & RTK_PHYAR_PHYDATA;
}

static void
re_gmii_writereg(struct device *dev, int phy, int reg, int data)
{
        struct rtk_softc        *sc = (void *)dev;
        uint32_t                rval;
        int                     i;

        CSR_WRITE_4(sc, RTK_PHYAR, (reg << 16) |
            (data & RTK_PHYAR_PHYDATA) | RTK_PHYAR_BUSY);
        DELAY(1000);

        for (i = 0; i < RTK_TIMEOUT; i++) {
                rval = CSR_READ_4(sc, RTK_PHYAR);
                if (!(rval & RTK_PHYAR_BUSY))
                        break;
                DELAY(100);
        }

        if (i == RTK_TIMEOUT) {
                aprint_error("%s: PHY write reg %x <- %x failed\n",
                    sc->sc_dev.dv_xname, reg, data);
        }
}

static int
re_miibus_readreg(struct device *dev, int phy, int reg)
{
        struct rtk_softc        *sc = (void *)dev;
        uint16_t                rval = 0;
        uint16_t                re8139_reg = 0;
        int                     s;

        s = splnet();

        if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0) {
                rval = re_gmii_readreg(dev, phy, reg);
                splx(s);
                return rval;
        }

        /* Pretend the internal PHY is only at address 0 */
        if (phy) {
                splx(s);
                return 0;
        }
        switch (reg) {
        case MII_BMCR:
                re8139_reg = RTK_BMCR;
                break;
        case MII_BMSR:
                re8139_reg = RTK_BMSR;
                break;
        case MII_ANAR:
                re8139_reg = RTK_ANAR;
                break;
        case MII_ANER:
                re8139_reg = RTK_ANER;
                break;
        case MII_ANLPAR:
                re8139_reg = RTK_LPAR;
                break;
        case MII_PHYIDR1:
        case MII_PHYIDR2:
                splx(s);
                return 0;
        /*
         * Allow the rlphy driver to read the media status
         * register. If we have a link partner which does not
         * support NWAY, this is the register which will tell
         * us the results of parallel detection.
         */
        case RTK_MEDIASTAT:
                rval = CSR_READ_1(sc, RTK_MEDIASTAT);
                splx(s);
                return rval;
        default:
                aprint_error("%s: bad phy register\n", sc->sc_dev.dv_xname);
                splx(s);
                return 0;
        }
        rval = CSR_READ_2(sc, re8139_reg);
        if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0 && re8139_reg == RTK_BMCR) {
                /* 8139C+ has different bit layout. */
                rval &= ~(BMCR_LOOP | BMCR_ISO);
        }
        splx(s);
        return rval;
}

static void
re_miibus_writereg(struct device *dev, int phy, int reg, int data)
{
        struct rtk_softc        *sc = (void *)dev;
        uint16_t                re8139_reg = 0;
        int                     s;

        s = splnet();

        if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0) {
                re_gmii_writereg(dev, phy, reg, data);
                splx(s);
                return;
        }

        /* Pretend the internal PHY is only at address 0 */
        if (phy) {
                splx(s);
                return;
        }
        switch (reg) {
        case MII_BMCR:
                re8139_reg = RTK_BMCR;
                if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0) {
                        /* 8139C+ has different bit layout. */
                        data &= ~(BMCR_LOOP | BMCR_ISO);
                }
                break;
        case MII_BMSR:
                re8139_reg = RTK_BMSR;
                break;
        case MII_ANAR:
                re8139_reg = RTK_ANAR;
                break;
        case MII_ANER:
                re8139_reg = RTK_ANER;
                break;
        case MII_ANLPAR:
                re8139_reg = RTK_LPAR;
                break;
        case MII_PHYIDR1:
        case MII_PHYIDR2:
                splx(s);
                return;
        default:
                aprint_error("%s: bad phy register\n", sc->sc_dev.dv_xname);
                splx(s);
                return;
        }
        CSR_WRITE_2(sc, re8139_reg, data);
        splx(s);
}

static void
re_miibus_statchg(struct device *dev)
{

        return;
}

static void
re_reset(struct rtk_softc *sc)
{
        int             i;

        CSR_WRITE_1(sc, RTK_COMMAND, RTK_CMD_RESET);

        for (i = 0; i < RTK_TIMEOUT; i++) {
                DELAY(10);
                if ((CSR_READ_1(sc, RTK_COMMAND) & RTK_CMD_RESET) == 0)
                        break;
        }
        if (i == RTK_TIMEOUT)
                aprint_error("%s: reset never completed!\n",
                    sc->sc_dev.dv_xname);

        /*
         * NB: Realtek-supplied FreeBSD driver does this only for MACFG_3,
         *     but also says "Rtl8169s sigle chip detected".
         */
        if ((sc->sc_quirk & RTKQ_MACLDPS) != 0)
                CSR_WRITE_1(sc, RTK_LDPS, 1);
}

/*
 * The following routine is designed to test for a defect on some
 * 32-bit 8169 cards. Some of these NICs have the REQ64# and ACK64#
 * lines connected to the bus, however for a 32-bit only card, they
 * should be pulled high. The result of this defect is that the
 * NIC will not work right if you plug it into a 64-bit slot: DMA
 * operations will be done with 64-bit transfers, which will fail
 * because the 64-bit data lines aren't connected.
 *
 * There's no way to work around this (short of taking a soldering
 * iron to the board), however we can detect it. The method we use
 * here is to put the NIC into digital loopback mode, set the receiver
 * to promiscuous mode, and then try to send a frame. We then compare
 * the frame data we sent to what was received. If the data matches,
 * then the NIC is working correctly, otherwise we know the user has
 * a defective NIC which has been mistakenly plugged into a 64-bit PCI
 * slot. In the latter case, there's no way the NIC can work correctly,
 * so we print out a message on the console and abort the device attach.
 */

int
re_diag(struct rtk_softc *sc)
{
        struct ifnet            *ifp = &sc->ethercom.ec_if;
        struct mbuf             *m0;
        struct ether_header     *eh;
        struct re_rxsoft        *rxs;
        struct re_desc          *cur_rx;
        bus_dmamap_t            dmamap;
        uint16_t                status;
        uint32_t                rxstat;
        int                     total_len, i, s, error = 0;
        static const uint8_t    dst[] = { 0x00, 'h', 'e', 'l', 'l', 'o' };
        static const uint8_t    src[] = { 0x00, 'w', 'o', 'r', 'l', 'd' };

        /* Allocate a single mbuf */

        MGETHDR(m0, M_DONTWAIT, MT_DATA);
        if (m0 == NULL)
                return ENOBUFS;

        /*
         * Initialize the NIC in test mode. This sets the chip up
         * so that it can send and receive frames, but performs the
         * following special functions:
         * - Puts receiver in promiscuous mode
         * - Enables digital loopback mode
         * - Leaves interrupts turned off
         */

        ifp->if_flags |= IFF_PROMISC;
        sc->re_testmode = 1;
        re_init(ifp);
        re_stop(ifp, 0);
        DELAY(100000);
        re_init(ifp);

        /* Put some data in the mbuf */

        eh = mtod(m0, struct ether_header *);
        memcpy(eh->ether_dhost, (char *)&dst, ETHER_ADDR_LEN);
        memcpy(eh->ether_shost, (char *)&src, ETHER_ADDR_LEN);
        eh->ether_type = htons(ETHERTYPE_IP);
        m0->m_pkthdr.len = m0->m_len = ETHER_MIN_LEN - ETHER_CRC_LEN;

        /*
         * Queue the packet, start transmission.
         */

        CSR_WRITE_2(sc, RTK_ISR, 0xFFFF);
        s = splnet();
        IF_ENQUEUE(&ifp->if_snd, m0);
        re_start(ifp);
        splx(s);
        m0 = NULL;

        /* Wait for it to propagate through the chip */

        DELAY(100000);
        for (i = 0; i < RTK_TIMEOUT; i++) {
                status = CSR_READ_2(sc, RTK_ISR);
                if ((status & (RTK_ISR_TIMEOUT_EXPIRED | RTK_ISR_RX_OK)) ==
                    (RTK_ISR_TIMEOUT_EXPIRED | RTK_ISR_RX_OK))
                        break;
                DELAY(10);
        }
        if (i == RTK_TIMEOUT) {
                aprint_error("%s: diagnostic failed, failed to receive packet "
                    "in loopback mode\n", sc->sc_dev.dv_xname);
                error = EIO;
                goto done;
        }

        /*
         * The packet should have been dumped into the first
         * entry in the RX DMA ring. Grab it from there.
         */

        rxs = &sc->re_ldata.re_rxsoft[0];
        dmamap = rxs->rxs_dmamap;
        bus_dmamap_sync(sc->sc_dmat, dmamap, 0, dmamap->dm_mapsize,
            BUS_DMASYNC_POSTREAD);
        bus_dmamap_unload(sc->sc_dmat, dmamap);

        m0 = rxs->rxs_mbuf;
        rxs->rxs_mbuf = NULL;
        eh = mtod(m0, struct ether_header *);

        RE_RXDESCSYNC(sc, 0, BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
        cur_rx = &sc->re_ldata.re_rx_list[0];
        rxstat = le32toh(cur_rx->re_cmdstat);
        total_len = rxstat & sc->re_rxlenmask;

        if (total_len != ETHER_MIN_LEN) {
                aprint_error("%s: diagnostic failed, received short packet\n",
                    sc->sc_dev.dv_xname);
                error = EIO;
                goto done;
        }

        /* Test that the received packet data matches what we sent. */

        if (memcmp((char *)&eh->ether_dhost, (char *)&dst, ETHER_ADDR_LEN) ||
            memcmp((char *)&eh->ether_shost, (char *)&src, ETHER_ADDR_LEN) ||
            ntohs(eh->ether_type) != ETHERTYPE_IP) {
                aprint_error("%s: WARNING, DMA FAILURE!\n"
                    "expected TX data: %s/%s/0x%x\n"
                    "received RX data: %s/%s/0x%x\n"
                    "You may have a defective 32-bit NIC plugged "
                    "into a 64-bit PCI slot.\n"
                    "Please re-install the NIC in a 32-bit slot "
                    "for proper operation.\n"
                    "Read the re(4) man page for more details.\n",
                    sc->sc_dev.dv_xname,
                    ether_sprintf(dst),  ether_sprintf(src), ETHERTYPE_IP,
                    ether_sprintf(eh->ether_dhost),
                    ether_sprintf(eh->ether_shost), ntohs(eh->ether_type));
                error = EIO;
        }

 done:
        /* Turn interface off, release resources */

        sc->re_testmode = 0;
        ifp->if_flags &= ~IFF_PROMISC;
        re_stop(ifp, 0);
        if (m0 != NULL)
                m_freem(m0);

        return error;
}


/*
 * Attach the interface. Allocate softc structures, do ifmedia
 * setup and ethernet/BPF attach.
 */
void
re_attach(struct rtk_softc *sc)
{
        u_char                  eaddr[ETHER_ADDR_LEN];
        uint16_t                val;
        struct ifnet            *ifp;
        int                     error = 0, i, addr_len;

        if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0) {
                uint32_t hwrev;

                /* Revision of 8169/8169S/8110s in bits 30..26, 23 */
                hwrev = CSR_READ_4(sc, RTK_TXCFG) & RTK_TXCFG_HWREV;
                switch (hwrev) {
                case RTK_HWREV_8169:
                        sc->sc_quirk |= RTKQ_8169NONS;
                        break;
                case RTK_HWREV_8169S:
                case RTK_HWREV_8110S:
                case RTK_HWREV_8169_8110SB:
                case RTK_HWREV_8169_8110SC:
                        sc->sc_quirk |= RTKQ_MACLDPS;
                        break;
                case RTK_HWREV_8168_SPIN1:
                case RTK_HWREV_8168_SPIN2:
                case RTK_HWREV_8168_SPIN3:
                        sc->sc_quirk |= RTKQ_MACSTAT;
                        break;
                case RTK_HWREV_8168C:
                case RTK_HWREV_8168C_SPIN2:
                case RTK_HWREV_8168CP:
                case RTK_HWREV_8168D:
                        sc->sc_quirk |= RTKQ_DESCV2 | RTKQ_NOEECMD |
                            RTKQ_MACSTAT | RTKQ_CMDSTOP;
                        /*
                         * From FreeBSD driver:
                         *
                         * These (8168/8111) controllers support jumbo frame
                         * but it seems that enabling it requires touching
                         * additional magic registers. Depending on MAC
                         * revisions some controllers need to disable
                         * checksum offload. So disable jumbo frame until
                         * I have better idea what it really requires to
                         * make it support.
                         * RTL8168C/CP : supports up to 6KB jumbo frame.
                         * RTL8111C/CP : supports up to 9KB jumbo frame.
                         */
                        sc->sc_quirk |= RTKQ_NOJUMBO;
                        break;
                case RTK_HWREV_8100E:
                case RTK_HWREV_8100E_SPIN2:
                case RTK_HWREV_8101E:
                        sc->sc_quirk |= RTKQ_NOJUMBO;
                        break;
                case RTK_HWREV_8102E:
                case RTK_HWREV_8102EL:
                case RTK_HWREV_8103E:
                        sc->sc_quirk |= RTKQ_DESCV2 | RTKQ_NOEECMD |
                            RTKQ_MACSTAT | RTKQ_CMDSTOP | RTKQ_NOJUMBO;
                        break;
                default:
                        aprint_normal("%s: Unknown revision (0x%08x)\n",
                            sc->sc_dev.dv_xname, hwrev);
                        /* assume the latest features */
                        sc->sc_quirk |= RTKQ_DESCV2 | RTKQ_NOEECMD;
                        sc->sc_quirk |= RTKQ_NOJUMBO;
                }

                /* Set RX length mask */
                sc->re_rxlenmask = RE_RDESC_STAT_GFRAGLEN;
                sc->re_ldata.re_tx_desc_cnt = RE_TX_DESC_CNT_8169;
        } else {
                sc->sc_quirk |= RTKQ_NOJUMBO;

                /* Set RX length mask */
                sc->re_rxlenmask = RE_RDESC_STAT_FRAGLEN;
                sc->re_ldata.re_tx_desc_cnt = RE_TX_DESC_CNT_8139;
        }

        /* Reset the adapter. */
        re_reset(sc);

        if ((sc->sc_quirk & RTKQ_NOEECMD) != 0) {
                /*
                 * Get station address from ID registers.
                 */
                for (i = 0; i < ETHER_ADDR_LEN; i++)
                        eaddr[i] = CSR_READ_1(sc, RTK_IDR0 + i);
        } else {
                /*
                 * Determine the EEPROM address width from the ID word.
                 */
                if (rtk_read_eeprom(sc, RTK_EE_ID, RTK_EEADDR_LEN1) == 0x8129)
                        addr_len = RTK_EEADDR_LEN1;
                else
                        addr_len = RTK_EEADDR_LEN0;

                /*
                 * Get station address from the EEPROM.
                 */
                for (i = 0; i < ETHER_ADDR_LEN / 2; i++) {
                        val = rtk_read_eeprom(sc, RTK_EE_EADDR0 + i, addr_len);
                        eaddr[(i * 2) + 0] = val & 0xff;
                        eaddr[(i * 2) + 1] = val >> 8;
                }
        }

        aprint_normal("%s: Ethernet address %s\n",
            sc->sc_dev.dv_xname, ether_sprintf(eaddr));

        if (sc->re_ldata.re_tx_desc_cnt >
            PAGE_SIZE / sizeof(struct re_desc)) {
                sc->re_ldata.re_tx_desc_cnt =
                    PAGE_SIZE / sizeof(struct re_desc);
        }

        aprint_verbose("%s: using %d tx descriptors\n",
            sc->sc_dev.dv_xname, sc->re_ldata.re_tx_desc_cnt);
        KASSERT(RE_NEXT_TX_DESC(sc, RE_TX_DESC_CNT(sc) - 1) == 0);

        /* Allocate DMA'able memory for the TX ring */
        if ((error = bus_dmamem_alloc(sc->sc_dmat, RE_TX_LIST_SZ(sc),
            RE_RING_ALIGN, 0, &sc->re_ldata.re_tx_listseg, 1,
            &sc->re_ldata.re_tx_listnseg, BUS_DMA_NOWAIT)) != 0) {
                aprint_error("%s: can't allocate tx listseg, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_0;
        }

        /* Load the map for the TX ring. */
        if ((error = bus_dmamem_map(sc->sc_dmat, &sc->re_ldata.re_tx_listseg,
            sc->re_ldata.re_tx_listnseg, RE_TX_LIST_SZ(sc),
            (caddr_t *)&sc->re_ldata.re_tx_list,
            BUS_DMA_COHERENT | BUS_DMA_NOWAIT)) != 0) {
                aprint_error("%s: can't map tx list, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_1;
        }
        memset(sc->re_ldata.re_tx_list, 0, RE_TX_LIST_SZ(sc));

        if ((error = bus_dmamap_create(sc->sc_dmat, RE_TX_LIST_SZ(sc), 1,
            RE_TX_LIST_SZ(sc), 0, 0,
            &sc->re_ldata.re_tx_list_map)) != 0) {
                aprint_error("%s: can't create tx list map, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_2;
        }

        if ((error = bus_dmamap_load(sc->sc_dmat,
            sc->re_ldata.re_tx_list_map, sc->re_ldata.re_tx_list,
            RE_TX_LIST_SZ(sc), NULL, BUS_DMA_NOWAIT)) != 0) {
                aprint_error("%s: can't load tx list, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_3;
        }

        /* Create DMA maps for TX buffers */
        for (i = 0; i < RE_TX_QLEN; i++) {
                error = bus_dmamap_create(sc->sc_dmat,
                    round_page(IP_MAXPACKET),
                    RE_TX_DESC_CNT(sc), RE_TDESC_CMD_FRAGLEN,
                    0, 0, &sc->re_ldata.re_txq[i].txq_dmamap);
                if (error) {
                        aprint_error("%s: can't create DMA map for TX\n",
                            sc->sc_dev.dv_xname);
                        goto fail_4;
                }
        }

        /* Allocate DMA'able memory for the RX ring */
        /* XXX see also a comment about RE_RX_DMAMEM_SZ in rtl81x9var.h */
        if ((error = bus_dmamem_alloc(sc->sc_dmat,
            RE_RX_DMAMEM_SZ, RE_RING_ALIGN, 0, &sc->re_ldata.re_rx_listseg, 1,
            &sc->re_ldata.re_rx_listnseg, BUS_DMA_NOWAIT)) != 0) {
                aprint_error("%s: can't allocate rx listseg, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_4;
        }

        /* Load the map for the RX ring. */
        if ((error = bus_dmamem_map(sc->sc_dmat, &sc->re_ldata.re_rx_listseg,
            sc->re_ldata.re_rx_listnseg, RE_RX_DMAMEM_SZ,
            (caddr_t *)&sc->re_ldata.re_rx_list,
            BUS_DMA_COHERENT | BUS_DMA_NOWAIT)) != 0) {
                aprint_error("%s: can't map rx list, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_5;
        }
        memset(sc->re_ldata.re_rx_list, 0, RE_RX_DMAMEM_SZ);

        if ((error = bus_dmamap_create(sc->sc_dmat,
            RE_RX_DMAMEM_SZ, 1, RE_RX_DMAMEM_SZ, 0, 0,
            &sc->re_ldata.re_rx_list_map)) != 0) {
                aprint_error("%s: can't create rx list map, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_6;
        }

        if ((error = bus_dmamap_load(sc->sc_dmat,
            sc->re_ldata.re_rx_list_map, sc->re_ldata.re_rx_list,
            RE_RX_DMAMEM_SZ, NULL, BUS_DMA_NOWAIT)) != 0) {
                aprint_error("%s: can't load rx list, error = %d\n",
                    sc->sc_dev.dv_xname, error);
                goto fail_7;
        }

        /* Create DMA maps for RX buffers */
        for (i = 0; i < RE_RX_DESC_CNT; i++) {
                error = bus_dmamap_create(sc->sc_dmat, MCLBYTES, 1, MCLBYTES,
                    0, 0, &sc->re_ldata.re_rxsoft[i].rxs_dmamap);
                if (error) {
                        aprint_error("%s: can't create DMA map for RX\n",
                            sc->sc_dev.dv_xname);
                        goto fail_8;
                }
        }

        /*
         * Record interface as attached. From here, we should not fail.
         */
        sc->sc_flags |= RTK_ATTACHED;

        ifp = &sc->ethercom.ec_if;
        ifp->if_softc = sc;
        strcpy(ifp->if_xname, sc->sc_dev.dv_xname);
        ifp->if_mtu = ETHERMTU;
        ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
  794         ifp->if_ioctl = re_ioctl;
  795         sc->ethercom.ec_capabilities |=
  796             ETHERCAP_VLAN_MTU | ETHERCAP_VLAN_HWTAGGING;
  797         ifp->if_start = re_start;
  798         ifp->if_stop = re_stop;
  799 
  800         /*
  801          * IFCAP_CSUM_IPv4_Tx on re(4) is broken for small packets,
  802          * so we have a workaround to handle the bug by padding
  803          * such packets manually.
  804          */
  805         ifp->if_capabilities |=
  806             IFCAP_CSUM_IPv4_Tx | IFCAP_CSUM_IPv4_Rx |
  807             IFCAP_CSUM_TCPv4_Tx | IFCAP_CSUM_TCPv4_Rx |
  808             IFCAP_CSUM_UDPv4_Tx | IFCAP_CSUM_UDPv4_Rx |
  809             IFCAP_TSOv4;
  810 
  811         /*
  812          * XXX
  813          * Still have no idea how to make TSO work on 8168C, 8168CP,
  814          * 8102E, 8111C and 8111CP.
  815          */
  816         if ((sc->sc_quirk & RTKQ_DESCV2) != 0)
  817                 ifp->if_capabilities &= ~IFCAP_TSOv4;
  818 
  819         ifp->if_watchdog = re_watchdog;
  820         ifp->if_init = re_init;
  821         ifp->if_snd.ifq_maxlen = RE_IFQ_MAXLEN;
  822         ifp->if_capenable = ifp->if_capabilities;
  823         IFQ_SET_READY(&ifp->if_snd);
  824 
  825         callout_init(&sc->rtk_tick_ch);
  826 
  827         /* Do MII setup */
  828         sc->mii.mii_ifp = ifp;
  829         sc->mii.mii_readreg = re_miibus_readreg;
  830         sc->mii.mii_writereg = re_miibus_writereg;
  831         sc->mii.mii_statchg = re_miibus_statchg;
  832         ifmedia_init(&sc->mii.mii_media, IFM_IMASK, re_ifmedia_upd,
  833             re_ifmedia_sts);
  834         mii_attach(&sc->sc_dev, &sc->mii, 0xffffffff, MII_PHY_ANY,
  835             MII_OFFSET_ANY, 0);
  836         ifmedia_set(&sc->mii.mii_media, IFM_ETHER | IFM_AUTO);
  837 
  838         /*
  839          * Call MI attach routine.
  840          */
  841         if_attach(ifp);
  842         ether_ifattach(ifp, eaddr);
  843 
  844 
  845         /*
  846          * Make sure the interface is shutdown during reboot.
  847          */
  848         sc->sc_sdhook = shutdownhook_establish(re_shutdown, sc);
  849         if (sc->sc_sdhook == NULL)
  850                 aprint_error("%s: WARNING: unable to establish shutdown hook\n",
  851                     sc->sc_dev.dv_xname);
  852         /*
  853          * Add a suspend hook to make sure we come back up after a
  854          * resume.
  855          */
  856         sc->sc_powerhook = powerhook_establish(sc->sc_dev.dv_xname,
  857             re_power, sc);
  858         if (sc->sc_powerhook == NULL)
  859                 aprint_error("%s: WARNING: unable to establish power hook\n",
  860                     sc->sc_dev.dv_xname);
  861 
  862 
  863         return;
  864 
  865  fail_8:
  866         /* Destroy DMA maps for RX buffers. */
  867         for (i = 0; i < RE_RX_DESC_CNT; i++)
  868                 if (sc->re_ldata.re_rxsoft[i].rxs_dmamap != NULL)
  869                         bus_dmamap_destroy(sc->sc_dmat,
  870                             sc->re_ldata.re_rxsoft[i].rxs_dmamap);
  871 
  872         /* Free DMA'able memory for the RX ring. */
  873         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  874  fail_7:
  875         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  876  fail_6:
  877         bus_dmamem_unmap(sc->sc_dmat,
  878             (caddr_t)sc->re_ldata.re_rx_list, RE_RX_DMAMEM_SZ);
  879  fail_5:
  880         bus_dmamem_free(sc->sc_dmat,
  881             &sc->re_ldata.re_rx_listseg, sc->re_ldata.re_rx_listnseg);
  882 
  883  fail_4:
  884         /* Destroy DMA maps for TX buffers. */
  885         for (i = 0; i < RE_TX_QLEN; i++)
  886                 if (sc->re_ldata.re_txq[i].txq_dmamap != NULL)
  887                         bus_dmamap_destroy(sc->sc_dmat,
  888                             sc->re_ldata.re_txq[i].txq_dmamap);
  889 
  890         /* Free DMA'able memory for the TX ring. */
  891         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  892  fail_3:
  893         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  894  fail_2:
  895         bus_dmamem_unmap(sc->sc_dmat,
  896             (caddr_t)sc->re_ldata.re_tx_list, RE_TX_LIST_SZ(sc));
  897  fail_1:
  898         bus_dmamem_free(sc->sc_dmat,
  899             &sc->re_ldata.re_tx_listseg, sc->re_ldata.re_tx_listnseg);
  900  fail_0:
  901         return;
  902 }
  903 
  904 
  905 /*
  906  * re_activate:
  907  *     Handle device activation/deactivation requests.
  908  */
  909 int
  910 re_activate(struct device *self, enum devact act)
  911 {
  912         struct rtk_softc *sc = (void *)self;
  913         int s, error = 0;
  914 
  915         s = splnet();
  916         switch (act) {
  917         case DVACT_ACTIVATE:
  918                 error = EOPNOTSUPP;
  919                 break;
  920         case DVACT_DEACTIVATE:
  921                 mii_activate(&sc->mii, act, MII_PHY_ANY, MII_OFFSET_ANY);
  922                 if_deactivate(&sc->ethercom.ec_if);
  923                 break;
  924         }
  925         splx(s);
  926 
  927         return error;
  928 }
  929 
  930 /*
  931  * re_detach:
  932  *     Detach a rtk interface.
  933  */
  934 int
  935 re_detach(struct rtk_softc *sc)
  936 {
  937         struct ifnet *ifp = &sc->ethercom.ec_if;
  938         int i;
  939 
  940         /*
  941          * Succeed now if there isn't any work to do.
  942          */
  943         if ((sc->sc_flags & RTK_ATTACHED) == 0)
  944                 return 0;
  945 
  946         /* Unhook our tick handler. */
  947         callout_stop(&sc->rtk_tick_ch);
  948 
  949         /* Detach all PHYs. */
  950         mii_detach(&sc->mii, MII_PHY_ANY, MII_OFFSET_ANY);
  951 
  952         /* Delete all remaining media. */
  953         ifmedia_delete_instance(&sc->mii.mii_media, IFM_INST_ANY);
  954 
  955         ether_ifdetach(ifp);
  956         if_detach(ifp);
  957 
  958         /* Destroy DMA maps for RX buffers. */
  959         for (i = 0; i < RE_RX_DESC_CNT; i++)
  960                 if (sc->re_ldata.re_rxsoft[i].rxs_dmamap != NULL)
  961                         bus_dmamap_destroy(sc->sc_dmat,
  962                             sc->re_ldata.re_rxsoft[i].rxs_dmamap);
  963 
  964         /* Free DMA'able memory for the RX ring. */
  965         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  966         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  967         bus_dmamem_unmap(sc->sc_dmat,
  968             (caddr_t)sc->re_ldata.re_rx_list, RE_RX_DMAMEM_SZ);
  969         bus_dmamem_free(sc->sc_dmat,
  970             &sc->re_ldata.re_rx_listseg, sc->re_ldata.re_rx_listnseg);
  971 
  972         /* Destroy DMA maps for TX buffers. */
  973         for (i = 0; i < RE_TX_QLEN; i++)
  974                 if (sc->re_ldata.re_txq[i].txq_dmamap != NULL)
  975                         bus_dmamap_destroy(sc->sc_dmat,
  976                             sc->re_ldata.re_txq[i].txq_dmamap);
  977 
  978         /* Free DMA'able memory for the TX ring. */
  979         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  980         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  981         bus_dmamem_unmap(sc->sc_dmat,
  982             (caddr_t)sc->re_ldata.re_tx_list, RE_TX_LIST_SZ(sc));
  983         bus_dmamem_free(sc->sc_dmat,
  984             &sc->re_ldata.re_tx_listseg, sc->re_ldata.re_tx_listnseg);
  985 
  986 
  987         shutdownhook_disestablish(sc->sc_sdhook);
  988         powerhook_disestablish(sc->sc_powerhook);
  989 
  990         return 0;
  991 }
  992 
  993 /*
  994  * re_enable:
  995  *     Enable the RTL81X9 chip.
  996  */
  997 static int
  998 re_enable(struct rtk_softc *sc)
  999 {
 1000 
 1001         if (RTK_IS_ENABLED(sc) == 0 && sc->sc_enable != NULL) {
 1002                 if ((*sc->sc_enable)(sc) != 0) {
 1003                         aprint_error("%s: device enable failed\n",
 1004                             sc->sc_dev.dv_xname);
 1005                         return EIO;
 1006                 }
 1007                 sc->sc_flags |= RTK_ENABLED;
 1008         }
 1009         return 0;
 1010 }
 1011 
 1012 /*
 1013  * re_disable:
 1014  *     Disable the RTL81X9 chip.
 1015  */
 1016 static void
 1017 re_disable(struct rtk_softc *sc)
 1018 {
 1019 
 1020         if (RTK_IS_ENABLED(sc) && sc->sc_disable != NULL) {
 1021                 (*sc->sc_disable)(sc);
 1022                 sc->sc_flags &= ~RTK_ENABLED;
 1023         }
 1024 }
 1025 
 1026 /*
 1027  * re_power:
 1028  *     Power management (suspend/resume) hook.
 1029  */
 1030 void
 1031 re_power(int why, void *arg)
 1032 {
 1033         struct rtk_softc *sc = (void *)arg;
 1034         struct ifnet *ifp = &sc->ethercom.ec_if;
 1035         int s;
 1036 
 1037         s = splnet();
 1038         switch (why) {
 1039         case PWR_SUSPEND:
 1040         case PWR_STANDBY:
 1041                 re_stop(ifp, 0);
 1042                 if (sc->sc_power != NULL)
 1043                         (*sc->sc_power)(sc, why);
 1044                 break;
 1045         case PWR_RESUME:
 1046                 if (ifp->if_flags & IFF_UP) {
 1047                         if (sc->sc_power != NULL)
 1048                                 (*sc->sc_power)(sc, why);
 1049                         re_init(ifp);
 1050                 }
 1051                 break;
 1052         case PWR_SOFTSUSPEND:
 1053         case PWR_SOFTSTANDBY:
 1054         case PWR_SOFTRESUME:
 1055                 break;
 1056         }
 1057         splx(s);
 1058 }
 1059 
 1060 
 1061 static int
 1062 re_newbuf(struct rtk_softc *sc, int idx, struct mbuf *m)
 1063 {
 1064         struct mbuf             *n = NULL;
 1065         bus_dmamap_t            map;
 1066         struct re_desc          *d;
 1067         struct re_rxsoft        *rxs;
 1068         uint32_t                cmdstat;
 1069         int                     error;
 1070 
 1071         if (m == NULL) {
 1072                 MGETHDR(n, M_DONTWAIT, MT_DATA);
 1073                 if (n == NULL)
 1074                         return ENOBUFS;
 1075 
 1076                 MCLGET(n, M_DONTWAIT);
 1077                 if ((n->m_flags & M_EXT) == 0) {
 1078                         m_freem(n);
 1079                         return ENOBUFS;
 1080                 }
 1081                 m = n;
 1082         } else
 1083                 m->m_data = m->m_ext.ext_buf;
 1084 
 1085         /*
 1086          * Initialize mbuf length fields and fixup
 1087          * alignment so that the frame payload is
 1088          * longword aligned.
 1089          */
 1090         m->m_len = m->m_pkthdr.len = MCLBYTES - RE_ETHER_ALIGN;
 1091         m->m_data += RE_ETHER_ALIGN;
 1092 
 1093         rxs = &sc->re_ldata.re_rxsoft[idx];
 1094         map = rxs->rxs_dmamap;
 1095         error = bus_dmamap_load_mbuf(sc->sc_dmat, map, m,
 1096             BUS_DMA_READ|BUS_DMA_NOWAIT);
 1097 
 1098         if (error)
 1099                 goto out;
 1100 
 1101         bus_dmamap_sync(sc->sc_dmat, map, 0, map->dm_mapsize,
 1102             BUS_DMASYNC_PREREAD);
 1103 
 1104         d = &sc->re_ldata.re_rx_list[idx];
 1105 #ifdef DIAGNOSTIC
 1106         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1107         cmdstat = le32toh(d->re_cmdstat);
 1108         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_PREREAD);
 1109         if (cmdstat & RE_RDESC_STAT_OWN) {
 1110                 panic("%s: tried to map busy RX descriptor",
 1111                     sc->sc_dev.dv_xname);
 1112         }
 1113 #endif
 1114 
 1115         rxs->rxs_mbuf = m;
 1116 
 1117         d->re_vlanctl = 0;
 1118         cmdstat = map->dm_segs[0].ds_len;
 1119         if (idx == (RE_RX_DESC_CNT - 1))
 1120                 cmdstat |= RE_RDESC_CMD_EOR;
 1121         re_set_bufaddr(d, map->dm_segs[0].ds_addr);
 1122         d->re_cmdstat = htole32(cmdstat);
 1123         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1124         cmdstat |= RE_RDESC_CMD_OWN;
 1125         d->re_cmdstat = htole32(cmdstat);
 1126         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1127 
 1128         return 0;
 1129  out:
 1130         if (n != NULL)
 1131                 m_freem(n);
 1132         return ENOMEM;
 1133 }
 1134 
 1135 static int
 1136 re_tx_list_init(struct rtk_softc *sc)
 1137 {
 1138         int i;
 1139 
 1140         memset(sc->re_ldata.re_tx_list, 0, RE_TX_LIST_SZ(sc));
 1141         for (i = 0; i < RE_TX_QLEN; i++) {
 1142                 sc->re_ldata.re_txq[i].txq_mbuf = NULL;
 1143         }
 1144 
 1145         bus_dmamap_sync(sc->sc_dmat,
 1146             sc->re_ldata.re_tx_list_map, 0,
 1147             sc->re_ldata.re_tx_list_map->dm_mapsize,
 1148             BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1149         sc->re_ldata.re_txq_prodidx = 0;
 1150         sc->re_ldata.re_txq_considx = 0;
 1151         sc->re_ldata.re_txq_free = RE_TX_QLEN;
 1152         sc->re_ldata.re_tx_free = RE_TX_DESC_CNT(sc);
 1153         sc->re_ldata.re_tx_nextfree = 0;
 1154 
 1155         return 0;
 1156 }
 1157 
 1158 static int
 1159 re_rx_list_init(struct rtk_softc *sc)
 1160 {
 1161         int                     i;
 1162 
 1163         memset((char *)sc->re_ldata.re_rx_list, 0, RE_RX_LIST_SZ);
 1164 
 1165         for (i = 0; i < RE_RX_DESC_CNT; i++) {
 1166                 if (re_newbuf(sc, i, NULL) == ENOBUFS)
 1167                         return ENOBUFS;
 1168         }
 1169 
 1170         sc->re_ldata.re_rx_prodidx = 0;
 1171         sc->re_head = sc->re_tail = NULL;
 1172 
 1173         return 0;
 1174 }
 1175 
 1176 /*
 1177  * RX handler for C+ and 8169. For the gigE chips, we support
 1178  * the reception of jumbo frames that have been fragmented
 1179  * across multiple 2K mbuf cluster buffers.
 1180  */
 1181 static void
 1182 re_rxeof(struct rtk_softc *sc)
 1183 {
 1184         struct mbuf             *m;
 1185         struct ifnet            *ifp;
 1186         int                     i, total_len;
 1187         struct re_desc          *cur_rx;
 1188         struct re_rxsoft        *rxs;
 1189         uint32_t                rxstat, rxvlan;
 1190 
 1191         ifp = &sc->ethercom.ec_if;
 1192 
 1193         for (i = sc->re_ldata.re_rx_prodidx;; i = RE_NEXT_RX_DESC(sc, i)) {
 1194                 cur_rx = &sc->re_ldata.re_rx_list[i];
 1195                 RE_RXDESCSYNC(sc, i,
 1196                     BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1197                 rxstat = le32toh(cur_rx->re_cmdstat);
 1198                 rxvlan = le32toh(cur_rx->re_vlanctl);
 1199                 RE_RXDESCSYNC(sc, i, BUS_DMASYNC_PREREAD);
 1200                 if ((rxstat & RE_RDESC_STAT_OWN) != 0) {
 1201                         break;
 1202                 }
 1203                 total_len = rxstat & sc->re_rxlenmask;
 1204                 rxs = &sc->re_ldata.re_rxsoft[i];
 1205                 m = rxs->rxs_mbuf;
 1206 
 1207                 /* Invalidate the RX mbuf and unload its map */
 1208 
 1209                 bus_dmamap_sync(sc->sc_dmat,
 1210                     rxs->rxs_dmamap, 0, rxs->rxs_dmamap->dm_mapsize,
 1211                     BUS_DMASYNC_POSTREAD);
 1212                 bus_dmamap_unload(sc->sc_dmat, rxs->rxs_dmamap);
 1213 
 1214                 if ((rxstat & RE_RDESC_STAT_EOF) == 0) {
 1215                         m->m_len = MCLBYTES - RE_ETHER_ALIGN;
 1216                         if (sc->re_head == NULL)
 1217                                 sc->re_head = sc->re_tail = m;
 1218                         else {
 1219                                 m->m_flags &= ~M_PKTHDR;
 1220                                 sc->re_tail->m_next = m;
 1221                                 sc->re_tail = m;
 1222                         }
 1223                         re_newbuf(sc, i, NULL);
 1224                         continue;
 1225                 }
 1226 
 1227                 /*
 1228                  * NOTE: for the 8139C+, the frame length field
 1229                  * is always 12 bits in size, but for the gigE chips,
 1230                  * it is 13 bits (since the max RX frame length is 16K).
 1231                  * Unfortunately, all 32 bits in the status word
 1232                  * were already used, so to make room for the extra
 1233                  * length bit, RealTek took out the 'frame alignment
 1234                  * error' bit and shifted the other status bits
 1235                  * over one slot. The OWN, EOR, FS and LS bits are
 1236                  * still in the same places. We have already extracted
 1237                  * the frame length and checked the OWN bit, so rather
 1238                  * than using an alternate bit mapping, we shift the
 1239                  * status bits one space to the right so we can evaluate
 1240                  * them using the 8169 status as though it was in the
 1241                  * same format as that of the 8139C+.
 1242                  */
 1243                 if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0)
 1244                         rxstat >>= 1;
 1245 
 1246                 if (__predict_false((rxstat & RE_RDESC_STAT_RXERRSUM) != 0)) {
 1247 #ifdef RE_DEBUG
 1248                         aprint_error("%s: RX error (rxstat = 0x%08x)",
 1249                             sc->sc_dev.dv_xname, rxstat);
 1250                         if (rxstat & RE_RDESC_STAT_FRALIGN)
 1251                                 aprint_error(", frame alignment error");
 1252                         if (rxstat & RE_RDESC_STAT_BUFOFLOW)
 1253                                 aprint_error(", out of buffer space");
 1254                         if (rxstat & RE_RDESC_STAT_FIFOOFLOW)
 1255                                 aprint_error(", FIFO overrun");
 1256                         if (rxstat & RE_RDESC_STAT_GIANT)
 1257                                 aprint_error(", giant packet");
 1258                         if (rxstat & RE_RDESC_STAT_RUNT)
 1259                                 aprint_error(", runt packet");
 1260                         if (rxstat & RE_RDESC_STAT_CRCERR)
 1261                                 aprint_error(", CRC error");
 1262                         aprint_error("\n");
 1263 #endif
 1264                         ifp->if_ierrors++;
 1265                         /*
 1266                          * If this is part of a multi-fragment packet,
 1267                          * discard all the pieces.
 1268                          */
 1269                         if (sc->re_head != NULL) {
 1270                                 m_freem(sc->re_head);
 1271                                 sc->re_head = sc->re_tail = NULL;
 1272                         }
 1273                         re_newbuf(sc, i, m);
 1274                         continue;
 1275                 }
 1276 
 1277                 /*
 1278                  * If allocating a replacement mbuf fails,
 1279                  * reload the current one.
 1280                  */
 1281 
 1282                 if (__predict_false(re_newbuf(sc, i, NULL) != 0)) {
 1283                         ifp->if_ierrors++;
 1284                         if (sc->re_head != NULL) {
 1285                                 m_freem(sc->re_head);
 1286                                 sc->re_head = sc->re_tail = NULL;
 1287                         }
 1288                         re_newbuf(sc, i, m);
 1289                         continue;
 1290                 }
 1291 
 1292                 if (sc->re_head != NULL) {
 1293                         m->m_len = total_len % (MCLBYTES - RE_ETHER_ALIGN);
 1294                         /*
 1295                          * Special case: if there's 4 bytes or less
 1296                          * in this buffer, the mbuf can be discarded:
 1297                          * the last 4 bytes is the CRC, which we don't
 1298                          * care about anyway.
 1299                          */
 1300                         if (m->m_len <= ETHER_CRC_LEN) {
 1301                                 sc->re_tail->m_len -=
 1302                                     (ETHER_CRC_LEN - m->m_len);
 1303                                 m_freem(m);
 1304                         } else {
 1305                                 m->m_len -= ETHER_CRC_LEN;
 1306                                 m->m_flags &= ~M_PKTHDR;
 1307                                 sc->re_tail->m_next = m;
 1308                         }
 1309                         m = sc->re_head;
 1310                         sc->re_head = sc->re_tail = NULL;
 1311                         m->m_pkthdr.len = total_len - ETHER_CRC_LEN;
 1312                 } else
 1313                         m->m_pkthdr.len = m->m_len =
 1314                             (total_len - ETHER_CRC_LEN);
 1315 
 1316                 ifp->if_ipackets++;
 1317                 m->m_pkthdr.rcvif = ifp;
 1318 
 1319                 /* Do RX checksumming */
 1320                 if ((sc->sc_quirk & RTKQ_DESCV2) == 0) {
 1321                         /* Check IP header checksum */
 1322                         if ((rxstat & RE_RDESC_STAT_PROTOID) != 0) {
 1323                                 m->m_pkthdr.csum_flags |= M_CSUM_IPv4;
 1324                                 if (rxstat & RE_RDESC_STAT_IPSUMBAD)
 1325                                         m->m_pkthdr.csum_flags |=
 1326                                             M_CSUM_IPv4_BAD;
 1327 
 1328                                 /* Check TCP/UDP checksum */
 1329                                 if (RE_TCPPKT(rxstat)) {
 1330                                         m->m_pkthdr.csum_flags |= M_CSUM_TCPv4;
 1331                                         if (rxstat & RE_RDESC_STAT_TCPSUMBAD)
 1332                                                 m->m_pkthdr.csum_flags |=
 1333                                                     M_CSUM_TCP_UDP_BAD;
 1334                                 } else if (RE_UDPPKT(rxstat)) {
 1335                                         m->m_pkthdr.csum_flags |= M_CSUM_UDPv4;
 1336                                         if (rxstat & RE_RDESC_STAT_UDPSUMBAD)
 1337                                                 m->m_pkthdr.csum_flags |=
 1338                                                     M_CSUM_TCP_UDP_BAD;
 1339                                 }
 1340                         }
 1341                 } else {
 1342                         /* Check IPv4 header checksum */
 1343                         if ((rxvlan & RE_RDESC_VLANCTL_IPV4) != 0) {
 1344                                 m->m_pkthdr.csum_flags |= M_CSUM_IPv4;
 1345                                 if (rxstat & RE_RDESC_STAT_IPSUMBAD)
 1346                                         m->m_pkthdr.csum_flags |=
 1347                                             M_CSUM_IPv4_BAD;
 1348 
 1349                                 /* Check TCPv4/UDPv4 checksum */
 1350                                 if (RE_TCPPKT(rxstat)) {
 1351                                         m->m_pkthdr.csum_flags |= M_CSUM_TCPv4;
 1352                                         if (rxstat & RE_RDESC_STAT_TCPSUMBAD)
 1353                                                 m->m_pkthdr.csum_flags |=
 1354                                                     M_CSUM_TCP_UDP_BAD;
 1355                                 } else if (RE_UDPPKT(rxstat)) {
 1356                                         m->m_pkthdr.csum_flags |= M_CSUM_UDPv4;
 1357                                         if (rxstat & RE_RDESC_STAT_UDPSUMBAD)
 1358                                                 m->m_pkthdr.csum_flags |=
 1359                                                     M_CSUM_TCP_UDP_BAD;
 1360                                 }
 1361                         }
 1362                         /* XXX Check TCPv6/UDPv6 checksum? */
 1363                 }
 1364 
 1365                 if (rxvlan & RE_RDESC_VLANCTL_TAG) {
 1366                         VLAN_INPUT_TAG(ifp, m,
 1367                              bswap16(rxvlan & RE_RDESC_VLANCTL_DATA),
 1368                              continue);
 1369                 }
 1370 #if NBPFILTER > 0
 1371                 if (ifp->if_bpf)
 1372                         bpf_mtap(ifp->if_bpf, m);
 1373 #endif
 1374                 (*ifp->if_input)(ifp, m);
 1375         }
 1376 
 1377         sc->re_ldata.re_rx_prodidx = i;
 1378 }
 1379 
 1380 static void
 1381 re_txeof(struct rtk_softc *sc)
 1382 {
 1383         struct ifnet            *ifp;
 1384         struct re_txq           *txq;
 1385         uint32_t                txstat;
 1386         int                     idx, descidx;
 1387 
 1388         ifp = &sc->ethercom.ec_if;
 1389 
 1390         for (idx = sc->re_ldata.re_txq_considx;
 1391             sc->re_ldata.re_txq_free < RE_TX_QLEN;
 1392             idx = RE_NEXT_TXQ(sc, idx), sc->re_ldata.re_txq_free++) {
 1393                 txq = &sc->re_ldata.re_txq[idx];
 1394                 KASSERT(txq->txq_mbuf != NULL);
 1395 
 1396                 descidx = txq->txq_descidx;
 1397                 RE_TXDESCSYNC(sc, descidx,
 1398                     BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1399                 txstat =
 1400                     le32toh(sc->re_ldata.re_tx_list[descidx].re_cmdstat);
 1401                 RE_TXDESCSYNC(sc, descidx, BUS_DMASYNC_PREREAD);
 1402                 KASSERT((txstat & RE_TDESC_CMD_EOF) != 0);
 1403                 if (txstat & RE_TDESC_CMD_OWN) {
 1404                         break;
 1405                 }
 1406 
 1407                 sc->re_ldata.re_tx_free += txq->txq_nsegs;
 1408                 KASSERT(sc->re_ldata.re_tx_free <= RE_TX_DESC_CNT(sc));
 1409                 bus_dmamap_sync(sc->sc_dmat, txq->txq_dmamap,
 1410                     0, txq->txq_dmamap->dm_mapsize, BUS_DMASYNC_POSTWRITE);
 1411                 bus_dmamap_unload(sc->sc_dmat, txq->txq_dmamap);
 1412                 m_freem(txq->txq_mbuf);
 1413                 txq->txq_mbuf = NULL;
 1414 
 1415                 if (txstat & (RE_TDESC_STAT_EXCESSCOL | RE_TDESC_STAT_COLCNT))
 1416                         ifp->if_collisions++;
 1417                 if (txstat & RE_TDESC_STAT_TXERRSUM)
 1418                         ifp->if_oerrors++;
 1419                 else
 1420                         ifp->if_opackets++;
 1421         }
 1422 
 1423         sc->re_ldata.re_txq_considx = idx;
 1424 
 1425         if (sc->re_ldata.re_txq_free > RE_NTXDESC_RSVD)
 1426                 ifp->if_flags &= ~IFF_OACTIVE;
 1427 
 1428         /*
  1429          * If not all descriptors have been reaped yet,
 1430          * reload the timer so that we will eventually get another
 1431          * interrupt that will cause us to re-enter this routine.
 1432          * This is done in case the transmitter has gone idle.
 1433          */
 1434         if (sc->re_ldata.re_txq_free < RE_TX_QLEN) {
 1435                 CSR_WRITE_4(sc, RTK_TIMERCNT, 1);
 1436                 if ((sc->sc_quirk & RTKQ_PCIE) != 0) {
 1437                         /*
 1438                          * Some chips will ignore a second TX request
 1439                          * issued while an existing transmission is in
 1440                          * progress. If the transmitter goes idle but
 1441                          * there are still packets waiting to be sent,
 1442                          * we need to restart the channel here to flush
 1443                          * them out. This only seems to be required with
 1444                          * the PCIe devices.
 1445                          */
 1446                         CSR_WRITE_1(sc, RTK_GTXSTART, RTK_TXSTART_START);
 1447                 }
 1448         } else
 1449                 ifp->if_timer = 0;
 1450 }
 1451 
 1452 /*
 1453  * Stop all chip I/O so that the kernel's probe routines don't
 1454  * get confused by errant DMAs when rebooting.
 1455  */
 1456 static void
 1457 re_shutdown(void *vsc)
 1459 {
 1460         struct rtk_softc        *sc = vsc;
 1461 
 1462         re_stop(&sc->ethercom.ec_if, 0);
 1463 }
 1464 
 1465 
 1466 static void
 1467 re_tick(void *xsc)
 1468 {
 1469         struct rtk_softc        *sc = xsc;
 1470         int s;
 1471 
 1472         /*XXX: just return for 8169S/8110S with rev 2 or newer phy */
 1473         s = splnet();
 1474 
 1475         mii_tick(&sc->mii);
 1476         splx(s);
 1477 
 1478         callout_reset(&sc->rtk_tick_ch, hz, re_tick, sc);
 1479 }
 1480 
 1481 #ifdef DEVICE_POLLING
 1482 static void
 1483 re_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
 1484 {
 1485         struct rtk_softc *sc = ifp->if_softc;
 1486 
 1487         RTK_LOCK(sc);
 1488         if ((ifp->if_capenable & IFCAP_POLLING) == 0) {
 1489                 ether_poll_deregister(ifp);
 1490                 cmd = POLL_DEREGISTER;
 1491         }
 1492         if (cmd == POLL_DEREGISTER) { /* final call, enable interrupts */
 1493                 CSR_WRITE_2(sc, RTK_IMR, RTK_INTRS_CPLUS);
 1494                 goto done;
 1495         }
 1496 
 1497         sc->rxcycles = count;
 1498         re_rxeof(sc);
 1499         re_txeof(sc);
 1500 
 1501         if (IFQ_IS_EMPTY(&ifp->if_snd) == 0)
 1502                 (*ifp->if_start)(ifp);
 1503 
 1504         if (cmd == POLL_AND_CHECK_STATUS) { /* also check status register */
 1505                 uint16_t       status;
 1506 
 1507                 status = CSR_READ_2(sc, RTK_ISR);
 1508                 if (status == 0xffff)
 1509                         goto done;
 1510                 if (status)
 1511                         CSR_WRITE_2(sc, RTK_ISR, status);
 1512 
 1513                 /*
 1514                  * XXX check behaviour on receiver stalls.
 1515                  */
 1516 
 1517                 if (status & RTK_ISR_SYSTEM_ERR) {
 1518                         re_init(ifp);   /* re_init() takes the ifnet, not the softc */
 1519                 }
 1520         }
 1521  done:
 1522         RTK_UNLOCK(sc);
 1523 }
 1524 #endif /* DEVICE_POLLING */
 1525 
 1526 int
 1527 re_intr(void *arg)
 1528 {
 1529         struct rtk_softc        *sc = arg;
 1530         struct ifnet            *ifp;
 1531         uint16_t                status;
 1532         int                     handled = 0;
 1533 
 1534         ifp = &sc->ethercom.ec_if;
 1535 
 1536         if ((ifp->if_flags & IFF_UP) == 0)
 1537                 return 0;
 1538 
 1539 #ifdef DEVICE_POLLING
 1540         if (ifp->if_flags & IFF_POLLING)
 1541                 goto done;
 1542         if ((ifp->if_capenable & IFCAP_POLLING) &&
 1543             ether_poll_register(re_poll, ifp)) { /* ok, disable interrupts */
 1544                 CSR_WRITE_2(sc, RTK_IMR, 0x0000);
 1545                 re_poll(ifp, 0, 1);
 1546                 goto done;
 1547         }
 1548 #endif /* DEVICE_POLLING */
 1549 
 1550         for (;;) {
 1551 
 1552                 status = CSR_READ_2(sc, RTK_ISR);
 1553                 /* If the card has gone away the read returns 0xffff. */
 1554                 if (status == 0xffff)
 1555                         break;
 1556                 if (status) {
 1557                         handled = 1;
 1558                         CSR_WRITE_2(sc, RTK_ISR, status);
 1559                 }
 1560 
 1561                 if ((status & RTK_INTRS_CPLUS) == 0)
 1562                         break;
 1563 
 1564                 if (status & (RTK_ISR_RX_OK | RTK_ISR_RX_ERR))
 1565                         re_rxeof(sc);
 1566 
 1567                 if (status & (RTK_ISR_TIMEOUT_EXPIRED | RTK_ISR_TX_ERR |
 1568                     RTK_ISR_TX_DESC_UNAVAIL))
 1569                         re_txeof(sc);
 1570 
 1571                 if (status & RTK_ISR_SYSTEM_ERR) {
 1572                         re_init(ifp);
 1573                 }
 1574 
 1575                 if (status & RTK_ISR_LINKCHG) {
 1576                         callout_stop(&sc->rtk_tick_ch);
 1577                         re_tick(sc);
 1578                 }
 1579         }
 1580 
 1581         if (handled && !IFQ_IS_EMPTY(&ifp->if_snd))
 1582                 re_start(ifp);
 1583 
 1584 #ifdef DEVICE_POLLING
 1585  done:
 1586 #endif
 1587 
 1588         return handled;
 1589 }
 1590 
 1591 
 1592 
 1593 /*
 1594  * Main transmit routine for C+ and gigE NICs.
 1595  */
 1596 
 1597 static void
 1598 re_start(struct ifnet *ifp)
 1599 {
 1600         struct rtk_softc        *sc;
 1601         struct mbuf             *m;
 1602         bus_dmamap_t            map;
 1603         struct re_txq           *txq;
 1604         struct re_desc          *d;
 1605         struct m_tag            *mtag;
 1606         uint32_t                cmdstat, re_flags, vlanctl;
 1607         int                     ofree, idx, error, nsegs, seg;
 1608         int                     startdesc, curdesc, lastdesc;
 1609         boolean_t               pad;
 1610 
 1611         sc = ifp->if_softc;
 1612         ofree = sc->re_ldata.re_txq_free;
 1613 
 1614         for (idx = sc->re_ldata.re_txq_prodidx;; idx = RE_NEXT_TXQ(sc, idx)) {
 1615 
 1616                 IFQ_POLL(&ifp->if_snd, m);
 1617                 if (m == NULL)
 1618                         break;
 1619 
 1620                 if (sc->re_ldata.re_txq_free == 0 ||
 1621                     sc->re_ldata.re_tx_free == 0) {
 1622                         /* no more free slots left */
 1623                         ifp->if_flags |= IFF_OACTIVE;
 1624                         break;
 1625                 }
 1626 
 1627                 /*
 1628                  * Set up checksum offload. Note: checksum offload bits must
 1629                  * appear in all descriptors of a multi-descriptor transmit
 1630                  * attempt. (This is according to testing done with an 8169
 1631                  * chip. I'm not sure if this is a requirement or a bug.)
 1632                  */
 1633 
 1634                 vlanctl = 0;
 1635                 if ((m->m_pkthdr.csum_flags & M_CSUM_TSOv4) != 0) {
 1636                         uint32_t segsz = m->m_pkthdr.segsz;
 1637 
 1638                         re_flags = RE_TDESC_CMD_LGSEND |
 1639                             (segsz << RE_TDESC_CMD_MSSVAL_SHIFT);
 1640                 } else {
 1641                         /*
 1642                  * Set RE_TDESC_CMD_IPCSUM if any checksum offloading
 1643                  * is requested.  Otherwise, RE_TDESC_CMD_TCPCSUM/
 1644                  * RE_TDESC_CMD_UDPCSUM have no effect.
 1645                          */
 1646                         re_flags = 0;
 1647                         if ((m->m_pkthdr.csum_flags &
 1648                             (M_CSUM_IPv4 | M_CSUM_TCPv4 | M_CSUM_UDPv4))
 1649                             != 0) {
 1650                                 if ((sc->sc_quirk & RTKQ_DESCV2) == 0) {
 1651                                         re_flags |= RE_TDESC_CMD_IPCSUM;
 1652                                         if (m->m_pkthdr.csum_flags &
 1653                                             M_CSUM_TCPv4) {
 1654                                                 re_flags |=
 1655                                                     RE_TDESC_CMD_TCPCSUM;
 1656                                         } else if (m->m_pkthdr.csum_flags &
 1657                                             M_CSUM_UDPv4) {
 1658                                                 re_flags |=
 1659                                                     RE_TDESC_CMD_UDPCSUM;
 1660                                         }
 1661                                 } else {
 1662                                         vlanctl |= RE_TDESC_VLANCTL_IPCSUM;
 1663                                         if (m->m_pkthdr.csum_flags &
 1664                                             M_CSUM_TCPv4) {
 1665                                                 vlanctl |=
 1666                                                     RE_TDESC_VLANCTL_TCPCSUM;
 1667                                         } else if (m->m_pkthdr.csum_flags &
 1668                                             M_CSUM_UDPv4) {
 1669                                                 vlanctl |=
 1670                                                     RE_TDESC_VLANCTL_UDPCSUM;
 1671                                         }
 1672                                 }
 1673                         }
 1674                 }
 1675 
 1676                 txq = &sc->re_ldata.re_txq[idx];
 1677                 map = txq->txq_dmamap;
 1678                 error = bus_dmamap_load_mbuf(sc->sc_dmat, map, m,
 1679                     BUS_DMA_WRITE|BUS_DMA_NOWAIT);
 1680 
 1681                 if (__predict_false(error)) {
 1682                         /* XXX try to defrag if EFBIG? */
 1683                         aprint_error("%s: can't map mbuf (error %d)\n",
 1684                             sc->sc_dev.dv_xname, error);
 1685 
 1686                         IFQ_DEQUEUE(&ifp->if_snd, m);
 1687                         m_freem(m);
 1688                         ifp->if_oerrors++;
 1689                         continue;
 1690                 }
 1691 
 1692                 nsegs = map->dm_nsegs;
 1693                 pad = FALSE;
 1694                 if (__predict_false(m->m_pkthdr.len <= RE_IP4CSUMTX_PADLEN &&
 1695                     (re_flags & RE_TDESC_CMD_IPCSUM) != 0 &&
 1696                     (sc->sc_quirk & RTKQ_DESCV2) == 0)) {
 1697                         pad = TRUE;
 1698                         nsegs++;
 1699                 }
 1700 
 1701                 if (nsegs > sc->re_ldata.re_tx_free) {
 1702                         /*
 1703                          * Not enough free descriptors to transmit this packet.
 1704                          */
 1705                         ifp->if_flags |= IFF_OACTIVE;
 1706                         bus_dmamap_unload(sc->sc_dmat, map);
 1707                         break;
 1708                 }
 1709 
 1710                 IFQ_DEQUEUE(&ifp->if_snd, m);
 1711 
 1712                 /*
 1713                  * Make sure that the caches are synchronized before we
 1714                  * ask the chip to start DMA for the packet data.
 1715                  */
 1716                 bus_dmamap_sync(sc->sc_dmat, map, 0, map->dm_mapsize,
 1717                     BUS_DMASYNC_PREWRITE);
 1718 
 1719                 /*
 1720                  * Set up hardware VLAN tagging. Note: vlan tag info must
 1721                  * appear in all descriptors of a multi-descriptor
 1722                  * transmission attempt.
 1723                  */
 1724                 if ((mtag = VLAN_OUTPUT_TAG(&sc->ethercom, m)) != NULL)
 1725                         vlanctl |= bswap16(VLAN_TAG_VALUE(mtag)) |
 1726                             RE_TDESC_VLANCTL_TAG;
 1727 
 1728                 /*
 1729                  * Map the segment array into descriptors.
 1730                  * Note that we set the start-of-frame and
 1731                  * end-of-frame markers for either TX or RX,
 1732                  * but they really only have meaning in the TX case.
 1733                  * (In the RX case, it's the chip that tells us
 1734                  *  where packets begin and end.)
 1735                  * We also keep track of the end of the ring
 1736                  * and set the end-of-ring bits as needed,
 1737                  * and we set the ownership bits in all except
 1738                  * the very first descriptor. (The caller will
 1739                  * set this descriptor later when it starts
 1740                  * transmission or reception.)
 1741                  */
 1742                 curdesc = startdesc = sc->re_ldata.re_tx_nextfree;
 1743                 lastdesc = -1;
 1744                 for (seg = 0; seg < map->dm_nsegs;
 1745                     seg++, curdesc = RE_NEXT_TX_DESC(sc, curdesc)) {
 1746                         d = &sc->re_ldata.re_tx_list[curdesc];
 1747 #ifdef DIAGNOSTIC
 1748                         RE_TXDESCSYNC(sc, curdesc,
 1749                             BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1750                         cmdstat = le32toh(d->re_cmdstat);
 1751                         RE_TXDESCSYNC(sc, curdesc, BUS_DMASYNC_PREREAD);
 1752                         if (cmdstat & RE_TDESC_STAT_OWN) {
 1753                                 panic("%s: tried to map busy TX descriptor",
 1754                                     sc->sc_dev.dv_xname);
 1755                         }
 1756 #endif
 1757 
 1758                         d->re_vlanctl = htole32(vlanctl);
 1759                         re_set_bufaddr(d, map->dm_segs[seg].ds_addr);
 1760                         cmdstat = re_flags | map->dm_segs[seg].ds_len;
 1761                         if (seg == 0)
 1762                                 cmdstat |= RE_TDESC_CMD_SOF;
 1763                         else
 1764                                 cmdstat |= RE_TDESC_CMD_OWN;
 1765                         if (curdesc == (RE_TX_DESC_CNT(sc) - 1))
 1766                                 cmdstat |= RE_TDESC_CMD_EOR;
 1767                         if (seg == nsegs - 1) {
 1768                                 cmdstat |= RE_TDESC_CMD_EOF;
 1769                                 lastdesc = curdesc;
 1770                         }
 1771                         d->re_cmdstat = htole32(cmdstat);
 1772                         RE_TXDESCSYNC(sc, curdesc,
 1773                             BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1774                 }
 1775                 if (__predict_false(pad)) {
 1776                         bus_addr_t paddaddr;
 1777 
 1778                         d = &sc->re_ldata.re_tx_list[curdesc];
 1779                         d->re_vlanctl = htole32(vlanctl);
 1780                         paddaddr = RE_TXPADDADDR(sc);
 1781                         re_set_bufaddr(d, paddaddr);
 1782                         cmdstat = re_flags |
 1783                             RE_TDESC_CMD_OWN | RE_TDESC_CMD_EOF |
 1784                             (RE_IP4CSUMTX_PADLEN + 1 - m->m_pkthdr.len);
 1785                         if (curdesc == (RE_TX_DESC_CNT(sc) - 1))
 1786                                 cmdstat |= RE_TDESC_CMD_EOR;
 1787                         d->re_cmdstat = htole32(cmdstat);
 1788                         RE_TXDESCSYNC(sc, curdesc,
 1789                             BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1790                         lastdesc = curdesc;
 1791                         curdesc = RE_NEXT_TX_DESC(sc, curdesc);
 1792                 }
 1793                 KASSERT(lastdesc != -1);
 1794 
 1795                 /* Transfer ownership of packet to the chip. */
 1796 
 1797                 sc->re_ldata.re_tx_list[startdesc].re_cmdstat |=
 1798                     htole32(RE_TDESC_CMD_OWN);
 1799                 RE_TXDESCSYNC(sc, startdesc,
 1800                     BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1801 
 1802                 /* update info of TX queue and descriptors */
 1803                 txq->txq_mbuf = m;
 1804                 txq->txq_descidx = lastdesc;
 1805                 txq->txq_nsegs = nsegs;
 1806 
 1807                 sc->re_ldata.re_txq_free--;
 1808                 sc->re_ldata.re_tx_free -= nsegs;
 1809                 sc->re_ldata.re_tx_nextfree = curdesc;
 1810 
 1811 #if NBPFILTER > 0
 1812                 /*
 1813                  * If there's a BPF listener, bounce a copy of this
 1814                  * frame to it.
 1815                  */
 1816                 if (ifp->if_bpf)
 1817                         bpf_mtap(ifp->if_bpf, m);
 1818 #endif
 1819         }
 1820 
 1821         if (sc->re_ldata.re_txq_free < ofree) {
 1822                 /*
 1823                  * TX packets are enqueued.
 1824                  */
 1825                 sc->re_ldata.re_txq_prodidx = idx;
 1826 
 1827                 /*
 1828                  * Start the transmitter to poll.
 1829                  *
 1830                  * RealTek put the TX poll request register in a different
 1831                  * location on the 8169 gigE chip. I don't know why.
 1832                  */
 1833                 if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0)
 1834                         CSR_WRITE_1(sc, RTK_TXSTART, RTK_TXSTART_START);
 1835                 else
 1836                         CSR_WRITE_1(sc, RTK_GTXSTART, RTK_TXSTART_START);
 1837 
 1838                 /*
 1839                  * Use the countdown timer for interrupt moderation.
 1840                  * 'TX done' interrupts are disabled. Instead, we reset the
 1841                  * countdown timer, which will begin counting until it hits
 1842                  * the value in the TIMERINT register, and then trigger an
 1843                  * interrupt. Each time we write to the TIMERCNT register,
 1844                  * the timer count is reset to 0.
 1845                  */
 1846                 CSR_WRITE_4(sc, RTK_TIMERCNT, 1);
 1847 
 1848                 /*
 1849                  * Set a timeout in case the chip goes out to lunch.
 1850                  */
 1851                 ifp->if_timer = 5;
 1852         }
 1853 }
 1854 
 1855 static int
 1856 re_init(struct ifnet *ifp)
 1857 {
 1858         struct rtk_softc        *sc = ifp->if_softc;
 1859         uint8_t                 *enaddr;
 1860         uint32_t                rxcfg = 0;
 1861         uint32_t                reg;
 1862         uint16_t cfg;
 1863         int error;
 1864 
 1865         if ((error = re_enable(sc)) != 0)
 1866                 goto out;
 1867 
 1868         /*
 1869          * Cancel pending I/O and free all RX/TX buffers.
 1870          */
 1871         re_stop(ifp, 0);
 1872 
 1873         re_reset(sc);
 1874 
 1875         /*
 1876          * Enable C+ RX and TX mode, as well as VLAN stripping and
 1877          * RX checksum offload. We must configure the C+ register
 1878          * before all others.
 1879          */
 1880         cfg = RE_CPLUSCMD_PCI_MRW;
 1881 
 1882         /*
 1883          * XXX: For old 8169 set bit 14.
 1884          *      For 8169S/8110S and above, do not set bit 14.
 1885          */
 1886         if ((sc->sc_quirk & RTKQ_8169NONS) != 0)
 1887                 cfg |= (0x1 << 14);
 1888 
 1889         if ((ifp->if_capenable & ETHERCAP_VLAN_HWTAGGING) != 0)
 1890                 cfg |= RE_CPLUSCMD_VLANSTRIP;
 1891         if ((ifp->if_capenable & (IFCAP_CSUM_IPv4_Rx |
 1892              IFCAP_CSUM_TCPv4_Rx | IFCAP_CSUM_UDPv4_Rx)) != 0)
 1893                 cfg |= RE_CPLUSCMD_RXCSUM_ENB;
 1894         if ((sc->sc_quirk & RTKQ_MACSTAT) != 0) {
 1895                 cfg |= RE_CPLUSCMD_MACSTAT_DIS;
 1896                 cfg |= RE_CPLUSCMD_TXENB;
 1897         } else
 1898                 cfg |= RE_CPLUSCMD_RXENB | RE_CPLUSCMD_TXENB;
 1899 
 1900         CSR_WRITE_2(sc, RTK_CPLUS_CMD, cfg);
 1901 
 1902         /* XXX: from Realtek-supplied Linux driver. Wholly undocumented. */
 1903         if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0)
 1904                 CSR_WRITE_2(sc, RTK_IM, 0x0000);
 1905 
 1906         DELAY(10000);
 1907 
 1908         /*
 1909          * Init our MAC address.  Even though the chipset
 1910          * documentation doesn't mention it, we need to enter "Config
 1911          * register write enable" mode to modify the ID registers.
 1912          */
 1913         CSR_WRITE_1(sc, RTK_EECMD, RTK_EEMODE_WRITECFG);
 1914         enaddr = LLADDR(ifp->if_sadl);
 1915         reg = enaddr[0] | (enaddr[1] << 8) |
 1916             (enaddr[2] << 16) | (enaddr[3] << 24);
 1917         CSR_WRITE_4(sc, RTK_IDR0, reg);
 1918         reg = enaddr[4] | (enaddr[5] << 8);
 1919         CSR_WRITE_4(sc, RTK_IDR4, reg);
 1920         CSR_WRITE_1(sc, RTK_EECMD, RTK_EEMODE_OFF);
 1921 
 1922         /*
 1923          * For C+ mode, initialize the RX descriptors and mbufs.
 1924          */
 1925         re_rx_list_init(sc);
 1926         re_tx_list_init(sc);
 1927 
 1928         /*
 1929          * Load the addresses of the RX and TX lists into the chip.
 1930          */
 1931         CSR_WRITE_4(sc, RTK_RXLIST_ADDR_HI,
 1932             RE_ADDR_HI(sc->re_ldata.re_rx_list_map->dm_segs[0].ds_addr));
 1933         CSR_WRITE_4(sc, RTK_RXLIST_ADDR_LO,
 1934             RE_ADDR_LO(sc->re_ldata.re_rx_list_map->dm_segs[0].ds_addr));
 1935 
 1936         CSR_WRITE_4(sc, RTK_TXLIST_ADDR_HI,
 1937             RE_ADDR_HI(sc->re_ldata.re_tx_list_map->dm_segs[0].ds_addr));
 1938         CSR_WRITE_4(sc, RTK_TXLIST_ADDR_LO,
 1939             RE_ADDR_LO(sc->re_ldata.re_tx_list_map->dm_segs[0].ds_addr));
 1940 
 1941         /*
 1942          * Enable transmit and receive.
 1943          */
 1944         CSR_WRITE_1(sc, RTK_COMMAND, RTK_CMD_TX_ENB | RTK_CMD_RX_ENB);
 1945 
 1946         /*
 1947          * Set the initial TX and RX configuration.
 1948          */
 1949         if (sc->re_testmode && (sc->sc_quirk & RTKQ_8169NONS) != 0) {
 1950                 /* test mode is needed only for old 8169 */
 1951                 CSR_WRITE_4(sc, RTK_TXCFG,
 1952                     RE_TXCFG_CONFIG | RTK_LOOPTEST_ON);
 1953         } else
 1954                 CSR_WRITE_4(sc, RTK_TXCFG, RE_TXCFG_CONFIG);
 1955 
 1956         CSR_WRITE_1(sc, RTK_EARLY_TX_THRESH, 16);
 1957 
 1958         CSR_WRITE_4(sc, RTK_RXCFG, RE_RXCFG_CONFIG);
 1959 
 1960         /* Set the individual bit to receive frames for this host only. */
 1961         rxcfg = CSR_READ_4(sc, RTK_RXCFG);
 1962         rxcfg |= RTK_RXCFG_RX_INDIV;
 1963 
 1964         /* If we want promiscuous mode, set the allframes bit. */
 1965         if (ifp->if_flags & IFF_PROMISC)
 1966                 rxcfg |= RTK_RXCFG_RX_ALLPHYS;
 1967         else
 1968                 rxcfg &= ~RTK_RXCFG_RX_ALLPHYS;
 1969         CSR_WRITE_4(sc, RTK_RXCFG, rxcfg);
 1970 
 1971         /*
 1972          * Set capture broadcast bit to capture broadcast frames.
 1973          */
 1974         if (ifp->if_flags & IFF_BROADCAST)
 1975                 rxcfg |= RTK_RXCFG_RX_BROAD;
 1976         else
 1977                 rxcfg &= ~RTK_RXCFG_RX_BROAD;
 1978         CSR_WRITE_4(sc, RTK_RXCFG, rxcfg);
 1979 
 1980         /*
 1981          * Program the multicast filter, if necessary.
 1982          */
 1983         rtk_setmulti(sc);
 1984 
 1985 #ifdef DEVICE_POLLING
 1986         /*
 1987          * Disable interrupts if we are polling.
 1988          */
 1989         if (ifp->if_flags & IFF_POLLING)
 1990                 CSR_WRITE_2(sc, RTK_IMR, 0);
 1991         else    /* otherwise ... */
 1992 #endif /* DEVICE_POLLING */
 1993         /*
 1994          * Enable interrupts.
 1995          */
 1996         if (sc->re_testmode)
 1997                 CSR_WRITE_2(sc, RTK_IMR, 0);
 1998         else
 1999                 CSR_WRITE_2(sc, RTK_IMR, RTK_INTRS_CPLUS);
 2000 
 2001         /* Start RX/TX process. */
 2002         CSR_WRITE_4(sc, RTK_MISSEDPKT, 0);
 2003 #ifdef notdef
 2004         /* Enable receiver and transmitter. */
 2005         CSR_WRITE_1(sc, RTK_COMMAND, RTK_CMD_TX_ENB | RTK_CMD_RX_ENB);
 2006 #endif
 2007 
 2008         /*
 2009          * Initialize the timer interrupt register so that
 2010          * a timer interrupt will be generated once the timer
 2011          * reaches a certain number of ticks. The timer is
 2012          * reloaded on each transmit. This gives us TX interrupt
 2013          * moderation, which dramatically improves TX frame rate.
 2014          */
 2015 
 2016         if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0)
 2017                 CSR_WRITE_4(sc, RTK_TIMERINT, 0x400);
 2018         else {
 2019                 CSR_WRITE_4(sc, RTK_TIMERINT_8169, 0x800);
 2020 
 2021                 /*
 2022                  * For 8169 gigE NICs, set the max allowed RX packet
 2023                  * size so we can receive jumbo frames.
 2024                  */
 2025                 CSR_WRITE_2(sc, RTK_MAXRXPKTLEN, 16383);
 2026         }
 2027 
 2028         if (sc->re_testmode)
 2029                 return 0;
 2030 
 2031         CSR_WRITE_1(sc, RTK_CFG1, RTK_CFG1_DRVLOAD);
 2032 
 2033         ifp->if_flags |= IFF_RUNNING;
 2034         ifp->if_flags &= ~IFF_OACTIVE;
 2035 
 2036         callout_reset(&sc->rtk_tick_ch, hz, re_tick, sc);
 2037 
 2038  out:
 2039         if (error) {
 2040                 ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
 2041                 ifp->if_timer = 0;
 2042                 aprint_error("%s: interface not running\n",
 2043                     sc->sc_dev.dv_xname);
 2044         }
 2045 
 2046         return error;
 2047 }
 2048 
 2049 /*
 2050  * Set media options.
 2051  */
 2052 static int
 2053 re_ifmedia_upd(struct ifnet *ifp)
 2054 {
 2055         struct rtk_softc        *sc;
 2056 
 2057         sc = ifp->if_softc;
 2058 
 2059         return mii_mediachg(&sc->mii);
 2060 }
 2061 
 2062 /*
 2063  * Report current media status.
 2064  */
 2065 static void
 2066 re_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr)
 2067 {
 2068         struct rtk_softc        *sc;
 2069 
 2070         sc = ifp->if_softc;
 2071 
 2072         mii_pollstat(&sc->mii);
 2073         ifmr->ifm_active = sc->mii.mii_media_active;
 2074         ifmr->ifm_status = sc->mii.mii_media_status;
 2075 }
 2076 
 2077 static int
 2078 re_ioctl(struct ifnet *ifp, u_long command, caddr_t data)
 2079 {
 2080         struct rtk_softc        *sc = ifp->if_softc;
 2081         struct ifreq            *ifr = (struct ifreq *) data;
 2082         int                     s, error = 0;
 2083 
 2084         s = splnet();
 2085 
 2086         switch (command) {
 2087         case SIOCSIFMTU:
 2088                 /*
 2089                  * Disable jumbo frames if it's not supported.
 2090                  */
 2091                 if ((sc->sc_quirk & RTKQ_NOJUMBO) != 0 &&
 2092                     ifr->ifr_mtu > ETHERMTU) {
 2093                         error = EINVAL;
 2094                         break;
 2095                 }
 2096 
 2097                 if (ifr->ifr_mtu < ETHERMIN || ifr->ifr_mtu > ETHERMTU_JUMBO) {
 2098                         error = EINVAL;
 2098                         break;  /* reject an out-of-range MTU before applying it */
 2099                 }
 2099                 ifp->if_mtu = ifr->ifr_mtu;
 2100                 break;
 2101         case SIOCGIFMEDIA:
 2102         case SIOCSIFMEDIA:
 2103                 error = ifmedia_ioctl(ifp, ifr, &sc->mii.mii_media, command);
 2104                 break;
 2105         default:
 2106                 error = ether_ioctl(ifp, command, data);
 2107                 if (error == ENETRESET) {
 2108                         if (ifp->if_flags & IFF_RUNNING)
 2109                                 rtk_setmulti(sc);
 2110                         error = 0;
 2111                 }
 2112                 break;
 2113         }
 2114 
 2115         splx(s);
 2116 
 2117         return error;
 2118 }
 2119 
 2120 static void
 2121 re_watchdog(struct ifnet *ifp)
 2122 {
 2123         struct rtk_softc        *sc;
 2124         int                     s;
 2125 
 2126         sc = ifp->if_softc;
 2127         s = splnet();
 2128         aprint_error("%s: watchdog timeout\n", sc->sc_dev.dv_xname);
 2129         ifp->if_oerrors++;
 2130 
 2131         re_txeof(sc);
 2132         re_rxeof(sc);
 2133 
 2134         re_init(ifp);
 2135 
 2136         splx(s);
 2137 }
 2138 
 2139 /*
 2140  * Stop the adapter and free any mbufs allocated to the
 2141  * RX and TX lists.
 2142  */
 2143 static void
 2144 re_stop(struct ifnet *ifp, int disable)
 2145 {
 2146         int             i;
 2147         struct rtk_softc *sc = ifp->if_softc;
 2148 
 2149         callout_stop(&sc->rtk_tick_ch);
 2150 
 2151 #ifdef DEVICE_POLLING
 2152         ether_poll_deregister(ifp);
 2153 #endif /* DEVICE_POLLING */
 2154 
 2155         mii_down(&sc->mii);
 2156 
 2157         if ((sc->sc_quirk & RTKQ_CMDSTOP) != 0)
 2158                 CSR_WRITE_1(sc, RTK_COMMAND, RTK_CMD_STOPREQ | RTK_CMD_TX_ENB |
 2159                     RTK_CMD_RX_ENB);
 2160         else
 2161                 CSR_WRITE_1(sc, RTK_COMMAND, 0x00);
 2162         DELAY(1000);
 2163         CSR_WRITE_2(sc, RTK_IMR, 0x0000);
 2164         CSR_WRITE_2(sc, RTK_ISR, 0xFFFF);
 2165 
 2166         if (sc->re_head != NULL) {
 2167                 m_freem(sc->re_head);
 2168                 sc->re_head = sc->re_tail = NULL;
 2169         }
 2170 
 2171         /* Free the TX list buffers. */
 2172         for (i = 0; i < RE_TX_QLEN; i++) {
 2173                 if (sc->re_ldata.re_txq[i].txq_mbuf != NULL) {
 2174                         bus_dmamap_unload(sc->sc_dmat,
 2175                             sc->re_ldata.re_txq[i].txq_dmamap);
 2176                         m_freem(sc->re_ldata.re_txq[i].txq_mbuf);
 2177                         sc->re_ldata.re_txq[i].txq_mbuf = NULL;
 2178                 }
 2179         }
 2180 
 2181         /* Free the RX list buffers. */
 2182         for (i = 0; i < RE_RX_DESC_CNT; i++) {
 2183                 if (sc->re_ldata.re_rxsoft[i].rxs_mbuf != NULL) {
 2184                         bus_dmamap_unload(sc->sc_dmat,
 2185                             sc->re_ldata.re_rxsoft[i].rxs_dmamap);
 2186                         m_freem(sc->re_ldata.re_rxsoft[i].rxs_mbuf);
 2187                         sc->re_ldata.re_rxsoft[i].rxs_mbuf = NULL;
 2188                 }
 2189         }
 2190 
 2191         if (disable)
 2192                 re_disable(sc);
 2193 
 2194         ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
 2195         ifp->if_timer = 0;
 2196 }

This page is part of the FreeBSD/Linux Linux Kernel Cross-Reference, and was automatically generated using a modified version of the LXR engine.