
FreeBSD/Linux Kernel Cross Reference
sys/dev/ic/rtl8169.c


    1 /*      $NetBSD: rtl8169.c,v 1.14.2.7 2007/10/04 18:50:22 bouyer Exp $  */
    2 
    3 /*
    4  * Copyright (c) 1997, 1998-2003
    5  *      Bill Paul <wpaul@windriver.com>.  All rights reserved.
    6  *
    7  * Redistribution and use in source and binary forms, with or without
    8  * modification, are permitted provided that the following conditions
    9  * are met:
   10  * 1. Redistributions of source code must retain the above copyright
   11  *    notice, this list of conditions and the following disclaimer.
   12  * 2. Redistributions in binary form must reproduce the above copyright
   13  *    notice, this list of conditions and the following disclaimer in the
   14  *    documentation and/or other materials provided with the distribution.
   15  * 3. All advertising materials mentioning features or use of this software
   16  *    must display the following acknowledgement:
   17  *      This product includes software developed by Bill Paul.
   18  * 4. Neither the name of the author nor the names of any co-contributors
   19  *    may be used to endorse or promote products derived from this software
   20  *    without specific prior written permission.
   21  *
   22  * THIS SOFTWARE IS PROVIDED BY Bill Paul AND CONTRIBUTORS ``AS IS'' AND
   23  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
   24  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
   25  * ARE DISCLAIMED.  IN NO EVENT SHALL Bill Paul OR THE VOICES IN HIS HEAD
   26  * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
   27  * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
   28  * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
   29  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
   30  * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   31  * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
   32  * THE POSSIBILITY OF SUCH DAMAGE.
   33  */
   34 
   35 #include <sys/cdefs.h>
   36 /* $FreeBSD: /repoman/r/ncvs/src/sys/dev/re/if_re.c,v 1.20 2004/04/11 20:34:08 ru Exp $ */
   37 
   38 /*
   39  * RealTek 8139C+/8169/8169S/8110S PCI NIC driver
   40  *
   41  * Written by Bill Paul <wpaul@windriver.com>
   42  * Senior Networking Software Engineer
   43  * Wind River Systems
   44  */
   45 
   46 /*
   47  * This driver is designed to support RealTek's next generation of
   48  * 10/100 and 10/100/1000 PCI ethernet controllers. There are currently
   49  * four devices in this family: the RTL8139C+, the RTL8169, the RTL8169S
   50  * and the RTL8110S.
   51  *
   52  * The 8139C+ is a 10/100 ethernet chip. It is backwards compatible
    53  * with the older 8139 family; however, it also supports a special
   54  * C+ mode of operation that provides several new performance enhancing
   55  * features. These include:
   56  *
   57  *      o Descriptor based DMA mechanism. Each descriptor represents
   58  *        a single packet fragment. Data buffers may be aligned on
   59  *        any byte boundary.
   60  *
   61  *      o 64-bit DMA
   62  *
   63  *      o TCP/IP checksum offload for both RX and TX
   64  *
   65  *      o High and normal priority transmit DMA rings
   66  *
   67  *      o VLAN tag insertion and extraction
   68  *
   69  *      o TCP large send (segmentation offload)
   70  *
   71  * Like the 8139, the 8139C+ also has a built-in 10/100 PHY. The C+
   72  * programming API is fairly straightforward. The RX filtering, EEPROM
    73  * access and PHY access are the same as on the older 8139 series
   74  * chips.
   75  *
   76  * The 8169 is a 64-bit 10/100/1000 gigabit ethernet MAC. It has almost the
   77  * same programming API and feature set as the 8139C+ with the following
   78  * differences and additions:
   79  *
   80  *      o 1000Mbps mode
   81  *
   82  *      o Jumbo frames
   83  *
   84  *      o GMII and TBI ports/registers for interfacing with copper
   85  *        or fiber PHYs
   86  *
   87  *      o RX and TX DMA rings can have up to 1024 descriptors
   88  *        (the 8139C+ allows a maximum of 64)
   89  *
   90  *      o Slight differences in register layout from the 8139C+
   91  *
   92  * The TX start and timer interrupt registers are at different locations
   93  * on the 8169 than they are on the 8139C+. Also, the status word in the
   94  * RX descriptor has a slightly different bit layout. The 8169 does not
   95  * have a built-in PHY. Most reference boards use a Marvell 88E1000 'Alaska'
   96  * copper gigE PHY.
   97  *
   98  * The 8169S/8110S 10/100/1000 devices have built-in copper gigE PHYs
   99  * (the 'S' stands for 'single-chip'). These devices have the same
  100  * programming API as the older 8169, but also have some vendor-specific
  101  * registers for the on-board PHY. The 8110S is a LAN-on-motherboard
  102  * part designed to be pin-compatible with the RealTek 8100 10/100 chip.
  103  *
  104  * This driver takes advantage of the RX and TX checksum offload and
  105  * VLAN tag insertion/extraction features. It also implements TX
  106  * interrupt moderation using the timer interrupt registers, which
  107  * significantly reduces TX interrupt load. There is also support
   108  * for jumbo frames; however, the 8169/8169S/8110S cannot transmit
  109  * jumbo frames larger than 7.5K, so the max MTU possible with this
  110  * driver is 7500 bytes.
  111  */
  112 
  113 #include "bpfilter.h"
  114 #include "vlan.h"
  115 
  116 #include <sys/param.h>
  117 #include <sys/endian.h>
  118 #include <sys/systm.h>
  119 #include <sys/sockio.h>
  120 #include <sys/mbuf.h>
  121 #include <sys/malloc.h>
  122 #include <sys/kernel.h>
  123 #include <sys/socket.h>
  124 #include <sys/device.h>
  125 
  126 #include <net/if.h>
  127 #include <net/if_arp.h>
  128 #include <net/if_dl.h>
  129 #include <net/if_ether.h>
  130 #include <net/if_media.h>
  131 #include <net/if_vlanvar.h>
  132 
  133 #include <netinet/in_systm.h>   /* XXX for IP_MAXPACKET */
  134 #include <netinet/in.h>         /* XXX for IP_MAXPACKET */
  135 #include <netinet/ip.h>         /* XXX for IP_MAXPACKET */
  136 
  137 #if NBPFILTER > 0
  138 #include <net/bpf.h>
  139 #endif
  140 
  141 #include <machine/bus.h>
  142 
  143 #include <dev/mii/mii.h>
  144 #include <dev/mii/miivar.h>
  145 
  146 #include <dev/ic/rtl81x9reg.h>
  147 #include <dev/ic/rtl81x9var.h>
  148 
  149 #include <dev/ic/rtl8169var.h>
  150 
  151 static inline void re_set_bufaddr(struct re_desc *, bus_addr_t);
  152 
  153 static int re_newbuf(struct rtk_softc *, int, struct mbuf *);
  154 static int re_rx_list_init(struct rtk_softc *);
  155 static int re_tx_list_init(struct rtk_softc *);
  156 static void re_rxeof(struct rtk_softc *);
  157 static void re_txeof(struct rtk_softc *);
  158 static void re_tick(void *);
  159 static void re_start(struct ifnet *);
  160 static int re_ioctl(struct ifnet *, u_long, caddr_t);
  161 static int re_init(struct ifnet *);
  162 static void re_stop(struct ifnet *, int);
  163 static void re_watchdog(struct ifnet *);
  164 
  165 static void re_shutdown(void *);
  166 static int re_enable(struct rtk_softc *);
  167 static void re_disable(struct rtk_softc *);
  168 static void re_power(int, void *);
  169 
  170 static int re_ifmedia_upd(struct ifnet *);
  171 static void re_ifmedia_sts(struct ifnet *, struct ifmediareq *);
  172 
  173 static int re_gmii_readreg(struct device *, int, int);
  174 static void re_gmii_writereg(struct device *, int, int, int);
  175 
  176 static int re_miibus_readreg(struct device *, int, int);
  177 static void re_miibus_writereg(struct device *, int, int, int);
  178 static void re_miibus_statchg(struct device *);
  179 
  180 static void re_reset(struct rtk_softc *);
  181 
  182 static inline void
  183 re_set_bufaddr(struct re_desc *d, bus_addr_t addr)
  184 {
  185 
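               /*
                * Store the DMA address as the low/high 32-bit halves the
                * descriptor expects; when bus_addr_t is only 32 bits wide
                * the high word is simply zero.
                */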
  186         d->re_bufaddr_lo = htole32((uint32_t)addr);
  187         if (sizeof(bus_addr_t) == sizeof(uint64_t))
  188                 d->re_bufaddr_hi = htole32((uint64_t)addr >> 32);
  189         else
  190                 d->re_bufaddr_hi = 0;
  191 }
  192 
  193 static int
  194 re_gmii_readreg(struct device *self, int phy, int reg)
  195 {
  196         struct rtk_softc        *sc = (void *)self;
  197         uint32_t                rval;
  198         int                     i;
  199 
  200         if (phy != 7)
  201                 return 0;
  202 
  203         /* Let the rgephy driver read the GMEDIASTAT register */
  204 
  205         if (reg == RTK_GMEDIASTAT) {
  206                 rval = CSR_READ_1(sc, RTK_GMEDIASTAT);
  207                 return rval;
  208         }
  209 
  210         CSR_WRITE_4(sc, RTK_PHYAR, reg << 16);
  211         DELAY(1000);
  212 
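               /*
                * Poll until the chip signals completion of the read by
                * setting RTK_PHYAR_BUSY.
                */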
  213         for (i = 0; i < RTK_TIMEOUT; i++) {
  214                 rval = CSR_READ_4(sc, RTK_PHYAR);
  215                 if (rval & RTK_PHYAR_BUSY)
  216                         break;
  217                 DELAY(100);
  218         }
  219 
  220         if (i == RTK_TIMEOUT) {
  221                 aprint_error("%s: PHY read failed\n", sc->sc_dev.dv_xname);
  222                 return 0;
  223         }
  224 
  225         return rval & RTK_PHYAR_PHYDATA;
  226 }
  227 
  228 static void
  229 re_gmii_writereg(struct device *dev, int phy, int reg, int data)
  230 {
  231         struct rtk_softc        *sc = (void *)dev;
  232         uint32_t                rval;
  233         int                     i;
  234 
  235         CSR_WRITE_4(sc, RTK_PHYAR, (reg << 16) |
  236             (data & RTK_PHYAR_PHYDATA) | RTK_PHYAR_BUSY);
  237         DELAY(1000);
  238 
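               /*
                * For writes the sense is inverted: wait for the chip to
                * clear RTK_PHYAR_BUSY.
                */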
  239         for (i = 0; i < RTK_TIMEOUT; i++) {
  240                 rval = CSR_READ_4(sc, RTK_PHYAR);
  241                 if (!(rval & RTK_PHYAR_BUSY))
  242                         break;
  243                 DELAY(100);
  244         }
  245 
  246         if (i == RTK_TIMEOUT) {
  247                 aprint_error("%s: PHY write reg %x <- %x failed\n",
  248                     sc->sc_dev.dv_xname, reg, data);
  249         }
  250 }
  251 
  252 static int
  253 re_miibus_readreg(struct device *dev, int phy, int reg)
  254 {
  255         struct rtk_softc        *sc = (void *)dev;
  256         uint16_t                rval = 0;
  257         uint16_t                re8139_reg = 0;
  258         int                     s;
  259 
  260         s = splnet();
  261 
  262         if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0) {
  263                 rval = re_gmii_readreg(dev, phy, reg);
  264                 splx(s);
  265                 return rval;
  266         }
  267 
  268         /* Pretend the internal PHY is only at address 0 */
  269         if (phy) {
  270                 splx(s);
  271                 return 0;
  272         }
  273         switch (reg) {
  274         case MII_BMCR:
  275                 re8139_reg = RTK_BMCR;
  276                 break;
  277         case MII_BMSR:
  278                 re8139_reg = RTK_BMSR;
  279                 break;
  280         case MII_ANAR:
  281                 re8139_reg = RTK_ANAR;
  282                 break;
  283         case MII_ANER:
  284                 re8139_reg = RTK_ANER;
  285                 break;
  286         case MII_ANLPAR:
  287                 re8139_reg = RTK_LPAR;
  288                 break;
  289         case MII_PHYIDR1:
  290         case MII_PHYIDR2:
  291                 splx(s);
  292                 return 0;
  293         /*
  294          * Allow the rlphy driver to read the media status
  295          * register. If we have a link partner which does not
  296          * support NWAY, this is the register which will tell
  297          * us the results of parallel detection.
  298          */
  299         case RTK_MEDIASTAT:
  300                 rval = CSR_READ_1(sc, RTK_MEDIASTAT);
  301                 splx(s);
  302                 return rval;
  303         default:
  304                 aprint_error("%s: bad phy register\n", sc->sc_dev.dv_xname);
  305                 splx(s);
  306                 return 0;
  307         }
  308         rval = CSR_READ_2(sc, re8139_reg);
  309         if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0 && re8139_reg == RTK_BMCR) {
  310                 /* 8139C+ has different bit layout. */
  311                 rval &= ~(BMCR_LOOP | BMCR_ISO);
  312         }
  313         splx(s);
  314         return rval;
  315 }
  316 
  317 static void
  318 re_miibus_writereg(struct device *dev, int phy, int reg, int data)
  319 {
  320         struct rtk_softc        *sc = (void *)dev;
  321         uint16_t                re8139_reg = 0;
  322         int                     s;
  323 
  324         s = splnet();
  325 
  326         if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0) {
  327                 re_gmii_writereg(dev, phy, reg, data);
  328                 splx(s);
  329                 return;
  330         }
  331 
  332         /* Pretend the internal PHY is only at address 0 */
  333         if (phy) {
  334                 splx(s);
  335                 return;
  336         }
  337         switch (reg) {
  338         case MII_BMCR:
  339                 re8139_reg = RTK_BMCR;
  340                 if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0) {
  341                         /* 8139C+ has different bit layout. */
  342                         data &= ~(BMCR_LOOP | BMCR_ISO);
  343                 }
  344                 break;
  345         case MII_BMSR:
  346                 re8139_reg = RTK_BMSR;
  347                 break;
  348         case MII_ANAR:
  349                 re8139_reg = RTK_ANAR;
  350                 break;
  351         case MII_ANER:
  352                 re8139_reg = RTK_ANER;
  353                 break;
  354         case MII_ANLPAR:
  355                 re8139_reg = RTK_LPAR;
  356                 break;
  357         case MII_PHYIDR1:
  358         case MII_PHYIDR2:
  359                 splx(s);
  360                 return;
  361                 break;
  362         default:
  363                 aprint_error("%s: bad phy register\n", sc->sc_dev.dv_xname);
  364                 splx(s);
  365                 return;
  366         }
  367         CSR_WRITE_2(sc, re8139_reg, data);
  368         splx(s);
  369         return;
  370 }
  371 
  372 static void
  373 re_miibus_statchg(struct device *dev)
  374 {
  375 
  376         return;
  377 }
  378 
  379 static void
  380 re_reset(struct rtk_softc *sc)
  381 {
  382         int             i;
  383 
  384         CSR_WRITE_1(sc, RTK_COMMAND, RTK_CMD_RESET);
  385 
  386         for (i = 0; i < RTK_TIMEOUT; i++) {
  387                 DELAY(10);
  388                 if ((CSR_READ_1(sc, RTK_COMMAND) & RTK_CMD_RESET) == 0)
  389                         break;
  390         }
  391         if (i == RTK_TIMEOUT)
  392                 aprint_error("%s: reset never completed!\n",
  393                     sc->sc_dev.dv_xname);
  394 
  395         /*
  396          * NB: Realtek-supplied Linux driver does this only for
  397          * MCFG_METHOD_2, which corresponds to sc->sc_rev == 2.
  398          */
  399         if (1) /* XXX check softc flag for 8169s version */
  400                 CSR_WRITE_1(sc, RTK_LDPS, 1);
  401 
  402         return;
  403 }
  404 
  405 /*
  406  * The following routine is designed to test for a defect on some
  407  * 32-bit 8169 cards. Some of these NICs have the REQ64# and ACK64#
   408  * lines connected to the bus; however, for a 32-bit only card, they
  409  * should be pulled high. The result of this defect is that the
  410  * NIC will not work right if you plug it into a 64-bit slot: DMA
  411  * operations will be done with 64-bit transfers, which will fail
  412  * because the 64-bit data lines aren't connected.
  413  *
   414  * There's no way to work around this (short of taking a soldering
   415  * iron to the board); however, we can detect it. The method we use
  416  * here is to put the NIC into digital loopback mode, set the receiver
  417  * to promiscuous mode, and then try to send a frame. We then compare
  418  * the frame data we sent to what was received. If the data matches,
  419  * then the NIC is working correctly, otherwise we know the user has
  420  * a defective NIC which has been mistakenly plugged into a 64-bit PCI
  421  * slot. In the latter case, there's no way the NIC can work correctly,
  422  * so we print out a message on the console and abort the device attach.
  423  */
  424 
  425 int
  426 re_diag(struct rtk_softc *sc)
  427 {
  428         struct ifnet            *ifp = &sc->ethercom.ec_if;
  429         struct mbuf             *m0;
  430         struct ether_header     *eh;
  431         struct re_rxsoft        *rxs;
  432         struct re_desc          *cur_rx;
  433         bus_dmamap_t            dmamap;
  434         uint16_t                status;
  435         uint32_t                rxstat;
  436         int                     total_len, i, s, error = 0;
  437         static const uint8_t    dst[] = { 0x00, 'h', 'e', 'l', 'l', 'o' };
  438         static const uint8_t    src[] = { 0x00, 'w', 'o', 'r', 'l', 'd' };
  439 
  440         /* Allocate a single mbuf */
  441 
  442         MGETHDR(m0, M_DONTWAIT, MT_DATA);
  443         if (m0 == NULL)
  444                 return ENOBUFS;
  445 
  446         /*
  447          * Initialize the NIC in test mode. This sets the chip up
  448          * so that it can send and receive frames, but performs the
  449          * following special functions:
  450          * - Puts receiver in promiscuous mode
  451          * - Enables digital loopback mode
  452          * - Leaves interrupts turned off
  453          */
  454 
  455         ifp->if_flags |= IFF_PROMISC;
  456         sc->re_testmode = 1;
  457         re_init(ifp);
  458         re_stop(ifp, 0);
  459         DELAY(100000);
  460         re_init(ifp);
  461 
  462         /* Put some data in the mbuf */
  463 
  464         eh = mtod(m0, struct ether_header *);
  465         memcpy(eh->ether_dhost, (char *)&dst, ETHER_ADDR_LEN);
  466         memcpy(eh->ether_shost, (char *)&src, ETHER_ADDR_LEN);
  467         eh->ether_type = htons(ETHERTYPE_IP);
  468         m0->m_pkthdr.len = m0->m_len = ETHER_MIN_LEN - ETHER_CRC_LEN;
  469 
  470         /*
  471          * Queue the packet, start transmission.
  472          */
  473 
  474         CSR_WRITE_2(sc, RTK_ISR, 0xFFFF);
  475         s = splnet();
  476         IF_ENQUEUE(&ifp->if_snd, m0);
  477         re_start(ifp);
  478         splx(s);
  479         m0 = NULL;
  480 
  481         /* Wait for it to propagate through the chip */
  482 
  483         DELAY(100000);
  484         for (i = 0; i < RTK_TIMEOUT; i++) {
  485                 status = CSR_READ_2(sc, RTK_ISR);
  486                 if ((status & (RTK_ISR_TIMEOUT_EXPIRED | RTK_ISR_RX_OK)) ==
  487                     (RTK_ISR_TIMEOUT_EXPIRED | RTK_ISR_RX_OK))
  488                         break;
  489                 DELAY(10);
  490         }
  491         if (i == RTK_TIMEOUT) {
  492                 aprint_error("%s: diagnostic failed, failed to receive packet "
  493                     "in loopback mode\n", sc->sc_dev.dv_xname);
  494                 error = EIO;
  495                 goto done;
  496         }
  497 
  498         /*
  499          * The packet should have been dumped into the first
  500          * entry in the RX DMA ring. Grab it from there.
  501          */
  502 
  503         rxs = &sc->re_ldata.re_rxsoft[0];
  504         dmamap = rxs->rxs_dmamap;
  505         bus_dmamap_sync(sc->sc_dmat, dmamap, 0, dmamap->dm_mapsize,
  506             BUS_DMASYNC_POSTREAD);
  507         bus_dmamap_unload(sc->sc_dmat, dmamap);
  508 
  509         m0 = rxs->rxs_mbuf;
  510         rxs->rxs_mbuf = NULL;
  511         eh = mtod(m0, struct ether_header *);
  512 
  513         RE_RXDESCSYNC(sc, 0, BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
  514         cur_rx = &sc->re_ldata.re_rx_list[0];
  515         rxstat = le32toh(cur_rx->re_cmdstat);
  516         total_len = rxstat & sc->re_rxlenmask;
  517 
  518         if (total_len != ETHER_MIN_LEN) {
  519                 aprint_error("%s: diagnostic failed, received short packet\n",
  520                     sc->sc_dev.dv_xname);
  521                 error = EIO;
  522                 goto done;
  523         }
  524 
  525         /* Test that the received packet data matches what we sent. */
  526 
  527         if (memcmp((char *)&eh->ether_dhost, (char *)&dst, ETHER_ADDR_LEN) ||
  528             memcmp((char *)&eh->ether_shost, (char *)&src, ETHER_ADDR_LEN) ||
  529             ntohs(eh->ether_type) != ETHERTYPE_IP) {
  530                 aprint_error("%s: WARNING, DMA FAILURE!\n",
  531                     sc->sc_dev.dv_xname);
  532                 aprint_error("%s: expected TX data: %s",
  533                     sc->sc_dev.dv_xname, ether_sprintf(dst));
  534                 aprint_error("/%s/0x%x\n", ether_sprintf(src), ETHERTYPE_IP);
  535                 aprint_error("%s: received RX data: %s",
  536                     sc->sc_dev.dv_xname,
  537                     ether_sprintf(eh->ether_dhost));
  538                 aprint_error("/%s/0x%x\n", ether_sprintf(eh->ether_shost),
  539                     ntohs(eh->ether_type));
  540                 aprint_error("%s: You may have a defective 32-bit NIC plugged "
  541                     "into a 64-bit PCI slot.\n", sc->sc_dev.dv_xname);
  542                 aprint_error("%s: Please re-install the NIC in a 32-bit slot "
  543                     "for proper operation.\n", sc->sc_dev.dv_xname);
  544                 aprint_error("%s: Read the re(4) man page for more details.\n",
  545                     sc->sc_dev.dv_xname);
  546                 error = EIO;
  547         }
  548 
  549  done:
  550         /* Turn interface off, release resources */
  551 
  552         sc->re_testmode = 0;
  553         ifp->if_flags &= ~IFF_PROMISC;
  554         re_stop(ifp, 0);
  555         if (m0 != NULL)
  556                 m_freem(m0);
  557 
  558         return error;
  559 }
  560 
  561 
  562 /*
  563  * Attach the interface. Allocate softc structures, do ifmedia
  564  * setup and ethernet/BPF attach.
  565  */
  566 void
  567 re_attach(struct rtk_softc *sc)
  568 {
  569         u_char                  eaddr[ETHER_ADDR_LEN];
  570         uint16_t                val;
  571         struct ifnet            *ifp;
  572         int                     error = 0, i, addr_len;
  573 
  574         /* Reset the adapter. */
  575         re_reset(sc);
  576 
  577         if (rtk_read_eeprom(sc, RTK_EE_ID, RTK_EEADDR_LEN1) == 0x8129)
  578                 addr_len = RTK_EEADDR_LEN1;
  579         else
  580                 addr_len = RTK_EEADDR_LEN0;
  581 
  582         /*
  583          * Get station address from the EEPROM.
  584          */
  585         for (i = 0; i < 3; i++) {
  586                 val = rtk_read_eeprom(sc, RTK_EE_EADDR0 + i, addr_len);
  587                 eaddr[(i * 2) + 0] = val & 0xff;
  588                 eaddr[(i * 2) + 1] = val >> 8;
  589         }
  590 
  591         if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0) {
  592                 uint32_t hwrev;
  593 
  594                 /* Revision of 8169/8169S/8110s in bits 30..26, 23 */
  595                 hwrev = CSR_READ_4(sc, RTK_TXCFG) & RTK_TXCFG_HWREV;
  596                 /* These rev numbers are taken from Realtek's driver */
  597                 if (       hwrev == RTK_HWREV_8100E_SPIN2) {
  598                         sc->sc_rev = 15;
  599                 } else if (hwrev == RTK_HWREV_8100E) {
  600                         sc->sc_rev = 14;
  601                 } else if (hwrev == RTK_HWREV_8101E) {
  602                         sc->sc_rev = 13;
  603                 } else if (hwrev == RTK_HWREV_8168_SPIN2 ||
  604                            hwrev == RTK_HWREV_8168_SPIN3) {
  605                         sc->sc_rev = 12;
  606                 } else if (hwrev == RTK_HWREV_8168_SPIN1) {
  607                         sc->sc_rev = 11;
  608                 } else if (hwrev == RTK_HWREV_8169_8110SC) {
  609                         sc->sc_rev = 5;
  610                 } else if (hwrev == RTK_HWREV_8169_8110SB) {
  611                         sc->sc_rev = 4;
  612                 } else if (hwrev == RTK_HWREV_8169S) {
  613                         sc->sc_rev = 3;
  614                 } else if (hwrev == RTK_HWREV_8110S) {
  615                         sc->sc_rev = 2;
  616                 } else if (hwrev == RTK_HWREV_8169) {
  617                         sc->sc_rev = 1;
  618                         sc->sc_quirk |= RTKQ_8169NONS;
  619                 } else {
  620                         aprint_normal("%s: Unknown revision (0x%08x)\n",
  621                             sc->sc_dev.dv_xname, hwrev);
  622                         /* assume the latest one */
  623                         sc->sc_rev = 15;
  624                 }
  625 
  626                 /* Set RX length mask */
  627                 sc->re_rxlenmask = RE_RDESC_STAT_GFRAGLEN;
  628                 sc->re_ldata.re_tx_desc_cnt = RE_TX_DESC_CNT_8169;
  629         } else {
  630                 /* Set RX length mask */
  631                 sc->re_rxlenmask = RE_RDESC_STAT_FRAGLEN;
  632                 sc->re_ldata.re_tx_desc_cnt = RE_TX_DESC_CNT_8139;
  633         }
  634 
  635         aprint_normal("%s: Ethernet address %s\n",
  636             sc->sc_dev.dv_xname, ether_sprintf(eaddr));
  637 
  638         if (sc->re_ldata.re_tx_desc_cnt >
  639             PAGE_SIZE / sizeof(struct re_desc)) {
  640                 sc->re_ldata.re_tx_desc_cnt =
  641                     PAGE_SIZE / sizeof(struct re_desc);
  642         }
  643 
  644         aprint_verbose("%s: using %d tx descriptors\n",
  645             sc->sc_dev.dv_xname, sc->re_ldata.re_tx_desc_cnt);
  646         KASSERT(RE_NEXT_TX_DESC(sc, RE_TX_DESC_CNT(sc) - 1) == 0);
  647 
  648         /* Allocate DMA'able memory for the TX ring */
  649         if ((error = bus_dmamem_alloc(sc->sc_dmat, RE_TX_LIST_SZ(sc),
  650             RE_RING_ALIGN, 0, &sc->re_ldata.re_tx_listseg, 1,
  651             &sc->re_ldata.re_tx_listnseg, BUS_DMA_NOWAIT)) != 0) {
  652                 aprint_error("%s: can't allocate tx listseg, error = %d\n",
  653                     sc->sc_dev.dv_xname, error);
  654                 goto fail_0;
  655         }
  656 
  657         /* Load the map for the TX ring. */
  658         if ((error = bus_dmamem_map(sc->sc_dmat, &sc->re_ldata.re_tx_listseg,
  659             sc->re_ldata.re_tx_listnseg, RE_TX_LIST_SZ(sc),
  660             (caddr_t *)&sc->re_ldata.re_tx_list,
  661             BUS_DMA_COHERENT | BUS_DMA_NOWAIT)) != 0) {
  662                 aprint_error("%s: can't map tx list, error = %d\n",
  663                     sc->sc_dev.dv_xname, error);
  664                 goto fail_1;
  665         }
  666         memset(sc->re_ldata.re_tx_list, 0, RE_TX_LIST_SZ(sc));
  667 
  668         if ((error = bus_dmamap_create(sc->sc_dmat, RE_TX_LIST_SZ(sc), 1,
  669             RE_TX_LIST_SZ(sc), 0, 0,
  670             &sc->re_ldata.re_tx_list_map)) != 0) {
  671                 aprint_error("%s: can't create tx list map, error = %d\n",
  672                     sc->sc_dev.dv_xname, error);
  673                 goto fail_2;
  674         }
  675 
  676 
  677         if ((error = bus_dmamap_load(sc->sc_dmat,
  678             sc->re_ldata.re_tx_list_map, sc->re_ldata.re_tx_list,
  679             RE_TX_LIST_SZ(sc), NULL, BUS_DMA_NOWAIT)) != 0) {
  680                 aprint_error("%s: can't load tx list, error = %d\n",
  681                     sc->sc_dev.dv_xname, error);
  682                 goto fail_3;
  683         }
  684 
  685         /* Create DMA maps for TX buffers */
  686         for (i = 0; i < RE_TX_QLEN; i++) {
  687                 error = bus_dmamap_create(sc->sc_dmat,
  688                     round_page(IP_MAXPACKET),
  689                     RE_TX_DESC_CNT(sc) - RE_NTXDESC_RSVD, RE_TDESC_CMD_FRAGLEN,
  690                     0, 0, &sc->re_ldata.re_txq[i].txq_dmamap);
  691                 if (error) {
  692                         aprint_error("%s: can't create DMA map for TX\n",
  693                             sc->sc_dev.dv_xname);
  694                         goto fail_4;
  695                 }
  696         }
  697 
  698         /* Allocate DMA'able memory for the RX ring */
  699         /* XXX see also a comment about RE_RX_DMAMEM_SZ in rtl81x9var.h */
  700         if ((error = bus_dmamem_alloc(sc->sc_dmat,
  701             RE_RX_DMAMEM_SZ, RE_RING_ALIGN, 0, &sc->re_ldata.re_rx_listseg, 1,
  702             &sc->re_ldata.re_rx_listnseg, BUS_DMA_NOWAIT)) != 0) {
  703                 aprint_error("%s: can't allocate rx listseg, error = %d\n",
  704                     sc->sc_dev.dv_xname, error);
  705                 goto fail_4;
  706         }
  707 
  708         /* Load the map for the RX ring. */
  709         if ((error = bus_dmamem_map(sc->sc_dmat, &sc->re_ldata.re_rx_listseg,
  710             sc->re_ldata.re_rx_listnseg, RE_RX_DMAMEM_SZ,
  711             (caddr_t *)&sc->re_ldata.re_rx_list,
  712             BUS_DMA_COHERENT | BUS_DMA_NOWAIT)) != 0) {
  713                 aprint_error("%s: can't map rx list, error = %d\n",
  714                     sc->sc_dev.dv_xname, error);
  715                 goto fail_5;
  716         }
  717         memset(sc->re_ldata.re_rx_list, 0, RE_RX_DMAMEM_SZ);
  718 
  719         if ((error = bus_dmamap_create(sc->sc_dmat,
  720             RE_RX_DMAMEM_SZ, 1, RE_RX_DMAMEM_SZ, 0, 0,
  721             &sc->re_ldata.re_rx_list_map)) != 0) {
  722                 aprint_error("%s: can't create rx list map, error = %d\n",
  723                     sc->sc_dev.dv_xname, error);
  724                 goto fail_6;
  725         }
  726 
  727         if ((error = bus_dmamap_load(sc->sc_dmat,
  728             sc->re_ldata.re_rx_list_map, sc->re_ldata.re_rx_list,
  729             RE_RX_DMAMEM_SZ, NULL, BUS_DMA_NOWAIT)) != 0) {
  730                 aprint_error("%s: can't load rx list, error = %d\n",
  731                     sc->sc_dev.dv_xname, error);
  732                 goto fail_7;
  733         }
  734 
  735         /* Create DMA maps for RX buffers */
  736         for (i = 0; i < RE_RX_DESC_CNT; i++) {
  737                 error = bus_dmamap_create(sc->sc_dmat, MCLBYTES, 1, MCLBYTES,
  738                     0, 0, &sc->re_ldata.re_rxsoft[i].rxs_dmamap);
  739                 if (error) {
  740                         aprint_error("%s: can't create DMA map for RX\n",
  741                             sc->sc_dev.dv_xname);
  742                         goto fail_8;
  743                 }
  744         }
  745 
  746         /*
  747          * Record interface as attached. From here, we should not fail.
  748          */
  749         sc->sc_flags |= RTK_ATTACHED;
  750 
  751         ifp = &sc->ethercom.ec_if;
  752         ifp->if_softc = sc;
  753         strcpy(ifp->if_xname, sc->sc_dev.dv_xname);
  754         ifp->if_mtu = ETHERMTU;
  755         ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
  756         ifp->if_ioctl = re_ioctl;
  757         sc->ethercom.ec_capabilities |=
  758             ETHERCAP_VLAN_MTU | ETHERCAP_VLAN_HWTAGGING;
  759         ifp->if_start = re_start;
  760         ifp->if_stop = re_stop;
  761 
  762         /*
  763          * IFCAP_CSUM_IPv4_Tx on re(4) is broken for small packets,
  764          * so we have a workaround to handle the bug by padding
  765          * such packets manually.
  766          */
  767         ifp->if_capabilities |=
  768             IFCAP_CSUM_IPv4 |
  769             IFCAP_CSUM_TCPv4 |
  770             IFCAP_CSUM_UDPv4 |
  771             IFCAP_TSOv4;
  772         ifp->if_watchdog = re_watchdog;
  773         ifp->if_init = re_init;
  774         ifp->if_snd.ifq_maxlen = RE_IFQ_MAXLEN;
  775         ifp->if_capenable = ifp->if_capabilities;
  776         IFQ_SET_READY(&ifp->if_snd);
  777 
  778         callout_init(&sc->rtk_tick_ch);
  779 
  780         /* Do MII setup */
  781         sc->mii.mii_ifp = ifp;
  782         sc->mii.mii_readreg = re_miibus_readreg;
  783         sc->mii.mii_writereg = re_miibus_writereg;
  784         sc->mii.mii_statchg = re_miibus_statchg;
  785         ifmedia_init(&sc->mii.mii_media, IFM_IMASK, re_ifmedia_upd,
  786             re_ifmedia_sts);
  787         mii_attach(&sc->sc_dev, &sc->mii, 0xffffffff, MII_PHY_ANY,
  788             MII_OFFSET_ANY, 0);
  789         ifmedia_set(&sc->mii.mii_media, IFM_ETHER | IFM_AUTO);
  790 
  791         /*
  792          * Call MI attach routine.
  793          */
  794         if_attach(ifp);
  795         ether_ifattach(ifp, eaddr);
  796 
  797 
  798         /*
  799          * Make sure the interface is shutdown during reboot.
  800          */
  801         sc->sc_sdhook = shutdownhook_establish(re_shutdown, sc);
  802         if (sc->sc_sdhook == NULL)
  803                 aprint_error("%s: WARNING: unable to establish shutdown hook\n",
  804                     sc->sc_dev.dv_xname);
  805         /*
  806          * Add a suspend hook to make sure we come back up after a
  807          * resume.
  808          */
  809         sc->sc_powerhook = powerhook_establish(re_power, sc);
  810         if (sc->sc_powerhook == NULL)
  811                 aprint_error("%s: WARNING: unable to establish power hook\n",
  812                     sc->sc_dev.dv_xname);
  813 
  814 
  815         return;
  816 
  817  fail_8:
  818         /* Destroy DMA maps for RX buffers. */
  819         for (i = 0; i < RE_RX_DESC_CNT; i++)
  820                 if (sc->re_ldata.re_rxsoft[i].rxs_dmamap != NULL)
  821                         bus_dmamap_destroy(sc->sc_dmat,
  822                             sc->re_ldata.re_rxsoft[i].rxs_dmamap);
  823 
  824         /* Free DMA'able memory for the RX ring. */
  825         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  826  fail_7:
  827         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  828  fail_6:
  829         bus_dmamem_unmap(sc->sc_dmat,
  830             (caddr_t)sc->re_ldata.re_rx_list, RE_RX_DMAMEM_SZ);
  831  fail_5:
  832         bus_dmamem_free(sc->sc_dmat,
  833             &sc->re_ldata.re_rx_listseg, sc->re_ldata.re_rx_listnseg);
  834 
  835  fail_4:
  836         /* Destroy DMA maps for TX buffers. */
  837         for (i = 0; i < RE_TX_QLEN; i++)
  838                 if (sc->re_ldata.re_txq[i].txq_dmamap != NULL)
  839                         bus_dmamap_destroy(sc->sc_dmat,
  840                             sc->re_ldata.re_txq[i].txq_dmamap);
  841 
  842         /* Free DMA'able memory for the TX ring. */
  843         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  844  fail_3:
  845         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  846  fail_2:
  847         bus_dmamem_unmap(sc->sc_dmat,
  848             (caddr_t)sc->re_ldata.re_tx_list, RE_TX_LIST_SZ(sc));
  849  fail_1:
  850         bus_dmamem_free(sc->sc_dmat,
  851             &sc->re_ldata.re_tx_listseg, sc->re_ldata.re_tx_listnseg);
  852  fail_0:
  853         return;
  854 }
  855 
  856 
  857 /*
  858  * re_activate:
  859  *     Handle device activation/deactivation requests.
  860  */
  861 int
  862 re_activate(struct device *self, enum devact act)
  863 {
  864         struct rtk_softc *sc = (void *)self;
  865         int s, error = 0;
  866 
  867         s = splnet();
  868         switch (act) {
  869         case DVACT_ACTIVATE:
  870                 error = EOPNOTSUPP;
  871                 break;
  872         case DVACT_DEACTIVATE:
  873                 mii_activate(&sc->mii, act, MII_PHY_ANY, MII_OFFSET_ANY);
  874                 if_deactivate(&sc->ethercom.ec_if);
  875                 break;
  876         }
  877         splx(s);
  878 
  879         return error;
  880 }
  881 
  882 /*
  883  * re_detach:
  884  *     Detach a rtk interface.
  885  */
  886 int
  887 re_detach(struct rtk_softc *sc)
  888 {
  889         struct ifnet *ifp = &sc->ethercom.ec_if;
  890         int i;
  891 
  892         /*
  893          * Succeed now if there isn't any work to do.
  894          */
  895         if ((sc->sc_flags & RTK_ATTACHED) == 0)
  896                 return 0;
  897 
  898         /* Unhook our tick handler. */
  899         callout_stop(&sc->rtk_tick_ch);
  900 
  901         /* Detach all PHYs. */
  902         mii_detach(&sc->mii, MII_PHY_ANY, MII_OFFSET_ANY);
  903 
  904         /* Delete all remaining media. */
  905         ifmedia_delete_instance(&sc->mii.mii_media, IFM_INST_ANY);
  906 
  907         ether_ifdetach(ifp);
  908         if_detach(ifp);
  909 
  910         /* Destroy DMA maps for RX buffers. */
  911         for (i = 0; i < RE_RX_DESC_CNT; i++)
  912                 if (sc->re_ldata.re_rxsoft[i].rxs_dmamap != NULL)
  913                         bus_dmamap_destroy(sc->sc_dmat,
  914                             sc->re_ldata.re_rxsoft[i].rxs_dmamap);
  915 
  916         /* Free DMA'able memory for the RX ring. */
  917         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  918         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_rx_list_map);
  919         bus_dmamem_unmap(sc->sc_dmat,
  920             (caddr_t)sc->re_ldata.re_rx_list, RE_RX_DMAMEM_SZ);
  921         bus_dmamem_free(sc->sc_dmat,
  922             &sc->re_ldata.re_rx_listseg, sc->re_ldata.re_rx_listnseg);
  923 
  924         /* Destroy DMA maps for TX buffers. */
  925         for (i = 0; i < RE_TX_QLEN; i++)
  926                 if (sc->re_ldata.re_txq[i].txq_dmamap != NULL)
  927                         bus_dmamap_destroy(sc->sc_dmat,
  928                             sc->re_ldata.re_txq[i].txq_dmamap);
  929 
  930         /* Free DMA'able memory for the TX ring. */
  931         bus_dmamap_unload(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  932         bus_dmamap_destroy(sc->sc_dmat, sc->re_ldata.re_tx_list_map);
  933         bus_dmamem_unmap(sc->sc_dmat,
  934             (caddr_t)sc->re_ldata.re_tx_list, RE_TX_LIST_SZ(sc));
  935         bus_dmamem_free(sc->sc_dmat,
  936             &sc->re_ldata.re_tx_listseg, sc->re_ldata.re_tx_listnseg);
  937 
  938 
  939         shutdownhook_disestablish(sc->sc_sdhook);
  940         powerhook_disestablish(sc->sc_powerhook);
  941 
  942         return 0;
  943 }
  944 
  945 /*
  946  * re_enable:
  947  *     Enable the RTL81X9 chip.
  948  */
  949 static int
  950 re_enable(struct rtk_softc *sc)
  951 {
  952 
  953         if (RTK_IS_ENABLED(sc) == 0 && sc->sc_enable != NULL) {
  954                 if ((*sc->sc_enable)(sc) != 0) {
  955                         aprint_error("%s: device enable failed\n",
  956                             sc->sc_dev.dv_xname);
  957                         return EIO;
  958                 }
  959                 sc->sc_flags |= RTK_ENABLED;
  960         }
  961         return 0;
  962 }
  963 
  964 /*
  965  * re_disable:
  966  *     Disable the RTL81X9 chip.
  967  */
  968 static void
  969 re_disable(struct rtk_softc *sc)
  970 {
  971 
  972         if (RTK_IS_ENABLED(sc) && sc->sc_disable != NULL) {
  973                 (*sc->sc_disable)(sc);
  974                 sc->sc_flags &= ~RTK_ENABLED;
  975         }
  976 }
  977 
  978 /*
  979  * re_power:
  980  *     Power management (suspend/resume) hook.
  981  */
  982 void
  983 re_power(int why, void *arg)
  984 {
  985         struct rtk_softc *sc = (void *)arg;
  986         struct ifnet *ifp = &sc->ethercom.ec_if;
  987         int s;
  988 
  989         s = splnet();
  990         switch (why) {
  991         case PWR_SUSPEND:
  992         case PWR_STANDBY:
  993                 re_stop(ifp, 0);
  994                 if (sc->sc_power != NULL)
  995                         (*sc->sc_power)(sc, why);
  996                 break;
  997         case PWR_RESUME:
  998                 if (ifp->if_flags & IFF_UP) {
  999                         if (sc->sc_power != NULL)
 1000                                 (*sc->sc_power)(sc, why);
 1001                         re_init(ifp);
 1002                 }
 1003                 break;
 1004         case PWR_SOFTSUSPEND:
 1005         case PWR_SOFTSTANDBY:
 1006         case PWR_SOFTRESUME:
 1007                 break;
 1008         }
 1009         splx(s);
 1010 }
 1011 
 1012 
 1013 static int
 1014 re_newbuf(struct rtk_softc *sc, int idx, struct mbuf *m)
 1015 {
 1016         struct mbuf             *n = NULL;
 1017         bus_dmamap_t            map;
 1018         struct re_desc          *d;
 1019         struct re_rxsoft        *rxs;
 1020         uint32_t                cmdstat;
 1021         int                     error;
 1022 
 1023         if (m == NULL) {
 1024                 MGETHDR(n, M_DONTWAIT, MT_DATA);
 1025                 if (n == NULL)
 1026                         return ENOBUFS;
 1027 
 1028                 MCLGET(n, M_DONTWAIT);
 1029                 if ((n->m_flags & M_EXT) == 0) {
 1030                         m_freem(n);
 1031                         return ENOBUFS;
 1032                 }
 1033                 m = n;
 1034         } else
 1035                 m->m_data = m->m_ext.ext_buf;
 1036 
 1037         /*
 1038          * Initialize mbuf length fields and fixup
 1039          * alignment so that the frame payload is
 1040          * longword aligned.
 1041          */
 1042         m->m_len = m->m_pkthdr.len = MCLBYTES - RE_ETHER_ALIGN;
 1043         m->m_data += RE_ETHER_ALIGN;
 1044 
 1045         rxs = &sc->re_ldata.re_rxsoft[idx];
 1046         map = rxs->rxs_dmamap;
 1047         error = bus_dmamap_load_mbuf(sc->sc_dmat, map, m,
 1048             BUS_DMA_READ|BUS_DMA_NOWAIT);
 1049 
 1050         if (error)
 1051                 goto out;
 1052 
 1053         bus_dmamap_sync(sc->sc_dmat, map, 0, map->dm_mapsize,
 1054             BUS_DMASYNC_PREREAD);
 1055 
 1056         d = &sc->re_ldata.re_rx_list[idx];
 1057 #ifdef DIAGNOSTIC
 1058         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1059         cmdstat = le32toh(d->re_cmdstat);
 1060         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_PREREAD);
 1061         if (cmdstat & RE_RDESC_STAT_OWN) {
 1062                 panic("%s: tried to map busy RX descriptor",
 1063                     sc->sc_dev.dv_xname);
 1064         }
 1065 #endif
 1066 
 1067         rxs->rxs_mbuf = m;
 1068 
 1069         d->re_vlanctl = 0;
 1070         cmdstat = map->dm_segs[0].ds_len;
 1071         if (idx == (RE_RX_DESC_CNT - 1))
 1072                 cmdstat |= RE_RDESC_CMD_EOR;
 1073         re_set_bufaddr(d, map->dm_segs[0].ds_addr);
 1074         d->re_cmdstat = htole32(cmdstat);
 1075         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
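               /*
                * Hand the descriptor to the chip only after the buffer
                * address and length written above have been synced: set
                * OWN with a second write and sync.
                */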
 1076         cmdstat |= RE_RDESC_CMD_OWN;
 1077         d->re_cmdstat = htole32(cmdstat);
 1078         RE_RXDESCSYNC(sc, idx, BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1079 
 1080         return 0;
 1081  out:
 1082         if (n != NULL)
 1083                 m_freem(n);
 1084         return ENOMEM;
 1085 }
 1086 
 1087 static int
 1088 re_tx_list_init(struct rtk_softc *sc)
 1089 {
 1090         int i;
 1091 
 1092         memset(sc->re_ldata.re_tx_list, 0, RE_TX_LIST_SZ(sc));
 1093         for (i = 0; i < RE_TX_QLEN; i++) {
 1094                 sc->re_ldata.re_txq[i].txq_mbuf = NULL;
 1095         }
 1096 
 1097         bus_dmamap_sync(sc->sc_dmat,
 1098             sc->re_ldata.re_tx_list_map, 0,
 1099             sc->re_ldata.re_tx_list_map->dm_mapsize,
 1100             BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1101         sc->re_ldata.re_txq_prodidx = 0;
 1102         sc->re_ldata.re_txq_considx = 0;
 1103         sc->re_ldata.re_txq_free = RE_TX_QLEN;
 1104         sc->re_ldata.re_tx_free = RE_TX_DESC_CNT(sc);
 1105         sc->re_ldata.re_tx_nextfree = 0;
 1106 
 1107         return 0;
 1108 }
 1109 
 1110 static int
 1111 re_rx_list_init(struct rtk_softc *sc)
 1112 {
 1113         int                     i;
 1114 
 1115         memset((char *)sc->re_ldata.re_rx_list, 0, RE_RX_LIST_SZ);
 1116 
 1117         for (i = 0; i < RE_RX_DESC_CNT; i++) {
 1118                 if (re_newbuf(sc, i, NULL) == ENOBUFS)
 1119                         return ENOBUFS;
 1120         }
 1121 
 1122         sc->re_ldata.re_rx_prodidx = 0;
 1123         sc->re_head = sc->re_tail = NULL;
 1124 
 1125         return 0;
 1126 }
 1127 
 1128 /*
 1129  * RX handler for C+ and 8169. For the gigE chips, we support
 1130  * the reception of jumbo frames that have been fragmented
 1131  * across multiple 2K mbuf cluster buffers.
 1132  */
 1133 static void
 1134 re_rxeof(struct rtk_softc *sc)
 1135 {
 1136         struct mbuf             *m;
 1137         struct ifnet            *ifp;
 1138         int                     i, total_len;
 1139         struct re_desc          *cur_rx;
 1140         struct re_rxsoft        *rxs;
 1141         uint32_t                rxstat, rxvlan;
 1142 
 1143         ifp = &sc->ethercom.ec_if;
 1144 
 1145         for (i = sc->re_ldata.re_rx_prodidx;; i = RE_NEXT_RX_DESC(sc, i)) {
 1146                 cur_rx = &sc->re_ldata.re_rx_list[i];
 1147                 RE_RXDESCSYNC(sc, i,
 1148                     BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1149                 rxstat = le32toh(cur_rx->re_cmdstat);
 1150                 RE_RXDESCSYNC(sc, i, BUS_DMASYNC_PREREAD);
 1151                 if ((rxstat & RE_RDESC_STAT_OWN) != 0) {
 1152                         break;
 1153                 }
 1154                 total_len = rxstat & sc->re_rxlenmask;
 1155                 rxvlan = le32toh(cur_rx->re_vlanctl);
 1156                 rxs = &sc->re_ldata.re_rxsoft[i];
 1157                 m = rxs->rxs_mbuf;
 1158 
 1159                 /* Invalidate the RX mbuf and unload its map */
 1160 
 1161                 bus_dmamap_sync(sc->sc_dmat,
 1162                     rxs->rxs_dmamap, 0, rxs->rxs_dmamap->dm_mapsize,
 1163                     BUS_DMASYNC_POSTREAD);
 1164                 bus_dmamap_unload(sc->sc_dmat, rxs->rxs_dmamap);
 1165 
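                       /*
                        * EOF not set: this is a middle fragment of a
                        * multi-descriptor (jumbo) frame. Chain it onto the
                        * partial packet and refill this ring slot.
                        */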
 1166                 if ((rxstat & RE_RDESC_STAT_EOF) == 0) {
 1167                         m->m_len = MCLBYTES - RE_ETHER_ALIGN;
 1168                         if (sc->re_head == NULL)
 1169                                 sc->re_head = sc->re_tail = m;
 1170                         else {
 1171                                 m->m_flags &= ~M_PKTHDR;
 1172                                 sc->re_tail->m_next = m;
 1173                                 sc->re_tail = m;
 1174                         }
 1175                         re_newbuf(sc, i, NULL);
 1176                         continue;
 1177                 }
 1178 
 1179                 /*
 1180                  * NOTE: for the 8139C+, the frame length field
 1181                  * is always 12 bits in size, but for the gigE chips,
 1182                  * it is 13 bits (since the max RX frame length is 16K).
 1183                  * Unfortunately, all 32 bits in the status word
 1184                  * were already used, so to make room for the extra
 1185                  * length bit, RealTek took out the 'frame alignment
 1186                  * error' bit and shifted the other status bits
 1187                  * over one slot. The OWN, EOR, FS and LS bits are
 1188                  * still in the same places. We have already extracted
 1189                  * the frame length and checked the OWN bit, so rather
 1190                  * than using an alternate bit mapping, we shift the
 1191                  * status bits one space to the right so we can evaluate
 1192                  * them using the 8169 status as though it was in the
 1193                  * same format as that of the 8139C+.
 1194                  */
 1195                 if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0)
 1196                         rxstat >>= 1;
 1197 
 1198                 if (__predict_false((rxstat & RE_RDESC_STAT_RXERRSUM) != 0)) {
 1199 #ifdef RE_DEBUG
 1200                         aprint_error("%s: RX error (rxstat = 0x%08x)",
 1201                             sc->sc_dev.dv_xname, rxstat);
 1202                         if (rxstat & RE_RDESC_STAT_FRALIGN)
 1203                                 aprint_error(", frame alignment error");
 1204                         if (rxstat & RE_RDESC_STAT_BUFOFLOW)
 1205                                 aprint_error(", out of buffer space");
 1206                         if (rxstat & RE_RDESC_STAT_FIFOOFLOW)
 1207                                 aprint_error(", FIFO overrun");
 1208                         if (rxstat & RE_RDESC_STAT_GIANT)
 1209                                 aprint_error(", giant packet");
 1210                         if (rxstat & RE_RDESC_STAT_RUNT)
 1211                                 aprint_error(", runt packet");
 1212                         if (rxstat & RE_RDESC_STAT_CRCERR)
 1213                                 aprint_error(", CRC error");
 1214                         aprint_error("\n");
 1215 #endif
 1216                         ifp->if_ierrors++;
 1217                         /*
 1218                          * If this is part of a multi-fragment packet,
 1219                          * discard all the pieces.
 1220                          */
 1221                         if (sc->re_head != NULL) {
 1222                                 m_freem(sc->re_head);
 1223                                 sc->re_head = sc->re_tail = NULL;
 1224                         }
 1225                         re_newbuf(sc, i, m);
 1226                         continue;
 1227                 }
 1228 
 1229                 /*
 1230                  * If allocating a replacement mbuf fails,
 1231                  * reload the current one.
 1232                  */
 1233 
 1234                 if (__predict_false(re_newbuf(sc, i, NULL) != 0)) {
 1235                         ifp->if_ierrors++;
 1236                         if (sc->re_head != NULL) {
 1237                                 m_freem(sc->re_head);
 1238                                 sc->re_head = sc->re_tail = NULL;
 1239                         }
 1240                         re_newbuf(sc, i, m);
 1241                         continue;
 1242                 }
 1243 
 1244                 if (sc->re_head != NULL) {
 1245                         m->m_len = total_len % (MCLBYTES - RE_ETHER_ALIGN);
 1246                         /*
 1247                          * Special case: if there's 4 bytes or less
 1248                          * in this buffer, the mbuf can be discarded:
 1249                          * the last 4 bytes is the CRC, which we don't
 1250                          * care about anyway.
 1251                          */
 1252                         if (m->m_len <= ETHER_CRC_LEN) {
 1253                                 sc->re_tail->m_len -=
 1254                                     (ETHER_CRC_LEN - m->m_len);
 1255                                 m_freem(m);
 1256                         } else {
 1257                                 m->m_len -= ETHER_CRC_LEN;
 1258                                 m->m_flags &= ~M_PKTHDR;
 1259                                 sc->re_tail->m_next = m;
 1260                         }
 1261                         m = sc->re_head;
 1262                         sc->re_head = sc->re_tail = NULL;
 1263                         m->m_pkthdr.len = total_len - ETHER_CRC_LEN;
 1264                 } else
 1265                         m->m_pkthdr.len = m->m_len =
 1266                             (total_len - ETHER_CRC_LEN);
 1267 
 1268                 ifp->if_ipackets++;
 1269                 m->m_pkthdr.rcvif = ifp;
 1270 
 1271                 /* Do RX checksumming */
 1272 
 1273                 /* Check IP header checksum */
 1274                 if (rxstat & RE_RDESC_STAT_PROTOID) {
 1275                         m->m_pkthdr.csum_flags |= M_CSUM_IPv4;
 1276                         if (rxstat & RE_RDESC_STAT_IPSUMBAD)
 1277                                 m->m_pkthdr.csum_flags |= M_CSUM_IPv4_BAD;
 1278                 }
 1279 
 1280                 /* Check TCP/UDP checksum */
 1281                 if (RE_TCPPKT(rxstat)) {
 1282                         m->m_pkthdr.csum_flags |= M_CSUM_TCPv4;
 1283                         if (rxstat & RE_RDESC_STAT_TCPSUMBAD)
 1284                                 m->m_pkthdr.csum_flags |= M_CSUM_TCP_UDP_BAD;
 1285                 } else if (RE_UDPPKT(rxstat)) {
 1286                         m->m_pkthdr.csum_flags |= M_CSUM_UDPv4;
 1287                         if (rxstat & RE_RDESC_STAT_UDPSUMBAD)
 1288                                 m->m_pkthdr.csum_flags |= M_CSUM_TCP_UDP_BAD;
 1289                 }
 1290 
 1291                 if (rxvlan & RE_RDESC_VLANCTL_TAG) {
 1292                         VLAN_INPUT_TAG(ifp, m,
 1293                              bswap16(rxvlan & RE_RDESC_VLANCTL_DATA),
 1294                              continue);
 1295                 }
 1296 #if NBPFILTER > 0
 1297                 if (ifp->if_bpf)
 1298                         bpf_mtap(ifp->if_bpf, m);
 1299 #endif
 1300                 (*ifp->if_input)(ifp, m);
 1301         }
 1302 
 1303         sc->re_ldata.re_rx_prodidx = i;
 1304 }
 1305 
 1306 static void
 1307 re_txeof(struct rtk_softc *sc)
 1308 {
 1309         struct ifnet            *ifp;
 1310         struct re_txq           *txq;
 1311         uint32_t                txstat;
 1312         int                     idx, descidx;
 1313 
 1314         ifp = &sc->ethercom.ec_if;
 1315 
 1316         for (idx = sc->re_ldata.re_txq_considx;
 1317             sc->re_ldata.re_txq_free < RE_TX_QLEN;
 1318             idx = RE_NEXT_TXQ(sc, idx), sc->re_ldata.re_txq_free++) {
 1319                 txq = &sc->re_ldata.re_txq[idx];
 1320                 KASSERT(txq->txq_mbuf != NULL);
 1321 
 1322                 descidx = txq->txq_descidx;
 1323                 RE_TXDESCSYNC(sc, descidx,
 1324                     BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1325                 txstat =
 1326                     le32toh(sc->re_ldata.re_tx_list[descidx].re_cmdstat);
 1327                 RE_TXDESCSYNC(sc, descidx, BUS_DMASYNC_PREREAD);
 1328                 KASSERT((txstat & RE_TDESC_CMD_EOF) != 0);
 1329                 if (txstat & RE_TDESC_CMD_OWN) {
 1330                         break;
 1331                 }
 1332 
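                       /*
                        * The chip has finished with this packet: return its
                        * descriptors to the free count, sync and unload the
                        * DMA map, and free the mbuf.
                        */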
 1333                 sc->re_ldata.re_tx_free += txq->txq_nsegs;
 1334                 KASSERT(sc->re_ldata.re_tx_free <= RE_TX_DESC_CNT(sc));
 1335                 bus_dmamap_sync(sc->sc_dmat, txq->txq_dmamap,
 1336                     0, txq->txq_dmamap->dm_mapsize, BUS_DMASYNC_POSTWRITE);
 1337                 bus_dmamap_unload(sc->sc_dmat, txq->txq_dmamap);
 1338                 m_freem(txq->txq_mbuf);
 1339                 txq->txq_mbuf = NULL;
 1340 
 1341                 if (txstat & (RE_TDESC_STAT_EXCESSCOL | RE_TDESC_STAT_COLCNT))
 1342                         ifp->if_collisions++;
 1343                 if (txstat & RE_TDESC_STAT_TXERRSUM)
 1344                         ifp->if_oerrors++;
 1345                 else
 1346                         ifp->if_opackets++;
 1347         }
 1348 
 1349         sc->re_ldata.re_txq_considx = idx;
 1350 
 1351         if (sc->re_ldata.re_txq_free > RE_NTXDESC_RSVD)
 1352                 ifp->if_flags &= ~IFF_OACTIVE;
 1353 
 1354         /*
  1355          * If not all descriptors have been reaped yet,
 1356          * reload the timer so that we will eventually get another
 1357          * interrupt that will cause us to re-enter this routine.
 1358          * This is done in case the transmitter has gone idle.
 1359          */
 1360         if (sc->re_ldata.re_txq_free < RE_TX_QLEN) {
 1361                 CSR_WRITE_4(sc, RTK_TIMERCNT, 1);
 1362                 if ((sc->sc_quirk & RTKQ_PCIE) != 0) {
 1363                         /*
 1364                          * Some chips will ignore a second TX request
 1365                          * issued while an existing transmission is in
 1366                          * progress. If the transmitter goes idle but
 1367                          * there are still packets waiting to be sent,
 1368                          * we need to restart the channel here to flush
 1369                          * them out. This only seems to be required with
 1370                          * the PCIe devices.
 1371                          */
 1372                         CSR_WRITE_2(sc, RTK_GTXSTART, RTK_TXSTART_START);
 1373                 }
 1374         } else
 1375                 ifp->if_timer = 0;
 1376 }
 1377 
 1378 /*
 1379  * Stop all chip I/O so that the kernel's probe routines don't
 1380  * get confused by errant DMAs when rebooting.
 1381  */
 1382 static void
 1383 re_shutdown(void *vsc)
 1385 {
 1386         struct rtk_softc        *sc = vsc;
 1387 
 1388         re_stop(&sc->ethercom.ec_if, 0);
 1389 }
 1390 
 1391 
 1392 static void
 1393 re_tick(void *xsc)
 1394 {
 1395         struct rtk_softc        *sc = xsc;
 1396         int s;
 1397 
 1398         /* XXX: just return for 8169S/8110S with rev 2 or newer phy */
 1399         s = splnet();
 1400 
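              /*
               * Poll the PHY via the MII layer and reschedule ourselves so
               * that link state is checked roughly once per second (hz ticks).
               */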
 1401         mii_tick(&sc->mii);
 1402         splx(s);
 1403 
 1404         callout_reset(&sc->rtk_tick_ch, hz, re_tick, sc);
 1405 }
 1406 
 1407 #ifdef DEVICE_POLLING
 1408 static void
 1409 re_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
 1410 {
 1411         struct rtk_softc *sc = ifp->if_softc;
 1412 
 1413         RTK_LOCK(sc);
 1414         if ((ifp->if_capenable & IFCAP_POLLING) == 0) {
 1415                 ether_poll_deregister(ifp);
 1416                 cmd = POLL_DEREGISTER;
 1417         }
 1418         if (cmd == POLL_DEREGISTER) { /* final call, enable interrupts */
 1419                 CSR_WRITE_2(sc, RTK_IMR, RTK_INTRS_CPLUS);
 1420                 goto done;
 1421         }
 1422 
 1423         sc->rxcycles = count;
 1424         re_rxeof(sc);
 1425         re_txeof(sc);
 1426 
 1427         if (IFQ_IS_EMPTY(&ifp->if_snd) == 0)
 1428                 (*ifp->if_start)(ifp);
 1429 
 1430         if (cmd == POLL_AND_CHECK_STATUS) { /* also check status register */
 1431                 uint16_t       status;
 1432 
 1433                 status = CSR_READ_2(sc, RTK_ISR);
 1434                 if (status == 0xffff)
 1435                         goto done;
 1436                 if (status)
 1437                         CSR_WRITE_2(sc, RTK_ISR, status);
 1438 
 1439                 /*
 1440                  * XXX check behaviour on receiver stalls.
 1441                  */
 1442 
 1443                 if (status & RTK_ISR_SYSTEM_ERR) {
 1444                         re_init(ifp);
 1445                 }
 1446         }
 1447  done:
 1448         RTK_UNLOCK(sc);
 1449 }
 1450 #endif /* DEVICE_POLLING */
 1451 
 1452 int
 1453 re_intr(void *arg)
 1454 {
 1455         struct rtk_softc        *sc = arg;
 1456         struct ifnet            *ifp;
 1457         uint16_t                status;
 1458         int                     handled = 0;
 1459 
 1460         ifp = &sc->ethercom.ec_if;
 1461 
 1462         if ((ifp->if_flags & IFF_UP) == 0)
 1463                 return 0;
 1464 
 1465 #ifdef DEVICE_POLLING
 1466         if (ifp->if_flags & IFF_POLLING)
 1467                 goto done;
 1468         if ((ifp->if_capenable & IFCAP_POLLING) &&
 1469             ether_poll_register(re_poll, ifp)) { /* ok, disable interrupts */
 1470                 CSR_WRITE_2(sc, RTK_IMR, 0x0000);
 1471                 re_poll(ifp, 0, 1);
 1472                 goto done;
 1473         }
 1474 #endif /* DEVICE_POLLING */
 1475 
 1476         for (;;) {
 1477 
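                      /*
                       * Read and acknowledge the interrupt status, then dispatch
                       * the RX, TX and error handlers until none of the bits we
                       * care about remain set.
                       */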
 1478                 status = CSR_READ_2(sc, RTK_ISR);
 1479                 /* If the card has gone away the read returns 0xffff. */
 1480                 if (status == 0xffff)
 1481                         break;
 1482                 if (status) {
 1483                         handled = 1;
 1484                         CSR_WRITE_2(sc, RTK_ISR, status);
 1485                 }
 1486 
 1487                 if ((status & RTK_INTRS_CPLUS) == 0)
 1488                         break;
 1489 
 1490                 if (status & (RTK_ISR_RX_OK | RTK_ISR_RX_ERR))
 1491                         re_rxeof(sc);
 1492 
 1493                 if (status & (RTK_ISR_TIMEOUT_EXPIRED | RTK_ISR_TX_ERR |
 1494                     RTK_ISR_TX_DESC_UNAVAIL))
 1495                         re_txeof(sc);
 1496 
 1497                 if (status & RTK_ISR_SYSTEM_ERR) {
 1498                         re_init(ifp);
 1499                 }
 1500 
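                      /*
                       * On a link change, run the MII tick handler immediately
                       * instead of waiting for the next one-second callout;
                       * re_tick() reschedules the callout itself.
                       */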
 1501                 if (status & RTK_ISR_LINKCHG) {
 1502                         callout_stop(&sc->rtk_tick_ch);
 1503                         re_tick(sc);
 1504                 }
 1505         }
 1506 
 1507         if (handled && !IFQ_IS_EMPTY(&ifp->if_snd))
 1508                 re_start(ifp);
 1509 
 1510 #ifdef DEVICE_POLLING
 1511  done:
 1512 #endif
 1513 
 1514         return handled;
 1515 }
 1516 
 1517 
 1518 
 1519 /*
 1520  * Main transmit routine for C+ and gigE NICs.
 1521  */
 1522 
 1523 static void
 1524 re_start(struct ifnet *ifp)
 1525 {
 1526         struct rtk_softc        *sc;
 1527         struct mbuf             *m;
 1528         bus_dmamap_t            map;
 1529         struct re_txq           *txq;
 1530         struct re_desc          *d;
 1531         struct m_tag            *mtag;
 1532         uint32_t                cmdstat, re_flags;
 1533         int                     ofree, idx, error, nsegs, seg;
 1534         int                     startdesc, curdesc, lastdesc;
 1535         boolean_t               pad;
 1536 
 1537         sc = ifp->if_softc;
 1538         ofree = sc->re_ldata.re_txq_free;
 1539 
 1540         for (idx = sc->re_ldata.re_txq_prodidx;; idx = RE_NEXT_TXQ(sc, idx)) {
 1541 
 1542                 IFQ_POLL(&ifp->if_snd, m);
 1543                 if (m == NULL)
 1544                         break;
 1545 
 1546                 if (sc->re_ldata.re_txq_free == 0 ||
 1547                     sc->re_ldata.re_tx_free <= RE_NTXDESC_RSVD) {
 1548                         /* no more free slots left */
 1549                         ifp->if_flags |= IFF_OACTIVE;
 1550                         break;
 1551                 }
 1552 
 1553                 /*
 1554                  * Set up checksum offload. Note: checksum offload bits must
 1555                  * appear in all descriptors of a multi-descriptor transmit
 1556                  * attempt. (This is according to testing done with an 8169
 1557                  * chip. I'm not sure if this is a requirement or a bug.)
 1558                  */
 1559 
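                      /*
                       * For TSO, pass the MSS to the chip via the large-send
                       * command bits so it can segment the frame itself.
                       */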
 1560                 if ((m->m_pkthdr.csum_flags & M_CSUM_TSOv4) != 0) {
 1561                         uint32_t segsz = m->m_pkthdr.segsz;
 1562 
 1563                         re_flags = RE_TDESC_CMD_LGSEND |
 1564                             (segsz << RE_TDESC_CMD_MSSVAL_SHIFT);
 1565                 } else {
 1566                         /*
 1567                          * Set RE_TDESC_CMD_IPCSUM if any checksum offloading
 1568                          * is requested.  Otherwise, RE_TDESC_CMD_TCPCSUM/
 1569                          * RE_TDESC_CMD_UDPCSUM has no effect.
 1570                          */
 1571                         re_flags = 0;
 1572                         if ((m->m_pkthdr.csum_flags &
 1573                             (M_CSUM_IPv4 | M_CSUM_TCPv4 | M_CSUM_UDPv4))
 1574                             != 0) {
 1575                                 re_flags |= RE_TDESC_CMD_IPCSUM;
 1576                                 if (m->m_pkthdr.csum_flags & M_CSUM_TCPv4) {
 1577                                         re_flags |= RE_TDESC_CMD_TCPCSUM;
 1578                                 } else if (m->m_pkthdr.csum_flags &
 1579                                     M_CSUM_UDPv4) {
 1580                                         re_flags |= RE_TDESC_CMD_UDPCSUM;
 1581                                 }
 1582                         }
 1583                 }
 1584 
 1585                 txq = &sc->re_ldata.re_txq[idx];
 1586                 map = txq->txq_dmamap;
 1587                 error = bus_dmamap_load_mbuf(sc->sc_dmat, map, m,
 1588                     BUS_DMA_WRITE|BUS_DMA_NOWAIT);
 1589 
 1590                 if (__predict_false(error)) {
 1591                         /* XXX try to defrag if EFBIG? */
 1592                         aprint_error("%s: can't map mbuf (error %d)\n",
 1593                             sc->sc_dev.dv_xname, error);
 1594 
 1595                         IFQ_DEQUEUE(&ifp->if_snd, m);
 1596                         m_freem(m);
 1597                         ifp->if_oerrors++;
 1598                         continue;
 1599                 }
 1600 
 1601                 nsegs = map->dm_nsegs;
 1602                 pad = FALSE;
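                      /*
                       * Frames shorter than RE_IP4CSUMTX_PADLEN that request IP
                       * checksum offload are padded out manually below, using an
                       * extra descriptor that points at a dedicated pad buffer;
                       * this appears to work around problems the hardware has
                       * when combining checksum offload with very short frames.
                       */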
 1603                 if (__predict_false(m->m_pkthdr.len <= RE_IP4CSUMTX_PADLEN &&
 1604                     (re_flags & RE_TDESC_CMD_IPCSUM) != 0)) {
 1605                         pad = TRUE;
 1606                         nsegs++;
 1607                 }
 1608 
 1609                 if (nsegs > sc->re_ldata.re_tx_free - RE_NTXDESC_RSVD) {
 1610                         /*
 1611                          * Not enough free descriptors to transmit this packet.
 1612                          */
 1613                         ifp->if_flags |= IFF_OACTIVE;
 1614                         bus_dmamap_unload(sc->sc_dmat, map);
 1615                         break;
 1616                 }
 1617 
 1618                 IFQ_DEQUEUE(&ifp->if_snd, m);
 1619 
 1620                 /*
 1621                  * Make sure that the caches are synchronized before we
 1622                  * ask the chip to start DMA for the packet data.
 1623                  */
 1624                 bus_dmamap_sync(sc->sc_dmat, map, 0, map->dm_mapsize,
 1625                     BUS_DMASYNC_PREWRITE);
 1626 
 1627                 /*
 1628                  * Map the segment array into descriptors.
 1629                  * Note that we set the start-of-frame and
 1630                  * end-of-frame markers for either TX or RX,
 1631                  * but they really only have meaning in the TX case.
 1632                  * (In the RX case, it's the chip that tells us
 1633                  *  where packets begin and end.)
 1634                  * We also keep track of the end of the ring
 1635                  * and set the end-of-ring bits as needed,
 1636                  * and we set the ownership bits in all except
 1637                  * the very first descriptor. (The caller will
 1638                  * set this descriptor later when it starts
 1639                  * transmission or reception.)
 1640                  */
 1641                 curdesc = startdesc = sc->re_ldata.re_tx_nextfree;
 1642                 lastdesc = -1;
 1643                 for (seg = 0; seg < map->dm_nsegs;
 1644                     seg++, curdesc = RE_NEXT_TX_DESC(sc, curdesc)) {
 1645                         d = &sc->re_ldata.re_tx_list[curdesc];
 1646 #ifdef DIAGNOSTIC
 1647                         RE_TXDESCSYNC(sc, curdesc,
 1648                             BUS_DMASYNC_POSTREAD|BUS_DMASYNC_POSTWRITE);
 1649                         cmdstat = le32toh(d->re_cmdstat);
 1650                         RE_TXDESCSYNC(sc, curdesc, BUS_DMASYNC_PREREAD);
 1651                         if (cmdstat & RE_TDESC_STAT_OWN) {
 1652                                 panic("%s: tried to map busy TX descriptor",
 1653                                     sc->sc_dev.dv_xname);
 1654                         }
 1655 #endif
 1656 
 1657                         d->re_vlanctl = 0;
 1658                         re_set_bufaddr(d, map->dm_segs[seg].ds_addr);
 1659                         cmdstat = re_flags | map->dm_segs[seg].ds_len;
 1660                         if (seg == 0)
 1661                                 cmdstat |= RE_TDESC_CMD_SOF;
 1662                         else
 1663                                 cmdstat |= RE_TDESC_CMD_OWN;
 1664                         if (curdesc == (RE_TX_DESC_CNT(sc) - 1))
 1665                                 cmdstat |= RE_TDESC_CMD_EOR;
 1666                         if (seg == nsegs - 1) {
 1667                                 cmdstat |= RE_TDESC_CMD_EOF;
 1668                                 lastdesc = curdesc;
 1669                         }
 1670                         d->re_cmdstat = htole32(cmdstat);
 1671                         RE_TXDESCSYNC(sc, curdesc,
 1672                             BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1673                 }
 1674                 if (__predict_false(pad)) {
 1675                         bus_addr_t paddaddr;
 1676 
 1677                         d = &sc->re_ldata.re_tx_list[curdesc];
 1678                         d->re_vlanctl = 0;
 1679                         paddaddr = RE_TXPADDADDR(sc);
 1680                         re_set_bufaddr(d, paddaddr);
 1681                         cmdstat = re_flags |
 1682                             RE_TDESC_CMD_OWN | RE_TDESC_CMD_EOF |
 1683                             (RE_IP4CSUMTX_PADLEN + 1 - m->m_pkthdr.len);
 1684                         if (curdesc == (RE_TX_DESC_CNT(sc) - 1))
 1685                                 cmdstat |= RE_TDESC_CMD_EOR;
 1686                         d->re_cmdstat = htole32(cmdstat);
 1687                         RE_TXDESCSYNC(sc, curdesc,
 1688                             BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1689                         lastdesc = curdesc;
 1690                         curdesc = RE_NEXT_TX_DESC(sc, curdesc);
 1691                 }
 1692                 KASSERT(lastdesc != -1);
 1693 
 1694                 /*
 1695                  * Set up hardware VLAN tagging. Note: vlan tag info must
 1696                  * appear in the first descriptor of a multi-descriptor
 1697                  * transmission attempt.
 1698                  */
 1699                 if ((mtag = VLAN_OUTPUT_TAG(&sc->ethercom, m)) != NULL) {
 1700                         sc->re_ldata.re_tx_list[startdesc].re_vlanctl =
 1701                             htole32(bswap16(VLAN_TAG_VALUE(mtag)) |
 1702                             RE_TDESC_VLANCTL_TAG);
 1703                 }
 1704 
 1705                 /* Transfer ownership of packet to the chip. */
 1706 
 1707                 sc->re_ldata.re_tx_list[startdesc].re_cmdstat |=
 1708                     htole32(RE_TDESC_CMD_OWN);
 1709                 RE_TXDESCSYNC(sc, startdesc,
 1710                     BUS_DMASYNC_PREREAD|BUS_DMASYNC_PREWRITE);
 1711 
 1712                 /* update info of TX queue and descriptors */
 1713                 txq->txq_mbuf = m;
 1714                 txq->txq_descidx = lastdesc;
 1715                 txq->txq_nsegs = nsegs;
 1716 
 1717                 sc->re_ldata.re_txq_free--;
 1718                 sc->re_ldata.re_tx_free -= nsegs;
 1719                 sc->re_ldata.re_tx_nextfree = curdesc;
 1720 
 1721 #if NBPFILTER > 0
 1722                 /*
 1723                  * If there's a BPF listener, bounce a copy of this frame
 1724                  * to him.
 1725                  */
 1726                 if (ifp->if_bpf)
 1727                         bpf_mtap(ifp->if_bpf, m);
 1728 #endif
 1729         }
 1730 
 1731         if (sc->re_ldata.re_txq_free < ofree) {
 1732                 /*
 1733                  * TX packets are enqueued.
 1734                  */
 1735                 sc->re_ldata.re_txq_prodidx = idx;
 1736 
 1737                 /*
 1738                  * Start the transmitter to poll.
 1739                  *
 1740                  * RealTek put the TX poll request register in a different
 1741                  * location on the 8169 gigE chip. I don't know why.
 1742                  */
 1743                 if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0)
 1744                         CSR_WRITE_1(sc, RTK_TXSTART, RTK_TXSTART_START);
 1745                 else
 1746                         CSR_WRITE_2(sc, RTK_GTXSTART, RTK_TXSTART_START);
 1747 
 1748                 /*
 1749                  * Use the countdown timer for interrupt moderation.
 1750                  * 'TX done' interrupts are disabled. Instead, we reset the
 1751                  * countdown timer, which will begin counting until it hits
 1752                  * the value in the TIMERINT register, and then trigger an
 1753                  * interrupt. Each time we write to the TIMERCNT register,
 1754                  * the timer count is reset to 0.
 1755                  */
 1756                 CSR_WRITE_4(sc, RTK_TIMERCNT, 1);
 1757 
 1758                 /*
 1759                  * Set a timeout in case the chip goes out to lunch.
 1760                  */
 1761                 ifp->if_timer = 5;
 1762         }
 1763 }
 1764 
 1765 static int
 1766 re_init(struct ifnet *ifp)
 1767 {
 1768         struct rtk_softc        *sc = ifp->if_softc;
 1769         uint8_t                 *enaddr;
 1770         uint32_t                rxcfg = 0;
 1771         uint32_t                reg;
 1772         int error;
 1773 
 1774         if ((error = re_enable(sc)) != 0)
 1775                 goto out;
 1776 
 1777         /*
 1778          * Cancel pending I/O and free all RX/TX buffers.
 1779          */
 1780         re_stop(ifp, 0);
 1781 
 1782         re_reset(sc);
 1783 
 1784         /*
 1785          * Enable C+ RX and TX mode, as well as VLAN stripping and
 1786          * RX checksum offload. We must configure the C+ register
 1787          * before all others.
 1788          */
 1789         reg = 0;
 1790 
 1791         /*
 1792          * XXX: Realtek docs say bits 0 and 1 are reserved for the 8169S/8110S.
 1793          * FreeBSD drivers set these bits anyway (for the 8139C+?).
 1794          * So far, it works.
 1795          */
 1796 
 1797         /*
 1798          * XXX: For old 8169 set bit 14.
 1799          *      For 8169S/8110S and above, do not set bit 14.
 1800          */
 1801         if ((sc->sc_quirk & RTKQ_8169NONS) != 0)
 1802                 reg |= (0x1 << 14) | RTK_CPLUSCMD_PCI_MRW;
 1803 
 1804         if (1) {        /* not for 8169S? */
 1805                 reg |=
 1806                     RTK_CPLUSCMD_VLANSTRIP |
 1807                     (ifp->if_capenable &
 1808                     (IFCAP_CSUM_IPv4 | IFCAP_CSUM_TCPv4 |
 1809                      IFCAP_CSUM_UDPv4) ?
 1810                     RTK_CPLUSCMD_RXCSUM_ENB : 0);
 1811         }
 1812 
 1813         CSR_WRITE_2(sc, RTK_CPLUS_CMD,
 1814             reg | RTK_CPLUSCMD_RXENB | RTK_CPLUSCMD_TXENB);
 1815 
 1816         /* XXX: from Realtek-supplied Linux driver. Wholly undocumented. */
 1817         if ((sc->sc_quirk & RTKQ_8139CPLUS) == 0)
 1818                 CSR_WRITE_2(sc, RTK_IM, 0x0000);
 1819 
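              /*
               * Pause for 10 ms, presumably to give the chip time to settle
               * into the newly programmed C+ mode before further configuration.
               */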
 1820         DELAY(10000);
 1821 
 1822         /*
 1823          * Init our MAC address.  Even though the chipset
 1824          * documentation doesn't mention it, we need to enter "Config
 1825          * register write enable" mode to modify the ID registers.
 1826          */
 1827         CSR_WRITE_1(sc, RTK_EECMD, RTK_EEMODE_WRITECFG);
 1828         enaddr = LLADDR(ifp->if_sadl);
 1829         reg = enaddr[0] | (enaddr[1] << 8) |
 1830             (enaddr[2] << 16) | (enaddr[3] << 24);
 1831         CSR_WRITE_4(sc, RTK_IDR0, reg);
 1832         reg = enaddr[4] | (enaddr[5] << 8);
 1833         CSR_WRITE_4(sc, RTK_IDR4, reg);
 1834         CSR_WRITE_1(sc, RTK_EECMD, RTK_EEMODE_OFF);
 1835 
 1836         /*
 1837          * For C+ mode, initialize the RX descriptors and mbufs.
 1838          */
 1839         re_rx_list_init(sc);
 1840         re_tx_list_init(sc);
 1841 
 1842         /*
 1843          * Load the addresses of the RX and TX lists into the chip.
 1844          */
 1845         CSR_WRITE_4(sc, RTK_RXLIST_ADDR_HI,
 1846             RE_ADDR_HI(sc->re_ldata.re_rx_list_map->dm_segs[0].ds_addr));
 1847         CSR_WRITE_4(sc, RTK_RXLIST_ADDR_LO,
 1848             RE_ADDR_LO(sc->re_ldata.re_rx_list_map->dm_segs[0].ds_addr));
 1849 
 1850         CSR_WRITE_4(sc, RTK_TXLIST_ADDR_HI,
 1851             RE_ADDR_HI(sc->re_ldata.re_tx_list_map->dm_segs[0].ds_addr));
 1852         CSR_WRITE_4(sc, RTK_TXLIST_ADDR_LO,
 1853             RE_ADDR_LO(sc->re_ldata.re_tx_list_map->dm_segs[0].ds_addr));
 1854 
 1855         /*
 1856          * Enable transmit and receive.
 1857          */
 1858         CSR_WRITE_1(sc, RTK_COMMAND, RTK_CMD_TX_ENB | RTK_CMD_RX_ENB);
 1859 
 1860         /*
 1861          * Set the initial TX and RX configuration.
 1862          */
 1863         if (sc->re_testmode && (sc->sc_quirk & RTKQ_8169NONS) != 0) {
 1864                 /* test mode is needed only for old 8169 */
 1865                 CSR_WRITE_4(sc, RTK_TXCFG,
 1866                     RE_TXCFG_CONFIG | RTK_LOOPTEST_ON);
 1867         } else
 1868                 CSR_WRITE_4(sc, RTK_TXCFG, RE_TXCFG_CONFIG);
 1869 
 1870         CSR_WRITE_1(sc, RTK_EARLY_TX_THRESH, 16);
 1871 
 1872         CSR_WRITE_4(sc, RTK_RXCFG, RE_RXCFG_CONFIG);
 1873 
 1874         /* Set the individual bit to receive frames for this host only. */
 1875         rxcfg = CSR_READ_4(sc, RTK_RXCFG);
 1876         rxcfg |= RTK_RXCFG_RX_INDIV;
 1877 
 1878         /* If we want promiscuous mode, set the allframes bit. */
 1879         if (ifp->if_flags & IFF_PROMISC)
 1880                 rxcfg |= RTK_RXCFG_RX_ALLPHYS;
 1881         else
 1882                 rxcfg &= ~RTK_RXCFG_RX_ALLPHYS;
 1883         CSR_WRITE_4(sc, RTK_RXCFG, rxcfg);
 1884 
 1885         /*
 1886          * Set capture broadcast bit to capture broadcast frames.
 1887          */
 1888         if (ifp->if_flags & IFF_BROADCAST)
 1889                 rxcfg |= RTK_RXCFG_RX_BROAD;
 1890         else
 1891                 rxcfg &= ~RTK_RXCFG_RX_BROAD;
 1892         CSR_WRITE_4(sc, RTK_RXCFG, rxcfg);
 1893 
 1894         /*
 1895          * Program the multicast filter, if necessary.
 1896          */
 1897         rtk_setmulti(sc);
 1898 
 1899 #ifdef DEVICE_POLLING
 1900         /*
 1901          * Disable interrupts if we are polling.
 1902          */
 1903         if (ifp->if_flags & IFF_POLLING)
 1904                 CSR_WRITE_2(sc, RTK_IMR, 0);
 1905         else    /* otherwise ... */
 1906 #endif /* DEVICE_POLLING */
 1907         /*
 1908          * Enable interrupts.
 1909          */
 1910         if (sc->re_testmode)
 1911                 CSR_WRITE_2(sc, RTK_IMR, 0);
 1912         else
 1913                 CSR_WRITE_2(sc, RTK_IMR, RTK_INTRS_CPLUS);
 1914 
 1915         /* Start RX/TX process. */
 1916         CSR_WRITE_4(sc, RTK_MISSEDPKT, 0);
 1917 #ifdef notdef
 1918         /* Enable receiver and transmitter. */
 1919         CSR_WRITE_1(sc, RTK_COMMAND, RTK_CMD_TX_ENB | RTK_CMD_RX_ENB);
 1920 #endif
 1921 
 1922         /*
 1923          * Initialize the timer interrupt register so that
 1924          * a timer interrupt will be generated once the timer
 1925          * reaches a certain number of ticks. The timer is
 1926          * reloaded on each transmit. This gives us TX interrupt
 1927          * moderation, which dramatically improves TX frame rate.
 1928          */
 1929 
 1930         if ((sc->sc_quirk & RTKQ_8139CPLUS) != 0)
 1931                 CSR_WRITE_4(sc, RTK_TIMERINT, 0x400);
 1932         else {
 1933                 CSR_WRITE_4(sc, RTK_TIMERINT_8169, 0x800);
 1934 
 1935                 /*
 1936                  * For 8169 gigE NICs, set the max allowed RX packet
 1937                  * size so we can receive jumbo frames.
 1938                  */
 1939                 CSR_WRITE_2(sc, RTK_MAXRXPKTLEN, 16383);
 1940         }
 1941 
 1942         if (sc->re_testmode)
 1943                 return 0;
 1944 
 1945         CSR_WRITE_1(sc, RTK_CFG1, RTK_CFG1_DRVLOAD | RTK_CFG1_FULLDUPLEX);
 1946 
 1947         ifp->if_flags |= IFF_RUNNING;
 1948         ifp->if_flags &= ~IFF_OACTIVE;
 1949 
 1950         callout_reset(&sc->rtk_tick_ch, hz, re_tick, sc);
 1951 
 1952  out:
 1953         if (error) {
 1954                 ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
 1955                 ifp->if_timer = 0;
 1956                 aprint_error("%s: interface not running\n",
 1957                     sc->sc_dev.dv_xname);
 1958         }
 1959 
 1960         return error;
 1961 }
 1962 
 1963 /*
 1964  * Set media options.
 1965  */
 1966 static int
 1967 re_ifmedia_upd(struct ifnet *ifp)
 1968 {
 1969         struct rtk_softc        *sc;
 1970 
 1971         sc = ifp->if_softc;
 1972 
 1973         return mii_mediachg(&sc->mii);
 1974 }
 1975 
 1976 /*
 1977  * Report current media status.
 1978  */
 1979 static void
 1980 re_ifmedia_sts(struct ifnet *ifp, struct ifmediareq *ifmr)
 1981 {
 1982         struct rtk_softc        *sc;
 1983 
 1984         sc = ifp->if_softc;
 1985 
 1986         mii_pollstat(&sc->mii);
 1987         ifmr->ifm_active = sc->mii.mii_media_active;
 1988         ifmr->ifm_status = sc->mii.mii_media_status;
 1989 }
 1990 
 1991 static int
 1992 re_ioctl(struct ifnet *ifp, u_long command, caddr_t data)
 1993 {
 1994         struct rtk_softc        *sc = ifp->if_softc;
 1995         struct ifreq            *ifr = (struct ifreq *) data;
 1996         int                     s, error = 0;
 1997 
 1998         s = splnet();
 1999 
 2000         switch (command) {
 2001         case SIOCSIFMTU:
 2002                 if (ifr->ifr_mtu > RE_JUMBO_MTU) {
 2003                         error = EINVAL;
                              break;
                      }
 2004                 ifp->if_mtu = ifr->ifr_mtu;
 2005                 break;
 2006         case SIOCGIFMEDIA:
 2007         case SIOCSIFMEDIA:
 2008                 error = ifmedia_ioctl(ifp, ifr, &sc->mii.mii_media, command);
 2009                 break;
 2010         default:
 2011                 error = ether_ioctl(ifp, command, data);
 2012                 if (error == ENETRESET) {
 2013                         if (ifp->if_flags & IFF_RUNNING)
 2014                                 rtk_setmulti(sc);
 2015                         error = 0;
 2016                 }
 2017                 break;
 2018         }
 2019 
 2020         splx(s);
 2021 
 2022         return error;
 2023 }
 2024 
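      /*
       * For reference, a minimal userland sketch (not part of this driver)
       * showing how the SIOCSIFMTU case handled above is typically exercised.
       * The helper name set_mtu() and its error handling are illustrative only.
       *
       *      #include <sys/ioctl.h>
       *      #include <sys/socket.h>
       *      #include <sys/sockio.h>
       *      #include <net/if.h>
       *      #include <string.h>
       *      #include <unistd.h>
       *
       *      static int
       *      set_mtu(const char *ifname, int mtu)
       *      {
       *              struct ifreq ifr;
       *              int s, error;
       *
       *              if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
       *                      return -1;
       *              memset(&ifr, 0, sizeof(ifr));
       *              strlcpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
       *              ifr.ifr_mtu = mtu;
       *              error = ioctl(s, SIOCSIFMTU, &ifr);
       *              close(s);
       *              return error;
       *      }
       */
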
 2025 static void
 2026 re_watchdog(struct ifnet *ifp)
 2027 {
 2028         struct rtk_softc        *sc;
 2029         int                     s;
 2030 
 2031         sc = ifp->if_softc;
 2032         s = splnet();
 2033         aprint_error("%s: watchdog timeout\n", sc->sc_dev.dv_xname);
 2034         ifp->if_oerrors++;
 2035 
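              /*
               * Reap any work that did complete, then reset and reinitialize
               * the chip to recover from the stall.
               */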
 2036         re_txeof(sc);
 2037         re_rxeof(sc);
 2038 
 2039         re_init(ifp);
 2040 
 2041         splx(s);
 2042 }
 2043 
 2044 /*
 2045  * Stop the adapter and free any mbufs allocated to the
 2046  * RX and TX lists.
 2047  */
 2048 static void
 2049 re_stop(struct ifnet *ifp, int disable)
 2050 {
 2051         int             i;
 2052         struct rtk_softc *sc = ifp->if_softc;
 2053 
 2054         callout_stop(&sc->rtk_tick_ch);
 2055 
 2056 #ifdef DEVICE_POLLING
 2057         ether_poll_deregister(ifp);
 2058 #endif /* DEVICE_POLLING */
 2059 
 2060         mii_down(&sc->mii);
 2061 
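              /*
               * Halt the transmitter and receiver and mask all interrupts
               * before tearing down the DMA maps and mbufs below.
               */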
 2062         CSR_WRITE_1(sc, RTK_COMMAND, 0x00);
 2063         CSR_WRITE_2(sc, RTK_IMR, 0x0000);
 2064 
 2065         if (sc->re_head != NULL) {
 2066                 m_freem(sc->re_head);
 2067                 sc->re_head = sc->re_tail = NULL;
 2068         }
 2069 
 2070         /* Free the TX list buffers. */
 2071         for (i = 0; i < RE_TX_QLEN; i++) {
 2072                 if (sc->re_ldata.re_txq[i].txq_mbuf != NULL) {
 2073                         bus_dmamap_unload(sc->sc_dmat,
 2074                             sc->re_ldata.re_txq[i].txq_dmamap);
 2075                         m_freem(sc->re_ldata.re_txq[i].txq_mbuf);
 2076                         sc->re_ldata.re_txq[i].txq_mbuf = NULL;
 2077                 }
 2078         }
 2079 
 2080         /* Free the RX list buffers. */
 2081         for (i = 0; i < RE_RX_DESC_CNT; i++) {
 2082                 if (sc->re_ldata.re_rxsoft[i].rxs_mbuf != NULL) {
 2083                         bus_dmamap_unload(sc->sc_dmat,
 2084                             sc->re_ldata.re_rxsoft[i].rxs_dmamap);
 2085                         m_freem(sc->re_ldata.re_rxsoft[i].rxs_mbuf);
 2086                         sc->re_ldata.re_rxsoft[i].rxs_mbuf = NULL;
 2087                 }
 2088         }
 2089 
 2090         if (disable)
 2091                 re_disable(sc);
 2092 
 2093         ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
 2094         ifp->if_timer = 0;
 2095 }
