FreeBSD/Linux Kernel Cross Reference
sys/kern/uipc_socket.c


    1 /*-
    2  * Copyright (c) 1982, 1986, 1988, 1990, 1993
    3  *      The Regents of the University of California.
    4  * Copyright (c) 2004 The FreeBSD Foundation
    5  * Copyright (c) 2004-2008 Robert N. M. Watson
    6  * All rights reserved.
    7  *
    8  * Redistribution and use in source and binary forms, with or without
    9  * modification, are permitted provided that the following conditions
   10  * are met:
   11  * 1. Redistributions of source code must retain the above copyright
   12  *    notice, this list of conditions and the following disclaimer.
   13  * 2. Redistributions in binary form must reproduce the above copyright
   14  *    notice, this list of conditions and the following disclaimer in the
   15  *    documentation and/or other materials provided with the distribution.
   16  * 4. Neither the name of the University nor the names of its contributors
   17  *    may be used to endorse or promote products derived from this software
   18  *    without specific prior written permission.
   19  *
   20  * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
   21  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
   22  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
   23  * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
   24  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
   25  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
   26  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   27  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
   28  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
   29  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
   30  * SUCH DAMAGE.
   31  *
   32  *      @(#)uipc_socket.c       8.3 (Berkeley) 4/15/94
   33  */
   34 
   35 /*
   36  * Comments on the socket life cycle:
   37  *
    38  * soalloc() sets up socket layer state for a socket, called only by
   39  * socreate() and sonewconn().  Socket layer private.
   40  *
   41  * sodealloc() tears down socket layer state for a socket, called only by
   42  * sofree() and sonewconn().  Socket layer private.
   43  *
   44  * pru_attach() associates protocol layer state with an allocated socket;
   45  * called only once, may fail, aborting socket allocation.  This is called
   46  * from socreate() and sonewconn().  Socket layer private.
   47  *
   48  * pru_detach() disassociates protocol layer state from an attached socket,
   49  * and will be called exactly once for sockets in which pru_attach() has
   50  * been successfully called.  If pru_attach() returned an error,
   51  * pru_detach() will not be called.  Socket layer private.
   52  *
   53  * pru_abort() and pru_close() notify the protocol layer that the last
   54  * consumer of a socket is starting to tear down the socket, and that the
   55  * protocol should terminate the connection.  Historically, pru_abort() also
   56  * detached protocol state from the socket state, but this is no longer the
   57  * case.
   58  *
   59  * socreate() creates a socket and attaches protocol state.  This is a public
   60  * interface that may be used by socket layer consumers to create new
   61  * sockets.
   62  *
   63  * sonewconn() creates a socket and attaches protocol state.  This is a
    64  * public interface that may be used by protocols to create new sockets when
   65  * a new connection is received and will be available for accept() on a
   66  * listen socket.
   67  *
   68  * soclose() destroys a socket after possibly waiting for it to disconnect.
   69  * This is a public interface that socket consumers should use to close and
   70  * release a socket when done with it.
   71  *
   72  * soabort() destroys a socket without waiting for it to disconnect (used
   73  * only for incoming connections that are already partially or fully
   74  * connected).  This is used internally by the socket layer when clearing
   75  * listen socket queues (due to overflow or close on the listen socket), but
   76  * is also a public interface protocols may use to abort connections in
   77  * their incomplete listen queues should they no longer be required.  Sockets
   78  * placed in completed connection listen queues should not be aborted for
   79  * reasons described in the comment above the soclose() implementation.  This
   80  * is not a general purpose close routine, and except in the specific
   81  * circumstances described here, should not be used.
   82  *
    83  * sofree() will free a socket and its protocol state if all references on
    84  * the socket have been released, and is the interface used within the
    85  * socket layer to attempt to free a socket when a reference is removed.
    86  * This is a socket layer private interface.
   87  *
   88  * NOTE: In addition to socreate() and soclose(), which provide a single
   89  * socket reference to the consumer to be managed as required, there are two
   90  * calls to explicitly manage socket references, soref(), and sorele().
   91  * Currently, these are generally required only when transitioning a socket
   92  * from a listen queue to a file descriptor, in order to prevent garbage
   93  * collection of the socket at an untimely moment.  For a number of reasons,
   94  * these interfaces are not preferred, and should be avoided.
   95  */
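The kernel-internal calls above are driven from user space by the standard socket system calls: socket(2) ends up in socreate(), and the final close(2) on a descriptor ends up in soclose(). A minimal userspace sketch of that outer life cycle (illustrative only; the helper name is ours, not part of this file):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * socket(2) enters the kernel at socreate(), which hands back a socket
 * holding a single reference; close(2) on the last descriptor reference
 * enters soclose(), which drops that reference and ultimately frees the
 * socket via sofree()/sodealloc().
 */
int
socket_roundtrip(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return (-1);
	return (close(fd));	/* 0 on success */
}
```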
   96 
   97 #include <sys/cdefs.h>
   98 __FBSDID("$FreeBSD$");
   99 
  100 #include "opt_inet.h"
  101 #include "opt_mac.h"
  102 #include "opt_zero.h"
  103 #include "opt_compat.h"
  104 
  105 #include <sys/param.h>
  106 #include <sys/systm.h>
  107 #include <sys/fcntl.h>
  108 #include <sys/limits.h>
  109 #include <sys/lock.h>
  110 #include <sys/mac.h>
  111 #include <sys/malloc.h>
  112 #include <sys/mbuf.h>
  113 #include <sys/mutex.h>
  114 #include <sys/domain.h>
  115 #include <sys/file.h>                   /* for struct knote */
  116 #include <sys/kernel.h>
  117 #include <sys/event.h>
  118 #include <sys/eventhandler.h>
  119 #include <sys/poll.h>
  120 #include <sys/proc.h>
  121 #include <sys/protosw.h>
  122 #include <sys/socket.h>
  123 #include <sys/socketvar.h>
  124 #include <sys/resourcevar.h>
  125 #include <net/route.h>
  126 #include <sys/signalvar.h>
  127 #include <sys/stat.h>
  128 #include <sys/sx.h>
  129 #include <sys/sysctl.h>
  130 #include <sys/uio.h>
  131 #include <sys/jail.h>
  132 
  133 #include <security/mac/mac_framework.h>
  134 
  135 #include <vm/uma.h>
  136 
  137 #ifdef COMPAT_IA32
  138 #include <sys/mount.h>
  139 #include <compat/freebsd32/freebsd32.h>
  140 
  141 extern struct sysentvec ia32_freebsd_sysvec;
  142 #endif
  143 
  144 static int      soreceive_rcvoob(struct socket *so, struct uio *uio,
  145                     int flags);
  146 
  147 static void     filt_sordetach(struct knote *kn);
  148 static int      filt_soread(struct knote *kn, long hint);
  149 static void     filt_sowdetach(struct knote *kn);
  150 static int      filt_sowrite(struct knote *kn, long hint);
  151 static int      filt_solisten(struct knote *kn, long hint);
  152 
  153 static struct filterops solisten_filtops =
  154         { 1, NULL, filt_sordetach, filt_solisten };
  155 static struct filterops soread_filtops =
  156         { 1, NULL, filt_sordetach, filt_soread };
  157 static struct filterops sowrite_filtops =
  158         { 1, NULL, filt_sowdetach, filt_sowrite };
  159 
  160 uma_zone_t socket_zone;
  161 so_gen_t        so_gencnt;      /* generation count for sockets */
  162 
  163 int     maxsockets;
  164 
  165 MALLOC_DEFINE(M_SONAME, "soname", "socket name");
  166 MALLOC_DEFINE(M_PCB, "pcb", "protocol control block");
  167 
  168 static int somaxconn = SOMAXCONN;
  169 static int sysctl_somaxconn(SYSCTL_HANDLER_ARGS);
  170  /* XXX: we don't have SYSCTL_USHORT */
  171 SYSCTL_PROC(_kern_ipc, KIPC_SOMAXCONN, somaxconn, CTLTYPE_UINT | CTLFLAG_RW,
  172     0, sizeof(int), sysctl_somaxconn, "I", "Maximum pending socket connection "
  173     "queue size");
  174 static int numopensockets;
  175 SYSCTL_INT(_kern_ipc, OID_AUTO, numopensockets, CTLFLAG_RD,
  176     &numopensockets, 0, "Number of open sockets");
  177 #ifdef ZERO_COPY_SOCKETS
  178 /* These aren't static because they're used in other files. */
  179 int so_zero_copy_send = 1;
  180 int so_zero_copy_receive = 1;
  181 SYSCTL_NODE(_kern_ipc, OID_AUTO, zero_copy, CTLFLAG_RD, 0,
  182     "Zero copy controls");
  183 SYSCTL_INT(_kern_ipc_zero_copy, OID_AUTO, receive, CTLFLAG_RW,
  184     &so_zero_copy_receive, 0, "Enable zero copy receive");
  185 SYSCTL_INT(_kern_ipc_zero_copy, OID_AUTO, send, CTLFLAG_RW,
  186     &so_zero_copy_send, 0, "Enable zero copy send");
  187 #endif /* ZERO_COPY_SOCKETS */
  188 
  189 /*
  190  * accept_mtx locks down per-socket fields relating to accept queues.  See
  191  * socketvar.h for an annotation of the protected fields of struct socket.
  192  */
  193 struct mtx accept_mtx;
  194 MTX_SYSINIT(accept_mtx, &accept_mtx, "accept", MTX_DEF);
  195 
  196 /*
  197  * so_global_mtx protects so_gencnt, numopensockets, and the per-socket
  198  * so_gencnt field.
  199  */
  200 static struct mtx so_global_mtx;
  201 MTX_SYSINIT(so_global_mtx, &so_global_mtx, "so_glabel", MTX_DEF);
  202 
  203 /*
  204  * General IPC sysctl name space, used by sockets and a variety of other IPC
  205  * types.
  206  */
  207 SYSCTL_NODE(_kern, KERN_IPC, ipc, CTLFLAG_RW, 0, "IPC");
  208 
  209 /*
  210  * Sysctl to get and set the maximum global sockets limit.  Notify protocols
  211  * of the change so that they can update their dependent limits as required.
  212  */
  213 static int
  214 sysctl_maxsockets(SYSCTL_HANDLER_ARGS)
  215 {
  216         int error, newmaxsockets;
  217 
  218         newmaxsockets = maxsockets;
  219         error = sysctl_handle_int(oidp, &newmaxsockets, 0, req);
  220         if (error == 0 && req->newptr) {
  221                 if (newmaxsockets > maxsockets) {
  222                         maxsockets = newmaxsockets;
  223                         if (maxsockets > ((maxfiles / 4) * 3)) {
  224                                 maxfiles = (maxsockets * 5) / 4;
  225                                 maxfilesperproc = (maxfiles * 9) / 10;
  226                         }
  227                         EVENTHANDLER_INVOKE(maxsockets_change);
  228                 } else
  229                         error = EINVAL;
  230         }
  231         return (error);
  232 }
  233 
  234 SYSCTL_PROC(_kern_ipc, OID_AUTO, maxsockets, CTLTYPE_INT|CTLFLAG_RW,
  235     &maxsockets, 0, sysctl_maxsockets, "IU",
  236     "Maximum number of sockets available");
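The limit scaling in sysctl_maxsockets() keeps the file-descriptor limits ahead of the socket limit: once the new socket limit exceeds 3/4 of maxfiles, maxfiles is raised to 5/4 of maxsockets and maxfilesperproc to 9/10 of the new maxfiles. The same integer arithmetic, lifted into a standalone sketch (the struct and function names are ours):

```c
#include <assert.h>

/* Mirror of the limit-scaling arithmetic in sysctl_maxsockets(). */
struct fd_limits {
	int maxfiles;
	int maxfilesperproc;
};

static struct fd_limits
scale_limits(int maxsockets, struct fd_limits l)
{
	/* Grow the file limits only when sockets would crowd them. */
	if (maxsockets > ((l.maxfiles / 4) * 3)) {
		l.maxfiles = (maxsockets * 5) / 4;
		l.maxfilesperproc = (l.maxfiles * 9) / 10;
	}
	return (l);
}
```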
  237 
  238 /*
  239  * Initialise maxsockets.
  240  */
  241 static void init_maxsockets(void *ignored)
  242 {
  243         TUNABLE_INT_FETCH("kern.ipc.maxsockets", &maxsockets);
  244         maxsockets = imax(maxsockets, imax(maxfiles, nmbclusters));
  245 }
  246 SYSINIT(param, SI_SUB_TUNABLES, SI_ORDER_ANY, init_maxsockets, NULL);
  247 
  248 /*
  249  * Socket operation routines.  These routines are called by the routines in
  250  * sys_socket.c or from a system process, and implement the semantics of
  251  * socket operations by switching out to the protocol specific routines.
  252  */
  253 
  254 /*
  255  * Get a socket structure from our zone, and initialize it.  Note that it
  256  * would probably be better to allocate socket and PCB at the same time, but
  257  * I'm not convinced that all the protocols can be easily modified to do
  258  * this.
  259  *
  260  * soalloc() returns a socket with a ref count of 0.
  261  */
  262 static struct socket *
  263 soalloc(void)
  264 {
  265         struct socket *so;
  266 
  267         so = uma_zalloc(socket_zone, M_NOWAIT | M_ZERO);
  268         if (so == NULL)
  269                 return (NULL);
  270 #ifdef MAC
  271         if (mac_init_socket(so, M_NOWAIT) != 0) {
  272                 uma_zfree(socket_zone, so);
  273                 return (NULL);
  274         }
  275 #endif
  276         SOCKBUF_LOCK_INIT(&so->so_snd, "so_snd");
  277         SOCKBUF_LOCK_INIT(&so->so_rcv, "so_rcv");
  278         sx_init(&so->so_snd.sb_sx, "so_snd_sx");
  279         sx_init(&so->so_rcv.sb_sx, "so_rcv_sx");
  280         TAILQ_INIT(&so->so_aiojobq);
  281         mtx_lock(&so_global_mtx);
  282         so->so_gencnt = ++so_gencnt;
  283         ++numopensockets;
  284         mtx_unlock(&so_global_mtx);
  285         return (so);
  286 }
  287 
  288 /*
  289  * Free the storage associated with a socket at the socket layer, tear down
  290  * locks, labels, etc.  All protocol state is assumed already to have been
  291  * torn down (and possibly never set up) by the caller.
  292  */
  293 static void
  294 sodealloc(struct socket *so)
  295 {
  296 
  297         KASSERT(so->so_count == 0, ("sodealloc(): so_count %d", so->so_count));
  298         KASSERT(so->so_pcb == NULL, ("sodealloc(): so_pcb != NULL"));
  299 
  300         mtx_lock(&so_global_mtx);
  301         so->so_gencnt = ++so_gencnt;
  302         --numopensockets;       /* Could be below, but faster here. */
  303         mtx_unlock(&so_global_mtx);
  304         if (so->so_rcv.sb_hiwat)
  305                 (void)chgsbsize(so->so_cred->cr_uidinfo,
  306                     &so->so_rcv.sb_hiwat, 0, RLIM_INFINITY);
  307         if (so->so_snd.sb_hiwat)
  308                 (void)chgsbsize(so->so_cred->cr_uidinfo,
  309                     &so->so_snd.sb_hiwat, 0, RLIM_INFINITY);
  310 #ifdef INET
   311         /* remove accept filter if one is present. */
  312         if (so->so_accf != NULL)
  313                 do_setopt_accept_filter(so, NULL);
  314 #endif
  315 #ifdef MAC
  316         mac_destroy_socket(so);
  317 #endif
  318         crfree(so->so_cred);
  319         sx_destroy(&so->so_snd.sb_sx);
  320         sx_destroy(&so->so_rcv.sb_sx);
  321         SOCKBUF_LOCK_DESTROY(&so->so_snd);
  322         SOCKBUF_LOCK_DESTROY(&so->so_rcv);
  323         uma_zfree(socket_zone, so);
  324 }
  325 
  326 /*
  327  * socreate returns a socket with a ref count of 1.  The socket should be
  328  * closed with soclose().
  329  */
  330 int
  331 socreate(int dom, struct socket **aso, int type, int proto,
  332     struct ucred *cred, struct thread *td)
  333 {
  334         struct protosw *prp;
  335         struct socket *so;
  336         int error;
  337 
  338         if (proto)
  339                 prp = pffindproto(dom, proto, type);
  340         else
  341                 prp = pffindtype(dom, type);
  342 
  343         if (prp == NULL || prp->pr_usrreqs->pru_attach == NULL ||
  344             prp->pr_usrreqs->pru_attach == pru_attach_notsupp)
  345                 return (EPROTONOSUPPORT);
  346 
  347         if (jailed(cred) && jail_socket_unixiproute_only &&
  348             prp->pr_domain->dom_family != PF_LOCAL &&
  349             prp->pr_domain->dom_family != PF_INET &&
  350             prp->pr_domain->dom_family != PF_ROUTE) {
  351                 return (EPROTONOSUPPORT);
  352         }
  353 
  354         if (prp->pr_type != type)
  355                 return (EPROTOTYPE);
  356         so = soalloc();
  357         if (so == NULL)
  358                 return (ENOBUFS);
  359 
  360         TAILQ_INIT(&so->so_incomp);
  361         TAILQ_INIT(&so->so_comp);
  362         so->so_type = type;
  363         so->so_cred = crhold(cred);
  364         if ((prp->pr_domain->dom_family == PF_INET) ||
  365             (prp->pr_domain->dom_family == PF_ROUTE))
  366                 so->so_fibnum = td->td_proc->p_fibnum;
  367         else
  368                 so->so_fibnum = 0;
  369         so->so_proto = prp;
  370 #ifdef MAC
  371         mac_create_socket(cred, so);
  372 #endif
  373         knlist_init(&so->so_rcv.sb_sel.si_note, SOCKBUF_MTX(&so->so_rcv),
  374             NULL, NULL, NULL);
  375         knlist_init(&so->so_snd.sb_sel.si_note, SOCKBUF_MTX(&so->so_snd),
  376             NULL, NULL, NULL);
  377         so->so_count = 1;
  378         /*
  379          * Auto-sizing of socket buffers is managed by the protocols and
  380          * the appropriate flags must be set in the pru_attach function.
  381          */
  382         error = (*prp->pr_usrreqs->pru_attach)(so, proto, td);
  383         if (error) {
  384                 KASSERT(so->so_count == 1, ("socreate: so_count %d",
  385                     so->so_count));
  386                 so->so_count = 0;
  387                 sodealloc(so);
  388                 return (error);
  389         }
  390         *aso = so;
  391         return (0);
  392 }
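From user space, the proto argument of socreate() corresponds to the third argument of socket(2): zero selects a protocol by socket type (the pffindtype() path), while a nonzero value selects it explicitly (pffindproto()). A small userspace illustration (the helper name is ours):

```c
#include <assert.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Both calls below reach the same TCP attach path: the first lets the
 * domain pick the protocol from the type, the second names it directly.
 */
int
create_by_type_and_proto(void)
{
	int a = socket(AF_INET, SOCK_STREAM, 0);		/* by type */
	int b = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);	/* explicit */
	int ok = (a >= 0 && b >= 0);

	if (a >= 0)
		close(a);
	if (b >= 0)
		close(b);
	return (ok);
}
```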
  393 
  394 #ifdef REGRESSION
  395 static int regression_sonewconn_earlytest = 1;
  396 SYSCTL_INT(_regression, OID_AUTO, sonewconn_earlytest, CTLFLAG_RW,
  397     &regression_sonewconn_earlytest, 0, "Perform early sonewconn limit test");
  398 #endif
  399 
  400 /*
  401  * When an attempt at a new connection is noted on a socket which accepts
  402  * connections, sonewconn is called.  If the connection is possible (subject
  403  * to space constraints, etc.) then we allocate a new structure, properly
  404  * linked into the data structure of the original socket, and return this.
  405  * Connstatus may be 0, or SO_ISCONFIRMING, or SO_ISCONNECTED.
  406  *
  407  * Note: the ref count on the socket is 0 on return.
  408  */
  409 struct socket *
  410 sonewconn(struct socket *head, int connstatus)
  411 {
  412         struct socket *so;
  413         int over;
  414 
  415         ACCEPT_LOCK();
  416         over = (head->so_qlen > 3 * head->so_qlimit / 2);
  417         ACCEPT_UNLOCK();
  418 #ifdef REGRESSION
  419         if (regression_sonewconn_earlytest && over)
  420 #else
  421         if (over)
  422 #endif
  423                 return (NULL);
  424         so = soalloc();
  425         if (so == NULL)
  426                 return (NULL);
  427         if ((head->so_options & SO_ACCEPTFILTER) != 0)
  428                 connstatus = 0;
  429         so->so_head = head;
  430         so->so_type = head->so_type;
  431         so->so_options = head->so_options &~ SO_ACCEPTCONN;
  432         so->so_linger = head->so_linger;
  433         so->so_state = head->so_state | SS_NOFDREF;
  434         so->so_proto = head->so_proto;
  435         so->so_cred = crhold(head->so_cred);
  436 #ifdef MAC
  437         SOCK_LOCK(head);
  438         mac_create_socket_from_socket(head, so);
  439         SOCK_UNLOCK(head);
  440 #endif
  441         knlist_init(&so->so_rcv.sb_sel.si_note, SOCKBUF_MTX(&so->so_rcv),
  442             NULL, NULL, NULL);
  443         knlist_init(&so->so_snd.sb_sel.si_note, SOCKBUF_MTX(&so->so_snd),
  444             NULL, NULL, NULL);
  445         if (soreserve(so, head->so_snd.sb_hiwat, head->so_rcv.sb_hiwat) ||
  446             (*so->so_proto->pr_usrreqs->pru_attach)(so, 0, NULL)) {
  447                 sodealloc(so);
  448                 return (NULL);
  449         }
  450         so->so_rcv.sb_lowat = head->so_rcv.sb_lowat;
  451         so->so_snd.sb_lowat = head->so_snd.sb_lowat;
  452         so->so_rcv.sb_timeo = head->so_rcv.sb_timeo;
  453         so->so_snd.sb_timeo = head->so_snd.sb_timeo;
  454         so->so_rcv.sb_flags |= head->so_rcv.sb_flags & SB_AUTOSIZE;
  455         so->so_snd.sb_flags |= head->so_snd.sb_flags & SB_AUTOSIZE;
  456         so->so_state |= connstatus;
  457         ACCEPT_LOCK();
  458         if (connstatus) {
  459                 TAILQ_INSERT_TAIL(&head->so_comp, so, so_list);
  460                 so->so_qstate |= SQ_COMP;
  461                 head->so_qlen++;
  462         } else {
  463                 /*
  464                  * Keep removing sockets from the head until there's room for
  465                  * us to insert on the tail.  In pre-locking revisions, this
  466                  * was a simple if(), but as we could be racing with other
  467                  * threads and soabort() requires dropping locks, we must
  468                  * loop waiting for the condition to be true.
  469                  */
  470                 while (head->so_incqlen > head->so_qlimit) {
  471                         struct socket *sp;
  472                         sp = TAILQ_FIRST(&head->so_incomp);
  473                         TAILQ_REMOVE(&head->so_incomp, sp, so_list);
  474                         head->so_incqlen--;
  475                         sp->so_qstate &= ~SQ_INCOMP;
  476                         sp->so_head = NULL;
  477                         ACCEPT_UNLOCK();
  478                         soabort(sp);
  479                         ACCEPT_LOCK();
  480                 }
  481                 TAILQ_INSERT_TAIL(&head->so_incomp, so, so_list);
  482                 so->so_qstate |= SQ_INCOMP;
  483                 head->so_incqlen++;
  484         }
  485         ACCEPT_UNLOCK();
  486         if (connstatus) {
  487                 sorwakeup(head);
  488                 wakeup_one(&head->so_timeo);
  489         }
  490         return (so);
  491 }
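The overflow test at the top of sonewconn() rejects new connections once the queues hold more than 1.5x the configured backlog; with integer arithmetic the threshold is 3 * so_qlimit / 2. A standalone sketch of the check (the function name is ours):

```c
#include <assert.h>

/*
 * sonewconn() drops an incoming connection when the listen queues
 * already hold more than 1.5x the backlog set via listen(2).
 */
static int
listen_queue_full(int qlen, int qlimit)
{
	return (qlen > 3 * qlimit / 2);
}
```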
  492 
  493 int
  494 sobind(struct socket *so, struct sockaddr *nam, struct thread *td)
  495 {
  496 
  497         return ((*so->so_proto->pr_usrreqs->pru_bind)(so, nam, td));
  498 }
  499 
  500 /*
  501  * solisten() transitions a socket from a non-listening state to a listening
  502  * state, but can also be used to update the listen queue depth on an
  503  * existing listen socket.  The protocol will call back into the sockets
  504  * layer using solisten_proto_check() and solisten_proto() to check and set
  505  * socket-layer listen state.  Call backs are used so that the protocol can
  506  * acquire both protocol and socket layer locks in whatever order is required
  507  * by the protocol.
  508  *
  509  * Protocol implementors are advised to hold the socket lock across the
  510  * socket-layer test and set to avoid races at the socket layer.
  511  */
  512 int
  513 solisten(struct socket *so, int backlog, struct thread *td)
  514 {
  515 
  516         return ((*so->so_proto->pr_usrreqs->pru_listen)(so, backlog, td));
  517 }
  518 
  519 int
  520 solisten_proto_check(struct socket *so)
  521 {
  522 
  523         SOCK_LOCK_ASSERT(so);
  524 
  525         if (so->so_state & (SS_ISCONNECTED | SS_ISCONNECTING |
  526             SS_ISDISCONNECTING))
  527                 return (EINVAL);
  528         return (0);
  529 }
  530 
  531 void
  532 solisten_proto(struct socket *so, int backlog)
  533 {
  534 
  535         SOCK_LOCK_ASSERT(so);
  536 
  537         if (backlog < 0 || backlog > somaxconn)
  538                 backlog = somaxconn;
  539         so->so_qlimit = backlog;
  540         so->so_options |= SO_ACCEPTCONN;
  541 }
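The clamping in solisten_proto() means a negative or oversized backlog passed to listen(2) is silently replaced by somaxconn (the kern.ipc.somaxconn sysctl). The same logic as a standalone sketch (names are ours; the limit is passed in rather than read from the sysctl):

```c
#include <assert.h>

/*
 * Mirror of the backlog clamp in solisten_proto(): out-of-range
 * values, including negative ones such as listen(fd, -1), become
 * the system maximum.
 */
static int
clamp_backlog(int backlog, int somaxconn)
{
	if (backlog < 0 || backlog > somaxconn)
		backlog = somaxconn;
	return (backlog);
}
```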
  542 
  543 /*
  544  * Attempt to free a socket.  This should really be sotryfree().
  545  *
  546  * sofree() will succeed if:
  547  *
  548  * - There are no outstanding file descriptor references or related consumers
  549  *   (so_count == 0).
  550  *
  551  * - The socket has been closed by user space, if ever open (SS_NOFDREF).
  552  *
  553  * - The protocol does not have an outstanding strong reference on the socket
  554  *   (SS_PROTOREF).
  555  *
   556  * - The socket is not in a completed connection queue, where a process may
   557  *   have been notified that it is present.  If it were removed, the user
   558  *   process could block in accept() despite select() saying it was ready.
  559  *
  560  * Otherwise, it will quietly abort so that a future call to sofree(), when
  561  * conditions are right, can succeed.
  562  */
  563 void
  564 sofree(struct socket *so)
  565 {
  566         struct protosw *pr = so->so_proto;
  567         struct socket *head;
  568 
  569         ACCEPT_LOCK_ASSERT();
  570         SOCK_LOCK_ASSERT(so);
  571 
  572         if ((so->so_state & SS_NOFDREF) == 0 || so->so_count != 0 ||
  573             (so->so_state & SS_PROTOREF) || (so->so_qstate & SQ_COMP)) {
  574                 SOCK_UNLOCK(so);
  575                 ACCEPT_UNLOCK();
  576                 return;
  577         }
  578 
  579         head = so->so_head;
  580         if (head != NULL) {
  581                 KASSERT((so->so_qstate & SQ_COMP) != 0 ||
  582                     (so->so_qstate & SQ_INCOMP) != 0,
  583                     ("sofree: so_head != NULL, but neither SQ_COMP nor "
  584                     "SQ_INCOMP"));
  585                 KASSERT((so->so_qstate & SQ_COMP) == 0 ||
  586                     (so->so_qstate & SQ_INCOMP) == 0,
  587                     ("sofree: so->so_qstate is SQ_COMP and also SQ_INCOMP"));
  588                 TAILQ_REMOVE(&head->so_incomp, so, so_list);
  589                 head->so_incqlen--;
  590                 so->so_qstate &= ~SQ_INCOMP;
  591                 so->so_head = NULL;
  592         }
  593         KASSERT((so->so_qstate & SQ_COMP) == 0 &&
  594             (so->so_qstate & SQ_INCOMP) == 0,
  595             ("sofree: so_head == NULL, but still SQ_COMP(%d) or SQ_INCOMP(%d)",
  596             so->so_qstate & SQ_COMP, so->so_qstate & SQ_INCOMP));
  597         if (so->so_options & SO_ACCEPTCONN) {
  598                 KASSERT((TAILQ_EMPTY(&so->so_comp)), ("sofree: so_comp populated"));
   599                 KASSERT((TAILQ_EMPTY(&so->so_incomp)), ("sofree: so_incomp populated"));
  600         }
  601         SOCK_UNLOCK(so);
  602         ACCEPT_UNLOCK();
  603 
  604         if (pr->pr_flags & PR_RIGHTS && pr->pr_domain->dom_dispose != NULL)
  605                 (*pr->pr_domain->dom_dispose)(so->so_rcv.sb_mb);
  606         if (pr->pr_usrreqs->pru_detach != NULL)
  607                 (*pr->pr_usrreqs->pru_detach)(so);
  608 
  609         /*
  610          * From this point on, we assume that no other references to this
  611          * socket exist anywhere else in the stack.  Therefore, no locks need
  612          * to be acquired or held.
  613          *
  614          * We used to do a lot of socket buffer and socket locking here, as
  615          * well as invoke sorflush() and perform wakeups.  The direct call to
  616          * dom_dispose() and sbrelease_internal() are an inlining of what was
  617          * necessary from sorflush().
  618          *
  619          * Notice that the socket buffer and kqueue state are torn down
   620          * before calling pru_detach.  This means that protocols should not
  621          * assume they can perform socket wakeups, etc, in their detach code.
  622          */
  623         sbdestroy(&so->so_snd, so);
  624         sbdestroy(&so->so_rcv, so);
  625         knlist_destroy(&so->so_rcv.sb_sel.si_note);
  626         knlist_destroy(&so->so_snd.sb_sel.si_note);
  627         sodealloc(so);
  628 }
  629 
  630 /*
  631  * Close a socket on last file table reference removal.  Initiate disconnect
  632  * if connected.  Free socket when disconnect complete.
  633  *
  634  * This function will sorele() the socket.  Note that soclose() may be called
  635  * prior to the ref count reaching zero.  The actual socket structure will
  636  * not be freed until the ref count reaches zero.
  637  */
  638 int
  639 soclose(struct socket *so)
  640 {
  641         int error = 0;
  642 
  643         KASSERT(!(so->so_state & SS_NOFDREF), ("soclose: SS_NOFDREF on enter"));
  644 
  645         funsetown(&so->so_sigio);
  646         if (so->so_state & SS_ISCONNECTED) {
  647                 if ((so->so_state & SS_ISDISCONNECTING) == 0) {
  648                         error = sodisconnect(so);
  649                         if (error)
  650                                 goto drop;
  651                 }
  652                 if (so->so_options & SO_LINGER) {
  653                         if ((so->so_state & SS_ISDISCONNECTING) &&
  654                             (so->so_state & SS_NBIO))
  655                                 goto drop;
  656                         while (so->so_state & SS_ISCONNECTED) {
  657                                 error = tsleep(&so->so_timeo,
  658                                     PSOCK | PCATCH, "soclos", so->so_linger * hz);
  659                                 if (error)
  660                                         break;
  661                         }
  662                 }
  663         }
  664 
  665 drop:
  666         if (so->so_proto->pr_usrreqs->pru_close != NULL)
  667                 (*so->so_proto->pr_usrreqs->pru_close)(so);
  668         if (so->so_options & SO_ACCEPTCONN) {
  669                 struct socket *sp;
  670                 ACCEPT_LOCK();
  671                 while ((sp = TAILQ_FIRST(&so->so_incomp)) != NULL) {
  672                         TAILQ_REMOVE(&so->so_incomp, sp, so_list);
  673                         so->so_incqlen--;
  674                         sp->so_qstate &= ~SQ_INCOMP;
  675                         sp->so_head = NULL;
  676                         ACCEPT_UNLOCK();
  677                         soabort(sp);
  678                         ACCEPT_LOCK();
  679                 }
  680                 while ((sp = TAILQ_FIRST(&so->so_comp)) != NULL) {
  681                         TAILQ_REMOVE(&so->so_comp, sp, so_list);
  682                         so->so_qlen--;
  683                         sp->so_qstate &= ~SQ_COMP;
  684                         sp->so_head = NULL;
  685                         ACCEPT_UNLOCK();
  686                         soabort(sp);
  687                         ACCEPT_LOCK();
  688                 }
  689                 ACCEPT_UNLOCK();
  690         }
  691         ACCEPT_LOCK();
  692         SOCK_LOCK(so);
  693         KASSERT((so->so_state & SS_NOFDREF) == 0, ("soclose: NOFDREF"));
  694         so->so_state |= SS_NOFDREF;
  695         sorele(so);
  696         return (error);
  697 }
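The SO_LINGER branch above is what makes close(2) block (via tsleep()) for up to so_linger seconds while the disconnect completes. From user space the option is configured with setsockopt(2); a small illustration that sets and reads it back (the helper name is ours):

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * With l_onoff set, soclose() on a still-connected socket sleeps for
 * up to l_linger seconds waiting for the disconnect, instead of
 * returning immediately.
 */
int
set_linger(int seconds)
{
	struct linger lg;
	socklen_t len = sizeof(lg);
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return (-1);
	memset(&lg, 0, sizeof(lg));
	lg.l_onoff = 1;
	lg.l_linger = seconds;
	if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) != 0 ||
	    getsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, &len) != 0) {
		close(fd);
		return (-1);
	}
	close(fd);
	return (lg.l_linger);
}
```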
  698 
  699 /*
  700  * soabort() is used to abruptly tear down a connection, such as when a
  701  * resource limit is reached (listen queue depth exceeded), or if a listen
  702  * socket is closed while there are sockets waiting to be accepted.
  703  *
  704  * This interface is tricky, because it is called on an unreferenced socket,
  705  * and must be called only by a thread that has actually removed the socket
  706  * from the listen queue it was on, or races with other threads are risked.
  707  *
  708  * This interface will call into the protocol code, so must not be called
  709  * with any socket locks held.  Protocols do call it while holding their own
  710  * recursible protocol mutexes, but this is something that should be subject
  711  * to review in the future.
  712  */
  713 void
  714 soabort(struct socket *so)
  715 {
  716 
  717         /*
  718          * In as much as is possible, assert that no references to this
  719          * socket are held.  This is not quite the same as asserting that the
  720          * current thread is responsible for arranging for no references, but
  721          * is as close as we can get for now.
  722          */
  723         KASSERT(so->so_count == 0, ("soabort: so_count"));
  724         KASSERT((so->so_state & SS_PROTOREF) == 0, ("soabort: SS_PROTOREF"));
  725         KASSERT(so->so_state & SS_NOFDREF, ("soabort: !SS_NOFDREF"));
   726         KASSERT((so->so_qstate & SQ_COMP) == 0, ("soabort: SQ_COMP"));
   727         KASSERT((so->so_qstate & SQ_INCOMP) == 0, ("soabort: SQ_INCOMP"));
  728 
  729         if (so->so_proto->pr_usrreqs->pru_abort != NULL)
  730                 (*so->so_proto->pr_usrreqs->pru_abort)(so);
  731         ACCEPT_LOCK();
  732         SOCK_LOCK(so);
  733         sofree(so);
  734 }
  735 
  736 int
  737 soaccept(struct socket *so, struct sockaddr **nam)
  738 {
  739         int error;
  740 
  741         SOCK_LOCK(so);
  742         KASSERT((so->so_state & SS_NOFDREF) != 0, ("soaccept: !NOFDREF"));
  743         so->so_state &= ~SS_NOFDREF;
  744         SOCK_UNLOCK(so);
  745         error = (*so->so_proto->pr_usrreqs->pru_accept)(so, nam);
  746         return (error);
  747 }
  748 
  749 int
  750 soconnect(struct socket *so, struct sockaddr *nam, struct thread *td)
  751 {
  752         int error;
  753 
  754         if (so->so_options & SO_ACCEPTCONN)
  755                 return (EOPNOTSUPP);
  756         /*
   757          * If the protocol is connection-based, we can only connect once.
   758          * Otherwise, if connected, try to disconnect first.  This allows
   759          * the user to disconnect by connecting to, e.g., a null address.
  760          */
  761         if (so->so_state & (SS_ISCONNECTED|SS_ISCONNECTING) &&
  762             ((so->so_proto->pr_flags & PR_CONNREQUIRED) ||
  763             (error = sodisconnect(so)))) {
  764                 error = EISCONN;
  765         } else {
  766                 /*
  767                  * Prevent accumulated error from previous connection from
  768                  * biting us.
  769                  */
  770                 so->so_error = 0;
  771                 error = (*so->so_proto->pr_usrreqs->pru_connect)(so, nam, td);
  772         }
  773 
  774         return (error);
  775 }
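At the system-call boundary these checks surface as the familiar connect(2) semantics: a second connect() on an already-connected, connection-oriented socket fails with EISCONN. A minimal userspace sketch (not kernel code; the function name is made up for illustration, and it assumes POSIX sockets with loopback TCP available):

```c
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Connect a TCP socket to a local listener, then connect() it a second
 * time and return the resulting errno.  Exercises the EISCONN path that
 * soconnect() takes for PR_CONNREQUIRED protocols.
 */
int
second_connect_errno(void)
{
	struct sockaddr_in sin;
	socklen_t len = sizeof(sin);
	int lfd, cfd, e;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	cfd = socket(AF_INET, SOCK_STREAM, 0);
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	sin.sin_port = 0;		/* let the kernel pick a port */
	bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
	listen(lfd, 1);
	getsockname(lfd, (struct sockaddr *)&sin, &len);
	connect(cfd, (struct sockaddr *)&sin, sizeof(sin));
	e = connect(cfd, (struct sockaddr *)&sin, sizeof(sin)) == 0 ? 0 : errno;
	close(cfd);
	close(lfd);
	return (e);
}
```

Datagram sockets behave differently, as the comment above notes: there a second connect() first disconnects, so the same call succeeds.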
  776 
  777 int
  778 soconnect2(struct socket *so1, struct socket *so2)
  779 {
  780 
  781         return ((*so1->so_proto->pr_usrreqs->pru_connect2)(so1, so2));
  782 }
  783 
  784 int
  785 sodisconnect(struct socket *so)
  786 {
  787         int error;
  788 
  789         if ((so->so_state & SS_ISCONNECTED) == 0)
  790                 return (ENOTCONN);
  791         if (so->so_state & SS_ISDISCONNECTING)
  792                 return (EALREADY);
  793         error = (*so->so_proto->pr_usrreqs->pru_disconnect)(so);
  794         return (error);
  795 }
  796 
  797 #ifdef ZERO_COPY_SOCKETS
   798 struct so_zerocopy_stats {
  799         int size_ok;
  800         int align_ok;
  801         int found_ifp;
  802 };
  803 struct so_zerocopy_stats so_zerocp_stats = {0,0,0};
  804 #include <netinet/in.h>
  805 #include <net/route.h>
  806 #include <netinet/in_pcb.h>
  807 #include <vm/vm.h>
  808 #include <vm/vm_page.h>
  809 #include <vm/vm_object.h>
  810 
  811 /*
  812  * sosend_copyin() is only used if zero copy sockets are enabled.  Otherwise
  813  * sosend_dgram() and sosend_generic() use m_uiotombuf().
  814  * 
  815  * sosend_copyin() accepts a uio and prepares an mbuf chain holding part or
  816  * all of the data referenced by the uio.  If desired, it uses zero-copy.
  817  * *space will be updated to reflect data copied in.
  818  *
  819  * NB: If atomic I/O is requested, the caller must already have checked that
  820  * space can hold resid bytes.
  821  *
  822  * NB: In the event of an error, the caller may need to free the partial
  823  * chain pointed to by *mpp.  The contents of both *uio and *space may be
  824  * modified even in the case of an error.
  825  */
  826 static int
  827 sosend_copyin(struct uio *uio, struct mbuf **retmp, int atomic, long *space,
  828     int flags)
  829 {
  830         struct mbuf *m, **mp, *top;
  831         long len, resid;
  832         int error;
  833 #ifdef ZERO_COPY_SOCKETS
  834         int cow_send;
  835 #endif
  836 
  837         *retmp = top = NULL;
  838         mp = &top;
  839         len = 0;
  840         resid = uio->uio_resid;
  841         error = 0;
  842         do {
  843 #ifdef ZERO_COPY_SOCKETS
  844                 cow_send = 0;
  845 #endif /* ZERO_COPY_SOCKETS */
  846                 if (resid >= MINCLSIZE) {
  847 #ifdef ZERO_COPY_SOCKETS
  848                         if (top == NULL) {
  849                                 m = m_gethdr(M_WAITOK, MT_DATA);
  850                                 m->m_pkthdr.len = 0;
  851                                 m->m_pkthdr.rcvif = NULL;
  852                         } else
  853                                 m = m_get(M_WAITOK, MT_DATA);
  854                         if (so_zero_copy_send &&
   855                             resid >= PAGE_SIZE &&
   856                             *space >= PAGE_SIZE &&
   857                             uio->uio_iov->iov_len >= PAGE_SIZE) {
  858                                 so_zerocp_stats.size_ok++;
  859                                 so_zerocp_stats.align_ok++;
  860                                 cow_send = socow_setup(m, uio);
  861                                 len = cow_send;
  862                         }
  863                         if (!cow_send) {
  864                                 m_clget(m, M_WAITOK);
  865                                 len = min(min(MCLBYTES, resid), *space);
  866                         }
  867 #else /* ZERO_COPY_SOCKETS */
  868                         if (top == NULL) {
  869                                 m = m_getcl(M_TRYWAIT, MT_DATA, M_PKTHDR);
  870                                 m->m_pkthdr.len = 0;
  871                                 m->m_pkthdr.rcvif = NULL;
  872                         } else
  873                                 m = m_getcl(M_TRYWAIT, MT_DATA, 0);
  874                         len = min(min(MCLBYTES, resid), *space);
  875 #endif /* ZERO_COPY_SOCKETS */
  876                 } else {
  877                         if (top == NULL) {
  878                                 m = m_gethdr(M_TRYWAIT, MT_DATA);
  879                                 m->m_pkthdr.len = 0;
  880                                 m->m_pkthdr.rcvif = NULL;
  881 
  882                                 len = min(min(MHLEN, resid), *space);
  883                                 /*
  884                                  * For datagram protocols, leave room
  885                                  * for protocol headers in first mbuf.
  886                                  */
  887                                 if (atomic && m && len < MHLEN)
  888                                         MH_ALIGN(m, len);
  889                         } else {
  890                                 m = m_get(M_TRYWAIT, MT_DATA);
  891                                 len = min(min(MLEN, resid), *space);
  892                         }
  893                 }
  894                 if (m == NULL) {
  895                         error = ENOBUFS;
  896                         goto out;
  897                 }
  898 
  899                 *space -= len;
  900 #ifdef ZERO_COPY_SOCKETS
  901                 if (cow_send)
  902                         error = 0;
  903                 else
  904 #endif /* ZERO_COPY_SOCKETS */
  905                 error = uiomove(mtod(m, void *), (int)len, uio);
  906                 resid = uio->uio_resid;
  907                 m->m_len = len;
  908                 *mp = m;
  909                 top->m_pkthdr.len += len;
  910                 if (error)
  911                         goto out;
  912                 mp = &m->m_next;
  913                 if (resid <= 0) {
  914                         if (flags & MSG_EOR)
  915                                 top->m_flags |= M_EOR;
  916                         break;
  917                 }
  918         } while (*space > 0 && atomic);
  919 out:
  920         *retmp = top;
  921         return (error);
  922 }
  923 #endif /*ZERO_COPY_SOCKETS*/
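The shape of sosend_copyin()'s do/while loop, ignoring the zero-copy and header-alignment details, is simply: carve the source into fixed-size segments, charge each segment against the remaining buffer space, and link the segments into a chain. A self-contained toy version (illustration only; `struct seg` and `copyin_chain` are invented stand-ins for mbufs, with a tiny segment size so the chaining is visible):

```c
#include <stdlib.h>
#include <string.h>

#define SEG_SIZE 8			/* stand-in for MCLBYTES */

/* A toy "mbuf": a fixed-size segment linked into a chain. */
struct seg {
	struct seg *next;
	size_t len;
	char data[SEG_SIZE];
};

/*
 * Copy resid bytes from buf into a chain of segments, stopping early if
 * *space runs out -- the same shape as sosend_copyin()'s copy loop.
 * Returns the head of the chain and decrements *space by the bytes used.
 */
static struct seg *
copyin_chain(const char *buf, size_t resid, long *space)
{
	struct seg *top = NULL, **mp = &top, *m;
	size_t len;

	while (resid > 0 && *space > 0) {
		m = calloc(1, sizeof(*m));
		len = resid < SEG_SIZE ? resid : SEG_SIZE;
		if ((long)len > *space)
			len = *space;
		memcpy(m->data, buf, len);
		m->len = len;
		*space -= len;		/* charge against buffer space */
		buf += len;
		resid -= len;
		*mp = m;		/* append to the chain tail */
		mp = &m->next;
	}
	return (top);
}
```

The real routine additionally tracks the packet-header length in the first mbuf and may hand whole pages over via copy-on-write instead of copying.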
  924 
  925 #define SBLOCKWAIT(f)   (((f) & MSG_DONTWAIT) ? 0 : SBL_WAIT)
  926 
  927 int
  928 sosend_dgram(struct socket *so, struct sockaddr *addr, struct uio *uio,
  929     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
  930 {
  931         long space, resid;
  932         int clen = 0, error, dontroute;
  933 #ifdef ZERO_COPY_SOCKETS
  934         int atomic = sosendallatonce(so) || top;
  935 #endif
  936 
   937         KASSERT(so->so_type == SOCK_DGRAM, ("sosend_dgram: !SOCK_DGRAM"));
   938         KASSERT(so->so_proto->pr_flags & PR_ATOMIC,
   939             ("sosend_dgram: !PR_ATOMIC"));
  940 
  941         if (uio != NULL)
  942                 resid = uio->uio_resid;
  943         else
  944                 resid = top->m_pkthdr.len;
  945         /*
  946          * In theory resid should be unsigned.  However, space must be
  947          * signed, as it might be less than 0 if we over-committed, and we
  948          * must use a signed comparison of space and resid.  On the other
  949          * hand, a negative resid causes us to loop sending 0-length
  950          * segments to the protocol.
  951          *
   952          * A SOCK_DGRAM socket is asserted above, so the MSG_EOR-on-
   953          * SOCK_STREAM check done by sosend_generic() is not needed here.
  954          */
  955         if (resid < 0) {
  956                 error = EINVAL;
  957                 goto out;
  958         }
  959 
  960         dontroute =
  961             (flags & MSG_DONTROUTE) && (so->so_options & SO_DONTROUTE) == 0;
  962         if (td != NULL)
  963                 td->td_ru.ru_msgsnd++;
  964         if (control != NULL)
  965                 clen = control->m_len;
  966 
  967         SOCKBUF_LOCK(&so->so_snd);
  968         if (so->so_snd.sb_state & SBS_CANTSENDMORE) {
  969                 SOCKBUF_UNLOCK(&so->so_snd);
  970                 error = EPIPE;
  971                 goto out;
  972         }
  973         if (so->so_error) {
  974                 error = so->so_error;
  975                 so->so_error = 0;
  976                 SOCKBUF_UNLOCK(&so->so_snd);
  977                 goto out;
  978         }
  979         if ((so->so_state & SS_ISCONNECTED) == 0) {
  980                 /*
   981          * `sendto' and `sendmsg' are allowed on a connection-based
  982                  * socket if it supports implied connect.  Return ENOTCONN if
  983                  * not connected and no address is supplied.
  984                  */
  985                 if ((so->so_proto->pr_flags & PR_CONNREQUIRED) &&
  986                     (so->so_proto->pr_flags & PR_IMPLOPCL) == 0) {
  987                         if ((so->so_state & SS_ISCONFIRMING) == 0 &&
  988                             !(resid == 0 && clen != 0)) {
  989                                 SOCKBUF_UNLOCK(&so->so_snd);
  990                                 error = ENOTCONN;
  991                                 goto out;
  992                         }
  993                 } else if (addr == NULL) {
  994                         if (so->so_proto->pr_flags & PR_CONNREQUIRED)
  995                                 error = ENOTCONN;
  996                         else
  997                                 error = EDESTADDRREQ;
  998                         SOCKBUF_UNLOCK(&so->so_snd);
  999                         goto out;
 1000                 }
 1001         }
 1002 
 1003         /*
  1004          * Do we need MSG_OOB support in SOCK_DGRAM?  The signedness of the
  1005          * space calculation here may be a problem and need fixing.
 1006          */
 1007         space = sbspace(&so->so_snd);
 1008         if (flags & MSG_OOB)
 1009                 space += 1024;
 1010         space -= clen;
 1011         SOCKBUF_UNLOCK(&so->so_snd);
 1012         if (resid > space) {
 1013                 error = EMSGSIZE;
 1014                 goto out;
 1015         }
 1016         if (uio == NULL) {
 1017                 resid = 0;
 1018                 if (flags & MSG_EOR)
 1019                         top->m_flags |= M_EOR;
 1020         } else {
 1021 #ifdef ZERO_COPY_SOCKETS
 1022                 error = sosend_copyin(uio, &top, atomic, &space, flags);
 1023                 if (error)
 1024                         goto out;
 1025 #else
 1026                 /*
 1027                  * Copy the data from userland into a mbuf chain.
 1028                  * If no data is to be copied in, a single empty mbuf
 1029                  * is returned.
 1030                  */
 1031                 top = m_uiotombuf(uio, M_WAITOK, space, max_hdr,
 1032                     (M_PKTHDR | ((flags & MSG_EOR) ? M_EOR : 0)));
 1033                 if (top == NULL) {
 1034                         error = EFAULT; /* only possible error */
 1035                         goto out;
 1036                 }
 1037                 space -= resid - uio->uio_resid;
 1038 #endif
 1039                 resid = uio->uio_resid;
 1040         }
 1041         KASSERT(resid == 0, ("sosend_dgram: resid != 0"));
 1042         /*
 1043          * XXXRW: Frobbing SO_DONTROUTE here is even worse without sblock
 1044          * than with.
 1045          */
 1046         if (dontroute) {
 1047                 SOCK_LOCK(so);
 1048                 so->so_options |= SO_DONTROUTE;
 1049                 SOCK_UNLOCK(so);
 1050         }
 1051         /*
 1052          * XXX all the SBS_CANTSENDMORE checks previously done could be out
  1053          * of date.  We could have received a reset packet in an interrupt or
 1054          * maybe we slept while doing page faults in uiomove() etc.  We could
 1055          * probably recheck again inside the locking protection here, but
 1056          * there are probably other places that this also happens.  We must
 1057          * rethink this.
 1058          */
 1059         error = (*so->so_proto->pr_usrreqs->pru_send)(so,
 1060             (flags & MSG_OOB) ? PRUS_OOB :
 1061         /*
  1062          * If the user set MSG_EOF, the protocol understands the flag, and
  1063          * nothing is left to send, use PRUS_EOF instead of PRUS_SEND.
 1064          */
 1065             ((flags & MSG_EOF) &&
 1066              (so->so_proto->pr_flags & PR_IMPLOPCL) &&
 1067              (resid <= 0)) ?
 1068                 PRUS_EOF :
 1069                 /* If there is more to send set PRUS_MORETOCOME */
 1070                 (resid > 0 && space > 0) ? PRUS_MORETOCOME : 0,
 1071                 top, addr, control, td);
 1072         if (dontroute) {
 1073                 SOCK_LOCK(so);
 1074                 so->so_options &= ~SO_DONTROUTE;
 1075                 SOCK_UNLOCK(so);
 1076         }
 1077         clen = 0;
 1078         control = NULL;
 1079         top = NULL;
 1080 out:
 1081         if (top != NULL)
 1082                 m_freem(top);
 1083         if (control != NULL)
 1084                 m_freem(control);
 1085         return (error);
 1086 }
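The `resid > space` check above rejects, with EMSGSIZE, any datagram that could never fit in the send buffer. From userspace the same error is visible when a single sendto() exceeds the protocol's maximum datagram size. A sketch (illustration only; the function name is made up, and it assumes POSIX sockets with UDP/IPv4 available):

```c
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Attempt to send a single oversize datagram and return the resulting
 * errno.  128 KiB exceeds the largest possible UDP/IPv4 datagram, so
 * the send layer must fail the request atomically with EMSGSIZE.
 */
int
oversize_dgram_errno(void)
{
	static char big[1 << 17];	/* 128 KiB of zeroed payload */
	struct sockaddr_in sin;
	int fd, e;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	sin.sin_port = htons(9);	/* discard; nothing is sent anyway */
	e = sendto(fd, big, sizeof(big), 0,
	    (struct sockaddr *)&sin, sizeof(sin)) < 0 ? errno : 0;
	close(fd);
	return (e);
}
```

Because PR_ATOMIC datagrams must be sent all at once, the kernel never partially transmits here; the caller simply gets the error.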
 1087 
 1088 /*
 1089  * Send on a socket.  If send must go all at once and message is larger than
 1090  * send buffering, then hard error.  Lock against other senders.  If must go
 1091  * all at once and not enough room now, then inform user that this would
 1092  * block and do nothing.  Otherwise, if nonblocking, send as much as
  1093          * possible.  The data to be sent is described by "uio" if non-NULL,
  1094          * otherwise by the mbuf chain "top" (which must be NULL if uio is not).
  1095          * Data provided in an mbuf chain must be small enough to send all at once.
 1096  *
 1097  * Returns nonzero on error, timeout or signal; callers must check for short
 1098  * counts if EINTR/ERESTART are returned.  Data and control buffers are freed
 1099  * on return.
 1100  */
 1101 int
 1102 sosend_generic(struct socket *so, struct sockaddr *addr, struct uio *uio,
 1103     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
 1104 {
 1105         long space, resid;
 1106         int clen = 0, error, dontroute;
 1107         int atomic = sosendallatonce(so) || top;
 1108 
 1109         if (uio != NULL)
 1110                 resid = uio->uio_resid;
 1111         else
 1112                 resid = top->m_pkthdr.len;
 1113         /*
 1114          * In theory resid should be unsigned.  However, space must be
 1115          * signed, as it might be less than 0 if we over-committed, and we
 1116          * must use a signed comparison of space and resid.  On the other
 1117          * hand, a negative resid causes us to loop sending 0-length
 1118          * segments to the protocol.
 1119          *
 1120          * Also check to make sure that MSG_EOR isn't used on SOCK_STREAM
 1121          * type sockets since that's an error.
 1122          */
 1123         if (resid < 0 || (so->so_type == SOCK_STREAM && (flags & MSG_EOR))) {
 1124                 error = EINVAL;
 1125                 goto out;
 1126         }
 1127 
 1128         dontroute =
 1129             (flags & MSG_DONTROUTE) && (so->so_options & SO_DONTROUTE) == 0 &&
 1130             (so->so_proto->pr_flags & PR_ATOMIC);
 1131         if (td != NULL)
 1132                 td->td_ru.ru_msgsnd++;
 1133         if (control != NULL)
 1134                 clen = control->m_len;
 1135 
 1136         error = sblock(&so->so_snd, SBLOCKWAIT(flags));
 1137         if (error)
 1138                 goto out;
 1139 
 1140 restart:
 1141         do {
 1142                 SOCKBUF_LOCK(&so->so_snd);
 1143                 if (so->so_snd.sb_state & SBS_CANTSENDMORE) {
 1144                         SOCKBUF_UNLOCK(&so->so_snd);
 1145                         error = EPIPE;
 1146                         goto release;
 1147                 }
 1148                 if (so->so_error) {
 1149                         error = so->so_error;
 1150                         so->so_error = 0;
 1151                         SOCKBUF_UNLOCK(&so->so_snd);
 1152                         goto release;
 1153                 }
 1154                 if ((so->so_state & SS_ISCONNECTED) == 0) {
 1155                         /*
  1156                          * `sendto' and `sendmsg' are allowed on a connection-
 1157                          * based socket if it supports implied connect.
 1158                          * Return ENOTCONN if not connected and no address is
 1159                          * supplied.
 1160                          */
 1161                         if ((so->so_proto->pr_flags & PR_CONNREQUIRED) &&
 1162                             (so->so_proto->pr_flags & PR_IMPLOPCL) == 0) {
 1163                                 if ((so->so_state & SS_ISCONFIRMING) == 0 &&
 1164                                     !(resid == 0 && clen != 0)) {
 1165                                         SOCKBUF_UNLOCK(&so->so_snd);
 1166                                         error = ENOTCONN;
 1167                                         goto release;
 1168                                 }
 1169                         } else if (addr == NULL) {
 1170                                 SOCKBUF_UNLOCK(&so->so_snd);
 1171                                 if (so->so_proto->pr_flags & PR_CONNREQUIRED)
 1172                                         error = ENOTCONN;
 1173                                 else
 1174                                         error = EDESTADDRREQ;
 1175                                 goto release;
 1176                         }
 1177                 }
 1178                 space = sbspace(&so->so_snd);
 1179                 if (flags & MSG_OOB)
 1180                         space += 1024;
 1181                 if ((atomic && resid > so->so_snd.sb_hiwat) ||
 1182                     clen > so->so_snd.sb_hiwat) {
 1183                         SOCKBUF_UNLOCK(&so->so_snd);
 1184                         error = EMSGSIZE;
 1185                         goto release;
 1186                 }
 1187                 if (space < resid + clen &&
 1188                     (atomic || space < so->so_snd.sb_lowat || space < clen)) {
 1189                         if ((so->so_state & SS_NBIO) || (flags & MSG_NBIO)) {
 1190                                 SOCKBUF_UNLOCK(&so->so_snd);
 1191                                 error = EWOULDBLOCK;
 1192                                 goto release;
 1193                         }
 1194                         error = sbwait(&so->so_snd);
 1195                         SOCKBUF_UNLOCK(&so->so_snd);
 1196                         if (error)
 1197                                 goto release;
 1198                         goto restart;
 1199                 }
 1200                 SOCKBUF_UNLOCK(&so->so_snd);
 1201                 space -= clen;
 1202                 do {
 1203                         if (uio == NULL) {
 1204                                 resid = 0;
 1205                                 if (flags & MSG_EOR)
 1206                                         top->m_flags |= M_EOR;
 1207                         } else {
 1208 #ifdef ZERO_COPY_SOCKETS
 1209                                 error = sosend_copyin(uio, &top, atomic,
 1210                                     &space, flags);
 1211                                 if (error != 0)
 1212                                         goto release;
 1213 #else
 1214                                 /*
 1215                                  * Copy the data from userland into a mbuf
 1216                                  * chain.  If no data is to be copied in,
 1217                                  * a single empty mbuf is returned.
 1218                                  */
 1219                                 top = m_uiotombuf(uio, M_WAITOK, space,
 1220                                     (atomic ? max_hdr : 0),
 1221                                     (atomic ? M_PKTHDR : 0) |
 1222                                     ((flags & MSG_EOR) ? M_EOR : 0));
 1223                                 if (top == NULL) {
 1224                                         error = EFAULT; /* only possible error */
 1225                                         goto release;
 1226                                 }
 1227                                 space -= resid - uio->uio_resid;
 1228 #endif
 1229                                 resid = uio->uio_resid;
 1230                         }
 1231                         if (dontroute) {
 1232                                 SOCK_LOCK(so);
 1233                                 so->so_options |= SO_DONTROUTE;
 1234                                 SOCK_UNLOCK(so);
 1235                         }
 1236                         /*
 1237                          * XXX all the SBS_CANTSENDMORE checks previously
  1238                          * done could be out of date.  We could have received
 1239                          * a reset packet in an interrupt or maybe we slept
 1240                          * while doing page faults in uiomove() etc.  We
 1241                          * could probably recheck again inside the locking
 1242                          * protection here, but there are probably other
 1243                          * places that this also happens.  We must rethink
 1244                          * this.
 1245                          */
 1246                         error = (*so->so_proto->pr_usrreqs->pru_send)(so,
 1247                             (flags & MSG_OOB) ? PRUS_OOB :
 1248                         /*
 1249                          * If the user set MSG_EOF, the protocol understands
  1250                          * the flag, and nothing is left to send, use
  1251                          * PRUS_EOF instead of PRUS_SEND.
 1252                          */
 1253                             ((flags & MSG_EOF) &&
 1254                              (so->so_proto->pr_flags & PR_IMPLOPCL) &&
 1255                              (resid <= 0)) ?
 1256                                 PRUS_EOF :
 1257                         /* If there is more to send set PRUS_MORETOCOME. */
 1258                             (resid > 0 && space > 0) ? PRUS_MORETOCOME : 0,
 1259                             top, addr, control, td);
 1260                         if (dontroute) {
 1261                                 SOCK_LOCK(so);
 1262                                 so->so_options &= ~SO_DONTROUTE;
 1263                                 SOCK_UNLOCK(so);
 1264                         }
 1265                         clen = 0;
 1266                         control = NULL;
 1267                         top = NULL;
 1268                         if (error)
 1269                                 goto release;
 1270                 } while (resid && space > 0);
 1271         } while (resid);
 1272 
 1273 release:
 1274         sbunlock(&so->so_snd);
 1275 out:
 1276         if (top != NULL)
 1277                 m_freem(top);
 1278         if (control != NULL)
 1279                 m_freem(control);
 1280         return (error);
 1281 }
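The EWOULDBLOCK branch above (taken when SS_NBIO or MSG_NBIO is set and the buffer lacks room) is easy to provoke from userspace: write into a connected TCP socket that nobody reads, using MSG_DONTWAIT, until the buffers fill. A sketch (illustration only; the function name is invented, and it assumes POSIX sockets on a system where MSG_DONTWAIT is supported):

```c
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Fill a connected TCP socket's send path with non-blocking writes and
 * return the errno that terminates the loop.  The peer connection sits
 * unaccepted in the listen queue and is never read, so once its receive
 * buffer and our send buffer are full, send() must fail immediately.
 */
int
fill_sendbuf_errno(void)
{
	char chunk[4096];
	struct sockaddr_in sin;
	socklen_t len = sizeof(sin);
	int lfd, cfd, sndbuf = 4096, e;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	cfd = socket(AF_INET, SOCK_STREAM, 0);
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
	listen(lfd, 1);
	getsockname(lfd, (struct sockaddr *)&sin, &len);
	setsockopt(cfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
	connect(cfd, (struct sockaddr *)&sin, sizeof(sin));
	memset(chunk, 'x', sizeof(chunk));
	while (send(cfd, chunk, sizeof(chunk), MSG_DONTWAIT) > 0)
		;			/* keep writing until no space */
	e = errno;
	close(cfd);
	close(lfd);
	return (e);
}
```

A blocking sender in the same situation would instead sleep in the sbwait() call a few lines earlier and resume when the reader drains the buffer.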
 1282 
 1283 int
 1284 sosend(struct socket *so, struct sockaddr *addr, struct uio *uio,
 1285     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
 1286 {
 1287 
 1288         return (so->so_proto->pr_usrreqs->pru_sosend(so, addr, uio, top,
 1289             control, flags, td));
 1290 }
 1291 
 1292 /*
 1293  * The part of soreceive() that implements reading non-inline out-of-band
 1294  * data from a socket.  For more complete comments, see soreceive(), from
 1295  * which this code originated.
 1296  *
 1297  * Note that soreceive_rcvoob(), unlike the remainder of soreceive(), is
 1298  * unable to return an mbuf chain to the caller.
 1299  */
 1300 static int
 1301 soreceive_rcvoob(struct socket *so, struct uio *uio, int flags)
 1302 {
 1303         struct protosw *pr = so->so_proto;
 1304         struct mbuf *m;
 1305         int error;
 1306 
 1307         KASSERT(flags & MSG_OOB, ("soreceive_rcvoob: (flags & MSG_OOB) == 0"));
 1308 
 1309         m = m_get(M_TRYWAIT, MT_DATA);
 1310         if (m == NULL)
 1311                 return (ENOBUFS);
 1312         error = (*pr->pr_usrreqs->pru_rcvoob)(so, m, flags & MSG_PEEK);
 1313         if (error)
 1314                 goto bad;
 1315         do {
 1316 #ifdef ZERO_COPY_SOCKETS
 1317                 if (so_zero_copy_receive) {
 1318                         int disposable;
 1319 
 1320                         if ((m->m_flags & M_EXT)
 1321                          && (m->m_ext.ext_type == EXT_DISPOSABLE))
 1322                                 disposable = 1;
 1323                         else
 1324                                 disposable = 0;
 1325 
 1326                         error = uiomoveco(mtod(m, void *),
 1327                                           min(uio->uio_resid, m->m_len),
 1328                                           uio, disposable);
 1329                 } else
 1330 #endif /* ZERO_COPY_SOCKETS */
 1331                 error = uiomove(mtod(m, void *),
 1332                     (int) min(uio->uio_resid, m->m_len), uio);
 1333                 m = m_free(m);
 1334         } while (uio->uio_resid && error == 0 && m);
 1335 bad:
 1336         if (m != NULL)
 1337                 m_freem(m);
 1338         return (error);
 1339 }
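From userspace, this path is reached by passing MSG_OOB to recv(2) on a socket whose peer sent urgent data. A round-trip sketch over loopback TCP (illustration only; the function name is invented, and it assumes POSIX sockets with the usual one-byte TCP urgent-data semantics):

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Send one urgent byte over loopback TCP and read it back with
 * recv(..., MSG_OOB), the userspace entry into soreceive_rcvoob().
 * Returns the byte received, or -1 on failure.
 */
int
oob_roundtrip(void)
{
	struct sockaddr_in sin;
	socklen_t len = sizeof(sin);
	fd_set xfds;
	char c;
	int lfd, cfd, afd;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	cfd = socket(AF_INET, SOCK_STREAM, 0);
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
	listen(lfd, 1);
	getsockname(lfd, (struct sockaddr *)&sin, &len);
	connect(cfd, (struct sockaddr *)&sin, sizeof(sin));
	afd = accept(lfd, NULL, NULL);
	send(cfd, "!", 1, MSG_OOB);	/* mark the byte urgent */
	FD_ZERO(&xfds);
	FD_SET(afd, &xfds);
	/* An exceptional condition signals arrival of urgent data. */
	select(afd + 1, NULL, NULL, &xfds, NULL);
	if (recv(afd, &c, 1, MSG_OOB) != 1)
		c = -1;
	close(afd);
	close(cfd);
	close(lfd);
	return (c);
}
```

With SO_OOBINLINE set the urgent byte would instead arrive in the normal data stream, bypassing this routine entirely.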
 1340 
 1341 /*
 1342  * Following replacement or removal of the first mbuf on the first mbuf chain
 1343  * of a socket buffer, push necessary state changes back into the socket
 1344  * buffer so that other consumers see the values consistently.  'nextrecord'
 1345  * is the callers locally stored value of the original value of
 1346  * sb->sb_mb->m_nextpkt which must be restored when the lead mbuf changes.
 1347  * NOTE: 'nextrecord' may be NULL.
 1348  */
 1349 static __inline void
 1350 sockbuf_pushsync(struct sockbuf *sb, struct mbuf *nextrecord)
 1351 {
 1352 
 1353         SOCKBUF_LOCK_ASSERT(sb);
 1354         /*
 1355          * First, update for the new value of nextrecord.  If necessary, make
 1356          * it the first record.
 1357          */
 1358         if (sb->sb_mb != NULL)
 1359                 sb->sb_mb->m_nextpkt = nextrecord;
 1360         else
 1361                 sb->sb_mb = nextrecord;
 1362 
 1363         /*
 1364          * Now update any dependent socket buffer fields to reflect the new
 1365          * state.  This is an expanded inline of SB_EMPTY_FIXUP(), with the
 1366          * addition of a second clause that takes care of the case where
 1367          * sb_mb has been updated, but remains the last record.
 1368          */
 1369         if (sb->sb_mb == NULL) {
 1370                 sb->sb_mbtail = NULL;
 1371                 sb->sb_lastrecord = NULL;
 1372         } else if (sb->sb_mb->m_nextpkt == NULL)
 1373                 sb->sb_lastrecord = sb->sb_mb;
 1374 }
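The fixup above is easiest to see on a stripped-down model of the record queue: records linked through a nextpkt pointer, plus a cached last-record pointer that must stay consistent when the head changes. A toy version (illustration only; `struct rec`, `struct toybuf`, and `toy_pushsync` are invented, and the sb_mbtail half of the real fixup is omitted since these records have no inner mbuf chain):

```c
#include <stddef.h>

/* One record in a toy model of the sockbuf's record queue. */
struct rec {
	struct rec *nextpkt;
	int data;
};

struct toybuf {
	struct rec *sb_mb;		/* first record */
	struct rec *sb_lastrecord;	/* cached last record */
};

/*
 * After the head record is replaced or removed, splice the saved
 * nextrecord back in and refresh the cached pointer -- the same fixup
 * sockbuf_pushsync() performs on the real socket buffer.
 */
static void
toy_pushsync(struct toybuf *sb, struct rec *nextrecord)
{
	/* Reattach the remainder of the queue to the (new) head. */
	if (sb->sb_mb != NULL)
		sb->sb_mb->nextpkt = nextrecord;
	else
		sb->sb_mb = nextrecord;

	/* Refresh the cache: empty queue, or head is now also last. */
	if (sb->sb_mb == NULL)
		sb->sb_lastrecord = NULL;
	else if (sb->sb_mb->nextpkt == NULL)
		sb->sb_lastrecord = sb->sb_mb;
}
```

Keeping these cached pointers valid is what lets the protocol append new records at sb_lastrecord without rescanning the queue.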
 1375 
 1376 
 1377 /*
 1378  * Implement receive operations on a socket.  We depend on the way that
 1379  * records are added to the sockbuf by sbappend.  In particular, each record
 1380  * (mbufs linked through m_next) must begin with an address if the protocol
 1381  * so specifies, followed by an optional mbuf or mbufs containing ancillary
 1382  * data, and then zero or more mbufs of data.  In order to allow parallelism
 1383  * between network receive and copying to user space, as well as avoid
 1384  * sleeping with a mutex held, we release the socket buffer mutex during the
 1385  * user space copy.  Although the sockbuf is locked, new data may still be
 1386  * appended, and thus we must maintain consistency of the sockbuf during that
 1387  * time.
 1388  *
 1389  * The caller may receive the data as a single mbuf chain by supplying an
 1390  * mbuf **mp0 for use in returning the chain.  The uio is then used only for
 1391  * the count in uio_resid.
 1392  */
 1393 int
 1394 soreceive_generic(struct socket *so, struct sockaddr **psa, struct uio *uio,
 1395     struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 1396 {
 1397         struct mbuf *m, **mp;
 1398         int flags, len, error, offset;
 1399         struct protosw *pr = so->so_proto;
 1400         struct mbuf *nextrecord;
 1401         int moff, type = 0;
 1402         int orig_resid = uio->uio_resid;
 1403 
 1404         mp = mp0;
 1405         if (psa != NULL)
 1406                 *psa = NULL;
 1407         if (controlp != NULL)
 1408                 *controlp = NULL;
 1409         if (flagsp != NULL)
 1410                 flags = *flagsp &~ MSG_EOR;
 1411         else
 1412                 flags = 0;
 1413         if (flags & MSG_OOB)
 1414                 return (soreceive_rcvoob(so, uio, flags));
 1415         if (mp != NULL)
 1416                 *mp = NULL;
 1417         if ((pr->pr_flags & PR_WANTRCVD) && (so->so_state & SS_ISCONFIRMING)
 1418             && uio->uio_resid)
 1419                 (*pr->pr_usrreqs->pru_rcvd)(so, 0);
 1420 
 1421         error = sblock(&so->so_rcv, SBLOCKWAIT(flags));
 1422         if (error)
 1423                 return (error);
 1424 
 1425 restart:
 1426         SOCKBUF_LOCK(&so->so_rcv);
 1427         m = so->so_rcv.sb_mb;
 1428         /*
 1429          * If we have less data than requested, block awaiting more (subject
 1430          * to any timeout) if:
 1431          *   1. the current count is less than the low water mark, or
 1432          *   2. MSG_WAITALL is set, and it is possible to do the entire
  1433          *      receive operation at once if we block (resid <= hiwat), and
  1434          *   3. MSG_DONTWAIT is not set.
 1435          * If MSG_WAITALL is set but resid is larger than the receive buffer,
 1436          * we have to do the receive in sections, and thus risk returning a
 1437          * short count if a timeout or signal occurs after we start.
 1438          */
 1439         if (m == NULL || (((flags & MSG_DONTWAIT) == 0 &&
 1440             so->so_rcv.sb_cc < uio->uio_resid) &&
 1441             (so->so_rcv.sb_cc < so->so_rcv.sb_lowat ||
 1442             ((flags & MSG_WAITALL) && uio->uio_resid <= so->so_rcv.sb_hiwat)) &&
 1443             m->m_nextpkt == NULL && (pr->pr_flags & PR_ATOMIC) == 0)) {
 1444                 KASSERT(m != NULL || !so->so_rcv.sb_cc,
 1445                     ("receive: m == %p so->so_rcv.sb_cc == %u",
 1446                     m, so->so_rcv.sb_cc));
 1447                 if (so->so_error) {
 1448                         if (m != NULL)
 1449                                 goto dontblock;
 1450                         error = so->so_error;
 1451                         if ((flags & MSG_PEEK) == 0)
 1452                                 so->so_error = 0;
 1453                         SOCKBUF_UNLOCK(&so->so_rcv);
 1454                         goto release;
 1455                 }
 1456                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1457                 if (so->so_rcv.sb_state & SBS_CANTRCVMORE) {
 1458                         if (m == NULL) {
 1459                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1460                                 goto release;
 1461                         } else
 1462                                 goto dontblock;
 1463                 }
 1464                 for (; m != NULL; m = m->m_next)
 1465                 if (m->m_type == MT_OOBDATA || (m->m_flags & M_EOR)) {
 1466                                 m = so->so_rcv.sb_mb;
 1467                                 goto dontblock;
 1468                         }
 1469                 if ((so->so_state & (SS_ISCONNECTED|SS_ISCONNECTING)) == 0 &&
 1470                     (so->so_proto->pr_flags & PR_CONNREQUIRED)) {
 1471                         SOCKBUF_UNLOCK(&so->so_rcv);
 1472                         error = ENOTCONN;
 1473                         goto release;
 1474                 }
 1475                 if (uio->uio_resid == 0) {
 1476                         SOCKBUF_UNLOCK(&so->so_rcv);
 1477                         goto release;
 1478                 }
 1479                 if ((so->so_state & SS_NBIO) ||
 1480                     (flags & (MSG_DONTWAIT|MSG_NBIO))) {
 1481                         SOCKBUF_UNLOCK(&so->so_rcv);
 1482                         error = EWOULDBLOCK;
 1483                         goto release;
 1484                 }
 1485                 SBLASTRECORDCHK(&so->so_rcv);
 1486                 SBLASTMBUFCHK(&so->so_rcv);
 1487                 error = sbwait(&so->so_rcv);
 1488                 SOCKBUF_UNLOCK(&so->so_rcv);
 1489                 if (error)
 1490                         goto release;
 1491                 goto restart;
 1492         }
 1493 dontblock:
 1494         /*
 1495          * From this point onward, we maintain 'nextrecord' as a cache of the
 1496          * pointer to the next record in the socket buffer.  We must keep the
 1497          * various socket buffer pointers and local stack versions of the
 1498          * pointers in sync, pushing out modifications before dropping the
 1499          * socket buffer mutex, and re-reading them when picking it up.
 1500          *
 1501          * Otherwise, we will race with the network stack appending new data
 1502          * or records onto the socket buffer by using inconsistent/stale
 1503          * versions of the field, possibly resulting in socket buffer
 1504          * corruption.
 1505          *
 1506          * By holding the high-level sblock(), we prevent simultaneous
 1507          * readers from pulling off the front of the socket buffer.
 1508          */
 1509         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1510         if (uio->uio_td)
 1511                 uio->uio_td->td_ru.ru_msgrcv++;
 1512         KASSERT(m == so->so_rcv.sb_mb, ("soreceive: m != so->so_rcv.sb_mb"));
 1513         SBLASTRECORDCHK(&so->so_rcv);
 1514         SBLASTMBUFCHK(&so->so_rcv);
 1515         nextrecord = m->m_nextpkt;
 1516         if (pr->pr_flags & PR_ADDR) {
 1517                 KASSERT(m->m_type == MT_SONAME,
 1518                     ("m->m_type == %d", m->m_type));
 1519                 orig_resid = 0;
 1520                 if (psa != NULL)
 1521                         *psa = sodupsockaddr(mtod(m, struct sockaddr *),
 1522                             M_NOWAIT);
 1523                 if (flags & MSG_PEEK) {
 1524                         m = m->m_next;
 1525                 } else {
 1526                         sbfree(&so->so_rcv, m);
 1527                         so->so_rcv.sb_mb = m_free(m);
 1528                         m = so->so_rcv.sb_mb;
 1529                         sockbuf_pushsync(&so->so_rcv, nextrecord);
 1530                 }
 1531         }
 1532 
 1533         /*
 1534          * Process one or more MT_CONTROL mbufs present before any data mbufs
 1535          * in the first mbuf chain on the socket buffer.  If MSG_PEEK, we
 1536          * just copy the data; if !MSG_PEEK, we call into the protocol to
 1537          * perform externalization (or freeing if controlp == NULL).
 1538          */
 1539         if (m != NULL && m->m_type == MT_CONTROL) {
 1540                 struct mbuf *cm = NULL, *cmn;
 1541                 struct mbuf **cme = &cm;
 1542 
 1543                 do {
 1544                         if (flags & MSG_PEEK) {
 1545                                 if (controlp != NULL) {
 1546                                         *controlp = m_copy(m, 0, m->m_len);
 1547                                         controlp = &(*controlp)->m_next;
 1548                                 }
 1549                                 m = m->m_next;
 1550                         } else {
 1551                                 sbfree(&so->so_rcv, m);
 1552                                 so->so_rcv.sb_mb = m->m_next;
 1553                                 m->m_next = NULL;
 1554                                 *cme = m;
 1555                                 cme = &(*cme)->m_next;
 1556                                 m = so->so_rcv.sb_mb;
 1557                         }
 1558                 } while (m != NULL && m->m_type == MT_CONTROL);
 1559                 if ((flags & MSG_PEEK) == 0)
 1560                         sockbuf_pushsync(&so->so_rcv, nextrecord);
 1561                 while (cm != NULL) {
 1562                         cmn = cm->m_next;
 1563                         cm->m_next = NULL;
 1564                         if (pr->pr_domain->dom_externalize != NULL) {
 1565                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1566                                 error = (*pr->pr_domain->dom_externalize)
 1567                                     (cm, controlp);
 1568                                 SOCKBUF_LOCK(&so->so_rcv);
 1569                         } else if (controlp != NULL)
 1570                                 *controlp = cm;
 1571                         else
 1572                                 m_freem(cm);
 1573                         if (controlp != NULL) {
 1574                                 orig_resid = 0;
 1575                                 while (*controlp != NULL)
 1576                                         controlp = &(*controlp)->m_next;
 1577                         }
 1578                         cm = cmn;
 1579                 }
 1580                 if (m != NULL)
 1581                         nextrecord = so->so_rcv.sb_mb->m_nextpkt;
 1582                 else
 1583                         nextrecord = so->so_rcv.sb_mb;
 1584                 orig_resid = 0;
 1585         }
 1586         if (m != NULL) {
 1587                 if ((flags & MSG_PEEK) == 0) {
 1588                         KASSERT(m->m_nextpkt == nextrecord,
 1589                             ("soreceive: post-control, nextrecord !sync"));
 1590                         if (nextrecord == NULL) {
 1591                                 KASSERT(so->so_rcv.sb_mb == m,
 1592                                     ("soreceive: post-control, sb_mb!=m"));
 1593                                 KASSERT(so->so_rcv.sb_lastrecord == m,
 1594                                     ("soreceive: post-control, lastrecord!=m"));
 1595                         }
 1596                 }
 1597                 type = m->m_type;
 1598                 if (type == MT_OOBDATA)
 1599                         flags |= MSG_OOB;
 1600         } else {
 1601                 if ((flags & MSG_PEEK) == 0) {
 1602                         KASSERT(so->so_rcv.sb_mb == nextrecord,
 1603                             ("soreceive: sb_mb != nextrecord"));
 1604                         if (so->so_rcv.sb_mb == NULL) {
 1605                                 KASSERT(so->so_rcv.sb_lastrecord == NULL,
 1606                                     ("soreceive: sb_lastrecord != NULL"));
 1607                         }
 1608                 }
 1609         }
 1610         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1611         SBLASTRECORDCHK(&so->so_rcv);
 1612         SBLASTMBUFCHK(&so->so_rcv);
 1613 
 1614         /*
 1615          * Now continue to read any data mbufs off of the head of the socket
 1616          * buffer until the read request is satisfied.  Note that 'type' is
 1617          * used to store the type of any mbuf reads that have happened so far
 1618          * such that soreceive() can stop reading if the type changes, which
 1619          * causes soreceive() to return only one of regular data and inline
 1620          * out-of-band data in a single socket receive operation.
 1621          */
 1622         moff = 0;
 1623         offset = 0;
 1624         while (m != NULL && uio->uio_resid > 0 && error == 0) {
 1625                 /*
 1626                  * If the type of mbuf has changed since the last mbuf
 1627                  * examined ('type'), end the receive operation.
 1628                  */
 1629                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1630                 if (m->m_type == MT_OOBDATA) {
 1631                         if (type != MT_OOBDATA)
 1632                                 break;
 1633                 } else if (type == MT_OOBDATA)
 1634                         break;
 1635                 else
 1636                         KASSERT(m->m_type == MT_DATA,
 1637                             ("m->m_type == %d", m->m_type));
 1638                 so->so_rcv.sb_state &= ~SBS_RCVATMARK;
 1639                 len = uio->uio_resid;
 1640                 if (so->so_oobmark && len > so->so_oobmark - offset)
 1641                         len = so->so_oobmark - offset;
 1642                 if (len > m->m_len - moff)
 1643                         len = m->m_len - moff;
 1644                 /*
 1645                  * If mp is set, just pass back the mbufs.  Otherwise copy
 1646                  * them out via the uio, then free.  The sockbuf must be
 1647                  * consistent here (sb_mb points to the current mbuf and
 1648                  * m_nextpkt to the next record) when we drop the sockbuf
 1649                  * lock; we must note any additions when we reacquire it.
 1650                  */
 1651                 if (mp == NULL) {
 1652                         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1653                         SBLASTRECORDCHK(&so->so_rcv);
 1654                         SBLASTMBUFCHK(&so->so_rcv);
 1655                         SOCKBUF_UNLOCK(&so->so_rcv);
 1656 #ifdef ZERO_COPY_SOCKETS
 1657                         if (so_zero_copy_receive) {
 1658                                 int disposable;
 1659 
 1660                                 if ((m->m_flags & M_EXT)
 1661                                  && (m->m_ext.ext_type == EXT_DISPOSABLE))
 1662                                         disposable = 1;
 1663                                 else
 1664                                         disposable = 0;
 1665 
 1666                                 error = uiomoveco(mtod(m, char *) + moff,
 1667                                                   (int)len, uio,
 1668                                                   disposable);
 1669                         } else
 1670 #endif /* ZERO_COPY_SOCKETS */
 1671                         error = uiomove(mtod(m, char *) + moff, (int)len, uio);
 1672                         SOCKBUF_LOCK(&so->so_rcv);
 1673                         if (error) {
 1674                                 /*
 1675                                  * The MT_SONAME mbuf has already been removed
 1676                                  * from the record, so it is necessary to
 1677                                  * remove the data mbufs, if any, to preserve
 1678                                  * the invariant in the case of PR_ADDR that
 1679                                  * requires MT_SONAME mbufs at the head of
 1680                                  * each record.
 1681                                  */
 1682                                 if (m && pr->pr_flags & PR_ATOMIC &&
 1683                                     ((flags & MSG_PEEK) == 0))
 1684                                         (void)sbdroprecord_locked(&so->so_rcv);
 1685                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1686                                 goto release;
 1687                         }
 1688                 } else
 1689                         uio->uio_resid -= len;
 1690                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1691                 if (len == m->m_len - moff) {
 1692                         if (m->m_flags & M_EOR)
 1693                                 flags |= MSG_EOR;
 1694                         if (flags & MSG_PEEK) {
 1695                                 m = m->m_next;
 1696                                 moff = 0;
 1697                         } else {
 1698                                 nextrecord = m->m_nextpkt;
 1699                                 sbfree(&so->so_rcv, m);
 1700                                 if (mp != NULL) {
 1701                                         *mp = m;
 1702                                         mp = &m->m_next;
 1703                                         so->so_rcv.sb_mb = m = m->m_next;
 1704                                         *mp = NULL;
 1705                                 } else {
 1706                                         so->so_rcv.sb_mb = m_free(m);
 1707                                         m = so->so_rcv.sb_mb;
 1708                                 }
 1709                                 sockbuf_pushsync(&so->so_rcv, nextrecord);
 1710                                 SBLASTRECORDCHK(&so->so_rcv);
 1711                                 SBLASTMBUFCHK(&so->so_rcv);
 1712                         }
 1713                 } else {
 1714                         if (flags & MSG_PEEK)
 1715                                 moff += len;
 1716                         else {
 1717                                 if (mp != NULL) {
 1718                                         int copy_flag;
 1719 
 1720                                         if (flags & MSG_DONTWAIT)
 1721                                                 copy_flag = M_DONTWAIT;
 1722                                         else
 1723                                                 copy_flag = M_TRYWAIT;
 1724                                         if (copy_flag == M_TRYWAIT)
 1725                                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1726                                         *mp = m_copym(m, 0, len, copy_flag);
 1727                                         if (copy_flag == M_TRYWAIT)
 1728                                                 SOCKBUF_LOCK(&so->so_rcv);
 1729                                         if (*mp == NULL) {
 1730                                                 /*
 1731                                                  * m_copym() couldn't
 1732                                                  * allocate an mbuf.  Adjust
 1733                                                  * uio_resid back (it was
 1734                                                  * adjusted down by len
 1735                                                  * bytes, which we didn't end
 1736                                                  * up "copying" over).
 1737                                                  */
 1738                                                 uio->uio_resid += len;
 1739                                                 break;
 1740                                         }
 1741                                 }
 1742                                 m->m_data += len;
 1743                                 m->m_len -= len;
 1744                                 so->so_rcv.sb_cc -= len;
 1745                         }
 1746                 }
 1747                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1748                 if (so->so_oobmark) {
 1749                         if ((flags & MSG_PEEK) == 0) {
 1750                                 so->so_oobmark -= len;
 1751                                 if (so->so_oobmark == 0) {
 1752                                         so->so_rcv.sb_state |= SBS_RCVATMARK;
 1753                                         break;
 1754                                 }
 1755                         } else {
 1756                                 offset += len;
 1757                                 if (offset == so->so_oobmark)
 1758                                         break;
 1759                         }
 1760                 }
 1761                 if (flags & MSG_EOR)
 1762                         break;
 1763                 /*
 1764                  * If the MSG_WAITALL flag is set (for a non-atomic socket), we
 1765                  * must not quit until "uio->uio_resid == 0" or an error
 1766                  * termination.  If a signal/timeout occurs, return with a
 1767                  * short count but without error.  Keep sockbuf locked
 1768                  * against other readers.
 1769                  */
 1770                 while (flags & MSG_WAITALL && m == NULL && uio->uio_resid > 0 &&
 1771                     !sosendallatonce(so) && nextrecord == NULL) {
 1772                         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1773                         if (so->so_error || so->so_rcv.sb_state & SBS_CANTRCVMORE)
 1774                                 break;
 1775                         /*
 1776                          * Notify the protocol that some data has been
 1777                          * drained before blocking.
 1778                          */
 1779                         if (pr->pr_flags & PR_WANTRCVD) {
 1780                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1781                                 (*pr->pr_usrreqs->pru_rcvd)(so, flags);
 1782                                 SOCKBUF_LOCK(&so->so_rcv);
 1783                         }
 1784                         SBLASTRECORDCHK(&so->so_rcv);
 1785                         SBLASTMBUFCHK(&so->so_rcv);
 1786                         error = sbwait(&so->so_rcv);
 1787                         if (error) {
 1788                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1789                                 goto release;
 1790                         }
 1791                         m = so->so_rcv.sb_mb;
 1792                         if (m != NULL)
 1793                                 nextrecord = m->m_nextpkt;
 1794                 }
 1795         }
 1796 
 1797         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1798         if (m != NULL && pr->pr_flags & PR_ATOMIC) {
 1799                 flags |= MSG_TRUNC;
 1800                 if ((flags & MSG_PEEK) == 0)
 1801                         (void) sbdroprecord_locked(&so->so_rcv);
 1802         }
 1803         if ((flags & MSG_PEEK) == 0) {
 1804                 if (m == NULL) {
 1805                         /*
 1806                          * First part is an inline SB_EMPTY_FIXUP().  Second
 1807                          * part makes sure sb_lastrecord is up-to-date if
 1808                          * there is still data in the socket buffer.
 1809                          */
 1810                         so->so_rcv.sb_mb = nextrecord;
 1811                         if (so->so_rcv.sb_mb == NULL) {
 1812                                 so->so_rcv.sb_mbtail = NULL;
 1813                                 so->so_rcv.sb_lastrecord = NULL;
 1814                         } else if (nextrecord->m_nextpkt == NULL)
 1815                                 so->so_rcv.sb_lastrecord = nextrecord;
 1816                 }
 1817                 SBLASTRECORDCHK(&so->so_rcv);
 1818                 SBLASTMBUFCHK(&so->so_rcv);
 1819                 /*
 1820                  * If soreceive() is being done from the socket callback,
 1821                  * then we need not generate an ACK to the peer to update
 1822                  * the window, since an ACK will be generated on return to TCP.
 1823                  */
 1824                 if (!(flags & MSG_SOCALLBCK) &&
 1825                     (pr->pr_flags & PR_WANTRCVD)) {
 1826                         SOCKBUF_UNLOCK(&so->so_rcv);
 1827                         (*pr->pr_usrreqs->pru_rcvd)(so, flags);
 1828                         SOCKBUF_LOCK(&so->so_rcv);
 1829                 }
 1830         }
 1831         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1832         if (orig_resid == uio->uio_resid && orig_resid &&
 1833             (flags & MSG_EOR) == 0 && (so->so_rcv.sb_state & SBS_CANTRCVMORE) == 0) {
 1834                 SOCKBUF_UNLOCK(&so->so_rcv);
 1835                 goto restart;
 1836         }
 1837         SOCKBUF_UNLOCK(&so->so_rcv);
 1838 
 1839         if (flagsp != NULL)
 1840                 *flagsp |= flags;
 1841 release:
 1842         sbunlock(&so->so_rcv);
 1843         return (error);
 1844 }
 1845 
 1846 /*
 1847  * Optimized version of soreceive() for simple datagram cases from userspace.
 1848  * Unlike in the stream case, we're able to drop a datagram if copyout()
 1849  * fails, and because we handle datagrams atomically, we don't need to use a
 1850  * sleep lock to prevent I/O interlacing.
 1851  */
 1852 int
 1853 soreceive_dgram(struct socket *so, struct sockaddr **psa, struct uio *uio,
 1854     struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 1855 {
 1856         struct mbuf *m, *m2;
 1857         int flags, len, error, offset;
 1858         struct protosw *pr = so->so_proto;
 1859         struct mbuf *nextrecord;
 1860 
 1861         if (psa != NULL)
 1862                 *psa = NULL;
 1863         if (controlp != NULL)
 1864                 *controlp = NULL;
 1865         if (flagsp != NULL)
 1866                 flags = *flagsp &~ MSG_EOR;
 1867         else
 1868                 flags = 0;
 1869 
 1870         /*
 1871          * For any complicated cases, fall back to the full
 1872          * soreceive_generic().
 1873          */
 1874         if (mp0 != NULL || (flags & MSG_PEEK) || (flags & MSG_OOB))
 1875                 return (soreceive_generic(so, psa, uio, mp0, controlp,
 1876                     flagsp));
 1877 
 1878         /*
 1879          * Enforce restrictions on use.
 1880          */
 1881         KASSERT((pr->pr_flags & PR_WANTRCVD) == 0,
 1882             ("soreceive_dgram: wantrcvd"));
 1883         KASSERT(pr->pr_flags & PR_ATOMIC, ("soreceive_dgram: !atomic"));
 1884         KASSERT((so->so_rcv.sb_state & SBS_RCVATMARK) == 0,
 1885             ("soreceive_dgram: SBS_RCVATMARK"));
 1886         KASSERT((so->so_proto->pr_flags & PR_CONNREQUIRED) == 0,
 1887             ("soreceive_dgram: PR_CONNREQUIRED"));
 1888 
 1889         /*
 1890          * Loop blocking while waiting for a datagram.
 1891          */
 1892         SOCKBUF_LOCK(&so->so_rcv);
 1893         while ((m = so->so_rcv.sb_mb) == NULL) {
 1894                 KASSERT(so->so_rcv.sb_cc == 0,
 1895                     ("soreceive_dgram: sb_mb NULL but sb_cc %u",
 1896                     so->so_rcv.sb_cc));
 1897                 if (so->so_error) {
 1898                         error = so->so_error;
 1899                         so->so_error = 0;
 1900                         SOCKBUF_UNLOCK(&so->so_rcv);
 1901                         return (error);
 1902                 }
 1903                 if (so->so_rcv.sb_state & SBS_CANTRCVMORE ||
 1904                     uio->uio_resid == 0) {
 1905                         SOCKBUF_UNLOCK(&so->so_rcv);
 1906                         return (0);
 1907                 }
 1908                 if ((so->so_state & SS_NBIO) ||
 1909                     (flags & (MSG_DONTWAIT|MSG_NBIO))) {
 1910                         SOCKBUF_UNLOCK(&so->so_rcv);
 1911                         return (EWOULDBLOCK);
 1912                 }
 1913                 SBLASTRECORDCHK(&so->so_rcv);
 1914                 SBLASTMBUFCHK(&so->so_rcv);
 1915                 error = sbwait(&so->so_rcv);
 1916                 if (error) {
 1917                         SOCKBUF_UNLOCK(&so->so_rcv);
 1918                         return (error);
 1919                 }
 1920         }
 1921         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1922 
 1923         if (uio->uio_td)
 1924                 uio->uio_td->td_ru.ru_msgrcv++;
 1925         SBLASTRECORDCHK(&so->so_rcv);
 1926         SBLASTMBUFCHK(&so->so_rcv);
 1927         nextrecord = m->m_nextpkt;
 1928         if (nextrecord == NULL) {
 1929                 KASSERT(so->so_rcv.sb_lastrecord == m,
 1930                     ("soreceive_dgram: lastrecord != m"));
 1931         }
 1932 
 1933         KASSERT(so->so_rcv.sb_mb->m_nextpkt == nextrecord,
 1934             ("soreceive_dgram: m_nextpkt != nextrecord"));
 1935 
 1936         /*
 1937          * Pull 'm' and its chain off the front of the packet queue.
 1938          */
 1939         so->so_rcv.sb_mb = NULL;
 1940         sockbuf_pushsync(&so->so_rcv, nextrecord);
 1941 
 1942         /*
 1943          * Walk 'm's chain and free that many bytes from the socket buffer.
 1944          */
 1945         for (m2 = m; m2 != NULL; m2 = m2->m_next)
 1946                 sbfree(&so->so_rcv, m2);
 1947 
 1948         /*
 1949          * Do a few last checks before we let go of the lock.
 1950          */
 1951         SBLASTRECORDCHK(&so->so_rcv);
 1952         SBLASTMBUFCHK(&so->so_rcv);
 1953         SOCKBUF_UNLOCK(&so->so_rcv);
 1954 
 1955         if (pr->pr_flags & PR_ADDR) {
 1956                 KASSERT(m->m_type == MT_SONAME,
 1957                     ("m->m_type == %d", m->m_type));
 1958                 if (psa != NULL)
 1959                         *psa = sodupsockaddr(mtod(m, struct sockaddr *),
 1960                             M_NOWAIT);
 1961                 m = m_free(m);
 1962         }
 1963         if (m == NULL) {
 1964                 /* XXXRW: Can this happen? */
 1965                 return (0);
 1966         }
 1967 
 1968         /*
 1969          * Packet to copyout() is now in 'm' and it is disconnected from the
 1970          * queue.
 1971          *
 1972          * Process one or more MT_CONTROL mbufs present before any data mbufs
 1973          * in the first mbuf chain on the socket buffer.  We call into the
 1974          * protocol to perform externalization (or freeing if controlp ==
 1975          * NULL).
 1976          */
 1977         if (m->m_type == MT_CONTROL) {
 1978                 struct mbuf *cm = NULL, *cmn;
 1979                 struct mbuf **cme = &cm;
 1980 
 1981                 do {
 1982                         m2 = m->m_next;
 1983                         m->m_next = NULL;
 1984                         *cme = m;
 1985                         cme = &(*cme)->m_next;
 1986                         m = m2;
 1987                 } while (m != NULL && m->m_type == MT_CONTROL);
 1988                 while (cm != NULL) {
 1989                         cmn = cm->m_next;
 1990                         cm->m_next = NULL;
 1991                         if (pr->pr_domain->dom_externalize != NULL) {
 1992                                 error = (*pr->pr_domain->dom_externalize)
 1993                                     (cm, controlp);
 1994                         } else if (controlp != NULL)
 1995                                 *controlp = cm;
 1996                         else
 1997                                 m_freem(cm);
 1998                         if (controlp != NULL) {
 1999                                 while (*controlp != NULL)
 2000                                         controlp = &(*controlp)->m_next;
 2001                         }
 2002                         cm = cmn;
 2003                 }
 2004         }
 2005         KASSERT(m->m_type == MT_DATA, ("soreceive_dgram: !data"));
 2006 
 2007         offset = 0;
 2008         while (m != NULL && uio->uio_resid > 0) {
 2009                 len = uio->uio_resid;
 2010                 if (len > m->m_len)
 2011                         len = m->m_len;
 2012                 error = uiomove(mtod(m, char *), (int)len, uio);
 2013                 if (error) {
 2014                         m_freem(m);
 2015                         return (error);
 2016                 }
 2017                 m = m_free(m);
 2018         }
 2019         if (m != NULL)
 2020                 flags |= MSG_TRUNC;
 2021         m_freem(m);
 2022         if (flagsp != NULL)
 2023                 *flagsp |= flags;
 2024         return (0);
 2025 }
 2026 
 2027 int
 2028 soreceive(struct socket *so, struct sockaddr **psa, struct uio *uio,
 2029     struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 2030 {
 2031 
 2032         return (so->so_proto->pr_usrreqs->pru_soreceive(so, psa, uio, mp0,
 2033             controlp, flagsp));
 2034 }
 2035 
 2036 int
 2037 soshutdown(struct socket *so, int how)
 2038 {
 2039         struct protosw *pr = so->so_proto;
 2040 
 2041         if (!(how == SHUT_RD || how == SHUT_WR || how == SHUT_RDWR))
 2042                 return (EINVAL);
 2043         if (pr->pr_usrreqs->pru_flush != NULL) {
 2044                 (*pr->pr_usrreqs->pru_flush)(so, how);
 2045         }
 2046         if (how != SHUT_WR)
 2047                 sorflush(so);
 2048         if (how != SHUT_RD)
 2049                 return ((*pr->pr_usrreqs->pru_shutdown)(so));
 2050         return (0);
 2051 }
 2052 
 2053 void
 2054 sorflush(struct socket *so)
 2055 {
 2056         struct sockbuf *sb = &so->so_rcv;
 2057         struct protosw *pr = so->so_proto;
 2058         struct sockbuf asb;
 2059 
 2060         /*
 2061          * In order to avoid calling dom_dispose with the socket buffer mutex
 2062          * held, and in order to generally avoid holding the lock for a long
 2063          * time, we make a copy of the socket buffer and clear the original
 2064          * (except locks, state).  The new socket buffer copy won't have
 2065          * initialized locks so we can only call routines that won't use or
 2066          * assert those locks.
 2067          *
 2068          * Dislodge threads currently blocked in receive and wait to acquire
 2069          * a lock against other simultaneous readers before clearing the
 2070          * socket buffer.  Don't let our acquire be interrupted by a signal
 2071          * despite any existing socket disposition on interruptible waiting.
 2072          */
 2073         socantrcvmore(so);
 2074         (void) sblock(sb, SBL_WAIT | SBL_NOINTR);
 2075 
 2076         /*
 2077          * Invalidate/clear most of the sockbuf structure, but leave selinfo
 2078          * and mutex data unchanged.
 2079          */
 2080         SOCKBUF_LOCK(sb);
 2081         bzero(&asb, offsetof(struct sockbuf, sb_startzero));
 2082         bcopy(&sb->sb_startzero, &asb.sb_startzero,
 2083             sizeof(*sb) - offsetof(struct sockbuf, sb_startzero));
 2084         bzero(&sb->sb_startzero,
 2085             sizeof(*sb) - offsetof(struct sockbuf, sb_startzero));
 2086         SOCKBUF_UNLOCK(sb);
 2087         sbunlock(sb);
 2088 
 2089         /*
 2090          * Dispose of special rights and flush the socket buffer.  Don't call
 2091          * any unsafe routines (that rely on locks being initialized) on asb.
 2092          */
 2093         if (pr->pr_flags & PR_RIGHTS && pr->pr_domain->dom_dispose != NULL)
 2094                 (*pr->pr_domain->dom_dispose)(asb.sb_mb);
 2095         sbrelease_internal(&asb, so);
 2096 }
 2097 
 2098 /*
 2099  * Perhaps this routine, and sooptcopyout(), below, ought to come in an
 2100  * additional variant to handle the case where the option value needs to be
 2101  * some kind of integer, but not a specific size.  In addition to their use
 2102  * here, these functions are also called by the protocol-level pr_ctloutput()
 2103  * routines.
 2104  */
 2105 int
 2106 sooptcopyin(struct sockopt *sopt, void *buf, size_t len, size_t minlen)
 2107 {
 2108         size_t  valsize;
 2109 
 2110         /*
 2111          * If the user gives us more than we wanted, we ignore it, but if we
 2112          * don't get the minimum length the caller wants, we return EINVAL.
 2113          * On success, sopt->sopt_valsize is set to however much we actually
 2114          * retrieved.
 2115          */
 2116         if ((valsize = sopt->sopt_valsize) < minlen)
 2117                 return EINVAL;
 2118         if (valsize > len)
 2119                 sopt->sopt_valsize = valsize = len;
 2120 
 2121         if (sopt->sopt_td != NULL)
 2122                 return (copyin(sopt->sopt_val, buf, valsize));
 2123 
 2124         bcopy(sopt->sopt_val, buf, valsize);
 2125         return (0);
 2126 }
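sooptcopyin()'s length policy is the subtle part of the routine above: short input is rejected with EINVAL, oversized input is silently truncated, and sopt_valsize is rewritten to what was consumed. A minimal user-space model of just that policy (opt_copyin_model() is a hypothetical name, and memcpy() stands in for the kernel copyin() since there is no user/kernel boundary here):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/*
 * Model of sooptcopyin()'s length policy.  *valsizep plays the role of
 * sopt->sopt_valsize: on success it is updated to the number of bytes
 * actually consumed.
 */
static int
opt_copyin_model(const void *val, size_t *valsizep, void *buf, size_t len,
    size_t minlen)
{
        size_t valsize = *valsizep;

        if (valsize < minlen)
                return (EINVAL);        /* caller supplied too few bytes */
        if (valsize > len)
                *valsizep = valsize = len;      /* ignore the excess */
        memcpy(buf, val, valsize);
        return (0);
}
```

The truncation case is why option parsers that accept variable-sized values must re-check sopt_valsize after the call rather than trust the caller's original length.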
 2127 
 2128 /*
 2129  * Kernel version of setsockopt(2).
 2130  *
 2131  * XXX: optlen is size_t, not socklen_t
 2132  */
 2133 int
 2134 so_setsockopt(struct socket *so, int level, int optname, void *optval,
 2135     size_t optlen)
 2136 {
 2137         struct sockopt sopt;
 2138 
 2139         sopt.sopt_level = level;
 2140         sopt.sopt_name = optname;
 2141         sopt.sopt_dir = SOPT_SET;
 2142         sopt.sopt_val = optval;
 2143         sopt.sopt_valsize = optlen;
 2144         sopt.sopt_td = NULL;
 2145         return (sosetopt(so, &sopt));
 2146 }
 2147 
 2148 int
 2149 sosetopt(struct socket *so, struct sockopt *sopt)
 2150 {
 2151         int     error, optval;
 2152         struct  linger l;
 2153         struct  timeval tv;
 2154         u_long  val;
 2155 #ifdef MAC
 2156         struct mac extmac;
 2157 #endif
 2158 
 2159         error = 0;
 2160         if (sopt->sopt_level != SOL_SOCKET) {
 2161                 if (so->so_proto && so->so_proto->pr_ctloutput)
 2162                         return ((*so->so_proto->pr_ctloutput)
 2163                                   (so, sopt));
 2164                 error = ENOPROTOOPT;
 2165         } else {
 2166                 switch (sopt->sopt_name) {
 2167 #ifdef INET
 2168                 case SO_ACCEPTFILTER:
 2169                         error = do_setopt_accept_filter(so, sopt);
 2170                         if (error)
 2171                                 goto bad;
 2172                         break;
 2173 #endif
 2174                 case SO_LINGER:
 2175                         error = sooptcopyin(sopt, &l, sizeof l, sizeof l);
 2176                         if (error)
 2177                                 goto bad;
 2178 
 2179                         SOCK_LOCK(so);
 2180                         so->so_linger = l.l_linger;
 2181                         if (l.l_onoff)
 2182                                 so->so_options |= SO_LINGER;
 2183                         else
 2184                                 so->so_options &= ~SO_LINGER;
 2185                         SOCK_UNLOCK(so);
 2186                         break;
 2187 
 2188                 case SO_DEBUG:
 2189                 case SO_KEEPALIVE:
 2190                 case SO_DONTROUTE:
 2191                 case SO_USELOOPBACK:
 2192                 case SO_BROADCAST:
 2193                 case SO_REUSEADDR:
 2194                 case SO_REUSEPORT:
 2195                 case SO_OOBINLINE:
 2196                 case SO_TIMESTAMP:
 2197                 case SO_BINTIME:
 2198                 case SO_NOSIGPIPE:
 2199                         error = sooptcopyin(sopt, &optval, sizeof optval,
 2200                                             sizeof optval);
 2201                         if (error)
 2202                                 goto bad;
 2203                         SOCK_LOCK(so);
 2204                         if (optval)
 2205                                 so->so_options |= sopt->sopt_name;
 2206                         else
 2207                                 so->so_options &= ~sopt->sopt_name;
 2208                         SOCK_UNLOCK(so);
 2209                         break;
 2210 
 2211                 case SO_SETFIB:
 2212                         error = sooptcopyin(sopt, &optval, sizeof optval,
 2213                                             sizeof optval);
                              if (error)
                                      goto bad;
 2214                         if (optval < 1 || optval > rt_numfibs) {
 2215                                 error = EINVAL;
 2216                                 goto bad;
 2217                         }
 2218                         if ((so->so_proto->pr_domain->dom_family == PF_INET) ||
 2219                             (so->so_proto->pr_domain->dom_family == PF_ROUTE)) {
 2220                                 so->so_fibnum = optval;
 2221                                 /* Note: ignore error */
 2222                                 if (so->so_proto && so->so_proto->pr_ctloutput)
 2223                                         (*so->so_proto->pr_ctloutput)(so, sopt);
 2224                         } else {
 2225                                 so->so_fibnum = 0;
 2226                         }
 2227                         break;
 2228                 case SO_SNDBUF:
 2229                 case SO_RCVBUF:
 2230                 case SO_SNDLOWAT:
 2231                 case SO_RCVLOWAT:
 2232                         error = sooptcopyin(sopt, &optval, sizeof optval,
 2233                                             sizeof optval);
 2234                         if (error)
 2235                                 goto bad;
 2236 
 2237                         /*
 2238                          * Values < 1 make no sense for any of these options,
 2239                          * so disallow them.
 2240                          */
 2241                         if (optval < 1) {
 2242                                 error = EINVAL;
 2243                                 goto bad;
 2244                         }
 2245 
 2246                         switch (sopt->sopt_name) {
 2247                         case SO_SNDBUF:
 2248                         case SO_RCVBUF:
 2249                                 if (sbreserve(sopt->sopt_name == SO_SNDBUF ?
 2250                                     &so->so_snd : &so->so_rcv, (u_long)optval,
 2251                                     so, curthread) == 0) {
 2252                                         error = ENOBUFS;
 2253                                         goto bad;
 2254                                 }
 2255                                 (sopt->sopt_name == SO_SNDBUF ? &so->so_snd :
 2256                                     &so->so_rcv)->sb_flags &= ~SB_AUTOSIZE;
 2257                                 break;
 2258 
 2259                         /*
 2260                          * Make sure the low-water is never greater than the
 2261                          * high-water.
 2262                          */
 2263                         case SO_SNDLOWAT:
 2264                                 SOCKBUF_LOCK(&so->so_snd);
 2265                                 so->so_snd.sb_lowat =
 2266                                     (optval > so->so_snd.sb_hiwat) ?
 2267                                     so->so_snd.sb_hiwat : optval;
 2268                                 SOCKBUF_UNLOCK(&so->so_snd);
 2269                                 break;
 2270                         case SO_RCVLOWAT:
 2271                                 SOCKBUF_LOCK(&so->so_rcv);
 2272                                 so->so_rcv.sb_lowat =
 2273                                     (optval > so->so_rcv.sb_hiwat) ?
 2274                                     so->so_rcv.sb_hiwat : optval;
 2275                                 SOCKBUF_UNLOCK(&so->so_rcv);
 2276                                 break;
 2277                         }
 2278                         break;
 2279 
 2280                 case SO_SNDTIMEO:
 2281                 case SO_RCVTIMEO:
 2282 #ifdef COMPAT_IA32
 2283                         if (curthread->td_proc->p_sysent == &ia32_freebsd_sysvec) {
 2284                                 struct timeval32 tv32;
 2285 
 2286                                 error = sooptcopyin(sopt, &tv32, sizeof tv32,
 2287                                     sizeof tv32);
 2288                                 CP(tv32, tv, tv_sec);
 2289                                 CP(tv32, tv, tv_usec);
 2290                         } else
 2291 #endif
 2292                                 error = sooptcopyin(sopt, &tv, sizeof tv,
 2293                                     sizeof tv);
 2294                         if (error)
 2295                                 goto bad;
 2296 
 2297                         /* assert(hz > 0); */
 2298                         if (tv.tv_sec < 0 || tv.tv_sec > INT_MAX / hz ||
 2299                             tv.tv_usec < 0 || tv.tv_usec >= 1000000) {
 2300                                 error = EDOM;
 2301                                 goto bad;
 2302                         }
 2303                         /* assert(tick > 0); */
 2304                         /* assert(ULONG_MAX - INT_MAX >= 1000000); */
 2305                         val = (u_long)(tv.tv_sec * hz) + tv.tv_usec / tick;
 2306                         if (val > INT_MAX) {
 2307                                 error = EDOM;
 2308                                 goto bad;
 2309                         }
 2310                         if (val == 0 && tv.tv_usec != 0)
 2311                                 val = 1;
 2312 
 2313                         switch (sopt->sopt_name) {
 2314                         case SO_SNDTIMEO:
 2315                                 so->so_snd.sb_timeo = val;
 2316                                 break;
 2317                         case SO_RCVTIMEO:
 2318                                 so->so_rcv.sb_timeo = val;
 2319                                 break;
 2320                         }
 2321                         break;
 2322 
 2323                 case SO_LABEL:
 2324 #ifdef MAC
 2325                         error = sooptcopyin(sopt, &extmac, sizeof extmac,
 2326                             sizeof extmac);
 2327                         if (error)
 2328                                 goto bad;
 2329                         error = mac_setsockopt_label(sopt->sopt_td->td_ucred,
 2330                             so, &extmac);
 2331 #else
 2332                         error = EOPNOTSUPP;
 2333 #endif
 2334                         break;
 2335 
 2336                 default:
 2337                         error = ENOPROTOOPT;
 2338                         break;
 2339                 }
 2340                 if (error == 0 && so->so_proto != NULL &&
 2341                     so->so_proto->pr_ctloutput != NULL) {
 2342                         (void) ((*so->so_proto->pr_ctloutput)
 2343                                   (so, sopt));
 2344                 }
 2345         }
 2346 bad:
 2347         return (error);
 2348 }
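The SO_SNDTIMEO/SO_RCVTIMEO handling in sosetopt() converts a timeval into scheduler ticks with two EDOM range checks and a round-up for sub-tick timeouts. A user-space sketch of just that conversion (tv_to_ticks() is a hypothetical helper; hz and tick are passed as parameters here instead of being kernel globals):

```c
#include <errno.h>
#include <limits.h>

/*
 * Convert a timeval to clock ticks the way sosetopt() does for
 * SO_SNDTIMEO/SO_RCVTIMEO.  "hz" is ticks per second, "tick" is
 * microseconds per tick.  Returns 0 and stores the count via *valp,
 * or EDOM if the timeout is negative or would overflow an int.
 */
static int
tv_to_ticks(long tv_sec, long tv_usec, int hz, int tick, unsigned long *valp)
{
        unsigned long val;

        if (tv_sec < 0 || tv_sec > INT_MAX / hz ||
            tv_usec < 0 || tv_usec >= 1000000)
                return (EDOM);
        val = (unsigned long)(tv_sec * hz) + tv_usec / tick;
        if (val > INT_MAX)
                return (EDOM);
        if (val == 0 && tv_usec != 0)
                val = 1;        /* sub-tick timeouts round up, not to zero */
        *valp = val;
        return (0);
}
```

The round-up matters because a zero sb_timeo means "wait forever"; without it, a 1-microsecond timeout would silently become an infinite one.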
 2349 
 2350 /*
 2351  * Helper routine for getsockopt.
 2352  */
 2353 int
 2354 sooptcopyout(struct sockopt *sopt, const void *buf, size_t len)
 2355 {
 2356         int     error;
 2357         size_t  valsize;
 2358 
 2359         error = 0;
 2360 
 2361         /*
 2362          * Documented get behavior is that we always return a value, possibly
 2363          * truncated to fit in the user's buffer.  Traditional behavior is
 2364          * that we always tell the user precisely how much we copied, rather
 2365          * than something useful like the total amount we had available for
 2366  * her.  Note that this interface is not idempotent; the entire
 2367  * answer must be generated ahead of time.
 2368          */
 2369         valsize = min(len, sopt->sopt_valsize);
 2370         sopt->sopt_valsize = valsize;
 2371         if (sopt->sopt_val != NULL) {
 2372                 if (sopt->sopt_td != NULL)
 2373                         error = copyout(buf, sopt->sopt_val, valsize);
 2374                 else
 2375                         bcopy(buf, sopt->sopt_val, valsize);
 2376         }
 2377         return (error);
 2378 }
 2379 
 2380 int
 2381 sogetopt(struct socket *so, struct sockopt *sopt)
 2382 {
 2383         int     error, optval;
 2384         struct  linger l;
 2385         struct  timeval tv;
 2386 #ifdef MAC
 2387         struct mac extmac;
 2388 #endif
 2389 
 2390         error = 0;
 2391         if (sopt->sopt_level != SOL_SOCKET) {
 2392                 if (so->so_proto && so->so_proto->pr_ctloutput) {
 2393                         return ((*so->so_proto->pr_ctloutput)
 2394                                   (so, sopt));
 2395                 } else
 2396                         return (ENOPROTOOPT);
 2397         } else {
 2398                 switch (sopt->sopt_name) {
 2399 #ifdef INET
 2400                 case SO_ACCEPTFILTER:
 2401                         error = do_getopt_accept_filter(so, sopt);
 2402                         break;
 2403 #endif
 2404                 case SO_LINGER:
 2405                         SOCK_LOCK(so);
 2406                         l.l_onoff = so->so_options & SO_LINGER;
 2407                         l.l_linger = so->so_linger;
 2408                         SOCK_UNLOCK(so);
 2409                         error = sooptcopyout(sopt, &l, sizeof l);
 2410                         break;
 2411 
 2412                 case SO_USELOOPBACK:
 2413                 case SO_DONTROUTE:
 2414                 case SO_DEBUG:
 2415                 case SO_KEEPALIVE:
 2416                 case SO_REUSEADDR:
 2417                 case SO_REUSEPORT:
 2418                 case SO_BROADCAST:
 2419                 case SO_OOBINLINE:
 2420                 case SO_ACCEPTCONN:
 2421                 case SO_TIMESTAMP:
 2422                 case SO_BINTIME:
 2423                 case SO_NOSIGPIPE:
 2424                         optval = so->so_options & sopt->sopt_name;
 2425 integer:
 2426                         error = sooptcopyout(sopt, &optval, sizeof optval);
 2427                         break;
 2428 
 2429                 case SO_TYPE:
 2430                         optval = so->so_type;
 2431                         goto integer;
 2432 
 2433                 case SO_ERROR:
 2434                         SOCK_LOCK(so);
 2435                         optval = so->so_error;
 2436                         so->so_error = 0;
 2437                         SOCK_UNLOCK(so);
 2438                         goto integer;
 2439 
 2440                 case SO_SNDBUF:
 2441                         optval = so->so_snd.sb_hiwat;
 2442                         goto integer;
 2443 
 2444                 case SO_RCVBUF:
 2445                         optval = so->so_rcv.sb_hiwat;
 2446                         goto integer;
 2447 
 2448                 case SO_SNDLOWAT:
 2449                         optval = so->so_snd.sb_lowat;
 2450                         goto integer;
 2451 
 2452                 case SO_RCVLOWAT:
 2453                         optval = so->so_rcv.sb_lowat;
 2454                         goto integer;
 2455 
 2456                 case SO_SNDTIMEO:
 2457                 case SO_RCVTIMEO:
 2458                         optval = (sopt->sopt_name == SO_SNDTIMEO ?
 2459                                   so->so_snd.sb_timeo : so->so_rcv.sb_timeo);
 2460 
 2461                         tv.tv_sec = optval / hz;
 2462                         tv.tv_usec = (optval % hz) * tick;
 2463 #ifdef COMPAT_IA32
 2464                         if (curthread->td_proc->p_sysent == &ia32_freebsd_sysvec) {
 2465                                 struct timeval32 tv32;
 2466 
 2467                                 CP(tv, tv32, tv_sec);
 2468                                 CP(tv, tv32, tv_usec);
 2469                                 error = sooptcopyout(sopt, &tv32, sizeof tv32);
 2470                         } else
 2471 #endif
 2472                                 error = sooptcopyout(sopt, &tv, sizeof tv);
 2473                         break;
 2474 
 2475                 case SO_LABEL:
 2476 #ifdef MAC
 2477                         error = sooptcopyin(sopt, &extmac, sizeof(extmac),
 2478                             sizeof(extmac));
 2479                         if (error)
 2480                                 return (error);
 2481                         error = mac_getsockopt_label(sopt->sopt_td->td_ucred,
 2482                             so, &extmac);
 2483                         if (error)
 2484                                 return (error);
 2485                         error = sooptcopyout(sopt, &extmac, sizeof extmac);
 2486 #else
 2487                         error = EOPNOTSUPP;
 2488 #endif
 2489                         break;
 2490 
 2491                 case SO_PEERLABEL:
 2492 #ifdef MAC
 2493                         error = sooptcopyin(sopt, &extmac, sizeof(extmac),
 2494                             sizeof(extmac));
 2495                         if (error)
 2496                                 return (error);
 2497                         error = mac_getsockopt_peerlabel(
 2498                             sopt->sopt_td->td_ucred, so, &extmac);
 2499                         if (error)
 2500                                 return (error);
 2501                         error = sooptcopyout(sopt, &extmac, sizeof extmac);
 2502 #else
 2503                         error = EOPNOTSUPP;
 2504 #endif
 2505                         break;
 2506 
 2507                 case SO_LISTENQLIMIT:
 2508                         optval = so->so_qlimit;
 2509                         goto integer;
 2510 
 2511                 case SO_LISTENQLEN:
 2512                         optval = so->so_qlen;
 2513                         goto integer;
 2514 
 2515                 case SO_LISTENINCQLEN:
 2516                         optval = so->so_incqlen;
 2517                         goto integer;
 2518 
 2519                 default:
 2520                         error = ENOPROTOOPT;
 2521                         break;
 2522                 }
 2523                 return (error);
 2524         }
 2525 }
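sogetopt()'s SO_SNDTIMEO/SO_RCVTIMEO case performs the inverse conversion, turning a stored tick count back into seconds and microseconds. A sketch of that reverse mapping (ticks_to_tv() is a hypothetical helper mirroring the code above; it round-trips with the sosetopt() conversion for exact tick multiples):

```c
/*
 * Reverse of the sosetopt() timeout conversion, as done in sogetopt():
 * split a tick count into whole seconds plus leftover microseconds.
 * "hz" is ticks per second and "tick" is microseconds per tick.
 */
static void
ticks_to_tv(int optval, int hz, int tick, long *sec, long *usec)
{
        *sec = optval / hz;
        *usec = (optval % hz) * tick;
}
```

Because sosetopt() truncated tv_usec to a whole number of ticks on the way in, a getsockopt() immediately after a setsockopt() can legitimately report a slightly smaller timeout than was requested.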
 2526 
 2527 /* XXX: prepare mbuf for (__FreeBSD__ < 3) routines. */
 2528 int
 2529 soopt_getm(struct sockopt *sopt, struct mbuf **mp)
 2530 {
 2531         struct mbuf *m, *m_prev;
 2532         int sopt_size = sopt->sopt_valsize;
 2533 
 2534         MGET(m, sopt->sopt_td ? M_TRYWAIT : M_DONTWAIT, MT_DATA);
 2535         if (m == NULL)
 2536                 return ENOBUFS;
 2537         if (sopt_size > MLEN) {
 2538                 MCLGET(m, sopt->sopt_td ? M_TRYWAIT : M_DONTWAIT);
 2539                 if ((m->m_flags & M_EXT) == 0) {
 2540                         m_free(m);
 2541                         return ENOBUFS;
 2542                 }
 2543                 m->m_len = min(MCLBYTES, sopt_size);
 2544         } else {
 2545                 m->m_len = min(MLEN, sopt_size);
 2546         }
 2547         sopt_size -= m->m_len;
 2548         *mp = m;
 2549         m_prev = m;
 2550 
 2551         while (sopt_size) {
 2552                 MGET(m, sopt->sopt_td ? M_TRYWAIT : M_DONTWAIT, MT_DATA);
 2553                 if (m == NULL) {
 2554                         m_freem(*mp);
 2555                         return ENOBUFS;
 2556                 }
 2557                 if (sopt_size > MLEN) {
 2558                         MCLGET(m, sopt->sopt_td != NULL ? M_TRYWAIT :
 2559                             M_DONTWAIT);
 2560                         if ((m->m_flags & M_EXT) == 0) {
 2561                                 m_freem(m);
 2562                                 m_freem(*mp);
 2563                                 return ENOBUFS;
 2564                         }
 2565                         m->m_len = min(MCLBYTES, sopt_size);
 2566                 } else {
 2567                         m->m_len = min(MLEN, sopt_size);
 2568                 }
 2569                 sopt_size -= m->m_len;
 2570                 m_prev->m_next = m;
 2571                 m_prev = m;
 2572         }
 2573         return (0);
 2574 }
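The sizing rule in soopt_getm() is easy to miss in the allocation noise: each mbuf holds up to MLEN bytes, but if more than MLEN remains a cluster is attached and the segment grows to MCLBYTES. A sketch of just the sizing loop, with allocation replaced by counting (optm_segments() is a hypothetical name and the MLEN/MCLBYTES values are illustrative, since the real ones depend on kernel configuration):

```c
#include <stddef.h>

#define MODEL_MLEN      224     /* illustrative; real MLEN is config-dependent */
#define MODEL_MCLBYTES  2048    /* illustrative 2 KB cluster size */

static size_t
model_min(size_t a, size_t b)
{
        return (a < b ? a : b);
}

/* Count how many mbufs soopt_getm() would allocate for sopt_size bytes. */
static int
optm_segments(size_t sopt_size)
{
        int n = 0;

        /* soopt_getm() allocates a first mbuf even for zero bytes. */
        do {
                size_t seg = (sopt_size > MODEL_MLEN) ?
                    model_min(MODEL_MCLBYTES, sopt_size) :
                    model_min(MODEL_MLEN, sopt_size);

                sopt_size -= seg;
                n++;
        } while (sopt_size > 0);
        return (n);
}
```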
 2575 
 2576 /* XXX: copyin sopt data into mbuf chain for (__FreeBSD__ < 3) routines. */
 2577 int
 2578 soopt_mcopyin(struct sockopt *sopt, struct mbuf *m)
 2579 {
 2580         struct mbuf *m0 = m;
 2581 
 2582         if (sopt->sopt_val == NULL)
 2583                 return (0);
 2584         while (m != NULL && sopt->sopt_valsize >= m->m_len) {
 2585                 if (sopt->sopt_td != NULL) {
 2586                         int error;
 2587 
 2588                         error = copyin(sopt->sopt_val, mtod(m, char *),
 2589                                        m->m_len);
 2590                         if (error != 0) {
 2591                                 m_freem(m0);
 2592                                 return(error);
 2593                         }
 2594                 } else
 2595                         bcopy(sopt->sopt_val, mtod(m, char *), m->m_len);
 2596                 sopt->sopt_valsize -= m->m_len;
 2597                 sopt->sopt_val = (char *)sopt->sopt_val + m->m_len;
 2598                 m = m->m_next;
 2599         }
 2600         if (m != NULL) /* chain should have been allocated large enough in soopt_getm() */
 2601                 panic("ip6_sooptmcopyin");
 2602         return (0);
 2603 }
 2604 
 2605 /* XXX: copyout mbuf chain data into soopt for (__FreeBSD__ < 3) routines. */
 2606 int
 2607 soopt_mcopyout(struct sockopt *sopt, struct mbuf *m)
 2608 {
 2609         struct mbuf *m0 = m;
 2610         size_t valsize = 0;
 2611 
 2612         if (sopt->sopt_val == NULL)
 2613                 return (0);
 2614         while (m != NULL && sopt->sopt_valsize >= m->m_len) {
 2615                 if (sopt->sopt_td != NULL) {
 2616                         int error;
 2617 
 2618                         error = copyout(mtod(m, char *), sopt->sopt_val,
 2619                                        m->m_len);
 2620                         if (error != 0) {
 2621                                 m_freem(m0);
 2622                                 return(error);
 2623                         }
 2624                 } else
 2625                         bcopy(mtod(m, char *), sopt->sopt_val, m->m_len);
 2626         sopt->sopt_valsize -= m->m_len;
 2627         sopt->sopt_val = (char *)sopt->sopt_val + m->m_len;
 2628         valsize += m->m_len;
 2629         m = m->m_next;
 2630         }
 2631         if (m != NULL) {
 2632                 /* a large enough sopt buffer should have been supplied from user-land */
 2633                 m_freem(m0);
 2634                 return(EINVAL);
 2635         }
 2636         sopt->sopt_valsize = valsize;
 2637         return (0);
 2638 }
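The walk in soopt_mcopyout() stops as soon as the remaining user buffer is smaller than the next whole segment, and any leftover mbufs mean the caller's buffer was too small. A user-space model using a plain linked list in place of mbufs (struct seg and chain_copyout_model() are hypothetical stand-ins; memcpy() replaces copyout()):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for an mbuf: one data segment in a singly-linked chain. */
struct seg {
        const char *data;
        size_t      len;
        struct seg *next;
};

/*
 * Model of soopt_mcopyout(): copy a segment chain into a flat buffer of
 * "bufsize" bytes.  Leftover segments are reported as EINVAL, as in the
 * real routine.  On success the byte count is returned via *copied.
 */
static int
chain_copyout_model(struct seg *m, char *buf, size_t bufsize, size_t *copied)
{
        size_t valsize = 0;

        while (m != NULL && bufsize >= m->len) {
                memcpy(buf + valsize, m->data, m->len);
                bufsize -= m->len;
                valsize += m->len;
                m = m->next;
        }
        if (m != NULL)
                return (EINVAL);        /* user buffer too small */
        *copied = valsize;
        return (0);
}
```

Note the whole-segment granularity: a buffer that could hold part of the next segment still fails, which is exactly the behavior of the `sopt->sopt_valsize >= m->m_len` loop condition above.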
 2639 
 2640 /*
 2641  * sohasoutofband(): protocol notifies socket layer of the arrival of new
 2642  * out-of-band data, which will then notify socket consumers.
 2643  */
 2644 void
 2645 sohasoutofband(struct socket *so)
 2646 {
 2647 
 2648         if (so->so_sigio != NULL)
 2649                 pgsigio(&so->so_sigio, SIGURG, 0);
 2650         selwakeuppri(&so->so_rcv.sb_sel, PSOCK);
 2651 }
 2652 
 2653 int
 2654 sopoll(struct socket *so, int events, struct ucred *active_cred,
 2655     struct thread *td)
 2656 {
 2657 
 2658         return (so->so_proto->pr_usrreqs->pru_sopoll(so, events, active_cred,
 2659             td));
 2660 }
 2661 
 2662 int
 2663 sopoll_generic(struct socket *so, int events, struct ucred *active_cred,
 2664     struct thread *td)
 2665 {
 2666         int revents = 0;
 2667 
 2668         SOCKBUF_LOCK(&so->so_snd);
 2669         SOCKBUF_LOCK(&so->so_rcv);
 2670         if (events & (POLLIN | POLLRDNORM))
 2671                 if (soreadable(so))
 2672                         revents |= events & (POLLIN | POLLRDNORM);
 2673 
 2674         if (events & POLLINIGNEOF)
 2675                 if (so->so_rcv.sb_cc >= so->so_rcv.sb_lowat ||
 2676                     !TAILQ_EMPTY(&so->so_comp) || so->so_error)
 2677                         revents |= POLLINIGNEOF;
 2678 
 2679         if (events & (POLLOUT | POLLWRNORM))
 2680                 if (sowriteable(so))
 2681                         revents |= events & (POLLOUT | POLLWRNORM);
 2682 
 2683         if (events & (POLLPRI | POLLRDBAND))
 2684                 if (so->so_oobmark || (so->so_rcv.sb_state & SBS_RCVATMARK))
 2685                         revents |= events & (POLLPRI | POLLRDBAND);
 2686 
 2687         if (revents == 0) {
 2688                 if (events &
 2689                     (POLLIN | POLLINIGNEOF | POLLPRI | POLLRDNORM |
 2690                      POLLRDBAND)) {
 2691                         selrecord(td, &so->so_rcv.sb_sel);
 2692                         so->so_rcv.sb_flags |= SB_SEL;
 2693                 }
 2694 
 2695                 if (events & (POLLOUT | POLLWRNORM)) {
 2696                         selrecord(td, &so->so_snd.sb_sel);
 2697                         so->so_snd.sb_flags |= SB_SEL;
 2698                 }
 2699         }
 2700 
 2701         SOCKBUF_UNLOCK(&so->so_rcv);
 2702         SOCKBUF_UNLOCK(&so->so_snd);
 2703         return (revents);
 2704 }
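sopoll_generic() leans on the soreadable() and sowriteable() predicates to decide POLLIN/POLLOUT. A simplified model of the readable test (struct sb_model and readable_model() are illustrative names; the real soreadable() macro also consults the listen socket's accept queue):

```c
#include <stdbool.h>

/* Illustrative subset of the receive-buffer state soreadable() examines. */
struct sb_model {
        unsigned cc;            /* bytes currently queued */
        unsigned lowat;         /* low-water mark */
        bool     cantrcvmore;   /* peer has shut down the connection */
        int      error;         /* pending so_error */
};

/*
 * A socket is readable when enough data is queued, when EOF has been
 * signaled, or when an error is pending -- the latter two so that a
 * blocked reader is woken to observe the condition.
 */
static bool
readable_model(const struct sb_model *sb)
{
        return (sb->cc >= sb->lowat || sb->cantrcvmore || sb->error != 0);
}
```

This is why EOF and asynchronous errors count as "readable" in poll(2): the reader must be woken so the subsequent recv() can return 0 or the error.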
 2705 
 2706 int
 2707 soo_kqfilter(struct file *fp, struct knote *kn)
 2708 {
 2709         struct socket *so = kn->kn_fp->f_data;
 2710         struct sockbuf *sb;
 2711 
 2712         switch (kn->kn_filter) {
 2713         case EVFILT_READ:
 2714                 if (so->so_options & SO_ACCEPTCONN)
 2715                         kn->kn_fop = &solisten_filtops;
 2716                 else
 2717                         kn->kn_fop = &soread_filtops;
 2718                 sb = &so->so_rcv;
 2719                 break;
 2720         case EVFILT_WRITE:
 2721                 kn->kn_fop = &sowrite_filtops;
 2722                 sb = &so->so_snd;
 2723                 break;
 2724         default:
 2725                 return (EINVAL);
 2726         }
 2727 
 2728         SOCKBUF_LOCK(sb);
 2729         knlist_add(&sb->sb_sel.si_note, kn, 1);
 2730         sb->sb_flags |= SB_KNOTE;
 2731         SOCKBUF_UNLOCK(sb);
 2732         return (0);
 2733 }
 2734 
 2735 /*
 2736  * Some routines that return EOPNOTSUPP for entry points that are not
 2737  * supported by a protocol.  Fill in as needed.
 2738  */
 2739 int
 2740 pru_accept_notsupp(struct socket *so, struct sockaddr **nam)
 2741 {
 2742 
 2743         return EOPNOTSUPP;
 2744 }
 2745 
 2746 int
 2747 pru_attach_notsupp(struct socket *so, int proto, struct thread *td)
 2748 {
 2749 
 2750         return EOPNOTSUPP;
 2751 }
 2752 
 2753 int
 2754 pru_bind_notsupp(struct socket *so, struct sockaddr *nam, struct thread *td)
 2755 {
 2756 
 2757         return EOPNOTSUPP;
 2758 }
 2759 
 2760 int
 2761 pru_connect_notsupp(struct socket *so, struct sockaddr *nam, struct thread *td)
 2762 {
 2763 
 2764         return EOPNOTSUPP;
 2765 }
 2766 
 2767 int
 2768 pru_connect2_notsupp(struct socket *so1, struct socket *so2)
 2769 {
 2770 
 2771         return EOPNOTSUPP;
 2772 }
 2773 
 2774 int
 2775 pru_control_notsupp(struct socket *so, u_long cmd, caddr_t data,
 2776     struct ifnet *ifp, struct thread *td)
 2777 {
 2778 
 2779         return EOPNOTSUPP;
 2780 }
 2781 
 2782 int
 2783 pru_disconnect_notsupp(struct socket *so)
 2784 {
 2785 
 2786         return EOPNOTSUPP;
 2787 }
 2788 
 2789 int
 2790 pru_listen_notsupp(struct socket *so, int backlog, struct thread *td)
 2791 {
 2792 
 2793         return EOPNOTSUPP;
 2794 }
 2795 
 2796 int
 2797 pru_peeraddr_notsupp(struct socket *so, struct sockaddr **nam)
 2798 {
 2799 
 2800         return EOPNOTSUPP;
 2801 }
 2802 
 2803 int
 2804 pru_rcvd_notsupp(struct socket *so, int flags)
 2805 {
 2806 
 2807         return EOPNOTSUPP;
 2808 }
 2809 
 2810 int
 2811 pru_rcvoob_notsupp(struct socket *so, struct mbuf *m, int flags)
 2812 {
 2813 
 2814         return EOPNOTSUPP;
 2815 }
 2816 
 2817 int
 2818 pru_send_notsupp(struct socket *so, int flags, struct mbuf *m,
 2819     struct sockaddr *addr, struct mbuf *control, struct thread *td)
 2820 {
 2821 
 2822         return EOPNOTSUPP;
 2823 }
 2824 
 2825 /*
 2826  * This isn't really a ``null'' operation, but it's the default one and
 2827  * doesn't do anything destructive.
 2828  */
 2829 int
 2830 pru_sense_null(struct socket *so, struct stat *sb)
 2831 {
 2832 
 2833         sb->st_blksize = so->so_snd.sb_hiwat;
 2834         return 0;
 2835 }
 2836 
 2837 int
 2838 pru_shutdown_notsupp(struct socket *so)
 2839 {
 2840 
 2841         return EOPNOTSUPP;
 2842 }
 2843 
 2844 int
 2845 pru_sockaddr_notsupp(struct socket *so, struct sockaddr **nam)
 2846 {
 2847 
 2848         return EOPNOTSUPP;
 2849 }
 2850 
 2851 int
 2852 pru_sosend_notsupp(struct socket *so, struct sockaddr *addr, struct uio *uio,
 2853     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
 2854 {
 2855 
 2856         return EOPNOTSUPP;
 2857 }
 2858 
 2859 int
 2860 pru_soreceive_notsupp(struct socket *so, struct sockaddr **paddr,
 2861     struct uio *uio, struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 2862 {
 2863 
 2864         return EOPNOTSUPP;
 2865 }
 2866 
 2867 int
 2868 pru_sopoll_notsupp(struct socket *so, int events, struct ucred *cred,
 2869     struct thread *td)
 2870 {
 2871 
 2872         return EOPNOTSUPP;
 2873 }
 2874 
 2875 static void
 2876 filt_sordetach(struct knote *kn)
 2877 {
 2878         struct socket *so = kn->kn_fp->f_data;
 2879 
 2880         SOCKBUF_LOCK(&so->so_rcv);
 2881         knlist_remove(&so->so_rcv.sb_sel.si_note, kn, 1);
 2882         if (knlist_empty(&so->so_rcv.sb_sel.si_note))
 2883                 so->so_rcv.sb_flags &= ~SB_KNOTE;
 2884         SOCKBUF_UNLOCK(&so->so_rcv);
 2885 }
 2886 
 2887 /*ARGSUSED*/
 2888 static int
 2889 filt_soread(struct knote *kn, long hint)
 2890 {
 2891         struct socket *so;
 2892 
 2893         so = kn->kn_fp->f_data;
 2894         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 2895 
 2896         kn->kn_data = so->so_rcv.sb_cc - so->so_rcv.sb_ctl;
 2897         if (so->so_rcv.sb_state & SBS_CANTRCVMORE) {
 2898                 kn->kn_flags |= EV_EOF;
 2899                 kn->kn_fflags = so->so_error;
 2900                 return (1);
 2901         } else if (so->so_error)        /* temporary udp error */
 2902                 return (1);
 2903         else if (kn->kn_sfflags & NOTE_LOWAT)
 2904                 return (kn->kn_data >= kn->kn_sdata);
 2905         else
 2906                 return (so->so_rcv.sb_cc >= so->so_rcv.sb_lowat);
 2907 }

static void
filt_sowdetach(struct knote *kn)
{
        struct socket *so = kn->kn_fp->f_data;

        SOCKBUF_LOCK(&so->so_snd);
        knlist_remove(&so->so_snd.sb_sel.si_note, kn, 1);
        if (knlist_empty(&so->so_snd.sb_sel.si_note))
                so->so_snd.sb_flags &= ~SB_KNOTE;
        SOCKBUF_UNLOCK(&so->so_snd);
}

/*ARGSUSED*/
static int
filt_sowrite(struct knote *kn, long hint)
{
        struct socket *so;

        so = kn->kn_fp->f_data;
        SOCKBUF_LOCK_ASSERT(&so->so_snd);
        kn->kn_data = sbspace(&so->so_snd);
        if (so->so_snd.sb_state & SBS_CANTSENDMORE) {
                kn->kn_flags |= EV_EOF;
                kn->kn_fflags = so->so_error;
                return (1);
        } else if (so->so_error)        /* temporary udp error */
                return (1);
        else if (((so->so_state & SS_ISCONNECTED) == 0) &&
            (so->so_proto->pr_flags & PR_CONNREQUIRED))
                return (0);
        else if (kn->kn_sfflags & NOTE_LOWAT)
                return (kn->kn_data >= kn->kn_sdata);
        else
                return (kn->kn_data >= so->so_snd.sb_lowat);
}

/*ARGSUSED*/
static int
filt_solisten(struct knote *kn, long hint)
{
        struct socket *so = kn->kn_fp->f_data;

        kn->kn_data = so->so_qlen;
        return (!TAILQ_EMPTY(&so->so_comp));
}

int
socheckuid(struct socket *so, uid_t uid)
{

        if (so == NULL)
                return (EPERM);
        if (so->so_cred->cr_uid != uid)
                return (EPERM);
        return (0);
}

static int
sysctl_somaxconn(SYSCTL_HANDLER_ARGS)
{
        int error;
        int val;

        val = somaxconn;
        error = sysctl_handle_int(oidp, &val, 0, req);
        if (error || !req->newptr)
                return (error);

        if (val < 1 || val > USHRT_MAX)
                return (EINVAL);

        somaxconn = val;
        return (0);
}

/*
 * These functions are used by protocols to notify the socket layer (and its
 * consumers) of state changes in sockets driven by protocol-side events.
 */

/*
 * Procedures to manipulate state flags of a socket and do appropriate
 * wakeups.
 *
 * The normal sequence from the active (originating) side is that
 * soisconnecting() is called during processing of a connect() call,
 * resulting in an eventual call to soisconnected() if/when the connection
 * is established.  When the connection is torn down, soisdisconnecting()
 * is called during processing of a disconnect() call, and soisdisconnected()
 * is called when the connection to the peer is totally severed.  The
 * semantics of these routines are such that connectionless protocols can
 * call soisconnected() and soisdisconnected() only, bypassing the
 * in-progress calls when setting up a ``connection'' takes no time.
 *
 * From the passive side, a socket is created with two queues of sockets:
 * so_incomp for connections in progress and so_comp for connections already
 * made and awaiting user acceptance.  As a protocol is preparing incoming
 * connections, it creates a socket structure queued on so_incomp by calling
 * sonewconn().  When the connection is established, soisconnected() is
 * called, which transfers the socket structure to so_comp, making it
 * available to accept().
 *
 * If a socket is closed with sockets on either so_incomp or so_comp, these
 * sockets are dropped.
 *
 * If higher-level protocols are implemented in the kernel, the wakeups done
 * here will sometimes cause software-interrupt process scheduling.
 */
void
soisconnecting(struct socket *so)
{

        SOCK_LOCK(so);
        so->so_state &= ~(SS_ISCONNECTED|SS_ISDISCONNECTING);
        so->so_state |= SS_ISCONNECTING;
        SOCK_UNLOCK(so);
}

void
soisconnected(struct socket *so)
{
        struct socket *head;

        ACCEPT_LOCK();
        SOCK_LOCK(so);
        so->so_state &= ~(SS_ISCONNECTING|SS_ISDISCONNECTING|SS_ISCONFIRMING);
        so->so_state |= SS_ISCONNECTED;
        head = so->so_head;
        if (head != NULL && (so->so_qstate & SQ_INCOMP)) {
                if ((so->so_options & SO_ACCEPTFILTER) == 0) {
                        SOCK_UNLOCK(so);
                        TAILQ_REMOVE(&head->so_incomp, so, so_list);
                        head->so_incqlen--;
                        so->so_qstate &= ~SQ_INCOMP;
                        TAILQ_INSERT_TAIL(&head->so_comp, so, so_list);
                        head->so_qlen++;
                        so->so_qstate |= SQ_COMP;
                        ACCEPT_UNLOCK();
                        sorwakeup(head);
                        wakeup_one(&head->so_timeo);
                } else {
                        ACCEPT_UNLOCK();
                        so->so_upcall =
                            head->so_accf->so_accept_filter->accf_callback;
                        so->so_upcallarg = head->so_accf->so_accept_filter_arg;
                        so->so_rcv.sb_flags |= SB_UPCALL;
                        so->so_options &= ~SO_ACCEPTFILTER;
                        SOCK_UNLOCK(so);
                        so->so_upcall(so, so->so_upcallarg, M_DONTWAIT);
                }
                return;
        }
        SOCK_UNLOCK(so);
        ACCEPT_UNLOCK();
        wakeup(&so->so_timeo);
        sorwakeup(so);
        sowwakeup(so);
}

void
soisdisconnecting(struct socket *so)
{

        /*
         * Note: This code assumes that SOCK_LOCK(so) and
         * SOCKBUF_LOCK(&so->so_rcv) are the same.
         */
        SOCKBUF_LOCK(&so->so_rcv);
        so->so_state &= ~SS_ISCONNECTING;
        so->so_state |= SS_ISDISCONNECTING;
        so->so_rcv.sb_state |= SBS_CANTRCVMORE;
        sorwakeup_locked(so);
        SOCKBUF_LOCK(&so->so_snd);
        so->so_snd.sb_state |= SBS_CANTSENDMORE;
        sowwakeup_locked(so);
        wakeup(&so->so_timeo);
}

void
soisdisconnected(struct socket *so)
{

        /*
         * Note: This code assumes that SOCK_LOCK(so) and
         * SOCKBUF_LOCK(&so->so_rcv) are the same.
         */
        SOCKBUF_LOCK(&so->so_rcv);
        so->so_state &= ~(SS_ISCONNECTING|SS_ISCONNECTED|SS_ISDISCONNECTING);
        so->so_state |= SS_ISDISCONNECTED;
        so->so_rcv.sb_state |= SBS_CANTRCVMORE;
        sorwakeup_locked(so);
        SOCKBUF_LOCK(&so->so_snd);
        so->so_snd.sb_state |= SBS_CANTSENDMORE;
        sbdrop_locked(&so->so_snd, so->so_snd.sb_cc);
        sowwakeup_locked(so);
        wakeup(&so->so_timeo);
}

/*
 * Make a copy of a sockaddr in a malloced buffer of type M_SONAME.
 */
struct sockaddr *
sodupsockaddr(const struct sockaddr *sa, int mflags)
{
        struct sockaddr *sa2;

        sa2 = malloc(sa->sa_len, M_SONAME, mflags);
        if (sa2 != NULL)
                bcopy(sa, sa2, sa->sa_len);
        return (sa2);
}

/*
 * Create an external-format (``xsocket'') structure using the information in
 * the kernel-format socket structure pointed to by so.  This is done to
 * reduce the spew of irrelevant information over this interface, to isolate
 * user code from changes in the kernel structure, and potentially to provide
 * information-hiding if we decide that some of this information should be
 * hidden from users.
 */
void
sotoxsocket(struct socket *so, struct xsocket *xso)
{

        xso->xso_len = sizeof *xso;
        xso->xso_so = so;
        xso->so_type = so->so_type;
        xso->so_options = so->so_options;
        xso->so_linger = so->so_linger;
        xso->so_state = so->so_state;
        xso->so_pcb = so->so_pcb;
        xso->xso_protocol = so->so_proto->pr_protocol;
        xso->xso_family = so->so_proto->pr_domain->dom_family;
        xso->so_qlen = so->so_qlen;
        xso->so_incqlen = so->so_incqlen;
        xso->so_qlimit = so->so_qlimit;
        xso->so_timeo = so->so_timeo;
        xso->so_error = so->so_error;
        xso->so_pgid = so->so_sigio ? so->so_sigio->sio_pgid : 0;
        xso->so_oobmark = so->so_oobmark;
        sbtoxsockbuf(&so->so_snd, &xso->so_snd);
        sbtoxsockbuf(&so->so_rcv, &xso->so_rcv);
        xso->so_uid = so->so_cred->cr_uid;
}

/*
 * Socket accessor functions to provide external consumers with a safe
 * interface to socket state.
 */

void
so_listeners_apply_all(struct socket *so, void (*func)(struct socket *, void *),
    void *arg)
{

        TAILQ_FOREACH(so, &so->so_comp, so_list)
                func(so, arg);
}

struct sockbuf *
so_sockbuf_rcv(struct socket *so)
{

        return (&so->so_rcv);
}

struct sockbuf *
so_sockbuf_snd(struct socket *so)
{

        return (&so->so_snd);
}

int
so_state_get(const struct socket *so)
{

        return (so->so_state);
}

void
so_state_set(struct socket *so, int val)
{

        so->so_state = val;
}

int
so_options_get(const struct socket *so)
{

        return (so->so_options);
}

void
so_options_set(struct socket *so, int val)
{

        so->so_options = val;
}

int
so_error_get(const struct socket *so)
{

        return (so->so_error);
}

void
so_error_set(struct socket *so, int val)
{

        so->so_error = val;
}

int
so_linger_get(const struct socket *so)
{

        return (so->so_linger);
}

void
so_linger_set(struct socket *so, int val)
{

        so->so_linger = val;
}

struct protosw *
so_protosw_get(const struct socket *so)
{

        return (so->so_proto);
}

void
so_protosw_set(struct socket *so, struct protosw *val)
{

        so->so_proto = val;
}

void
so_sorwakeup(struct socket *so)
{

        sorwakeup(so);
}

void
so_sowwakeup(struct socket *so)
{

        sowwakeup(so);
}

void
so_sorwakeup_locked(struct socket *so)
{

        sorwakeup_locked(so);
}

void
so_sowwakeup_locked(struct socket *so)
{

        sowwakeup_locked(so);
}

void
so_lock(struct socket *so)
{

        SOCK_LOCK(so);
}

void
so_unlock(struct socket *so)
{

        SOCK_UNLOCK(so);
}
