


FreeBSD/Linux Kernel Cross Reference
sys/kern/uipc_socket.c


    1 /*-
    2  * Copyright (c) 1982, 1986, 1988, 1990, 1993
    3  *      The Regents of the University of California.
    4  * Copyright (c) 2004 The FreeBSD Foundation
    5  * Copyright (c) 2004-2008 Robert N. M. Watson
    6  * All rights reserved.
    7  *
    8  * Redistribution and use in source and binary forms, with or without
    9  * modification, are permitted provided that the following conditions
   10  * are met:
   11  * 1. Redistributions of source code must retain the above copyright
   12  *    notice, this list of conditions and the following disclaimer.
   13  * 2. Redistributions in binary form must reproduce the above copyright
   14  *    notice, this list of conditions and the following disclaimer in the
   15  *    documentation and/or other materials provided with the distribution.
   16  * 4. Neither the name of the University nor the names of its contributors
   17  *    may be used to endorse or promote products derived from this software
   18  *    without specific prior written permission.
   19  *
   20  * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
   21  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
   22  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
   23  * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
   24  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
   25  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
   26  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
   27  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
   28  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
   29  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
   30  * SUCH DAMAGE.
   31  *
   32  *      @(#)uipc_socket.c       8.3 (Berkeley) 4/15/94
   33  */
   34 
   35 /*
   36  * Comments on the socket life cycle:
   37  *
   38  * soalloc() sets up socket layer state for a socket, called only by
   39  * socreate() and sonewconn().  Socket layer private.
   40  *
   41  * sodealloc() tears down socket layer state for a socket, called only by
   42  * sofree(), socreate(), and sonewconn().  Socket layer private.
   43  *
   44  * pru_attach() associates protocol layer state with an allocated socket;
   45  * called only once, may fail, aborting socket allocation.  This is called
   46  * from socreate() and sonewconn().  Socket layer private.
   47  *
   48  * pru_detach() disassociates protocol layer state from an attached socket,
   49  * and will be called exactly once for sockets in which pru_attach() has
   50  * been successfully called.  If pru_attach() returned an error,
   51  * pru_detach() will not be called.  Socket layer private.
   52  *
   53  * pru_abort() and pru_close() notify the protocol layer that the last
   54  * consumer of a socket is starting to tear down the socket, and that the
   55  * protocol should terminate the connection.  Historically, pru_abort() also
   56  * detached protocol state from the socket state, but this is no longer the
   57  * case.
   58  *
   59  * socreate() creates a socket and attaches protocol state.  This is a public
   60  * interface that may be used by socket layer consumers to create new
   61  * sockets.
   62  *
   63  * sonewconn() creates a socket and attaches protocol state.  This is a
   64  * public interface that may be used by protocols to create new sockets when
   65  * a new connection is received and will be available for accept() on a
   66  * listen socket.
   67  *
   68  * soclose() destroys a socket after possibly waiting for it to disconnect.
   69  * This is a public interface that socket consumers should use to close and
   70  * release a socket when done with it.
   71  *
   72  * soabort() destroys a socket without waiting for it to disconnect (used
   73  * only for incoming connections that are already partially or fully
   74  * connected).  This is used internally by the socket layer when clearing
   75  * listen socket queues (due to overflow or close on the listen socket), but
   76  * is also a public interface protocols may use to abort connections in
   77  * their incomplete listen queues should they no longer be required.  Sockets
   78  * placed in completed connection listen queues should not be aborted for
   79  * reasons described in the comment above the soclose() implementation.  This
   80  * is not a general purpose close routine, and except in the specific
   81  * circumstances described here, should not be used.
   82  *
   83  * sofree() will free a socket and its protocol state if all references on
   84  * the socket have been released, and is the interface used within the
   85  * socket layer to attempt to free a socket when a reference is removed.
   86  * This is a socket layer private interface.
   87  *
   88  * NOTE: In addition to socreate() and soclose(), which provide a single
   89  * socket reference to the consumer to be managed as required, there are two
   90  * calls to explicitly manage socket references, soref() and sorele().
   91  * Currently, these are generally required only when transitioning a socket
   92  * from a listen queue to a file descriptor, in order to prevent garbage
   93  * collection of the socket at an untimely moment.  For a number of reasons,
   94  * these interfaces are not preferred, and should be avoided.
   95  */
   96 
   97 #include <sys/cdefs.h>
   98 __FBSDID("$FreeBSD: releng/7.3/sys/kern/uipc_socket.c 202814 2010-01-22 17:02:07Z jhb $");
   99 
  100 #include "opt_inet.h"
  101 #include "opt_inet6.h"
  102 #include "opt_mac.h"
  103 #include "opt_zero.h"
  104 #include "opt_compat.h"
  105 
  106 #include <sys/param.h>
  107 #include <sys/systm.h>
  108 #include <sys/fcntl.h>
  109 #include <sys/limits.h>
  110 #include <sys/lock.h>
  111 #include <sys/mac.h>
  112 #include <sys/malloc.h>
  113 #include <sys/mbuf.h>
  114 #include <sys/mutex.h>
  115 #include <sys/domain.h>
  116 #include <sys/file.h>                   /* for struct knote */
  117 #include <sys/kernel.h>
  118 #include <sys/event.h>
  119 #include <sys/eventhandler.h>
  120 #include <sys/poll.h>
  121 #include <sys/proc.h>
  122 #include <sys/protosw.h>
  123 #include <sys/socket.h>
  124 #include <sys/socketvar.h>
  125 #include <sys/resourcevar.h>
  126 #include <net/route.h>
  127 #include <sys/signalvar.h>
  128 #include <sys/stat.h>
  129 #include <sys/sx.h>
  130 #include <sys/sysctl.h>
  131 #include <sys/uio.h>
  132 #include <sys/jail.h>
  133 
  134 #include <security/mac/mac_framework.h>
  135 
  136 #include <vm/uma.h>
  137 
  138 #ifdef COMPAT_IA32
  139 #include <sys/mount.h>
  140 #include <compat/freebsd32/freebsd32.h>
  141 
  142 extern struct sysentvec ia32_freebsd_sysvec;
  143 #endif
  144 
  145 static int      soreceive_rcvoob(struct socket *so, struct uio *uio,
  146                     int flags);
  147 
  148 static void     filt_sordetach(struct knote *kn);
  149 static int      filt_soread(struct knote *kn, long hint);
  150 static void     filt_sowdetach(struct knote *kn);
  151 static int      filt_sowrite(struct knote *kn, long hint);
  152 static int      filt_solisten(struct knote *kn, long hint);
  153 
  154 static struct filterops solisten_filtops =
  155         { 1, NULL, filt_sordetach, filt_solisten };
  156 static struct filterops soread_filtops =
  157         { 1, NULL, filt_sordetach, filt_soread };
  158 static struct filterops sowrite_filtops =
  159         { 1, NULL, filt_sowdetach, filt_sowrite };
  160 
  161 uma_zone_t socket_zone;
  162 so_gen_t        so_gencnt;      /* generation count for sockets */
  163 
  164 int     maxsockets;
  165 
  166 MALLOC_DEFINE(M_SONAME, "soname", "socket name");
  167 MALLOC_DEFINE(M_PCB, "pcb", "protocol control block");
  168 
  169 static int somaxconn = SOMAXCONN;
  170 static int sysctl_somaxconn(SYSCTL_HANDLER_ARGS);
  171 /* XXX: we don't have SYSCTL_USHORT */
  172 SYSCTL_PROC(_kern_ipc, KIPC_SOMAXCONN, somaxconn, CTLTYPE_UINT | CTLFLAG_RW,
  173     0, sizeof(int), sysctl_somaxconn, "I", "Maximum pending socket connection "
  174     "queue size");
  175 static int numopensockets;
  176 SYSCTL_INT(_kern_ipc, OID_AUTO, numopensockets, CTLFLAG_RD,
  177     &numopensockets, 0, "Number of open sockets");
  178 #ifdef ZERO_COPY_SOCKETS
  179 /* These aren't static because they're used in other files. */
  180 int so_zero_copy_send = 1;
  181 int so_zero_copy_receive = 1;
  182 SYSCTL_NODE(_kern_ipc, OID_AUTO, zero_copy, CTLFLAG_RD, 0,
  183     "Zero copy controls");
  184 SYSCTL_INT(_kern_ipc_zero_copy, OID_AUTO, receive, CTLFLAG_RW,
  185     &so_zero_copy_receive, 0, "Enable zero copy receive");
  186 SYSCTL_INT(_kern_ipc_zero_copy, OID_AUTO, send, CTLFLAG_RW,
  187     &so_zero_copy_send, 0, "Enable zero copy send");
  188 #endif /* ZERO_COPY_SOCKETS */
  189 
  190 /*
  191  * accept_mtx locks down per-socket fields relating to accept queues.  See
  192  * socketvar.h for an annotation of the protected fields of struct socket.
  193  */
  194 struct mtx accept_mtx;
  195 MTX_SYSINIT(accept_mtx, &accept_mtx, "accept", MTX_DEF);
  196 
  197 /*
  198  * so_global_mtx protects so_gencnt, numopensockets, and the per-socket
  199  * so_gencnt field.
  200  */
  201 static struct mtx so_global_mtx;
  202 MTX_SYSINIT(so_global_mtx, &so_global_mtx, "so_glabel", MTX_DEF);
  203 
  204 /*
  205  * General IPC sysctl name space, used by sockets and a variety of other IPC
  206  * types.
  207  */
  208 SYSCTL_NODE(_kern, KERN_IPC, ipc, CTLFLAG_RW, 0, "IPC");
  209 
  210 /*
  211  * Sysctl to get and set the maximum global sockets limit.  Notify protocols
  212  * of the change so that they can update their dependent limits as required.
  213  */
  214 static int
  215 sysctl_maxsockets(SYSCTL_HANDLER_ARGS)
  216 {
  217         int error, newmaxsockets;
  218 
  219         newmaxsockets = maxsockets;
  220         error = sysctl_handle_int(oidp, &newmaxsockets, 0, req);
  221         if (error == 0 && req->newptr) {
  222                 if (newmaxsockets > maxsockets) {
  223                         maxsockets = newmaxsockets;
  224                         if (maxsockets > ((maxfiles / 4) * 3)) {
  225                                 maxfiles = (maxsockets * 5) / 4;
  226                                 maxfilesperproc = (maxfiles * 9) / 10;
  227                         }
  228                         EVENTHANDLER_INVOKE(maxsockets_change);
  229                 } else
  230                         error = EINVAL;
  231         }
  232         return (error);
  233 }
  234 
  235 SYSCTL_PROC(_kern_ipc, OID_AUTO, maxsockets, CTLTYPE_INT|CTLFLAG_RW,
  236     &maxsockets, 0, sysctl_maxsockets, "IU",
  237     "Maximum number of sockets available");
  238 
  239 /*
  240  * Initialise maxsockets.  This SYSINIT must be run after
  241  * tunable_mbinit().
  242  */
  243 static void
  244 init_maxsockets(void *ignored)
  245 {
  246 
  247         TUNABLE_INT_FETCH("kern.ipc.maxsockets", &maxsockets);
  248         maxsockets = imax(maxsockets, imax(maxfiles, nmbclusters));
  249 }
  250 SYSINIT(param, SI_SUB_TUNABLES, SI_ORDER_ANY, init_maxsockets, NULL);
  251 
  252 /*
  253  * Socket operation routines.  These routines are called by the routines in
  254  * sys_socket.c or from a system process, and implement the semantics of
  255  * socket operations by switching out to the protocol specific routines.
  256  */
  257 
  258 /*
  259  * Get a socket structure from our zone, and initialize it.  Note that it
  260  * would probably be better to allocate socket and PCB at the same time, but
  261  * I'm not convinced that all the protocols can be easily modified to do
  262  * this.
  263  *
  264  * soalloc() returns a socket with a ref count of 0.
  265  */
  266 static struct socket *
  267 soalloc(void)
  268 {
  269         struct socket *so;
  270 
  271         so = uma_zalloc(socket_zone, M_NOWAIT | M_ZERO);
  272         if (so == NULL)
  273                 return (NULL);
  274 #ifdef MAC
  275         if (mac_init_socket(so, M_NOWAIT) != 0) {
  276                 uma_zfree(socket_zone, so);
  277                 return (NULL);
  278         }
  279 #endif
  280         SOCKBUF_LOCK_INIT(&so->so_snd, "so_snd");
  281         SOCKBUF_LOCK_INIT(&so->so_rcv, "so_rcv");
  282         sx_init(&so->so_snd.sb_sx, "so_snd_sx");
  283         sx_init(&so->so_rcv.sb_sx, "so_rcv_sx");
  284         TAILQ_INIT(&so->so_aiojobq);
  285         mtx_lock(&so_global_mtx);
  286         so->so_gencnt = ++so_gencnt;
  287         ++numopensockets;
  288         mtx_unlock(&so_global_mtx);
  289         return (so);
  290 }
  291 
  292 /*
  293  * Free the storage associated with a socket at the socket layer, tear down
  294  * locks, labels, etc.  All protocol state is assumed already to have been
  295  * torn down (and possibly never set up) by the caller.
  296  */
  297 static void
  298 sodealloc(struct socket *so)
  299 {
  300 
  301         KASSERT(so->so_count == 0, ("sodealloc(): so_count %d", so->so_count));
  302         KASSERT(so->so_pcb == NULL, ("sodealloc(): so_pcb != NULL"));
  303 
  304         mtx_lock(&so_global_mtx);
  305         so->so_gencnt = ++so_gencnt;
  306         --numopensockets;       /* Could be below, but faster here. */
  307         mtx_unlock(&so_global_mtx);
  308         if (so->so_rcv.sb_hiwat)
  309                 (void)chgsbsize(so->so_cred->cr_uidinfo,
  310                     &so->so_rcv.sb_hiwat, 0, RLIM_INFINITY);
  311         if (so->so_snd.sb_hiwat)
  312                 (void)chgsbsize(so->so_cred->cr_uidinfo,
  313                     &so->so_snd.sb_hiwat, 0, RLIM_INFINITY);
  314 #ifdef INET
  315         /* remove accept filter if one is present. */
  316         if (so->so_accf != NULL)
  317                 do_setopt_accept_filter(so, NULL);
  318 #endif
  319 #ifdef MAC
  320         mac_destroy_socket(so);
  321 #endif
  322         crfree(so->so_cred);
  323         sx_destroy(&so->so_snd.sb_sx);
  324         sx_destroy(&so->so_rcv.sb_sx);
  325         SOCKBUF_LOCK_DESTROY(&so->so_snd);
  326         SOCKBUF_LOCK_DESTROY(&so->so_rcv);
  327         uma_zfree(socket_zone, so);
  328 }
  329 
  330 /*
  331  * socreate returns a socket with a ref count of 1.  The socket should be
  332  * closed with soclose().
  333  */
  334 int
  335 socreate(int dom, struct socket **aso, int type, int proto,
  336     struct ucred *cred, struct thread *td)
  337 {
  338         struct protosw *prp;
  339         struct socket *so;
  340         int error;
  341 
  342         if (proto)
  343                 prp = pffindproto(dom, proto, type);
  344         else
  345                 prp = pffindtype(dom, type);
  346 
  347         if (prp == NULL || prp->pr_usrreqs->pru_attach == NULL ||
  348             prp->pr_usrreqs->pru_attach == pru_attach_notsupp)
  349                 return (EPROTONOSUPPORT);
  350 
  351         if (prison_check_af(cred, prp->pr_domain->dom_family) != 0)
  352                 return (EPROTONOSUPPORT);
  353 
  354         if (prp->pr_type != type)
  355                 return (EPROTOTYPE);
  356         so = soalloc();
  357         if (so == NULL)
  358                 return (ENOBUFS);
  359 
  360         TAILQ_INIT(&so->so_incomp);
  361         TAILQ_INIT(&so->so_comp);
  362         so->so_type = type;
  363         so->so_cred = crhold(cred);
  364         if ((prp->pr_domain->dom_family == PF_INET) ||
  365             (prp->pr_domain->dom_family == PF_ROUTE))
  366                 so->so_fibnum = td->td_proc->p_fibnum;
  367         else
  368                 so->so_fibnum = 0;
  369         so->so_proto = prp;
  370 #ifdef MAC
  371         mac_create_socket(cred, so);
  372 #endif
  373         knlist_init_mtx(&so->so_rcv.sb_sel.si_note, SOCKBUF_MTX(&so->so_rcv));
  374         knlist_init_mtx(&so->so_snd.sb_sel.si_note, SOCKBUF_MTX(&so->so_snd));
  375         so->so_count = 1;
  376         /*
  377          * Auto-sizing of socket buffers is managed by the protocols and
  378          * the appropriate flags must be set in the pru_attach function.
  379          */
  380         error = (*prp->pr_usrreqs->pru_attach)(so, proto, td);
  381         if (error) {
  382                 KASSERT(so->so_count == 1, ("socreate: so_count %d",
  383                     so->so_count));
  384                 so->so_count = 0;
  385                 sodealloc(so);
  386                 return (error);
  387         }
  388         *aso = so;
  389         return (0);
  390 }
  391 
  392 #ifdef REGRESSION
  393 static int regression_sonewconn_earlytest = 1;
  394 SYSCTL_INT(_regression, OID_AUTO, sonewconn_earlytest, CTLFLAG_RW,
  395     &regression_sonewconn_earlytest, 0, "Perform early sonewconn limit test");
  396 #endif
  397 
  398 /*
  399  * When an attempt at a new connection is noted on a socket which accepts
  400  * connections, sonewconn is called.  If the connection is possible (subject
  401  * to space constraints, etc.) then we allocate a new structure, properly
  402  * linked into the data structure of the original socket, and return this.
  403  * Connstatus may be 0, or SS_ISCONFIRMING, or SS_ISCONNECTED.
  404  *
  405  * Note: the ref count on the socket is 0 on return.
  406  */
  407 struct socket *
  408 sonewconn(struct socket *head, int connstatus)
  409 {
  410         struct socket *so;
  411         int over;
  412 
  413         ACCEPT_LOCK();
  414         over = (head->so_qlen > 3 * head->so_qlimit / 2);
  415         ACCEPT_UNLOCK();
  416 #ifdef REGRESSION
  417         if (regression_sonewconn_earlytest && over)
  418 #else
  419         if (over)
  420 #endif
  421                 return (NULL);
  422         so = soalloc();
  423         if (so == NULL)
  424                 return (NULL);
  425         if ((head->so_options & SO_ACCEPTFILTER) != 0)
  426                 connstatus = 0;
  427         so->so_head = head;
  428         so->so_type = head->so_type;
  429         so->so_options = head->so_options &~ SO_ACCEPTCONN;
  430         so->so_linger = head->so_linger;
  431         so->so_state = head->so_state | SS_NOFDREF;
  432         so->so_fibnum = head->so_fibnum;
  433         so->so_proto = head->so_proto;
  434         so->so_cred = crhold(head->so_cred);
  435 #ifdef MAC
  436         SOCK_LOCK(head);
  437         mac_create_socket_from_socket(head, so);
  438         SOCK_UNLOCK(head);
  439 #endif
  440         knlist_init_mtx(&so->so_rcv.sb_sel.si_note, SOCKBUF_MTX(&so->so_rcv));
  441         knlist_init_mtx(&so->so_snd.sb_sel.si_note, SOCKBUF_MTX(&so->so_snd));
  442         if (soreserve(so, head->so_snd.sb_hiwat, head->so_rcv.sb_hiwat) ||
  443             (*so->so_proto->pr_usrreqs->pru_attach)(so, 0, NULL)) {
  444                 sodealloc(so);
  445                 return (NULL);
  446         }
  447         so->so_rcv.sb_lowat = head->so_rcv.sb_lowat;
  448         so->so_snd.sb_lowat = head->so_snd.sb_lowat;
  449         so->so_rcv.sb_timeo = head->so_rcv.sb_timeo;
  450         so->so_snd.sb_timeo = head->so_snd.sb_timeo;
  451         so->so_rcv.sb_flags |= head->so_rcv.sb_flags & SB_AUTOSIZE;
  452         so->so_snd.sb_flags |= head->so_snd.sb_flags & SB_AUTOSIZE;
  453         so->so_state |= connstatus;
  454         ACCEPT_LOCK();
  455         if (connstatus) {
  456                 TAILQ_INSERT_TAIL(&head->so_comp, so, so_list);
  457                 so->so_qstate |= SQ_COMP;
  458                 head->so_qlen++;
  459         } else {
  460                 /*
  461                  * Keep removing sockets from the head until there's room for
  462                  * us to insert on the tail.  In pre-locking revisions, this
  463                  * was a simple if(), but as we could be racing with other
  464                  * threads and soabort() requires dropping locks, we must
  465                  * loop waiting for the condition to be true.
  466                  */
  467                 while (head->so_incqlen > head->so_qlimit) {
  468                         struct socket *sp;
  469                         sp = TAILQ_FIRST(&head->so_incomp);
  470                         TAILQ_REMOVE(&head->so_incomp, sp, so_list);
  471                         head->so_incqlen--;
  472                         sp->so_qstate &= ~SQ_INCOMP;
  473                         sp->so_head = NULL;
  474                         ACCEPT_UNLOCK();
  475                         soabort(sp);
  476                         ACCEPT_LOCK();
  477                 }
  478                 TAILQ_INSERT_TAIL(&head->so_incomp, so, so_list);
  479                 so->so_qstate |= SQ_INCOMP;
  480                 head->so_incqlen++;
  481         }
  482         ACCEPT_UNLOCK();
  483         if (connstatus) {
  484                 sorwakeup(head);
  485                 wakeup_one(&head->so_timeo);
  486         }
  487         return (so);
  488 }
  489 
  490 int
  491 sobind(struct socket *so, struct sockaddr *nam, struct thread *td)
  492 {
  493 
  494         return ((*so->so_proto->pr_usrreqs->pru_bind)(so, nam, td));
  495 }
  496 
  497 /*
  498  * solisten() transitions a socket from a non-listening state to a listening
  499  * state, but can also be used to update the listen queue depth on an
  500  * existing listen socket.  The protocol will call back into the sockets
  501  * layer using solisten_proto_check() and solisten_proto() to check and set
  502  * socket-layer listen state.  Call backs are used so that the protocol can
  503  * acquire both protocol and socket layer locks in whatever order is required
  504  * by the protocol.
  505  *
  506  * Protocol implementors are advised to hold the socket lock across the
  507  * socket-layer test and set to avoid races at the socket layer.
  508  */
  509 int
  510 solisten(struct socket *so, int backlog, struct thread *td)
  511 {
  512 
  513         return ((*so->so_proto->pr_usrreqs->pru_listen)(so, backlog, td));
  514 }
  515 
  516 int
  517 solisten_proto_check(struct socket *so)
  518 {
  519 
  520         SOCK_LOCK_ASSERT(so);
  521 
  522         if (so->so_state & (SS_ISCONNECTED | SS_ISCONNECTING |
  523             SS_ISDISCONNECTING))
  524                 return (EINVAL);
  525         return (0);
  526 }
  527 
  528 void
  529 solisten_proto(struct socket *so, int backlog)
  530 {
  531 
  532         SOCK_LOCK_ASSERT(so);
  533 
  534         if (backlog < 0 || backlog > somaxconn)
  535                 backlog = somaxconn;
  536         so->so_qlimit = backlog;
  537         so->so_options |= SO_ACCEPTCONN;
  538 }
  539 
  540 /*
  541  * Attempt to free a socket.  This should really be sotryfree().
  542  *
  543  * sofree() will succeed if:
  544  *
  545  * - There are no outstanding file descriptor references or related consumers
  546  *   (so_count == 0).
  547  *
  548  * - The socket has been closed by user space, if ever open (SS_NOFDREF).
  549  *
  550  * - The protocol does not have an outstanding strong reference on the socket
  551  *   (SS_PROTOREF).
  552  *
  553  * - The socket is not in a completed connection queue, where a process has
  554  *   already been notified of its presence.  If it were removed, the user
  555  *   process could block in accept() despite select() saying it was ready.
  556  *
  557  * Otherwise, it will quietly abort so that a future call to sofree(), when
  558  * conditions are right, can succeed.
  559  */
  560 void
  561 sofree(struct socket *so)
  562 {
  563         struct protosw *pr = so->so_proto;
  564         struct socket *head;
  565 
  566         ACCEPT_LOCK_ASSERT();
  567         SOCK_LOCK_ASSERT(so);
  568 
  569         if ((so->so_state & SS_NOFDREF) == 0 || so->so_count != 0 ||
  570             (so->so_state & SS_PROTOREF) || (so->so_qstate & SQ_COMP)) {
  571                 SOCK_UNLOCK(so);
  572                 ACCEPT_UNLOCK();
  573                 return;
  574         }
  575 
  576         head = so->so_head;
  577         if (head != NULL) {
  578                 KASSERT((so->so_qstate & SQ_COMP) != 0 ||
  579                     (so->so_qstate & SQ_INCOMP) != 0,
  580                     ("sofree: so_head != NULL, but neither SQ_COMP nor "
  581                     "SQ_INCOMP"));
  582                 KASSERT((so->so_qstate & SQ_COMP) == 0 ||
  583                     (so->so_qstate & SQ_INCOMP) == 0,
  584                     ("sofree: so->so_qstate is SQ_COMP and also SQ_INCOMP"));
  585                 TAILQ_REMOVE(&head->so_incomp, so, so_list);
  586                 head->so_incqlen--;
  587                 so->so_qstate &= ~SQ_INCOMP;
  588                 so->so_head = NULL;
  589         }
  590         KASSERT((so->so_qstate & SQ_COMP) == 0 &&
  591             (so->so_qstate & SQ_INCOMP) == 0,
  592             ("sofree: so_head == NULL, but still SQ_COMP(%d) or SQ_INCOMP(%d)",
  593             so->so_qstate & SQ_COMP, so->so_qstate & SQ_INCOMP));
  594         if (so->so_options & SO_ACCEPTCONN) {
  595                 KASSERT((TAILQ_EMPTY(&so->so_comp)), ("sofree: so_comp populated"));
  596                 KASSERT((TAILQ_EMPTY(&so->so_incomp)), ("sofree: so_incomp populated"));
  597         }
  598         SOCK_UNLOCK(so);
  599         ACCEPT_UNLOCK();
  600 
  601         if (pr->pr_flags & PR_RIGHTS && pr->pr_domain->dom_dispose != NULL)
  602                 (*pr->pr_domain->dom_dispose)(so->so_rcv.sb_mb);
  603         if (pr->pr_usrreqs->pru_detach != NULL)
  604                 (*pr->pr_usrreqs->pru_detach)(so);
  605 
  606         /*
  607          * From this point on, we assume that no other references to this
  608          * socket exist anywhere else in the stack.  Therefore, no locks need
  609          * to be acquired or held.
  610          *
  611          * We used to do a lot of socket buffer and socket locking here, as
  612  * well as invoke sorflush() and perform wakeups.  The direct calls to
  613  * dom_dispose() and sbrelease_internal() are an inlining of what was
  614  * necessary from sorflush().
  615  *
  616  * Notice that the socket buffer and kqueue state are torn down
  617  * before calling pru_detach.  This means that protocols should not
  618  * assume they can perform socket wakeups, etc., in their detach code.
  619          */
  620         sbdestroy(&so->so_snd, so);
  621         sbdestroy(&so->so_rcv, so);
  622         knlist_destroy(&so->so_rcv.sb_sel.si_note);
  623         knlist_destroy(&so->so_snd.sb_sel.si_note);
  624         sodealloc(so);
  625 }
  626 
  627 /*
  628  * Close a socket on last file table reference removal.  Initiate disconnect
  629  * if connected.  Free socket when disconnect complete.
  630  *
  631  * This function will sorele() the socket.  Note that soclose() may be called
  632  * prior to the ref count reaching zero.  The actual socket structure will
  633  * not be freed until the ref count reaches zero.
  634  */
  635 int
  636 soclose(struct socket *so)
  637 {
  638         int error = 0;
  639 
  640         KASSERT(!(so->so_state & SS_NOFDREF), ("soclose: SS_NOFDREF on enter"));
  641 
  642         funsetown(&so->so_sigio);
  643         if (so->so_state & SS_ISCONNECTED) {
  644                 if ((so->so_state & SS_ISDISCONNECTING) == 0) {
  645                         error = sodisconnect(so);
  646                         if (error)
  647                                 goto drop;
  648                 }
  649                 if (so->so_options & SO_LINGER) {
  650                         if ((so->so_state & SS_ISDISCONNECTING) &&
  651                             (so->so_state & SS_NBIO))
  652                                 goto drop;
  653                         while (so->so_state & SS_ISCONNECTED) {
  654                                 error = tsleep(&so->so_timeo,
  655                                     PSOCK | PCATCH, "soclos", so->so_linger * hz);
  656                                 if (error)
  657                                         break;
  658                         }
  659                 }
  660         }
  661 
  662 drop:
  663         if (so->so_proto->pr_usrreqs->pru_close != NULL)
  664                 (*so->so_proto->pr_usrreqs->pru_close)(so);
  665         if (so->so_options & SO_ACCEPTCONN) {
  666                 struct socket *sp;
  667                 ACCEPT_LOCK();
  668                 while ((sp = TAILQ_FIRST(&so->so_incomp)) != NULL) {
  669                         TAILQ_REMOVE(&so->so_incomp, sp, so_list);
  670                         so->so_incqlen--;
  671                         sp->so_qstate &= ~SQ_INCOMP;
  672                         sp->so_head = NULL;
  673                         ACCEPT_UNLOCK();
  674                         soabort(sp);
  675                         ACCEPT_LOCK();
  676                 }
  677                 while ((sp = TAILQ_FIRST(&so->so_comp)) != NULL) {
  678                         TAILQ_REMOVE(&so->so_comp, sp, so_list);
  679                         so->so_qlen--;
  680                         sp->so_qstate &= ~SQ_COMP;
  681                         sp->so_head = NULL;
  682                         ACCEPT_UNLOCK();
  683                         soabort(sp);
  684                         ACCEPT_LOCK();
  685                 }
  686                 ACCEPT_UNLOCK();
  687         }
  688         ACCEPT_LOCK();
  689         SOCK_LOCK(so);
  690         KASSERT((so->so_state & SS_NOFDREF) == 0, ("soclose: NOFDREF"));
  691         so->so_state |= SS_NOFDREF;
  692         sorele(so);
  693         return (error);
  694 }
  695 
  696 /*
  697  * soabort() is used to abruptly tear down a connection, such as when a
  698  * resource limit is reached (listen queue depth exceeded), or if a listen
  699  * socket is closed while there are sockets waiting to be accepted.
  700  *
  701  * This interface is tricky, because it is called on an unreferenced socket,
  702  * and must be called only by a thread that has actually removed the socket
  703  * from the listen queue it was on, or races with other threads are risked.
  704  *
  705  * This interface will call into the protocol code, so must not be called
  706  * with any socket locks held.  Protocols do call it while holding their own
  707  * recursible protocol mutexes, but this is something that should be subject
  708  * to review in the future.
  709  */
  710 void
  711 soabort(struct socket *so)
  712 {
  713 
  714         /*
  715          * In as much as is possible, assert that no references to this
  716          * socket are held.  This is not quite the same as asserting that the
  717          * current thread is responsible for arranging for no references, but
  718          * is as close as we can get for now.
  719          */
  720         KASSERT(so->so_count == 0, ("soabort: so_count"));
  721         KASSERT((so->so_state & SS_PROTOREF) == 0, ("soabort: SS_PROTOREF"));
  722         KASSERT(so->so_state & SS_NOFDREF, ("soabort: !SS_NOFDREF"));
  723         KASSERT((so->so_qstate & SQ_COMP) == 0, ("soabort: SQ_COMP"));
  724         KASSERT((so->so_qstate & SQ_INCOMP) == 0, ("soabort: SQ_INCOMP"));
  725 
  726         if (so->so_proto->pr_usrreqs->pru_abort != NULL)
  727                 (*so->so_proto->pr_usrreqs->pru_abort)(so);
  728         ACCEPT_LOCK();
  729         SOCK_LOCK(so);
  730         sofree(so);
  731 }
  732 
  733 int
  734 soaccept(struct socket *so, struct sockaddr **nam)
  735 {
  736         int error;
  737 
  738         SOCK_LOCK(so);
  739         KASSERT((so->so_state & SS_NOFDREF) != 0, ("soaccept: !NOFDREF"));
  740         so->so_state &= ~SS_NOFDREF;
  741         SOCK_UNLOCK(so);
  742         error = (*so->so_proto->pr_usrreqs->pru_accept)(so, nam);
  743         return (error);
  744 }
  745 
  746 int
  747 soconnect(struct socket *so, struct sockaddr *nam, struct thread *td)
  748 {
  749         int error;
  750 
  751         if (so->so_options & SO_ACCEPTCONN)
  752                 return (EOPNOTSUPP);
  753         /*
  754          * If protocol is connection-based, can only connect once.
  755          * Otherwise, if connected, try to disconnect first.  This allows
  756          * user to disconnect by connecting to, e.g., a null address.
  757          */
  758         if (so->so_state & (SS_ISCONNECTED|SS_ISCONNECTING) &&
  759             ((so->so_proto->pr_flags & PR_CONNREQUIRED) ||
  760             (error = sodisconnect(so)))) {
  761                 error = EISCONN;
  762         } else {
  763                 /*
  764                  * Prevent accumulated error from previous connection from
  765                  * biting us.
  766                  */
  767                 so->so_error = 0;
  768                 error = (*so->so_proto->pr_usrreqs->pru_connect)(so, nam, td);
  769         }
  770 
  771         return (error);
  772 }
  773 
  774 int
  775 soconnect2(struct socket *so1, struct socket *so2)
  776 {
  777 
  778         return ((*so1->so_proto->pr_usrreqs->pru_connect2)(so1, so2));
  779 }
  780 
  781 int
  782 sodisconnect(struct socket *so)
  783 {
  784         int error;
  785 
  786         if ((so->so_state & SS_ISCONNECTED) == 0)
  787                 return (ENOTCONN);
  788         if (so->so_state & SS_ISDISCONNECTING)
  789                 return (EALREADY);
  790         error = (*so->so_proto->pr_usrreqs->pru_disconnect)(so);
  791         return (error);
  792 }
  793 
  794 #ifdef ZERO_COPY_SOCKETS
  795 struct so_zerocopy_stats{
  796         int size_ok;
  797         int align_ok;
  798         int found_ifp;
  799 };
  800 struct so_zerocopy_stats so_zerocp_stats = {0,0,0};
  801 #include <netinet/in.h>
  802 #include <net/route.h>
  803 #include <netinet/in_pcb.h>
  804 #include <vm/vm.h>
  805 #include <vm/vm_page.h>
  806 #include <vm/vm_object.h>
  807 
  808 /*
  809  * sosend_copyin() is only used if zero copy sockets are enabled.  Otherwise
  810  * sosend_dgram() and sosend_generic() use m_uiotombuf().
  811  * 
  812  * sosend_copyin() accepts a uio and prepares an mbuf chain holding part or
  813  * all of the data referenced by the uio.  If desired, it uses zero-copy.
  814  * *space will be updated to reflect data copied in.
  815  *
  816  * NB: If atomic I/O is requested, the caller must already have checked that
  817  * space can hold resid bytes.
  818  *
  819  * NB: In the event of an error, the caller may need to free the partial
  820  * chain pointed to by *mpp.  The contents of both *uio and *space may be
  821  * modified even in the case of an error.
  822  */
  823 static int
  824 sosend_copyin(struct uio *uio, struct mbuf **retmp, int atomic, long *space,
  825     int flags)
  826 {
  827         struct mbuf *m, **mp, *top;
  828         long len, resid;
  829         int error;
  830 #ifdef ZERO_COPY_SOCKETS
  831         int cow_send;
  832 #endif
  833 
  834         *retmp = top = NULL;
  835         mp = &top;
  836         len = 0;
  837         resid = uio->uio_resid;
  838         error = 0;
  839         do {
  840 #ifdef ZERO_COPY_SOCKETS
  841                 cow_send = 0;
  842 #endif /* ZERO_COPY_SOCKETS */
  843                 if (resid >= MINCLSIZE) {
  844 #ifdef ZERO_COPY_SOCKETS
  845                         if (top == NULL) {
  846                                 m = m_gethdr(M_WAITOK, MT_DATA);
  847                                 m->m_pkthdr.len = 0;
  848                                 m->m_pkthdr.rcvif = NULL;
  849                         } else
  850                                 m = m_get(M_WAITOK, MT_DATA);
  851                         if (so_zero_copy_send &&
  852                             resid>=PAGE_SIZE &&
  853                             *space>=PAGE_SIZE &&
  854                             uio->uio_iov->iov_len>=PAGE_SIZE) {
  855                                 so_zerocp_stats.size_ok++;
  856                                 so_zerocp_stats.align_ok++;
  857                                 cow_send = socow_setup(m, uio);
  858                                 len = cow_send;
  859                         }
  860                         if (!cow_send) {
  861                                 m_clget(m, M_WAITOK);
  862                                 len = min(min(MCLBYTES, resid), *space);
  863                         }
  864 #else /* ZERO_COPY_SOCKETS */
  865                         if (top == NULL) {
  866                                 m = m_getcl(M_TRYWAIT, MT_DATA, M_PKTHDR);
  867                                 m->m_pkthdr.len = 0;
  868                                 m->m_pkthdr.rcvif = NULL;
  869                         } else
  870                                 m = m_getcl(M_TRYWAIT, MT_DATA, 0);
  871                         len = min(min(MCLBYTES, resid), *space);
  872 #endif /* ZERO_COPY_SOCKETS */
  873                 } else {
  874                         if (top == NULL) {
  875                                 m = m_gethdr(M_TRYWAIT, MT_DATA);
  876                                 m->m_pkthdr.len = 0;
  877                                 m->m_pkthdr.rcvif = NULL;
  878 
  879                                 len = min(min(MHLEN, resid), *space);
  880                                 /*
  881                                  * For datagram protocols, leave room
  882                                  * for protocol headers in first mbuf.
  883                                  */
  884                                 if (atomic && m && len < MHLEN)
  885                                         MH_ALIGN(m, len);
  886                         } else {
  887                                 m = m_get(M_TRYWAIT, MT_DATA);
  888                                 len = min(min(MLEN, resid), *space);
  889                         }
  890                 }
  891                 if (m == NULL) {
  892                         error = ENOBUFS;
  893                         goto out;
  894                 }
  895 
  896                 *space -= len;
  897 #ifdef ZERO_COPY_SOCKETS
  898                 if (cow_send)
  899                         error = 0;
  900                 else
  901 #endif /* ZERO_COPY_SOCKETS */
  902                 error = uiomove(mtod(m, void *), (int)len, uio);
  903                 resid = uio->uio_resid;
  904                 m->m_len = len;
  905                 *mp = m;
  906                 top->m_pkthdr.len += len;
  907                 if (error)
  908                         goto out;
  909                 mp = &m->m_next;
  910                 if (resid <= 0) {
  911                         if (flags & MSG_EOR)
  912                                 top->m_flags |= M_EOR;
  913                         break;
  914                 }
  915         } while (*space > 0 && atomic);
  916 out:
  917         *retmp = top;
  918         return (error);
  919 }
  920 #endif /* ZERO_COPY_SOCKETS */

  921 
  922 #define SBLOCKWAIT(f)   (((f) & MSG_DONTWAIT) ? 0 : SBL_WAIT)
  923 
  924 int
  925 sosend_dgram(struct socket *so, struct sockaddr *addr, struct uio *uio,
  926     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
  927 {
  928         long space, resid;
  929         int clen = 0, error, dontroute;
  930 #ifdef ZERO_COPY_SOCKETS
  931         int atomic = sosendallatonce(so) || top;
  932 #endif
  933 
  934         KASSERT(so->so_type == SOCK_DGRAM, ("sosend_dgram: !SOCK_DGRAM"));
  935         KASSERT(so->so_proto->pr_flags & PR_ATOMIC,
  936             ("sosend_dgram: !PR_ATOMIC"));
  937 
  938         if (uio != NULL)
  939                 resid = uio->uio_resid;
  940         else
  941                 resid = top->m_pkthdr.len;
  942         /*
  943          * In theory resid should be unsigned.  However, space must be
  944          * signed, as it might be less than 0 if we over-committed, and we
  945          * must use a signed comparison of space and resid.  On the other
  946          * hand, a negative resid causes us to loop sending 0-length
  947          * segments to the protocol.
  951          */
  952         if (resid < 0) {
  953                 error = EINVAL;
  954                 goto out;
  955         }
  956 
  957         dontroute =
  958             (flags & MSG_DONTROUTE) && (so->so_options & SO_DONTROUTE) == 0;
  959         if (td != NULL)
  960                 td->td_ru.ru_msgsnd++;
  961         if (control != NULL)
  962                 clen = control->m_len;
  963 
  964         SOCKBUF_LOCK(&so->so_snd);
  965         if (so->so_snd.sb_state & SBS_CANTSENDMORE) {
  966                 SOCKBUF_UNLOCK(&so->so_snd);
  967                 error = EPIPE;
  968                 goto out;
  969         }
  970         if (so->so_error) {
  971                 error = so->so_error;
  972                 so->so_error = 0;
  973                 SOCKBUF_UNLOCK(&so->so_snd);
  974                 goto out;
  975         }
  976         if ((so->so_state & SS_ISCONNECTED) == 0) {
  977                 /*
  978                  * `sendto' and `sendmsg' are allowed on a connection-based
  979                  * socket if it supports implied connect.  Return ENOTCONN if
  980                  * not connected and no address is supplied.
  981                  */
  982                 if ((so->so_proto->pr_flags & PR_CONNREQUIRED) &&
  983                     (so->so_proto->pr_flags & PR_IMPLOPCL) == 0) {
  984                         if ((so->so_state & SS_ISCONFIRMING) == 0 &&
  985                             !(resid == 0 && clen != 0)) {
  986                                 SOCKBUF_UNLOCK(&so->so_snd);
  987                                 error = ENOTCONN;
  988                                 goto out;
  989                         }
  990                 } else if (addr == NULL) {
  991                         if (so->so_proto->pr_flags & PR_CONNREQUIRED)
  992                                 error = ENOTCONN;
  993                         else
  994                                 error = EDESTADDRREQ;
  995                         SOCKBUF_UNLOCK(&so->so_snd);
  996                         goto out;
  997                 }
  998         }
  999 
 1000         /*
 1001          * Do we need MSG_OOB support in SOCK_DGRAM?  Signedness here may be
 1002          * a problem and need fixing.
 1003          */
 1004         space = sbspace(&so->so_snd);
 1005         if (flags & MSG_OOB)
 1006                 space += 1024;
 1007         space -= clen;
 1008         SOCKBUF_UNLOCK(&so->so_snd);
 1009         if (resid > space) {
 1010                 error = EMSGSIZE;
 1011                 goto out;
 1012         }
 1013         if (uio == NULL) {
 1014                 resid = 0;
 1015                 if (flags & MSG_EOR)
 1016                         top->m_flags |= M_EOR;
 1017         } else {
 1018 #ifdef ZERO_COPY_SOCKETS
 1019                 error = sosend_copyin(uio, &top, atomic, &space, flags);
 1020                 if (error)
 1021                         goto out;
 1022 #else
 1023                 /*
 1024                  * Copy the data from userland into a mbuf chain.
 1025                  * If no data is to be copied in, a single empty mbuf
 1026                  * is returned.
 1027                  */
 1028                 top = m_uiotombuf(uio, M_WAITOK, space, max_hdr,
 1029                     (M_PKTHDR | ((flags & MSG_EOR) ? M_EOR : 0)));
 1030                 if (top == NULL) {
 1031                         error = EFAULT; /* only possible error */
 1032                         goto out;
 1033                 }
 1034                 space -= resid - uio->uio_resid;
 1035 #endif
 1036                 resid = uio->uio_resid;
 1037         }
 1038         KASSERT(resid == 0, ("sosend_dgram: resid != 0"));
 1039         /*
 1040          * XXXRW: Frobbing SO_DONTROUTE here is even worse without sblock
 1041          * than with.
 1042          */
 1043         if (dontroute) {
 1044                 SOCK_LOCK(so);
 1045                 so->so_options |= SO_DONTROUTE;
 1046                 SOCK_UNLOCK(so);
 1047         }
 1048         /*
 1049          * XXX all the SBS_CANTSENDMORE checks previously done could be out
 1050          * of date.  We could have received a reset packet in an interrupt or
 1051          * maybe we slept while doing page faults in uiomove() etc.  We could
 1052          * probably recheck again inside the locking protection here, but
 1053          * there are probably other places that this also happens.  We must
 1054          * rethink this.
 1055          */
 1056         error = (*so->so_proto->pr_usrreqs->pru_send)(so,
 1057             (flags & MSG_OOB) ? PRUS_OOB :
 1058         /*
 1059          * If the user set MSG_EOF, the protocol understands this flag, and there
 1060          * is nothing left to send, then use PRU_SEND_EOF instead of PRU_SEND.
 1061          */
 1062             ((flags & MSG_EOF) &&
 1063              (so->so_proto->pr_flags & PR_IMPLOPCL) &&
 1064              (resid <= 0)) ?
 1065                 PRUS_EOF :
 1066                 /* If there is more to send set PRUS_MORETOCOME */
 1067                 (resid > 0 && space > 0) ? PRUS_MORETOCOME : 0,
 1068                 top, addr, control, td);
 1069         if (dontroute) {
 1070                 SOCK_LOCK(so);
 1071                 so->so_options &= ~SO_DONTROUTE;
 1072                 SOCK_UNLOCK(so);
 1073         }
 1074         clen = 0;
 1075         control = NULL;
 1076         top = NULL;
 1077 out:
 1078         if (top != NULL)
 1079                 m_freem(top);
 1080         if (control != NULL)
 1081                 m_freem(control);
 1082         return (error);
 1083 }
 1084 
 1085 /*
 1086  * Send on a socket.  If send must go all at once and message is larger than
 1087  * send buffering, then hard error.  Lock against other senders.  If must go
 1088  * all at once and not enough room now, then inform user that this would
 1089  * block and do nothing.  Otherwise, if nonblocking, send as much as
 1090  * possible.  The data to be sent is described by "uio" if nonzero, otherwise
 1091  * by the mbuf chain "top" (which must be null if uio is not).  Data provided
 1092  * in mbuf chain must be small enough to send all at once.
 1093  *
 1094  * Returns nonzero on error, timeout or signal; callers must check for short
 1095  * counts if EINTR/ERESTART are returned.  Data and control buffers are freed
 1096  * on return.
 1097  */
 1098 int
 1099 sosend_generic(struct socket *so, struct sockaddr *addr, struct uio *uio,
 1100     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
 1101 {
 1102         long space, resid;
 1103         int clen = 0, error, dontroute;
 1104         int atomic = sosendallatonce(so) || top;
 1105 
 1106         if (uio != NULL)
 1107                 resid = uio->uio_resid;
 1108         else
 1109                 resid = top->m_pkthdr.len;
 1110         /*
 1111          * In theory resid should be unsigned.  However, space must be
 1112          * signed, as it might be less than 0 if we over-committed, and we
 1113          * must use a signed comparison of space and resid.  On the other
 1114          * hand, a negative resid causes us to loop sending 0-length
 1115          * segments to the protocol.
 1116          *
 1117          * Also check to make sure that MSG_EOR isn't used on SOCK_STREAM
 1118          * type sockets since that's an error.
 1119          */
 1120         if (resid < 0 || (so->so_type == SOCK_STREAM && (flags & MSG_EOR))) {
 1121                 error = EINVAL;
 1122                 goto out;
 1123         }
 1124 
 1125         dontroute =
 1126             (flags & MSG_DONTROUTE) && (so->so_options & SO_DONTROUTE) == 0 &&
 1127             (so->so_proto->pr_flags & PR_ATOMIC);
 1128         if (td != NULL)
 1129                 td->td_ru.ru_msgsnd++;
 1130         if (control != NULL)
 1131                 clen = control->m_len;
 1132 
 1133         error = sblock(&so->so_snd, SBLOCKWAIT(flags));
 1134         if (error)
 1135                 goto out;
 1136 
 1137 restart:
 1138         do {
 1139                 SOCKBUF_LOCK(&so->so_snd);
 1140                 if (so->so_snd.sb_state & SBS_CANTSENDMORE) {
 1141                         SOCKBUF_UNLOCK(&so->so_snd);
 1142                         error = EPIPE;
 1143                         goto release;
 1144                 }
 1145                 if (so->so_error) {
 1146                         error = so->so_error;
 1147                         so->so_error = 0;
 1148                         SOCKBUF_UNLOCK(&so->so_snd);
 1149                         goto release;
 1150                 }
 1151                 if ((so->so_state & SS_ISCONNECTED) == 0) {
 1152                         /*
 1153                          * `sendto' and `sendmsg' are allowed on a connection-
 1154                          * based socket if it supports implied connect.
 1155                          * Return ENOTCONN if not connected and no address is
 1156                          * supplied.
 1157                          */
 1158                         if ((so->so_proto->pr_flags & PR_CONNREQUIRED) &&
 1159                             (so->so_proto->pr_flags & PR_IMPLOPCL) == 0) {
 1160                                 if ((so->so_state & SS_ISCONFIRMING) == 0 &&
 1161                                     !(resid == 0 && clen != 0)) {
 1162                                         SOCKBUF_UNLOCK(&so->so_snd);
 1163                                         error = ENOTCONN;
 1164                                         goto release;
 1165                                 }
 1166                         } else if (addr == NULL) {
 1167                                 SOCKBUF_UNLOCK(&so->so_snd);
 1168                                 if (so->so_proto->pr_flags & PR_CONNREQUIRED)
 1169                                         error = ENOTCONN;
 1170                                 else
 1171                                         error = EDESTADDRREQ;
 1172                                 goto release;
 1173                         }
 1174                 }
 1175                 space = sbspace(&so->so_snd);
 1176                 if (flags & MSG_OOB)
 1177                         space += 1024;
 1178                 if ((atomic && resid > so->so_snd.sb_hiwat) ||
 1179                     clen > so->so_snd.sb_hiwat) {
 1180                         SOCKBUF_UNLOCK(&so->so_snd);
 1181                         error = EMSGSIZE;
 1182                         goto release;
 1183                 }
 1184                 if (space < resid + clen &&
 1185                     (atomic || space < so->so_snd.sb_lowat || space < clen)) {
 1186                         if ((so->so_state & SS_NBIO) || (flags & MSG_NBIO)) {
 1187                                 SOCKBUF_UNLOCK(&so->so_snd);
 1188                                 error = EWOULDBLOCK;
 1189                                 goto release;
 1190                         }
 1191                         error = sbwait(&so->so_snd);
 1192                         SOCKBUF_UNLOCK(&so->so_snd);
 1193                         if (error)
 1194                                 goto release;
 1195                         goto restart;
 1196                 }
 1197                 SOCKBUF_UNLOCK(&so->so_snd);
 1198                 space -= clen;
 1199                 do {
 1200                         if (uio == NULL) {
 1201                                 resid = 0;
 1202                                 if (flags & MSG_EOR)
 1203                                         top->m_flags |= M_EOR;
 1204                         } else {
 1205 #ifdef ZERO_COPY_SOCKETS
 1206                                 error = sosend_copyin(uio, &top, atomic,
 1207                                     &space, flags);
 1208                                 if (error != 0)
 1209                                         goto release;
 1210 #else
 1211                                 /*
 1212                                  * Copy the data from userland into a mbuf
 1213                                  * chain.  If no data is to be copied in,
 1214                                  * a single empty mbuf is returned.
 1215                                  */
 1216                                 top = m_uiotombuf(uio, M_WAITOK, space,
 1217                                     (atomic ? max_hdr : 0),
 1218                                     (atomic ? M_PKTHDR : 0) |
 1219                                     ((flags & MSG_EOR) ? M_EOR : 0));
 1220                                 if (top == NULL) {
 1221                                         error = EFAULT; /* only possible error */
 1222                                         goto release;
 1223                                 }
 1224                                 space -= resid - uio->uio_resid;
 1225 #endif
 1226                                 resid = uio->uio_resid;
 1227                         }
 1228                         if (dontroute) {
 1229                                 SOCK_LOCK(so);
 1230                                 so->so_options |= SO_DONTROUTE;
 1231                                 SOCK_UNLOCK(so);
 1232                         }
 1233                         /*
 1234                          * XXX all the SBS_CANTSENDMORE checks previously
 1235                          * done could be out of date.  We could have received
 1236                          * a reset packet in an interrupt or maybe we slept
 1237                          * while doing page faults in uiomove() etc.  We
 1238                          * could probably recheck again inside the locking
 1239                          * protection here, but there are probably other
 1240                          * places that this also happens.  We must rethink
 1241                          * this.
 1242                          */
 1243                         error = (*so->so_proto->pr_usrreqs->pru_send)(so,
 1244                             (flags & MSG_OOB) ? PRUS_OOB :
 1245                         /*
 1246                          * If the user set MSG_EOF, the protocol understands
 1247                          * this flag, and there is nothing left to send, then
 1248                          * use PRU_SEND_EOF instead of PRU_SEND.
 1249                          */
 1250                             ((flags & MSG_EOF) &&
 1251                              (so->so_proto->pr_flags & PR_IMPLOPCL) &&
 1252                              (resid <= 0)) ?
 1253                                 PRUS_EOF :
 1254                         /* If there is more to send set PRUS_MORETOCOME. */
 1255                             (resid > 0 && space > 0) ? PRUS_MORETOCOME : 0,
 1256                             top, addr, control, td);
 1257                         if (dontroute) {
 1258                                 SOCK_LOCK(so);
 1259                                 so->so_options &= ~SO_DONTROUTE;
 1260                                 SOCK_UNLOCK(so);
 1261                         }
 1262                         clen = 0;
 1263                         control = NULL;
 1264                         top = NULL;
 1265                         if (error)
 1266                                 goto release;
 1267                 } while (resid && space > 0);
 1268         } while (resid);
 1269 
 1270 release:
 1271         sbunlock(&so->so_snd);
 1272 out:
 1273         if (top != NULL)
 1274                 m_freem(top);
 1275         if (control != NULL)
 1276                 m_freem(control);
 1277         return (error);
 1278 }
 1279 
 1280 int
 1281 sosend(struct socket *so, struct sockaddr *addr, struct uio *uio,
 1282     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
 1283 {
 1284 
 1285         return (so->so_proto->pr_usrreqs->pru_sosend(so, addr, uio, top,
 1286             control, flags, td));
 1287 }
 1288 
 1289 /*
 1290  * The part of soreceive() that implements reading non-inline out-of-band
 1291  * data from a socket.  For more complete comments, see soreceive(), from
 1292  * which this code originated.
 1293  *
 1294  * Note that soreceive_rcvoob(), unlike the remainder of soreceive(), is
 1295  * unable to return an mbuf chain to the caller.
 1296  */
 1297 static int
 1298 soreceive_rcvoob(struct socket *so, struct uio *uio, int flags)
 1299 {
 1300         struct protosw *pr = so->so_proto;
 1301         struct mbuf *m;
 1302         int error;
 1303 
 1304         KASSERT(flags & MSG_OOB, ("soreceive_rcvoob: (flags & MSG_OOB) == 0"));
 1305 
 1306         m = m_get(M_TRYWAIT, MT_DATA);
 1307         if (m == NULL)
 1308                 return (ENOBUFS);
 1309         error = (*pr->pr_usrreqs->pru_rcvoob)(so, m, flags & MSG_PEEK);
 1310         if (error)
 1311                 goto bad;
 1312         do {
 1313 #ifdef ZERO_COPY_SOCKETS
 1314                 if (so_zero_copy_receive) {
 1315                         int disposable;
 1316 
 1317                         if ((m->m_flags & M_EXT)
 1318                          && (m->m_ext.ext_type == EXT_DISPOSABLE))
 1319                                 disposable = 1;
 1320                         else
 1321                                 disposable = 0;
 1322 
 1323                         error = uiomoveco(mtod(m, void *),
 1324                                           min(uio->uio_resid, m->m_len),
 1325                                           uio, disposable);
 1326                 } else
 1327 #endif /* ZERO_COPY_SOCKETS */
 1328                 error = uiomove(mtod(m, void *),
 1329                     (int) min(uio->uio_resid, m->m_len), uio);
 1330                 m = m_free(m);
 1331         } while (uio->uio_resid && error == 0 && m);
 1332 bad:
 1333         if (m != NULL)
 1334                 m_freem(m);
 1335         return (error);
 1336 }
 1337 
 1338 /*
 1339  * Following replacement or removal of the first mbuf on the first mbuf chain
 1340  * of a socket buffer, push necessary state changes back into the socket
 1341  * buffer so that other consumers see the values consistently.  'nextrecord'
 1342  * is the caller's locally stored value of the original value of
 1343  * sb->sb_mb->m_nextpkt, which must be restored when the lead mbuf changes.
 1344  * NOTE: 'nextrecord' may be NULL.
 1345  */
 1346 static __inline void
 1347 sockbuf_pushsync(struct sockbuf *sb, struct mbuf *nextrecord)
 1348 {
 1349 
 1350         SOCKBUF_LOCK_ASSERT(sb);
 1351         /*
 1352          * First, update for the new value of nextrecord.  If necessary, make
 1353          * it the first record.
 1354          */
 1355         if (sb->sb_mb != NULL)
 1356                 sb->sb_mb->m_nextpkt = nextrecord;
 1357         else
 1358                 sb->sb_mb = nextrecord;
 1359 
 1360         /*
 1361          * Now update any dependent socket buffer fields to reflect the new
 1362          * state.  This is an expanded inline of SB_EMPTY_FIXUP(), with the
 1363          * addition of a second clause that takes care of the case where
 1364          * sb_mb has been updated, but remains the last record.
 1365          */
 1366         if (sb->sb_mb == NULL) {
 1367                 sb->sb_mbtail = NULL;
 1368                 sb->sb_lastrecord = NULL;
 1369         } else if (sb->sb_mb->m_nextpkt == NULL)
 1370                 sb->sb_lastrecord = sb->sb_mb;
 1371 }
 1372 
 1373 
 1374 /*
 1375  * Implement receive operations on a socket.  We depend on the way that
 1376  * records are added to the sockbuf by sbappend.  In particular, each record
 1377  * (mbufs linked through m_next) must begin with an address if the protocol
 1378  * so specifies, followed by an optional mbuf or mbufs containing ancillary
 1379  * data, and then zero or more mbufs of data.  In order to allow parallelism
 1380  * between network receive and copying to user space, as well as avoid
 1381  * sleeping with a mutex held, we release the socket buffer mutex during the
 1382  * user space copy.  Although the sockbuf is locked, new data may still be
 1383  * appended, and thus we must maintain consistency of the sockbuf during that
 1384  * time.
 1385  *
 1386  * The caller may receive the data as a single mbuf chain by supplying an
 1387  * mbuf **mp0 for use in returning the chain.  The uio is then used only for
 1388  * the count in uio_resid.
 1389  */
 1390 int
 1391 soreceive_generic(struct socket *so, struct sockaddr **psa, struct uio *uio,
 1392     struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 1393 {
 1394         struct mbuf *m, **mp;
 1395         int flags, len, error, offset;
 1396         struct protosw *pr = so->so_proto;
 1397         struct mbuf *nextrecord;
 1398         int moff, type = 0;
 1399         int orig_resid = uio->uio_resid;
 1400 
 1401         mp = mp0;
 1402         if (psa != NULL)
 1403                 *psa = NULL;
 1404         if (controlp != NULL)
 1405                 *controlp = NULL;
 1406         if (flagsp != NULL)
 1407                 flags = *flagsp &~ MSG_EOR;
 1408         else
 1409                 flags = 0;
 1410         if (flags & MSG_OOB)
 1411                 return (soreceive_rcvoob(so, uio, flags));
 1412         if (mp != NULL)
 1413                 *mp = NULL;
 1414         if ((pr->pr_flags & PR_WANTRCVD) && (so->so_state & SS_ISCONFIRMING)
 1415             && uio->uio_resid)
 1416                 (*pr->pr_usrreqs->pru_rcvd)(so, 0);
 1417 
 1418         error = sblock(&so->so_rcv, SBLOCKWAIT(flags));
 1419         if (error)
 1420                 return (error);
 1421 
 1422 restart:
 1423         SOCKBUF_LOCK(&so->so_rcv);
 1424         m = so->so_rcv.sb_mb;
 1425         /*
 1426          * If we have less data than requested, block awaiting more (subject
 1427          * to any timeout) if:
 1428          *   1. the current count is less than the low water mark, or
 1429          *   2. MSG_WAITALL is set, and it is possible to do the entire
 1430          *      receive operation at once if we block (resid <= hiwat), and
 1431          *   3. MSG_DONTWAIT is not set.
 1432          * If MSG_WAITALL is set but resid is larger than the receive buffer,
 1433          * we have to do the receive in sections, and thus risk returning a
 1434          * short count if a timeout or signal occurs after we start.
 1435          */
 1436         if (m == NULL || (((flags & MSG_DONTWAIT) == 0 &&
 1437             so->so_rcv.sb_cc < uio->uio_resid) &&
 1438             (so->so_rcv.sb_cc < so->so_rcv.sb_lowat ||
 1439             ((flags & MSG_WAITALL) && uio->uio_resid <= so->so_rcv.sb_hiwat)) &&
 1440             m->m_nextpkt == NULL && (pr->pr_flags & PR_ATOMIC) == 0)) {
 1441                 KASSERT(m != NULL || !so->so_rcv.sb_cc,
 1442                     ("receive: m == %p so->so_rcv.sb_cc == %u",
 1443                     m, so->so_rcv.sb_cc));
 1444                 if (so->so_error) {
 1445                         if (m != NULL)
 1446                                 goto dontblock;
 1447                         error = so->so_error;
 1448                         if ((flags & MSG_PEEK) == 0)
 1449                                 so->so_error = 0;
 1450                         SOCKBUF_UNLOCK(&so->so_rcv);
 1451                         goto release;
 1452                 }
 1453                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1454                 if (so->so_rcv.sb_state & SBS_CANTRCVMORE) {
 1455                         if (m == NULL) {
 1456                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1457                                 goto release;
 1458                         } else
 1459                                 goto dontblock;
 1460                 }
 1461                 for (; m != NULL; m = m->m_next)
 1462                         if (m->m_type == MT_OOBDATA || (m->m_flags & M_EOR)) {
 1463                                 m = so->so_rcv.sb_mb;
 1464                                 goto dontblock;
 1465                         }
 1466                 if ((so->so_state & (SS_ISCONNECTED|SS_ISCONNECTING)) == 0 &&
 1467                     (so->so_proto->pr_flags & PR_CONNREQUIRED)) {
 1468                         SOCKBUF_UNLOCK(&so->so_rcv);
 1469                         error = ENOTCONN;
 1470                         goto release;
 1471                 }
 1472                 if (uio->uio_resid == 0) {
 1473                         SOCKBUF_UNLOCK(&so->so_rcv);
 1474                         goto release;
 1475                 }
 1476                 if ((so->so_state & SS_NBIO) ||
 1477                     (flags & (MSG_DONTWAIT|MSG_NBIO))) {
 1478                         SOCKBUF_UNLOCK(&so->so_rcv);
 1479                         error = EWOULDBLOCK;
 1480                         goto release;
 1481                 }
 1482                 SBLASTRECORDCHK(&so->so_rcv);
 1483                 SBLASTMBUFCHK(&so->so_rcv);
 1484                 error = sbwait(&so->so_rcv);
 1485                 SOCKBUF_UNLOCK(&so->so_rcv);
 1486                 if (error)
 1487                         goto release;
 1488                 goto restart;
 1489         }
 1490 dontblock:
 1491         /*
 1492          * From this point onward, we maintain 'nextrecord' as a cache of the
 1493          * pointer to the next record in the socket buffer.  We must keep the
 1494          * various socket buffer pointers and local stack versions of the
 1495          * pointers in sync, pushing out modifications before dropping the
 1496          * socket buffer mutex, and re-reading them when picking it up.
 1497          *
 1498          * Otherwise, we will race with the network stack appending new data
 1499          * or records onto the socket buffer by using inconsistent/stale
 1500          * versions of the field, possibly resulting in socket buffer
 1501          * corruption.
 1502          *
 1503          * By holding the high-level sblock(), we prevent simultaneous
 1504          * readers from pulling off the front of the socket buffer.
 1505          */
 1506         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1507         if (uio->uio_td)
 1508                 uio->uio_td->td_ru.ru_msgrcv++;
 1509         KASSERT(m == so->so_rcv.sb_mb, ("soreceive: m != so->so_rcv.sb_mb"));
 1510         SBLASTRECORDCHK(&so->so_rcv);
 1511         SBLASTMBUFCHK(&so->so_rcv);
 1512         nextrecord = m->m_nextpkt;
 1513         if (pr->pr_flags & PR_ADDR) {
 1514                 KASSERT(m->m_type == MT_SONAME,
 1515                     ("m->m_type == %d", m->m_type));
 1516                 orig_resid = 0;
 1517                 if (psa != NULL)
 1518                         *psa = sodupsockaddr(mtod(m, struct sockaddr *),
 1519                             M_NOWAIT);
 1520                 if (flags & MSG_PEEK) {
 1521                         m = m->m_next;
 1522                 } else {
 1523                         sbfree(&so->so_rcv, m);
 1524                         so->so_rcv.sb_mb = m_free(m);
 1525                         m = so->so_rcv.sb_mb;
 1526                         sockbuf_pushsync(&so->so_rcv, nextrecord);
 1527                 }
 1528         }
 1529 
 1530         /*
 1531          * Process one or more MT_CONTROL mbufs present before any data mbufs
 1532          * in the first mbuf chain on the socket buffer.  If MSG_PEEK, we
 1533          * just copy the data; if !MSG_PEEK, we call into the protocol to
 1534          * perform externalization (or freeing if controlp == NULL).
 1535          */
 1536         if (m != NULL && m->m_type == MT_CONTROL) {
 1537                 struct mbuf *cm = NULL, *cmn;
 1538                 struct mbuf **cme = &cm;
 1539 
 1540                 do {
 1541                         if (flags & MSG_PEEK) {
 1542                                 if (controlp != NULL) {
 1543                                         *controlp = m_copy(m, 0, m->m_len);
 1544                                         controlp = &(*controlp)->m_next;
 1545                                 }
 1546                                 m = m->m_next;
 1547                         } else {
 1548                                 sbfree(&so->so_rcv, m);
 1549                                 so->so_rcv.sb_mb = m->m_next;
 1550                                 m->m_next = NULL;
 1551                                 *cme = m;
 1552                                 cme = &(*cme)->m_next;
 1553                                 m = so->so_rcv.sb_mb;
 1554                         }
 1555                 } while (m != NULL && m->m_type == MT_CONTROL);
 1556                 if ((flags & MSG_PEEK) == 0)
 1557                         sockbuf_pushsync(&so->so_rcv, nextrecord);
 1558                 while (cm != NULL) {
 1559                         cmn = cm->m_next;
 1560                         cm->m_next = NULL;
 1561                         if (pr->pr_domain->dom_externalize != NULL) {
 1562                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1563                                 error = (*pr->pr_domain->dom_externalize)
 1564                                     (cm, controlp);
 1565                                 SOCKBUF_LOCK(&so->so_rcv);
 1566                         } else if (controlp != NULL)
 1567                                 *controlp = cm;
 1568                         else
 1569                                 m_freem(cm);
 1570                         if (controlp != NULL) {
 1571                                 orig_resid = 0;
 1572                                 while (*controlp != NULL)
 1573                                         controlp = &(*controlp)->m_next;
 1574                         }
 1575                         cm = cmn;
 1576                 }
 1577                 if (m != NULL)
 1578                         nextrecord = so->so_rcv.sb_mb->m_nextpkt;
 1579                 else
 1580                         nextrecord = so->so_rcv.sb_mb;
 1581                 orig_resid = 0;
 1582         }
 1583         if (m != NULL) {
 1584                 if ((flags & MSG_PEEK) == 0) {
 1585                         KASSERT(m->m_nextpkt == nextrecord,
 1586                             ("soreceive: post-control, nextrecord !sync"));
 1587                         if (nextrecord == NULL) {
 1588                                 KASSERT(so->so_rcv.sb_mb == m,
 1589                                     ("soreceive: post-control, sb_mb!=m"));
 1590                                 KASSERT(so->so_rcv.sb_lastrecord == m,
 1591                                     ("soreceive: post-control, lastrecord!=m"));
 1592                         }
 1593                 }
 1594                 type = m->m_type;
 1595                 if (type == MT_OOBDATA)
 1596                         flags |= MSG_OOB;
 1597         } else {
 1598                 if ((flags & MSG_PEEK) == 0) {
 1599                         KASSERT(so->so_rcv.sb_mb == nextrecord,
 1600                             ("soreceive: sb_mb != nextrecord"));
 1601                         if (so->so_rcv.sb_mb == NULL) {
 1602                                 KASSERT(so->so_rcv.sb_lastrecord == NULL,
 1603                                     ("soreceive: sb_lastrecord != NULL"));
 1604                         }
 1605                 }
 1606         }
 1607         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1608         SBLASTRECORDCHK(&so->so_rcv);
 1609         SBLASTMBUFCHK(&so->so_rcv);
 1610 
 1611         /*
 1612          * Now continue to read any data mbufs off of the head of the socket
 1613          * buffer until the read request is satisfied.  Note that 'type'
 1614          * records the type of the mbufs read so far, so that soreceive()
 1615          * can stop reading if the type changes, which causes soreceive()
 1616          * to return either regular data or inline out-of-band data, but
 1617          * never both, in a single socket receive operation.
 1618          */
 1619         moff = 0;
 1620         offset = 0;
 1621         while (m != NULL && uio->uio_resid > 0 && error == 0) {
 1622                 /*
 1623                  * If the type of mbuf has changed since the last mbuf
 1624                  * examined ('type'), end the receive operation.
 1625                  */
 1626                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1627                 if (m->m_type == MT_OOBDATA) {
 1628                         if (type != MT_OOBDATA)
 1629                                 break;
 1630                 } else if (type == MT_OOBDATA)
 1631                         break;
 1632                 else
 1633                         KASSERT(m->m_type == MT_DATA,
 1634                             ("m->m_type == %d", m->m_type));
 1635                 so->so_rcv.sb_state &= ~SBS_RCVATMARK;
 1636                 len = uio->uio_resid;
 1637                 if (so->so_oobmark && len > so->so_oobmark - offset)
 1638                         len = so->so_oobmark - offset;
 1639                 if (len > m->m_len - moff)
 1640                         len = m->m_len - moff;
 1641                 /*
 1642                  * If mp is set, just pass back the mbufs.  Otherwise copy
 1643                  * them out via the uio, then free.  The sockbuf must be
 1644                  * consistent here (sb_mb points to the current mbuf, whose
 1645                  * m_nextpkt points to the next record) when we drop the
 1646                  * lock; we must note any additions when we reacquire it.
 1647                  */
 1648                 if (mp == NULL) {
 1649                         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1650                         SBLASTRECORDCHK(&so->so_rcv);
 1651                         SBLASTMBUFCHK(&so->so_rcv);
 1652                         SOCKBUF_UNLOCK(&so->so_rcv);
 1653 #ifdef ZERO_COPY_SOCKETS
 1654                         if (so_zero_copy_receive) {
 1655                                 int disposable;
 1656 
 1657                                 if ((m->m_flags & M_EXT)
 1658                                  && (m->m_ext.ext_type == EXT_DISPOSABLE))
 1659                                         disposable = 1;
 1660                                 else
 1661                                         disposable = 0;
 1662 
 1663                                 error = uiomoveco(mtod(m, char *) + moff,
 1664                                                   (int)len, uio,
 1665                                                   disposable);
 1666                         } else
 1667 #endif /* ZERO_COPY_SOCKETS */
 1668                         error = uiomove(mtod(m, char *) + moff, (int)len, uio);
 1669                         SOCKBUF_LOCK(&so->so_rcv);
 1670                         if (error) {
 1671                                 /*
 1672                                  * The MT_SONAME mbuf has already been removed
 1673                                  * from the record, so it is necessary to
 1674                                  * remove the data mbufs, if any, to preserve
 1675                                  * the invariant in the case of PR_ADDR that
 1676                                  * requires MT_SONAME mbufs at the head of
 1677                                  * each record.
 1678                                  */
 1679                                 if (m && pr->pr_flags & PR_ATOMIC &&
 1680                                     ((flags & MSG_PEEK) == 0))
 1681                                         (void)sbdroprecord_locked(&so->so_rcv);
 1682                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1683                                 goto release;
 1684                         }
 1685                 } else
 1686                         uio->uio_resid -= len;
 1687                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1688                 if (len == m->m_len - moff) {
 1689                         if (m->m_flags & M_EOR)
 1690                                 flags |= MSG_EOR;
 1691                         if (flags & MSG_PEEK) {
 1692                                 m = m->m_next;
 1693                                 moff = 0;
 1694                         } else {
 1695                                 nextrecord = m->m_nextpkt;
 1696                                 sbfree(&so->so_rcv, m);
 1697                                 if (mp != NULL) {
 1698                                         *mp = m;
 1699                                         mp = &m->m_next;
 1700                                         so->so_rcv.sb_mb = m = m->m_next;
 1701                                         *mp = NULL;
 1702                                 } else {
 1703                                         so->so_rcv.sb_mb = m_free(m);
 1704                                         m = so->so_rcv.sb_mb;
 1705                                 }
 1706                                 sockbuf_pushsync(&so->so_rcv, nextrecord);
 1707                                 SBLASTRECORDCHK(&so->so_rcv);
 1708                                 SBLASTMBUFCHK(&so->so_rcv);
 1709                         }
 1710                 } else {
 1711                         if (flags & MSG_PEEK)
 1712                                 moff += len;
 1713                         else {
 1714                                 if (mp != NULL) {
 1715                                         int copy_flag;
 1716 
 1717                                         if (flags & MSG_DONTWAIT)
 1718                                                 copy_flag = M_DONTWAIT;
 1719                                         else
 1720                                                 copy_flag = M_TRYWAIT;
 1721                                         if (copy_flag == M_TRYWAIT)
 1722                                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1723                                         *mp = m_copym(m, 0, len, copy_flag);
 1724                                         if (copy_flag == M_TRYWAIT)
 1725                                                 SOCKBUF_LOCK(&so->so_rcv);
 1726                                         if (*mp == NULL) {
 1727                                                 /*
 1728                                                  * m_copym() couldn't
 1729                                                  * allocate an mbuf.  Adjust
 1730                                                  * uio_resid back (it was
 1731                                                  * adjusted down by len
 1732                                                  * bytes, which we didn't end
 1733                                                  * up "copying" over).
 1734                                                  */
 1735                                                 uio->uio_resid += len;
 1736                                                 break;
 1737                                         }
 1738                                 }
 1739                                 m->m_data += len;
 1740                                 m->m_len -= len;
 1741                                 so->so_rcv.sb_cc -= len;
 1742                         }
 1743                 }
 1744                 SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1745                 if (so->so_oobmark) {
 1746                         if ((flags & MSG_PEEK) == 0) {
 1747                                 so->so_oobmark -= len;
 1748                                 if (so->so_oobmark == 0) {
 1749                                         so->so_rcv.sb_state |= SBS_RCVATMARK;
 1750                                         break;
 1751                                 }
 1752                         } else {
 1753                                 offset += len;
 1754                                 if (offset == so->so_oobmark)
 1755                                         break;
 1756                         }
 1757                 }
 1758                 if (flags & MSG_EOR)
 1759                         break;
 1760                 /*
 1761                  * If the MSG_WAITALL flag is set (for a non-atomic socket), we
 1762                  * must not quit until "uio->uio_resid == 0" or an error
 1763                  * termination.  If a signal/timeout occurs, return with a
 1764                  * short count but without error.  Keep sockbuf locked
 1765                  * against other readers.
 1766                  */
 1767                 while (flags & MSG_WAITALL && m == NULL && uio->uio_resid > 0 &&
 1768                     !sosendallatonce(so) && nextrecord == NULL) {
 1769                         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1770                         if (so->so_error || so->so_rcv.sb_state & SBS_CANTRCVMORE)
 1771                                 break;
 1772                         /*
 1773                          * Notify the protocol that some data has been
 1774                          * drained before blocking.
 1775                          */
 1776                         if (pr->pr_flags & PR_WANTRCVD) {
 1777                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1778                                 (*pr->pr_usrreqs->pru_rcvd)(so, flags);
 1779                                 SOCKBUF_LOCK(&so->so_rcv);
 1780                         }
 1781                         SBLASTRECORDCHK(&so->so_rcv);
 1782                         SBLASTMBUFCHK(&so->so_rcv);
 1783                         error = sbwait(&so->so_rcv);
 1784                         if (error) {
 1785                                 SOCKBUF_UNLOCK(&so->so_rcv);
 1786                                 goto release;
 1787                         }
 1788                         m = so->so_rcv.sb_mb;
 1789                         if (m != NULL)
 1790                                 nextrecord = m->m_nextpkt;
 1791                 }
 1792         }
 1793 
 1794         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1795         if (m != NULL && pr->pr_flags & PR_ATOMIC) {
 1796                 flags |= MSG_TRUNC;
 1797                 if ((flags & MSG_PEEK) == 0)
 1798                         (void) sbdroprecord_locked(&so->so_rcv);
 1799         }
 1800         if ((flags & MSG_PEEK) == 0) {
 1801                 if (m == NULL) {
 1802                         /*
 1803                          * First part is an inline SB_EMPTY_FIXUP().  Second
 1804                          * part makes sure sb_lastrecord is up-to-date if
 1805                          * there is still data in the socket buffer.
 1806                          */
 1807                         so->so_rcv.sb_mb = nextrecord;
 1808                         if (so->so_rcv.sb_mb == NULL) {
 1809                                 so->so_rcv.sb_mbtail = NULL;
 1810                                 so->so_rcv.sb_lastrecord = NULL;
 1811                         } else if (nextrecord->m_nextpkt == NULL)
 1812                                 so->so_rcv.sb_lastrecord = nextrecord;
 1813                 }
 1814                 SBLASTRECORDCHK(&so->so_rcv);
 1815                 SBLASTMBUFCHK(&so->so_rcv);
 1816                 /*
 1817                  * If soreceive() is being done from the socket callback, we
 1818                  * don't need to generate an ACK to the peer to update the
 1819                  * window, since an ACK will be generated on return to TCP.
 1820                  */
 1821                 if (!(flags & MSG_SOCALLBCK) &&
 1822                     (pr->pr_flags & PR_WANTRCVD)) {
 1823                         SOCKBUF_UNLOCK(&so->so_rcv);
 1824                         (*pr->pr_usrreqs->pru_rcvd)(so, flags);
 1825                         SOCKBUF_LOCK(&so->so_rcv);
 1826                 }
 1827         }
 1828         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1829         if (orig_resid == uio->uio_resid && orig_resid &&
 1830             (flags & MSG_EOR) == 0 && (so->so_rcv.sb_state & SBS_CANTRCVMORE) == 0) {
 1831                 SOCKBUF_UNLOCK(&so->so_rcv);
 1832                 goto restart;
 1833         }
 1834         SOCKBUF_UNLOCK(&so->so_rcv);
 1835 
 1836         if (flagsp != NULL)
 1837                 *flagsp |= flags;
 1838 release:
 1839         sbunlock(&so->so_rcv);
 1840         return (error);
 1841 }
 1842 
 1843 /*
 1844  * Optimized version of soreceive() for simple datagram cases from userspace.
 1845  * Unlike in the stream case, we're able to drop a datagram if copyout()
 1846  * fails, and because we handle datagrams atomically, we don't need to use a
 1847  * sleep lock to prevent I/O interlacing.
 1848  */
 1849 int
 1850 soreceive_dgram(struct socket *so, struct sockaddr **psa, struct uio *uio,
 1851     struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 1852 {
 1853         struct mbuf *m, *m2;
 1854         int flags, len, error;
 1855         struct protosw *pr = so->so_proto;
 1856         struct mbuf *nextrecord;
 1857 
 1858         if (psa != NULL)
 1859                 *psa = NULL;
 1860         if (controlp != NULL)
 1861                 *controlp = NULL;
 1862         if (flagsp != NULL)
 1863                 flags = *flagsp &~ MSG_EOR;
 1864         else
 1865                 flags = 0;
 1866 
 1867         /*
 1868          * For any complicated cases, fall back to the full
 1869          * soreceive_generic().
 1870          */
 1871         if (mp0 != NULL || (flags & MSG_PEEK) || (flags & MSG_OOB))
 1872                 return (soreceive_generic(so, psa, uio, mp0, controlp,
 1873                     flagsp));
 1874 
 1875         /*
 1876          * Enforce restrictions on use.
 1877          */
 1878         KASSERT((pr->pr_flags & PR_WANTRCVD) == 0,
 1879             ("soreceive_dgram: wantrcvd"));
 1880         KASSERT(pr->pr_flags & PR_ATOMIC, ("soreceive_dgram: !atomic"));
 1881         KASSERT((so->so_rcv.sb_state & SBS_RCVATMARK) == 0,
 1882             ("soreceive_dgram: SBS_RCVATMARK"));
 1883         KASSERT((so->so_proto->pr_flags & PR_CONNREQUIRED) == 0,
 1884             ("soreceive_dgram: PR_CONNREQUIRED"));
 1885 
 1886         /*
 1887          * Loop blocking while waiting for a datagram.
 1888          */
 1889         SOCKBUF_LOCK(&so->so_rcv);
 1890         while ((m = so->so_rcv.sb_mb) == NULL) {
 1891                 KASSERT(so->so_rcv.sb_cc == 0,
 1892                     ("soreceive_dgram: sb_mb NULL but sb_cc %u",
 1893                     so->so_rcv.sb_cc));
 1894                 if (so->so_error) {
 1895                         error = so->so_error;
 1896                         so->so_error = 0;
 1897                         SOCKBUF_UNLOCK(&so->so_rcv);
 1898                         return (error);
 1899                 }
 1900                 if (so->so_rcv.sb_state & SBS_CANTRCVMORE ||
 1901                     uio->uio_resid == 0) {
 1902                         SOCKBUF_UNLOCK(&so->so_rcv);
 1903                         return (0);
 1904                 }
 1905                 if ((so->so_state & SS_NBIO) ||
 1906                     (flags & (MSG_DONTWAIT|MSG_NBIO))) {
 1907                         SOCKBUF_UNLOCK(&so->so_rcv);
 1908                         return (EWOULDBLOCK);
 1909                 }
 1910                 SBLASTRECORDCHK(&so->so_rcv);
 1911                 SBLASTMBUFCHK(&so->so_rcv);
 1912                 error = sbwait(&so->so_rcv);
 1913                 if (error) {
 1914                         SOCKBUF_UNLOCK(&so->so_rcv);
 1915                         return (error);
 1916                 }
 1917         }
 1918         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 1919 
 1920         if (uio->uio_td)
 1921                 uio->uio_td->td_ru.ru_msgrcv++;
 1922         SBLASTRECORDCHK(&so->so_rcv);
 1923         SBLASTMBUFCHK(&so->so_rcv);
 1924         nextrecord = m->m_nextpkt;
 1925         if (nextrecord == NULL) {
 1926                 KASSERT(so->so_rcv.sb_lastrecord == m,
 1927                     ("soreceive_dgram: lastrecord != m"));
 1928         }
 1929 
 1930         KASSERT(so->so_rcv.sb_mb->m_nextpkt == nextrecord,
 1931             ("soreceive_dgram: m_nextpkt != nextrecord"));
 1932 
 1933         /*
 1934          * Pull 'm' and its chain off the front of the packet queue.
 1935          */
 1936         so->so_rcv.sb_mb = NULL;
 1937         sockbuf_pushsync(&so->so_rcv, nextrecord);
 1938 
 1939         /*
 1940          * Walk 'm's chain and free that many bytes from the socket buffer.
 1941          */
 1942         for (m2 = m; m2 != NULL; m2 = m2->m_next)
 1943                 sbfree(&so->so_rcv, m2);
 1944 
 1945         /*
 1946          * Do a few last checks before we let go of the lock.
 1947          */
 1948         SBLASTRECORDCHK(&so->so_rcv);
 1949         SBLASTMBUFCHK(&so->so_rcv);
 1950         SOCKBUF_UNLOCK(&so->so_rcv);
 1951 
 1952         if (pr->pr_flags & PR_ADDR) {
 1953                 KASSERT(m->m_type == MT_SONAME,
 1954                     ("m->m_type == %d", m->m_type));
 1955                 if (psa != NULL)
 1956                         *psa = sodupsockaddr(mtod(m, struct sockaddr *),
 1957                             M_NOWAIT);
 1958                 m = m_free(m);
 1959         }
 1960         if (m == NULL) {
 1961                 /* XXXRW: Can this happen? */
 1962                 return (0);
 1963         }
 1964 
 1965         /*
 1966          * Packet to copyout() is now in 'm' and it is disconnected from the
 1967          * queue.
 1968          *
 1969          * Process one or more MT_CONTROL mbufs present before any data mbufs
 1970          * in the first mbuf chain on the socket buffer.  We call into the
 1971          * protocol to perform externalization (or freeing if controlp ==
 1972          * NULL).
 1973          */
 1974         if (m->m_type == MT_CONTROL) {
 1975                 struct mbuf *cm = NULL, *cmn;
 1976                 struct mbuf **cme = &cm;
 1977 
 1978                 do {
 1979                         m2 = m->m_next;
 1980                         m->m_next = NULL;
 1981                         *cme = m;
 1982                         cme = &(*cme)->m_next;
 1983                         m = m2;
 1984                 } while (m != NULL && m->m_type == MT_CONTROL);
 1985                 while (cm != NULL) {
 1986                         cmn = cm->m_next;
 1987                         cm->m_next = NULL;
 1988                         if (pr->pr_domain->dom_externalize != NULL) {
 1989                                 error = (*pr->pr_domain->dom_externalize)
 1990                                     (cm, controlp);
 1991                         } else if (controlp != NULL)
 1992                                 *controlp = cm;
 1993                         else
 1994                                 m_freem(cm);
 1995                         if (controlp != NULL) {
 1996                                 while (*controlp != NULL)
 1997                                         controlp = &(*controlp)->m_next;
 1998                         }
 1999                         cm = cmn;
 2000                 }
 2001         }
 2002         KASSERT(m->m_type == MT_DATA, ("soreceive_dgram: !data"));
 2003 
 2004         while (m != NULL && uio->uio_resid > 0) {
 2005                 len = uio->uio_resid;
 2006                 if (len > m->m_len)
 2007                         len = m->m_len;
 2008                 error = uiomove(mtod(m, char *), (int)len, uio);
 2009                 if (error) {
 2010                         m_freem(m);
 2011                         return (error);
 2012                 }
 2013                 m = m_free(m);
 2014         }
 2015         if (m != NULL)
 2016                 flags |= MSG_TRUNC;
 2017         m_freem(m);
 2018         if (flagsp != NULL)
 2019                 *flagsp |= flags;
 2020         return (0);
 2021 }
 2022 
 2023 int
 2024 soreceive(struct socket *so, struct sockaddr **psa, struct uio *uio,
 2025     struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 2026 {
 2027 
 2028         return (so->so_proto->pr_usrreqs->pru_soreceive(so, psa, uio, mp0,
 2029             controlp, flagsp));
 2030 }
 2031 
 2032 int
 2033 soshutdown(struct socket *so, int how)
 2034 {
 2035         struct protosw *pr = so->so_proto;
 2036 
 2037         if (!(how == SHUT_RD || how == SHUT_WR || how == SHUT_RDWR))
 2038                 return (EINVAL);
 2039         if (pr->pr_usrreqs->pru_flush != NULL) {
 2040                 (*pr->pr_usrreqs->pru_flush)(so, how);
 2041         }
 2042         if (how != SHUT_WR)
 2043                 sorflush(so);
 2044         if (how != SHUT_RD)
 2045                 return ((*pr->pr_usrreqs->pru_shutdown)(so));
 2046         return (0);
 2047 }
 2048 
 2049 void
 2050 sorflush(struct socket *so)
 2051 {
 2052         struct sockbuf *sb = &so->so_rcv;
 2053         struct protosw *pr = so->so_proto;
 2054         struct sockbuf asb;
 2055 
 2056         /*
 2057          * In order to avoid calling dom_dispose with the socket buffer mutex
 2058          * held, and in order to generally avoid holding the lock for a long
 2059          * time, we make a copy of the socket buffer and clear the original
 2060          * (except locks, state).  The new socket buffer copy won't have
 2061          * initialized locks so we can only call routines that won't use or
 2062          * assert those locks.
 2063          *
 2064          * Dislodge threads currently blocked in receive and wait to acquire
 2065          * a lock against other simultaneous readers before clearing the
 2066          * socket buffer.  Don't let our acquire be interrupted by a signal
 2067          * despite any existing socket disposition on interruptible waiting.
 2068          */
 2069         socantrcvmore(so);
 2070         (void) sblock(sb, SBL_WAIT | SBL_NOINTR);
 2071 
 2072         /*
 2073          * Invalidate/clear most of the sockbuf structure, but leave selinfo
 2074          * and mutex data unchanged.
 2075          */
 2076         SOCKBUF_LOCK(sb);
 2077         bzero(&asb, offsetof(struct sockbuf, sb_startzero));
 2078         bcopy(&sb->sb_startzero, &asb.sb_startzero,
 2079             sizeof(*sb) - offsetof(struct sockbuf, sb_startzero));
 2080         bzero(&sb->sb_startzero,
 2081             sizeof(*sb) - offsetof(struct sockbuf, sb_startzero));
 2082         SOCKBUF_UNLOCK(sb);
 2083         sbunlock(sb);
 2084 
 2085         /*
 2086          * Dispose of special rights and flush the socket buffer.  Don't call
 2087          * any unsafe routines (that rely on locks being initialized) on asb.
 2088          */
 2089         if (pr->pr_flags & PR_RIGHTS && pr->pr_domain->dom_dispose != NULL)
 2090                 (*pr->pr_domain->dom_dispose)(asb.sb_mb);
 2091         sbrelease_internal(&asb, so);
 2092 }
 2093 
 2094 /*
 2095  * Perhaps this routine, and sooptcopyout(), below, ought to come in an
 2096  * additional variant to handle the case where the option value needs to be
 2097  * some kind of integer, but not a specific size.  In addition to their use
 2098  * here, these functions are also called by the protocol-level pr_ctloutput()
 2099  * routines.
 2100  */
 2101 int
 2102 sooptcopyin(struct sockopt *sopt, void *buf, size_t len, size_t minlen)
 2103 {
 2104         size_t  valsize;
 2105 
 2106         /*
 2107          * If the user gives us more than we wanted, we ignore it, but if we
 2108          * don't get the minimum length the caller wants, we return EINVAL.
 2109          * On success, sopt->sopt_valsize is set to however much we actually
 2110          * retrieved.
 2111          */
 2112         if ((valsize = sopt->sopt_valsize) < minlen)
 2113                 return EINVAL;
 2114         if (valsize > len)
 2115                 sopt->sopt_valsize = valsize = len;
 2116 
 2117         if (sopt->sopt_td != NULL)
 2118                 return (copyin(sopt->sopt_val, buf, valsize));
 2119 
 2120         bcopy(sopt->sopt_val, buf, valsize);
 2121         return (0);
 2122 }
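The length rules sooptcopyin() enforces (reject a value shorter than minlen with EINVAL, silently truncate a longer one, report back what was consumed) can be illustrated with a small userspace analogue. This is a hedged sketch, not the kernel routine: optcopyin_sketch() and its parameters are invented for illustration, with plain memcpy() standing in for copyin()/bcopy().

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/*
 * Userspace sketch (hypothetical helper) of sooptcopyin()'s length
 * rules: values shorter than minlen fail with EINVAL, values longer
 * than len are silently truncated, and *valsize is updated to report
 * how much was actually consumed.
 */
static int
optcopyin_sketch(const void *val, size_t *valsize, void *buf,
    size_t len, size_t minlen)
{
	size_t n = *valsize;

	if (n < minlen)
		return (EINVAL);
	if (n > len)
		*valsize = n = len;
	memcpy(buf, val, n);
	return (0);
}
```

A caller that wants exactly an int passes `sizeof(int)` for both len and minlen, mirroring the common `sizeof optval, sizeof optval` pattern above.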
 2123 
 2124 /*
 2125  * Kernel version of setsockopt(2).
 2126  *
 2127  * XXX: optlen is size_t, not socklen_t
 2128  */
 2129 int
 2130 so_setsockopt(struct socket *so, int level, int optname, void *optval,
 2131     size_t optlen)
 2132 {
 2133         struct sockopt sopt;
 2134 
 2135         sopt.sopt_level = level;
 2136         sopt.sopt_name = optname;
 2137         sopt.sopt_dir = SOPT_SET;
 2138         sopt.sopt_val = optval;
 2139         sopt.sopt_valsize = optlen;
 2140         sopt.sopt_td = NULL;
 2141         return (sosetopt(so, &sopt));
 2142 }
 2143 
 2144 int
 2145 sosetopt(struct socket *so, struct sockopt *sopt)
 2146 {
 2147         int     error, optval;
 2148         struct  linger l;
 2149         struct  timeval tv;
 2150         u_long  val;
 2151 #ifdef MAC
 2152         struct mac extmac;
 2153 #endif
 2154 
 2155         error = 0;
 2156         if (sopt->sopt_level != SOL_SOCKET) {
 2157                 if (so->so_proto && so->so_proto->pr_ctloutput)
 2158                         return ((*so->so_proto->pr_ctloutput)
 2159                                   (so, sopt));
 2160                 error = ENOPROTOOPT;
 2161         } else {
 2162                 switch (sopt->sopt_name) {
 2163 #ifdef INET
 2164                 case SO_ACCEPTFILTER:
 2165                         error = do_setopt_accept_filter(so, sopt);
 2166                         if (error)
 2167                                 goto bad;
 2168                         break;
 2169 #endif
 2170                 case SO_LINGER:
 2171                         error = sooptcopyin(sopt, &l, sizeof l, sizeof l);
 2172                         if (error)
 2173                                 goto bad;
 2174 
 2175                         SOCK_LOCK(so);
 2176                         so->so_linger = l.l_linger;
 2177                         if (l.l_onoff)
 2178                                 so->so_options |= SO_LINGER;
 2179                         else
 2180                                 so->so_options &= ~SO_LINGER;
 2181                         SOCK_UNLOCK(so);
 2182                         break;
 2183 
 2184                 case SO_DEBUG:
 2185                 case SO_KEEPALIVE:
 2186                 case SO_DONTROUTE:
 2187                 case SO_USELOOPBACK:
 2188                 case SO_BROADCAST:
 2189                 case SO_REUSEADDR:
 2190                 case SO_REUSEPORT:
 2191                 case SO_OOBINLINE:
 2192                 case SO_TIMESTAMP:
 2193                 case SO_BINTIME:
 2194                 case SO_NOSIGPIPE:
 2195                         error = sooptcopyin(sopt, &optval, sizeof optval,
 2196                                             sizeof optval);
 2197                         if (error)
 2198                                 goto bad;
 2199                         SOCK_LOCK(so);
 2200                         if (optval)
 2201                                 so->so_options |= sopt->sopt_name;
 2202                         else
 2203                                 so->so_options &= ~sopt->sopt_name;
 2204                         SOCK_UNLOCK(so);
 2205                         break;
 2206 
 2207                 case SO_SETFIB:
 2208                         error = sooptcopyin(sopt, &optval, sizeof optval,
 2209                                             sizeof optval);
                              if (error)
                                      goto bad;
 2210                         if (optval < 1 || optval > rt_numfibs) {
 2211                                 error = EINVAL;
 2212                                 goto bad;
 2213                         }
 2214                         if ((so->so_proto->pr_domain->dom_family == PF_INET) ||
 2215                             (so->so_proto->pr_domain->dom_family == PF_ROUTE)) {
 2216                                 so->so_fibnum = optval;
 2217                                 /* Note: ignore error */
 2218                                 if (so->so_proto && so->so_proto->pr_ctloutput)
 2219                                         (*so->so_proto->pr_ctloutput)(so, sopt);
 2220                         } else {
 2221                                 so->so_fibnum = 0;
 2222                         }
 2223                         break;
 2224                 case SO_SNDBUF:
 2225                 case SO_RCVBUF:
 2226                 case SO_SNDLOWAT:
 2227                 case SO_RCVLOWAT:
 2228                         error = sooptcopyin(sopt, &optval, sizeof optval,
 2229                                             sizeof optval);
 2230                         if (error)
 2231                                 goto bad;
 2232 
 2233                         /*
 2234                          * Values < 1 make no sense for any of these options,
 2235                          * so disallow them.
 2236                          */
 2237                         if (optval < 1) {
 2238                                 error = EINVAL;
 2239                                 goto bad;
 2240                         }
 2241 
 2242                         switch (sopt->sopt_name) {
 2243                         case SO_SNDBUF:
 2244                         case SO_RCVBUF:
 2245                                 if (sbreserve(sopt->sopt_name == SO_SNDBUF ?
 2246                                     &so->so_snd : &so->so_rcv, (u_long)optval,
 2247                                     so, curthread) == 0) {
 2248                                         error = ENOBUFS;
 2249                                         goto bad;
 2250                                 }
 2251                                 (sopt->sopt_name == SO_SNDBUF ? &so->so_snd :
 2252                                     &so->so_rcv)->sb_flags &= ~SB_AUTOSIZE;
 2253                                 break;
 2254 
 2255                         /*
 2256                          * Make sure the low-water is never greater than the
 2257                          * high-water.
 2258                          */
 2259                         case SO_SNDLOWAT:
 2260                                 SOCKBUF_LOCK(&so->so_snd);
 2261                                 so->so_snd.sb_lowat =
 2262                                     (optval > so->so_snd.sb_hiwat) ?
 2263                                     so->so_snd.sb_hiwat : optval;
 2264                                 SOCKBUF_UNLOCK(&so->so_snd);
 2265                                 break;
 2266                         case SO_RCVLOWAT:
 2267                                 SOCKBUF_LOCK(&so->so_rcv);
 2268                                 so->so_rcv.sb_lowat =
 2269                                     (optval > so->so_rcv.sb_hiwat) ?
 2270                                     so->so_rcv.sb_hiwat : optval;
 2271                                 SOCKBUF_UNLOCK(&so->so_rcv);
 2272                                 break;
 2273                         }
 2274                         break;
 2275 
 2276                 case SO_SNDTIMEO:
 2277                 case SO_RCVTIMEO:
 2278 #ifdef COMPAT_IA32
 2279                         if (curthread->td_proc->p_sysent == &ia32_freebsd_sysvec) {
 2280                                 struct timeval32 tv32;
 2281 
 2282                                 error = sooptcopyin(sopt, &tv32, sizeof tv32,
 2283                                     sizeof tv32);
 2284                                 CP(tv32, tv, tv_sec);
 2285                                 CP(tv32, tv, tv_usec);
 2286                         } else
 2287 #endif
 2288                                 error = sooptcopyin(sopt, &tv, sizeof tv,
 2289                                     sizeof tv);
 2290                         if (error)
 2291                                 goto bad;
 2292 
 2293                         /* assert(hz > 0); */
 2294                         if (tv.tv_sec < 0 || tv.tv_sec > INT_MAX / hz ||
 2295                             tv.tv_usec < 0 || tv.tv_usec >= 1000000) {
 2296                                 error = EDOM;
 2297                                 goto bad;
 2298                         }
 2299                         /* assert(tick > 0); */
 2300                         /* assert(ULONG_MAX - INT_MAX >= 1000000); */
 2301                         val = (u_long)(tv.tv_sec * hz) + tv.tv_usec / tick;
 2302                         if (val > INT_MAX) {
 2303                                 error = EDOM;
 2304                                 goto bad;
 2305                         }
 2306                         if (val == 0 && tv.tv_usec != 0)
 2307                                 val = 1;
 2308 
 2309                         switch (sopt->sopt_name) {
 2310                         case SO_SNDTIMEO:
 2311                                 so->so_snd.sb_timeo = val;
 2312                                 break;
 2313                         case SO_RCVTIMEO:
 2314                                 so->so_rcv.sb_timeo = val;
 2315                                 break;
 2316                         }
 2317                         break;
 2318 
 2319                 case SO_LABEL:
 2320 #ifdef MAC
 2321                         error = sooptcopyin(sopt, &extmac, sizeof extmac,
 2322                             sizeof extmac);
 2323                         if (error)
 2324                                 goto bad;
 2325                         error = mac_setsockopt_label(sopt->sopt_td->td_ucred,
 2326                             so, &extmac);
 2327 #else
 2328                         error = EOPNOTSUPP;
 2329 #endif
 2330                         break;
 2331 
 2332                 default:
 2333                         error = ENOPROTOOPT;
 2334                         break;
 2335                 }
 2336                 if (error == 0 && so->so_proto != NULL &&
 2337                     so->so_proto->pr_ctloutput != NULL) {
 2338                         (void) ((*so->so_proto->pr_ctloutput)
 2339                                   (so, sopt));
 2340                 }
 2341         }
 2342 bad:
 2343         return (error);
 2344 }
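The SO_SNDTIMEO/SO_RCVTIMEO arm of sosetopt() converts a struct timeval into scheduler ticks while guarding against int overflow, and rounds a nonzero sub-tick delay up to one tick so it does not silently become "no timeout". A minimal userspace sketch of that arithmetic follows; HZ_SKETCH and TICK_SKETCH are invented stand-ins for the kernel's hz and tick globals.

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>

/* Invented stand-ins for the kernel's hz (ticks/s) and tick (us/tick). */
#define HZ_SKETCH	1000
#define TICK_SKETCH	(1000000 / HZ_SKETCH)

/*
 * Sketch of the timeval-to-ticks conversion in sosetopt(): reject
 * out-of-range components with EDOM, then clamp a nonzero sub-tick
 * request up to a single tick.
 */
static int
tv_to_ticks_sketch(long sec, long usec, unsigned long *ticks)
{
	unsigned long val;

	if (sec < 0 || sec > INT_MAX / HZ_SKETCH ||
	    usec < 0 || usec >= 1000000)
		return (EDOM);
	val = (unsigned long)(sec * HZ_SKETCH) + usec / TICK_SKETCH;
	if (val > INT_MAX)
		return (EDOM);
	if (val == 0 && usec != 0)
		val = 1;
	*ticks = val;
	return (0);
}
```

The `sec > INT_MAX / hz` bound is what makes the later multiplication safe; checking the product after the fact would already be too late.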
 2345 
 2346 /*
 2347  * Helper routine for getsockopt.
 2348  */
 2349 int
 2350 sooptcopyout(struct sockopt *sopt, const void *buf, size_t len)
 2351 {
 2352         int     error;
 2353         size_t  valsize;
 2354 
 2355         error = 0;
 2356 
 2357         /*
 2358          * Documented get behavior is that we always return a value, possibly
 2359          * truncated to fit in the user's buffer.  Traditional behavior is
 2360          * that we always tell the user precisely how much we copied, rather
 2361          * than something useful like the total amount we had available for
 2362          * her.  Note that this interface is not idempotent; the entire
 2363          * answer must be generated ahead of time.
 2364          */
 2365         valsize = min(len, sopt->sopt_valsize);
 2366         sopt->sopt_valsize = valsize;
 2367         if (sopt->sopt_val != NULL) {
 2368                 if (sopt->sopt_td != NULL)
 2369                         error = copyout(buf, sopt->sopt_val, valsize);
 2370                 else
 2371                         bcopy(buf, sopt->sopt_val, valsize);
 2372         }
 2373         return (error);
 2374 }
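The documented "always return a value, possibly truncated" behavior of sooptcopyout() reduces to a min() of the available length and the user's buffer size. A tiny userspace sketch (invented helper, memcpy() standing in for copyout()/bcopy()):

```c
#include <assert.h>
#include <string.h>

/*
 * Sketch of sooptcopyout()'s truncation rule: copy at most the
 * user's buffer size and return how much was copied, which the
 * real routine stores back into sopt->sopt_valsize.
 */
static size_t
optcopyout_sketch(const void *buf, size_t len, void *uval, size_t uvalsize)
{
	size_t n = (len < uvalsize) ? len : uvalsize;

	memcpy(uval, buf, n);
	return (n);
}
```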
 2375 
 2376 int
 2377 sogetopt(struct socket *so, struct sockopt *sopt)
 2378 {
 2379         int     error, optval;
 2380         struct  linger l;
 2381         struct  timeval tv;
 2382 #ifdef MAC
 2383         struct mac extmac;
 2384 #endif
 2385 
 2386         error = 0;
 2387         if (sopt->sopt_level != SOL_SOCKET) {
 2388                 if (so->so_proto && so->so_proto->pr_ctloutput) {
 2389                         return ((*so->so_proto->pr_ctloutput)
 2390                                   (so, sopt));
 2391                 } else
 2392                         return (ENOPROTOOPT);
 2393         } else {
 2394                 switch (sopt->sopt_name) {
 2395 #ifdef INET
 2396                 case SO_ACCEPTFILTER:
 2397                         error = do_getopt_accept_filter(so, sopt);
 2398                         break;
 2399 #endif
 2400                 case SO_LINGER:
 2401                         SOCK_LOCK(so);
 2402                         l.l_onoff = so->so_options & SO_LINGER;
 2403                         l.l_linger = so->so_linger;
 2404                         SOCK_UNLOCK(so);
 2405                         error = sooptcopyout(sopt, &l, sizeof l);
 2406                         break;
 2407 
 2408                 case SO_USELOOPBACK:
 2409                 case SO_DONTROUTE:
 2410                 case SO_DEBUG:
 2411                 case SO_KEEPALIVE:
 2412                 case SO_REUSEADDR:
 2413                 case SO_REUSEPORT:
 2414                 case SO_BROADCAST:
 2415                 case SO_OOBINLINE:
 2416                 case SO_ACCEPTCONN:
 2417                 case SO_TIMESTAMP:
 2418                 case SO_BINTIME:
 2419                 case SO_NOSIGPIPE:
 2420                         optval = so->so_options & sopt->sopt_name;
 2421 integer:
 2422                         error = sooptcopyout(sopt, &optval, sizeof optval);
 2423                         break;
 2424 
 2425                 case SO_TYPE:
 2426                         optval = so->so_type;
 2427                         goto integer;
 2428 
 2429                 case SO_ERROR:
 2430                         SOCK_LOCK(so);
 2431                         optval = so->so_error;
 2432                         so->so_error = 0;
 2433                         SOCK_UNLOCK(so);
 2434                         goto integer;
 2435 
 2436                 case SO_SNDBUF:
 2437                         optval = so->so_snd.sb_hiwat;
 2438                         goto integer;
 2439 
 2440                 case SO_RCVBUF:
 2441                         optval = so->so_rcv.sb_hiwat;
 2442                         goto integer;
 2443 
 2444                 case SO_SNDLOWAT:
 2445                         optval = so->so_snd.sb_lowat;
 2446                         goto integer;
 2447 
 2448                 case SO_RCVLOWAT:
 2449                         optval = so->so_rcv.sb_lowat;
 2450                         goto integer;
 2451 
 2452                 case SO_SNDTIMEO:
 2453                 case SO_RCVTIMEO:
 2454                         optval = (sopt->sopt_name == SO_SNDTIMEO ?
 2455                                   so->so_snd.sb_timeo : so->so_rcv.sb_timeo);
 2456 
 2457                         tv.tv_sec = optval / hz;
 2458                         tv.tv_usec = (optval % hz) * tick;
 2459 #ifdef COMPAT_IA32
 2460                         if (curthread->td_proc->p_sysent == &ia32_freebsd_sysvec) {
 2461                                 struct timeval32 tv32;
 2462 
 2463                                 CP(tv, tv32, tv_sec);
 2464                                 CP(tv, tv32, tv_usec);
 2465                                 error = sooptcopyout(sopt, &tv32, sizeof tv32);
 2466                         } else
 2467 #endif
 2468                                 error = sooptcopyout(sopt, &tv, sizeof tv);
 2469                         break;
 2470 
 2471                 case SO_LABEL:
 2472 #ifdef MAC
 2473                         error = sooptcopyin(sopt, &extmac, sizeof(extmac),
 2474                             sizeof(extmac));
 2475                         if (error)
 2476                                 return (error);
 2477                         error = mac_getsockopt_label(sopt->sopt_td->td_ucred,
 2478                             so, &extmac);
 2479                         if (error)
 2480                                 return (error);
 2481                         error = sooptcopyout(sopt, &extmac, sizeof extmac);
 2482 #else
 2483                         error = EOPNOTSUPP;
 2484 #endif
 2485                         break;
 2486 
 2487                 case SO_PEERLABEL:
 2488 #ifdef MAC
 2489                         error = sooptcopyin(sopt, &extmac, sizeof(extmac),
 2490                             sizeof(extmac));
 2491                         if (error)
 2492                                 return (error);
 2493                         error = mac_getsockopt_peerlabel(
 2494                             sopt->sopt_td->td_ucred, so, &extmac);
 2495                         if (error)
 2496                                 return (error);
 2497                         error = sooptcopyout(sopt, &extmac, sizeof extmac);
 2498 #else
 2499                         error = EOPNOTSUPP;
 2500 #endif
 2501                         break;
 2502 
 2503                 case SO_LISTENQLIMIT:
 2504                         optval = so->so_qlimit;
 2505                         goto integer;
 2506 
 2507                 case SO_LISTENQLEN:
 2508                         optval = so->so_qlen;
 2509                         goto integer;
 2510 
 2511                 case SO_LISTENINCQLEN:
 2512                         optval = so->so_incqlen;
 2513                         goto integer;
 2514 
 2515                 default:
 2516                         error = ENOPROTOOPT;
 2517                         break;
 2518                 }
 2519                 return (error);
 2520         }
 2521 }
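The SO_SNDTIMEO/SO_RCVTIMEO branch of sogetopt() performs the inverse of the set-side conversion, splitting a tick count back into seconds and microseconds. A sketch of that step, again with HZ_SKETCH and TICK_SKETCH as invented stand-ins for the kernel's hz and tick:

```c
#include <assert.h>

/* Invented stand-ins for the kernel's hz and tick globals. */
#define HZ_SKETCH	1000
#define TICK_SKETCH	(1000000 / HZ_SKETCH)

/* Sketch of the ticks-to-timeval conversion in sogetopt(). */
static void
ticks_to_tv_sketch(int ticks, long *sec, long *usec)
{
	*sec = ticks / HZ_SKETCH;
	*usec = (ticks % HZ_SKETCH) * TICK_SKETCH;
}
```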
 2522 
 2523 /* XXX; prepare mbuf for (__FreeBSD__ < 3) routines. */
 2524 int
 2525 soopt_getm(struct sockopt *sopt, struct mbuf **mp)
 2526 {
 2527         struct mbuf *m, *m_prev;
 2528         int sopt_size = sopt->sopt_valsize;
 2529 
 2530         MGET(m, sopt->sopt_td ? M_TRYWAIT : M_DONTWAIT, MT_DATA);
 2531         if (m == NULL)
 2532                 return ENOBUFS;
 2533         if (sopt_size > MLEN) {
 2534                 MCLGET(m, sopt->sopt_td ? M_TRYWAIT : M_DONTWAIT);
 2535                 if ((m->m_flags & M_EXT) == 0) {
 2536                         m_free(m);
 2537                         return ENOBUFS;
 2538                 }
 2539                 m->m_len = min(MCLBYTES, sopt_size);
 2540         } else {
 2541                 m->m_len = min(MLEN, sopt_size);
 2542         }
 2543         sopt_size -= m->m_len;
 2544         *mp = m;
 2545         m_prev = m;
 2546 
 2547         while (sopt_size) {
 2548                 MGET(m, sopt->sopt_td ? M_TRYWAIT : M_DONTWAIT, MT_DATA);
 2549                 if (m == NULL) {
 2550                         m_freem(*mp);
 2551                         return ENOBUFS;
 2552                 }
 2553                 if (sopt_size > MLEN) {
 2554                         MCLGET(m, sopt->sopt_td != NULL ? M_TRYWAIT :
 2555                             M_DONTWAIT);
 2556                         if ((m->m_flags & M_EXT) == 0) {
 2557                                 m_freem(m);
 2558                                 m_freem(*mp);
 2559                                 return ENOBUFS;
 2560                         }
 2561                         m->m_len = min(MCLBYTES, sopt_size);
 2562                 } else {
 2563                         m->m_len = min(MLEN, sopt_size);
 2564                 }
 2565                 sopt_size -= m->m_len;
 2566                 m_prev->m_next = m;
 2567                 m_prev = m;
 2568         }
 2569         return (0);
 2570 }
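soopt_getm() always allocates at least one mbuf, upgrading to a cluster whenever the remaining option size exceeds MLEN. The chain-length arithmetic can be sketched in userspace; MLEN_SKETCH and MCLBYTES_SKETCH are invented stand-ins for the kernel constants, and only the sizing loop (not the allocation or failure handling) is modeled.

```c
#include <assert.h>
#include <stddef.h>

/* Invented stand-ins for the kernel's MLEN and MCLBYTES. */
#define MLEN_SKETCH	224
#define MCLBYTES_SKETCH	2048

/*
 * Sketch of soopt_getm()'s sizing loop: each mbuf holds up to MLEN
 * bytes, or up to MCLBYTES when the remaining size warrants a
 * cluster.  Returns how many mbufs a chain for 'size' bytes needs;
 * like the real routine, even size 0 yields one mbuf.
 */
static size_t
optm_chain_len_sketch(size_t size)
{
	size_t n = 0;

	do {
		size_t cap = (size > MLEN_SKETCH) ?
		    MCLBYTES_SKETCH : MLEN_SKETCH;

		size -= (size < cap) ? size : cap;
		n++;
	} while (size > 0);
	return (n);
}
```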
 2571 
 2572 /* XXX; copyin sopt data into mbuf chain for (__FreeBSD__ < 3) routines. */
 2573 int
 2574 soopt_mcopyin(struct sockopt *sopt, struct mbuf *m)
 2575 {
 2576         struct mbuf *m0 = m;
 2577 
 2578         if (sopt->sopt_val == NULL)
 2579                 return (0);
 2580         while (m != NULL && sopt->sopt_valsize >= m->m_len) {
 2581                 if (sopt->sopt_td != NULL) {
 2582                         int error;
 2583 
 2584                         error = copyin(sopt->sopt_val, mtod(m, char *),
 2585                                        m->m_len);
 2586                         if (error != 0) {
 2587                                 m_freem(m0);
 2588                                 return (error);
 2589                         }
 2590                 } else
 2591                         bcopy(sopt->sopt_val, mtod(m, char *), m->m_len);
 2592                 sopt->sopt_valsize -= m->m_len;
 2593                 sopt->sopt_val = (char *)sopt->sopt_val + m->m_len;
 2594                 m = m->m_next;
 2595         }
 2596         if (m != NULL) /* enough space should have been allocated by soopt_getm() */
 2597                 panic("ip6_sooptmcopyin");
 2598         return (0);
 2599 }
 2600 
 2601 /* XXX; copyout mbuf chain data into soopt for (__FreeBSD__ < 3) routines. */
 2602 int
 2603 soopt_mcopyout(struct sockopt *sopt, struct mbuf *m)
 2604 {
 2605         struct mbuf *m0 = m;
 2606         size_t valsize = 0;
 2607 
 2608         if (sopt->sopt_val == NULL)
 2609                 return (0);
 2610         while (m != NULL && sopt->sopt_valsize >= m->m_len) {
 2611                 if (sopt->sopt_td != NULL) {
 2612                         int error;
 2613 
 2614                         error = copyout(mtod(m, char *), sopt->sopt_val,
 2615                                        m->m_len);
 2616                         if (error != 0) {
 2617                                 m_freem(m0);
 2618                                 return (error);
 2619                         }
 2620                 } else
 2621                         bcopy(mtod(m, char *), sopt->sopt_val, m->m_len);
 2622                 sopt->sopt_valsize -= m->m_len;
 2623                 sopt->sopt_val = (char *)sopt->sopt_val + m->m_len;
 2624                 valsize += m->m_len;
 2625                 m = m->m_next;
 2626         }
 2627         if (m != NULL) {
 2628                 /* enough soopt buffer should be given from user-land */
 2629                 m_freem(m0);
 2630                 return (EINVAL);
 2631         }
 2632         sopt->sopt_valsize = valsize;
 2633         return (0);
 2634 }
 2635 
 2636 /*
 2637  * sohasoutofband(): protocol notifies socket layer of the arrival of new
 2638  * out-of-band data, which will then notify socket consumers.
 2639  */
 2640 void
 2641 sohasoutofband(struct socket *so)
 2642 {
 2643 
 2644         if (so->so_sigio != NULL)
 2645                 pgsigio(&so->so_sigio, SIGURG, 0);
 2646         selwakeuppri(&so->so_rcv.sb_sel, PSOCK);
 2647 }
 2648 
 2649 int
 2650 sopoll(struct socket *so, int events, struct ucred *active_cred,
 2651     struct thread *td)
 2652 {
 2653 
 2654         return (so->so_proto->pr_usrreqs->pru_sopoll(so, events, active_cred,
 2655             td));
 2656 }
 2657 
 2658 int
 2659 sopoll_generic(struct socket *so, int events, struct ucred *active_cred,
 2660     struct thread *td)
 2661 {
 2662         int revents = 0;
 2663 
 2664         SOCKBUF_LOCK(&so->so_snd);
 2665         SOCKBUF_LOCK(&so->so_rcv);
 2666         if (events & (POLLIN | POLLRDNORM))
 2667                 if (soreadable(so))
 2668                         revents |= events & (POLLIN | POLLRDNORM);
 2669 
 2670         if (events & POLLINIGNEOF)
 2671                 if (so->so_rcv.sb_cc >= so->so_rcv.sb_lowat ||
 2672                     !TAILQ_EMPTY(&so->so_comp) || so->so_error)
 2673                         revents |= POLLINIGNEOF;
 2674 
 2675         if (events & (POLLOUT | POLLWRNORM))
 2676                 if (sowriteable(so))
 2677                         revents |= events & (POLLOUT | POLLWRNORM);
 2678 
 2679         if (events & (POLLPRI | POLLRDBAND))
 2680                 if (so->so_oobmark || (so->so_rcv.sb_state & SBS_RCVATMARK))
 2681                         revents |= events & (POLLPRI | POLLRDBAND);
 2682 
 2683         if (revents == 0) {
 2684                 if (events &
 2685                     (POLLIN | POLLINIGNEOF | POLLPRI | POLLRDNORM |
 2686                      POLLRDBAND)) {
 2687                         selrecord(td, &so->so_rcv.sb_sel);
 2688                         so->so_rcv.sb_flags |= SB_SEL;
 2689                 }
 2690 
 2691                 if (events & (POLLOUT | POLLWRNORM)) {
 2692                         selrecord(td, &so->so_snd.sb_sel);
 2693                         so->so_snd.sb_flags |= SB_SEL;
 2694                 }
 2695         }
 2696 
 2697         SOCKBUF_UNLOCK(&so->so_rcv);
 2698         SOCKBUF_UNLOCK(&so->so_snd);
 2699         return (revents);
 2700 }
 2701 
 2702 int
 2703 soo_kqfilter(struct file *fp, struct knote *kn)
 2704 {
 2705         struct socket *so = kn->kn_fp->f_data;
 2706         struct sockbuf *sb;
 2707 
 2708         switch (kn->kn_filter) {
 2709         case EVFILT_READ:
 2710                 if (so->so_options & SO_ACCEPTCONN)
 2711                         kn->kn_fop = &solisten_filtops;
 2712                 else
 2713                         kn->kn_fop = &soread_filtops;
 2714                 sb = &so->so_rcv;
 2715                 break;
 2716         case EVFILT_WRITE:
 2717                 kn->kn_fop = &sowrite_filtops;
 2718                 sb = &so->so_snd;
 2719                 break;
 2720         default:
 2721                 return (EINVAL);
 2722         }
 2723 
 2724         SOCKBUF_LOCK(sb);
 2725         knlist_add(&sb->sb_sel.si_note, kn, 1);
 2726         sb->sb_flags |= SB_KNOTE;
 2727         SOCKBUF_UNLOCK(sb);
 2728         return (0);
 2729 }
 2730 
 2731 /*
 2732  * Some routines that return EOPNOTSUPP for entry points that are not
 2733  * supported by a protocol.  Fill in as needed.
 2734  */
 2735 int
 2736 pru_accept_notsupp(struct socket *so, struct sockaddr **nam)
 2737 {
 2738 
 2739         return EOPNOTSUPP;
 2740 }
 2741 
 2742 int
 2743 pru_attach_notsupp(struct socket *so, int proto, struct thread *td)
 2744 {
 2745 
 2746         return EOPNOTSUPP;
 2747 }
 2748 
 2749 int
 2750 pru_bind_notsupp(struct socket *so, struct sockaddr *nam, struct thread *td)
 2751 {
 2752 
 2753         return EOPNOTSUPP;
 2754 }
 2755 
 2756 int
 2757 pru_connect_notsupp(struct socket *so, struct sockaddr *nam, struct thread *td)
 2758 {
 2759 
 2760         return EOPNOTSUPP;
 2761 }
 2762 
 2763 int
 2764 pru_connect2_notsupp(struct socket *so1, struct socket *so2)
 2765 {
 2766 
 2767         return EOPNOTSUPP;
 2768 }
 2769 
 2770 int
 2771 pru_control_notsupp(struct socket *so, u_long cmd, caddr_t data,
 2772     struct ifnet *ifp, struct thread *td)
 2773 {
 2774 
 2775         return EOPNOTSUPP;
 2776 }
 2777 
 2778 int
 2779 pru_disconnect_notsupp(struct socket *so)
 2780 {
 2781 
 2782         return EOPNOTSUPP;
 2783 }
 2784 
 2785 int
 2786 pru_listen_notsupp(struct socket *so, int backlog, struct thread *td)
 2787 {
 2788 
 2789         return EOPNOTSUPP;
 2790 }
 2791 
 2792 int
 2793 pru_peeraddr_notsupp(struct socket *so, struct sockaddr **nam)
 2794 {
 2795 
 2796         return EOPNOTSUPP;
 2797 }
 2798 
 2799 int
 2800 pru_rcvd_notsupp(struct socket *so, int flags)
 2801 {
 2802 
 2803         return EOPNOTSUPP;
 2804 }
 2805 
 2806 int
 2807 pru_rcvoob_notsupp(struct socket *so, struct mbuf *m, int flags)
 2808 {
 2809 
 2810         return EOPNOTSUPP;
 2811 }
 2812 
 2813 int
 2814 pru_send_notsupp(struct socket *so, int flags, struct mbuf *m,
 2815     struct sockaddr *addr, struct mbuf *control, struct thread *td)
 2816 {
 2817 
 2818         return EOPNOTSUPP;
 2819 }
 2820 
 2821 /*
 2822  * This isn't really a ``null'' operation, but it's the default one and
 2823  * doesn't do anything destructive.
 2824  */
 2825 int
 2826 pru_sense_null(struct socket *so, struct stat *sb)
 2827 {
 2828 
 2829         sb->st_blksize = so->so_snd.sb_hiwat;
 2830         return 0;
 2831 }
 2832 
 2833 int
 2834 pru_shutdown_notsupp(struct socket *so)
 2835 {
 2836 
 2837         return EOPNOTSUPP;
 2838 }
 2839 
 2840 int
 2841 pru_sockaddr_notsupp(struct socket *so, struct sockaddr **nam)
 2842 {
 2843 
 2844         return EOPNOTSUPP;
 2845 }
 2846 
 2847 int
 2848 pru_sosend_notsupp(struct socket *so, struct sockaddr *addr, struct uio *uio,
 2849     struct mbuf *top, struct mbuf *control, int flags, struct thread *td)
 2850 {
 2851 
 2852         return EOPNOTSUPP;
 2853 }
 2854 
 2855 int
 2856 pru_soreceive_notsupp(struct socket *so, struct sockaddr **paddr,
 2857     struct uio *uio, struct mbuf **mp0, struct mbuf **controlp, int *flagsp)
 2858 {
 2859 
 2860         return EOPNOTSUPP;
 2861 }
 2862 
 2863 int
 2864 pru_sopoll_notsupp(struct socket *so, int events, struct ucred *cred,
 2865     struct thread *td)
 2866 {
 2867 
 2868         return EOPNOTSUPP;
 2869 }
 2870 
 2871 static void
 2872 filt_sordetach(struct knote *kn)
 2873 {
 2874         struct socket *so = kn->kn_fp->f_data;
 2875 
 2876         SOCKBUF_LOCK(&so->so_rcv);
 2877         knlist_remove(&so->so_rcv.sb_sel.si_note, kn, 1);
 2878         if (knlist_empty(&so->so_rcv.sb_sel.si_note))
 2879                 so->so_rcv.sb_flags &= ~SB_KNOTE;
 2880         SOCKBUF_UNLOCK(&so->so_rcv);
 2881 }
 2882 
 2883 /*ARGSUSED*/
 2884 static int
 2885 filt_soread(struct knote *kn, long hint)
 2886 {
 2887         struct socket *so;
 2888 
 2889         so = kn->kn_fp->f_data;
 2890         SOCKBUF_LOCK_ASSERT(&so->so_rcv);
 2891 
 2892         kn->kn_data = so->so_rcv.sb_cc - so->so_rcv.sb_ctl;
 2893         if (so->so_rcv.sb_state & SBS_CANTRCVMORE) {
 2894                 kn->kn_flags |= EV_EOF;
 2895                 kn->kn_fflags = so->so_error;
 2896                 return (1);
 2897         } else if (so->so_error)        /* temporary udp error */
 2898                 return (1);
 2899         else if (kn->kn_sfflags & NOTE_LOWAT)
 2900                 return (kn->kn_data >= kn->kn_sdata);
 2901         else
 2902                 return (so->so_rcv.sb_cc >= so->so_rcv.sb_lowat);
 2903 }
 2904 
 2905 static void
 2906 filt_sowdetach(struct knote *kn)
 2907 {
 2908         struct socket *so = kn->kn_fp->f_data;
 2909 
 2910         SOCKBUF_LOCK(&so->so_snd);
 2911         knlist_remove(&so->so_snd.sb_sel.si_note, kn, 1);
 2912         if (knlist_empty(&so->so_snd.sb_sel.si_note))
 2913                 so->so_snd.sb_flags &= ~SB_KNOTE;
 2914         SOCKBUF_UNLOCK(&so->so_snd);
 2915 }
 2916 
 2917 /*ARGSUSED*/
 2918 static int
 2919 filt_sowrite(struct knote *kn, long hint)
 2920 {
 2921         struct socket *so;
 2922 
 2923         so = kn->kn_fp->f_data;
 2924         SOCKBUF_LOCK_ASSERT(&so->so_snd);
 2925         kn->kn_data = sbspace(&so->so_snd);
 2926         if (so->so_snd.sb_state & SBS_CANTSENDMORE) {
 2927                 kn->kn_flags |= EV_EOF;
 2928                 kn->kn_fflags = so->so_error;
 2929                 return (1);
 2930         } else if (so->so_error)        /* temporary udp error */
 2931                 return (1);
 2932         else if (((so->so_state & SS_ISCONNECTED) == 0) &&
 2933             (so->so_proto->pr_flags & PR_CONNREQUIRED))
 2934                 return (0);
 2935         else if (kn->kn_sfflags & NOTE_LOWAT)
 2936                 return (kn->kn_data >= kn->kn_sdata);
 2937         else
 2938                 return (kn->kn_data >= so->so_snd.sb_lowat);
 2939 }
 2940 
 2941 /*ARGSUSED*/
 2942 static int
 2943 filt_solisten(struct knote *kn, long hint)
 2944 {
 2945         struct socket *so = kn->kn_fp->f_data;
 2946 
 2947         kn->kn_data = so->so_qlen;
 2948         return (! TAILQ_EMPTY(&so->so_comp));
 2949 }
 2950 
 2951 int
 2952 socheckuid(struct socket *so, uid_t uid)
 2953 {
 2954 
 2955         if (so == NULL)
 2956                 return (EPERM);
 2957         if (so->so_cred->cr_uid != uid)
 2958                 return (EPERM);
 2959         return (0);
 2960 }
 2961 
 2962 static int
 2963 sysctl_somaxconn(SYSCTL_HANDLER_ARGS)
 2964 {
 2965         int error;
 2966         int val;
 2967 
 2968         val = somaxconn;
 2969         error = sysctl_handle_int(oidp, &val, 0, req);
 2970         if (error || !req->newptr )
 2971                 return (error);
 2972 
 2973         if (val < 1 || val > USHRT_MAX)
 2974                 return (EINVAL);
 2975 
 2976         somaxconn = val;
 2977         return (0);
 2978 }
 2979 
 2980 /*
 2981  * These functions are used by protocols to notify the socket layer (and its
 2982  * consumers) of state changes in the sockets driven by protocol-side events.
 2983  */
 2984 
 2985 /*
 2986  * Procedures to manipulate state flags of socket and do appropriate wakeups.
 2987  *
 2988  * Normal sequence from the active (originating) side is that
 2989  * soisconnecting() is called during processing of connect() call, resulting
 2990  * in an eventual call to soisconnected() if/when the connection is
 2991  * established.  When the connection is torn down soisdisconnecting() is
 2992  * called during processing of disconnect() call, and soisdisconnected() is
 2993  * called when the connection to the peer is totally severed.  The semantics
 2994  * of these routines are such that connectionless protocols can call
 2995  * soisconnected() and soisdisconnected() only, bypassing the in-progress
 2996  * calls when setting up a ``connection'' takes no time.
 2997  *
 2998  * From the passive side, a socket is created with two queues of sockets:
 2999  * so_incomp for connections in progress and so_comp for connections already
 3000  * made and awaiting user acceptance.  As a protocol is preparing incoming
 3001  * connections, it creates a socket structure queued on so_incomp by calling
 3002  * sonewconn().  When the connection is established, soisconnected() is
 3003  * called, and transfers the socket structure to so_comp, making it available
 3004  * to accept().
 3005  *
 3006  * If a socket is closed with sockets on either so_incomp or so_comp, these
 3007  * sockets are dropped.
 3008  *
 3009  * If higher-level protocols are implemented in the kernel, the wakeups done
 3010  * here will sometimes cause software-interrupt process scheduling.
 3011  */
 3012 void
 3013 soisconnecting(struct socket *so)
 3014 {
 3015 
 3016         SOCK_LOCK(so);
 3017         so->so_state &= ~(SS_ISCONNECTED|SS_ISDISCONNECTING);
 3018         so->so_state |= SS_ISCONNECTING;
 3019         SOCK_UNLOCK(so);
 3020 }
 3021 
 3022 void
 3023 soisconnected(struct socket *so)
 3024 {
 3025         struct socket *head;
 3026 
 3027         ACCEPT_LOCK();
 3028         SOCK_LOCK(so);
 3029         so->so_state &= ~(SS_ISCONNECTING|SS_ISDISCONNECTING|SS_ISCONFIRMING);
 3030         so->so_state |= SS_ISCONNECTED;
 3031         head = so->so_head;
 3032         if (head != NULL && (so->so_qstate & SQ_INCOMP)) {
 3033                 if ((so->so_options & SO_ACCEPTFILTER) == 0) {
 3034                         SOCK_UNLOCK(so);
 3035                         TAILQ_REMOVE(&head->so_incomp, so, so_list);
 3036                         head->so_incqlen--;
 3037                         so->so_qstate &= ~SQ_INCOMP;
 3038                         TAILQ_INSERT_TAIL(&head->so_comp, so, so_list);
 3039                         head->so_qlen++;
 3040                         so->so_qstate |= SQ_COMP;
 3041                         ACCEPT_UNLOCK();
 3042                         sorwakeup(head);
 3043                         wakeup_one(&head->so_timeo);
 3044                 } else {
 3045                         ACCEPT_UNLOCK();
 3046                         so->so_upcall =
 3047                             head->so_accf->so_accept_filter->accf_callback;
 3048                         so->so_upcallarg = head->so_accf->so_accept_filter_arg;
 3049                         so->so_rcv.sb_flags |= SB_UPCALL;
 3050                         so->so_options &= ~SO_ACCEPTFILTER;
 3051                         SOCK_UNLOCK(so);
 3052                         so->so_upcall(so, so->so_upcallarg, M_DONTWAIT);
 3053                 }
 3054                 return;
 3055         }
 3056         SOCK_UNLOCK(so);
 3057         ACCEPT_UNLOCK();
 3058         wakeup(&so->so_timeo);
 3059         sorwakeup(so);
 3060         sowwakeup(so);
 3061 }
 3062 
 3063 void
 3064 soisdisconnecting(struct socket *so)
 3065 {
 3066 
 3067         /*
 3068          * Note: This code assumes that SOCK_LOCK(so) and
 3069          * SOCKBUF_LOCK(&so->so_rcv) are the same.
 3070          */
 3071         SOCKBUF_LOCK(&so->so_rcv);
 3072         so->so_state &= ~SS_ISCONNECTING;
 3073         so->so_state |= SS_ISDISCONNECTING;
 3074         so->so_rcv.sb_state |= SBS_CANTRCVMORE;
 3075         sorwakeup_locked(so);
 3076         SOCKBUF_LOCK(&so->so_snd);
 3077         so->so_snd.sb_state |= SBS_CANTSENDMORE;
 3078         sowwakeup_locked(so);
 3079         wakeup(&so->so_timeo);
 3080 }
 3081 
 3082 void
 3083 soisdisconnected(struct socket *so)
 3084 {
 3085 
 3086         /*
 3087          * Note: This code assumes that SOCK_LOCK(so) and
 3088          * SOCKBUF_LOCK(&so->so_rcv) are the same.
 3089          */
 3090         SOCKBUF_LOCK(&so->so_rcv);
 3091         so->so_state &= ~(SS_ISCONNECTING|SS_ISCONNECTED|SS_ISDISCONNECTING);
 3092         so->so_state |= SS_ISDISCONNECTED;
 3093         so->so_rcv.sb_state |= SBS_CANTRCVMORE;
 3094         sorwakeup_locked(so);
 3095         SOCKBUF_LOCK(&so->so_snd);
 3096         so->so_snd.sb_state |= SBS_CANTSENDMORE;
 3097         sbdrop_locked(&so->so_snd, so->so_snd.sb_cc);
 3098         sowwakeup_locked(so);
 3099         wakeup(&so->so_timeo);
 3100 }
 3101 
 3102 /*
 3103  * Make a copy of a sockaddr in a malloced buffer of type M_SONAME.
 3104  */
 3105 struct sockaddr *
 3106 sodupsockaddr(const struct sockaddr *sa, int mflags)
 3107 {
 3108         struct sockaddr *sa2;
 3109 
 3110         sa2 = malloc(sa->sa_len, M_SONAME, mflags);
 3111         if (sa2)
 3112                 bcopy(sa, sa2, sa->sa_len);
 3113         return sa2;
 3114 }
 3115 
 3116 /*
 3117  * Create an external-format (``xsocket'') structure using the information in
 3118  * the kernel-format socket structure pointed to by so.  This is done to
 3119  * reduce the spew of irrelevant information over this interface, to isolate
 3120  * user code from changes in the kernel structure, and potentially to provide
 3121  * information-hiding if we decide that some of this information should be
 3122  * hidden from users.
 3123  */
 3124 void
 3125 sotoxsocket(struct socket *so, struct xsocket *xso)
 3126 {
 3127 
 3128         xso->xso_len = sizeof *xso;
 3129         xso->xso_so = so;
 3130         xso->so_type = so->so_type;
 3131         xso->so_options = so->so_options;
 3132         xso->so_linger = so->so_linger;
 3133         xso->so_state = so->so_state;
 3134         xso->so_pcb = so->so_pcb;
 3135         xso->xso_protocol = so->so_proto->pr_protocol;
 3136         xso->xso_family = so->so_proto->pr_domain->dom_family;
 3137         xso->so_qlen = so->so_qlen;
 3138         xso->so_incqlen = so->so_incqlen;
 3139         xso->so_qlimit = so->so_qlimit;
 3140         xso->so_timeo = so->so_timeo;
 3141         xso->so_error = so->so_error;
 3142         xso->so_pgid = so->so_sigio ? so->so_sigio->sio_pgid : 0;
 3143         xso->so_oobmark = so->so_oobmark;
 3144         sbtoxsockbuf(&so->so_snd, &xso->so_snd);
 3145         sbtoxsockbuf(&so->so_rcv, &xso->so_rcv);
 3146         xso->so_uid = so->so_cred->cr_uid;
 3147 }
 3148 
 3149 
 3150 /*
 3151  * Socket accessor functions to provide external consumers with a safe
 3152  * interface to socket state.
 3153  */
 3155 
 3156 void
 3157 so_listeners_apply_all(struct socket *so, void (*func)(struct socket *, void *), void *arg)
 3158 {
 3159         
 3160         TAILQ_FOREACH(so, &so->so_comp, so_list)
 3161                 func(so, arg);
 3162 }
 3163 
 3164 struct sockbuf *
 3165 so_sockbuf_rcv(struct socket *so)
 3166 {
 3167 
 3168         return (&so->so_rcv);
 3169 }
 3170 
 3171 struct sockbuf *
 3172 so_sockbuf_snd(struct socket *so)
 3173 {
 3174 
 3175         return (&so->so_snd);
 3176 }
 3177 
 3178 int
 3179 so_state_get(const struct socket *so)
 3180 {
 3181 
 3182         return (so->so_state);
 3183 }
 3184 
 3185 void
 3186 so_state_set(struct socket *so, int val)
 3187 {
 3188 
 3189         so->so_state = val;
 3190 }
 3191 
 3192 int
 3193 so_options_get(const struct socket *so)
 3194 {
 3195 
 3196         return (so->so_options);
 3197 }
 3198 
 3199 void
 3200 so_options_set(struct socket *so, int val)
 3201 {
 3202 
 3203         so->so_options = val;
 3204 }
 3205 
 3206 int
 3207 so_error_get(const struct socket *so)
 3208 {
 3209 
 3210         return (so->so_error);
 3211 }
 3212 
 3213 void
 3214 so_error_set(struct socket *so, int val)
 3215 {
 3216 
 3217         so->so_error = val;
 3218 }
 3219 
 3220 int
 3221 so_linger_get(const struct socket *so)
 3222 {
 3223 
 3224         return (so->so_linger);
 3225 }
 3226 
 3227 void
 3228 so_linger_set(struct socket *so, int val)
 3229 {
 3230 
 3231         so->so_linger = val;
 3232 }
 3233 
 3234 struct protosw *
 3235 so_protosw_get(const struct socket *so)
 3236 {
 3237 
 3238         return (so->so_proto);
 3239 }
 3240 
 3241 void
 3242 so_protosw_set(struct socket *so, struct protosw *val)
 3243 {
 3244 
 3245         so->so_proto = val;
 3246 }
 3247 
 3248 void
 3249 so_sorwakeup(struct socket *so)
 3250 {
 3251 
 3252         sorwakeup(so);
 3253 }
 3254 
 3255 void
 3256 so_sowwakeup(struct socket *so)
 3257 {
 3258 
 3259         sowwakeup(so);
 3260 }
 3261 
 3262 void
 3263 so_sorwakeup_locked(struct socket *so)
 3264 {
 3265 
 3266         sorwakeup_locked(so);
 3267 }
 3268 
 3269 void
 3270 so_sowwakeup_locked(struct socket *so)
 3271 {
 3272 
 3273         sowwakeup_locked(so);
 3274 }
 3275 
 3276 void
 3277 so_lock(struct socket *so)
 3278 {
 3279         SOCK_LOCK(so);
 3280 }
 3281 
 3282 void
 3283 so_unlock(struct socket *so)
 3284 {
 3285         SOCK_UNLOCK(so);
 3286 }
