Since 5G link speed is supported by some devices, add reporting of 5G link
speed.
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
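As a rough, hypothetical illustration of what reporting an extra speed amounts to (the enum, helper, and values below are stand-ins, not the Intel driver code), the change boils down to adding one more case to the speed translation:

  #include <stdio.h>

  /* Hypothetical hardware link-speed codes. */
  enum hw_link_speed {
      HW_SPEED_100M,
      HW_SPEED_1G,
      HW_SPEED_2_5G,
      HW_SPEED_5G,
      HW_SPEED_10G,
  };

  /* Translate the hardware code into a speed in Mb/s for reporting. */
  static int hw_speed_to_mbps(enum hw_link_speed s)
  {
      switch (s) {
      case HW_SPEED_100M: return 100;
      case HW_SPEED_1G:   return 1000;
      case HW_SPEED_2_5G: return 2500;
      case HW_SPEED_5G:   return 5000;   /* the newly reported speed */
      case HW_SPEED_10G:  return 10000;
      }
      return -1;                          /* unknown */
  }

  int main(void)
  {
      printf("5G link reports %d Mb/s\n", hw_speed_to_mbps(HW_SPEED_5G));
      return 0;
  }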
On X550, the current code enables timestamping of all packets for any
filter, which means ethtool should not report any PTP-specific filter
as unsupported.
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
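For context (this is a generic sketch, not the ixgbe code, and the constants are local stand-ins for the kernel's hwtstamp filter values): hardware that timestamps every packet for any requested filter can simply advertise the catch-all filter instead of listing specific PTP filters as unsupported.

  #include <stdint.h>

  /* Local stand-ins for a few hwtstamp_rx_filters values. */
  enum {
      FILTER_NONE         = 0,
      FILTER_ALL          = 1,
      FILTER_PTP_V2_EVENT = 12,
  };

  /* Report which receive filters timestamping supports. */
  static uint32_t supported_rx_filters(int timestamps_all_packets)
  {
      if (timestamps_all_packets)
          /* Every filter works, because all packets get timestamped. */
          return (1u << FILTER_NONE) | (1u << FILTER_ALL);

      /* Otherwise only list the filters the hardware really implements. */
      return (1u << FILTER_NONE) | (1u << FILTER_PTP_V2_EVENT);
  }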
Use the ARRAY_SIZE macro on the array buf to determine the size of the array.
Improvement suggested by coccinelle.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use the ARRAY_SIZE macro on various arrays to determine the
size of the arrays. Improvement suggested by coccinelle.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
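For reference, ARRAY_SIZE divides the total size of a statically sized array by the size of one element, so the element count can never drift from the array definition; a minimal userspace equivalent:

  #include <stdio.h>
  #include <stddef.h>

  /* Same idea as the kernel macro (minus its extra type checking). */
  #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

  int main(void)
  {
      int buf[32];

      /* Derive the loop bound from the array itself instead of hard-coding 32. */
      for (size_t i = 0; i < ARRAY_SIZE(buf); i++)
          buf[i] = (int)i;

      printf("buf has %zu entries, last value %d\n", ARRAY_SIZE(buf), buf[31]);
      return 0;
  }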
In the case of the Tx rings we need to only clear the Tx buffer_info when
we are resetting the rings. Ideally we do this when we configure the ring
to bring it back up instead of when we are taking it down in order to avoid
dirtying pages we don't need to.
In addition we don't need to clear the Tx descriptor ring since we will
fully repopulate it when we begin transmitting frames and next_to_watch can
be cleared to prevent the ring from being cleaned beyond that point instead
of needing to touch anything in the Tx descriptor ring.
Finally with these changes we can avoid having to reset the skb member of
the Tx buffer_info structure in the cleanup path since the skb will always
be associated with the first buffer which has next_to_watch set.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
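A simplified model of the scheme (structure and field names here are hypothetical, not the ixgbevf definitions): clearing next_to_watch is enough to stop the cleanup path, so a ring reset does not have to touch the descriptor memory or the rest of buffer_info.

  #include <stddef.h>

  struct tx_buffer {
      void *skb;            /* frame pointer, kept only on the first buffer */
      void *next_to_watch;  /* completion descriptor; NULL means "not armed" */
  };

  struct tx_ring {
      struct tx_buffer *buffer_info;
      unsigned int count;
  };

  /* On reset, disarming each buffer is sufficient: the descriptor ring and
   * the remaining buffer_info fields are repopulated when TX resumes. */
  static void tx_ring_reset(struct tx_ring *ring)
  {
      for (unsigned int i = 0; i < ring->count; i++)
          ring->buffer_info[i].next_to_watch = NULL;
  }

  /* The cleanup loop only touches buffers that are still armed, so it can
   * never walk past the point where the reset cleared next_to_watch. */
  static int tx_buffer_needs_cleanup(const struct tx_buffer *buf)
  {
      return buf->next_to_watch != NULL;
  }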
Pull networking fixes from David Miller:
1) The per-network-namespace loopback device, and thus its namespace,
can have its teardown deferred for a long time if a kernel created
TCP socket closes and the namespace is exiting meanwhile. The kernel
keeps trying to finish the close sequence until it times out (which
takes quite some time).
Fix this by forcing the socket closed in this situation, from Dan
Streetman.
2) Fix regression where we're trying to invoke the update_pmtu method
on route types (in this case metadata tunnel routes) that don't
implement the dst_ops method. Fix from Nicolas Dichtel.
3) Fix long standing memory corruption issues in r8169 driver by
performing the chip statistics DMA programming more correctly. From
Francois Romieu.
4) Handle local broadcast sends over VRF routes properly, from David
Ahern.
5) Don't refire the DCCP CCID2 timer endlessly, otherwise the socket
can never be released. From Alexey Kodanev.
6) Set poll flags properly in VSOCK protocol layer, from Stefan
Hajnoczi.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
VSOCK: set POLLOUT | POLLWRNORM for TCP_CLOSING
dccp: don't restart ccid2_hc_tx_rto_expire() if sk in closed state
net: vrf: Add support for sends to local broadcast address
r8169: fix memory corruption on retrieval of hardware statistics.
net: don't call update_pmtu unconditionally
net: tcp: close sock if net namespace is exiting
Merge tag 'drm-fixes-for-v4.15-rc10-2' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"A fairly urgent nouveau regression fix for broken irqs across
suspend/resume came in. This was broken before but a patch in 4.15 has
made it much more obviously broken and now s/r fails a lot more often.
The fix removes freeing the irq across s/r which never should have
been done anyways.
Also two vc4 fixes for a NULL deference and some misrendering /
flickering on screen"
* tag 'drm-fixes-for-v4.15-rc10-2' of git://people.freedesktop.org/~airlied/linux:
drm/nouveau: Move irq setup/teardown to pci ctor/dtor
drm/vc4: Fix NULL pointer dereference in vc4_save_hang_state()
drm/vc4: Flush the caches before the bin jobs, as well.
select(2) with wfds but no rfds must return when the socket is shut down
by the peer. This way userspace notices socket activity and gets -EPIPE
from the next write(2).
Currently select(2) does not return for virtio-vsock when a SEND+RCV
shutdown packet is received. This is because vsock_poll() only sets
POLLOUT | POLLWRNORM for TCP_CLOSE, not the TCP_CLOSING state that the
socket is in when the shutdown is received.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
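In outline (with local stand-ins for the socket states and poll bits, not the kernel definitions), the fix is to treat the closing state like the closed state when computing writability:

  /* Local stand-ins for the relevant socket states and poll bits. */
  enum sock_state { STATE_ESTABLISHED, STATE_CLOSING, STATE_CLOSE };

  #define POLLOUT_BIT    0x004
  #define POLLWRNORM_BIT 0x100

  static unsigned int poll_out_mask(enum sock_state state)
  {
      unsigned int mask = 0;

      /*
       * Previously only STATE_CLOSE set these bits; a socket parked in
       * STATE_CLOSING after a SEND+RCV shutdown never woke up writers,
       * so select(2) with only wfds blocked forever.
       */
      if (state == STATE_CLOSE || state == STATE_CLOSING)
          mask |= POLLOUT_BIT | POLLWRNORM_BIT;

      return mask;
  }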
The ccid2_hc_tx_rto_expire() timer callback always restarts the timer
and can run indefinitely (unless it is stopped externally), and after
commit 120e9dabaf55 ("dccp: defer ccid_hc_tx_delete() at dismantle time"),
which moved ccid_hc_tx_delete() (which also calls sk_stop_timer()) from
dccp_destroy_sock() to sk_destruct(), this started to happen quite often.
The pending timer prevents the socket from being released, and as a result
sk_destruct() won't be called.
Found with LTP/dccp_ipsec tests running on the bonding device,
which later couldn't be unloaded after the tests were completed:
unregister_netdevice: waiting for bond0 to become free. Usage count = 148
Fixes: 2a91aa396739 ("[DCCP] CCID2: Initial CCID2 (TCP-Like) implementation")
Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
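Conceptually the fix is an early return so the callback stops re-arming itself once the socket is closed; the sketch below models that control flow with stand-in types rather than the dccp structures.

  #include <stdbool.h>

  struct timer { bool armed; };

  enum { SOCK_OPEN, SOCK_CLOSED };

  struct sock_model {
      int state;
      struct timer rto_timer;
  };

  static void restart_timer(struct timer *t) { t->armed = true; }

  /* Timer callback: bail out instead of re-arming when the socket is closed,
   * so a pending timer no longer pins the socket and blocks its destruction. */
  static void rto_expire(struct sock_model *sk)
  {
      sk->rto_timer.armed = false;

      if (sk->state == SOCK_CLOSED)
          return;                 /* do not restart on a dead socket */

      /* ... retransmission handling would run here ... */

      restart_timer(&sk->rto_timer);
  }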
Rahul Lakkireddy says:
====================
cxgb4: fix dump collection when firmware crashed
Patch 1 resets the FW_OK flag if the firmware reports an error.
Patch 2 fixes an incorrect condition for using firmware LDST commands.
Patch 3 fixes the dump collection logic to use backdoor register
access to collect dumps when the firmware has crashed.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Fall back to backdoor register access to collect dumps if the firmware
has crashed. Fixes TID, SGE Queue Context, and MPS TCAM dump collection.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Only contact firmware if it's alive _AND_ if use_bd (use backdoor
access) is not set when issuing FW_LDST_CMD.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
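In outline the condition is a simple conjunction: issue the firmware LDST command only when the firmware is known to be alive and backdoor access has not been requested, otherwise read the register directly. The helper names below are illustrative, not cxgb4 symbols.

  #include <stdbool.h>
  #include <stdint.h>

  /* Illustrative stand-ins for the two register access paths. */
  static int fw_ldst_read(uint32_t addr, uint32_t *val)      { *val = 0; return 0; }
  static int backdoor_reg_read(uint32_t addr, uint32_t *val) { *val = 0; return 0; }

  static int collect_reg(bool fw_ok, bool use_bd, uint32_t addr, uint32_t *val)
  {
      /* Contact firmware only if it is alive AND backdoor access is not forced. */
      if (fw_ok && !use_bd)
          return fw_ldst_read(addr, val);

      /* Otherwise fall back to direct (backdoor) access, e.g. after a crash. */
      return backdoor_reg_read(addr, val);
  }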
If the firmware reports an error, reset the FW_OK flag.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Peng Li says:
====================
net: hns3: add support for ethtool_ops.{set|get}_coalesce for VF
This patch set adds ethtool_ops.{get|set}_coalesce to the VF and
fixes one related bug.
The HNS3 PF and VF drivers use the common enet layer; since the
ethtool_ops.{get|set}_coalesce for the PF has already been upstreamed,
we just need to add the ops to hns3vf_ethtool_ops.
Patch 1/2 fixes a related bug in the VF ethtool_ops.{set|get}_coalesce.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Just like the PF, if int_gl_idx of the VF is not set, the default
interrupt coalesce index of the VF is 0. But it should be GL1 for TX
queues and GL0 for RX queues.
This patch adds the int_gl_idx setup for the VF.
Fixes: 200ecda42598 ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support")
Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
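Reduced to its essence (names assumed for illustration, not the hns3 definitions), the change picks the GL register index from the ring type instead of leaving it at the default of 0:

  /* Assumed interrupt-coalescing GL indices, per the commit text:
   * GL1 for TX queues, GL0 for RX queues. */
  enum gl_index { GL_IDX_0 = 0, GL_IDX_1 = 1 };
  enum ring_type { RING_TYPE_TX, RING_TYPE_RX };

  struct ring_group {
      enum ring_type type;
      enum gl_index int_gl_idx;
  };

  /* Without this setup the index silently stayed 0 for both ring types. */
  static void set_gl_index(struct ring_group *grp)
  {
      grp->int_gl_idx = (grp->type == RING_TYPE_TX) ? GL_IDX_1 : GL_IDX_0;
  }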
This patch adds ethtool_ops.get/set_coalesce support to the VF.
Since the PF and VF share the same get/set_coalesce interface,
we only need to set hns3_get/set_coalesce in the ethtool_ops
when adding get/set_coalesce support for the VF.
Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com>
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'linux-can-next-for-4.16-20180126' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
Marc Kleine-Budde says:
====================
pull-request: can-next 2018-01-26
this is a pull request for net-next/master consisting of 3 patches.
The first two patches target the CAN documentation. The first is by me
and fixes the pointer to the location of the fsl,mpc5200-mscan node in
the mpc5200 documentation. The second patch, by Robert Schwebel, converts
the plain ASCII documentation to reStructuredText.
The third patch, by Fabrizio Castro, adds r8a774[35] support to the
rcar_can dt-bindings documentation.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Based on commit ec718254cbfe
("ixgbe: Improve performance and reduce size of ixgbe_tx_map")
This change is meant to both improve the performance and reduce the size of
ixgbevf_tx_map().
Expand the work done in the main loop by pushing first into tx_buffer.
This allows us to pull in the dma_mapping_error check, the tx_buffer value
assignment, and the initial DMA value assignment to the Tx descriptor.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Based on commit d2bead576e67
("igb: Clear Rx buffer_info in configure instead of clean")
This change makes it so that instead of going through the entire ring on Rx
cleanup we only go through the region that was designated to be cleaned up
and stop when we reach the region where new allocations should start.
In addition we can avoid having to perform a memset on the Rx buffer_info
structures until we are about to start using the ring again.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We already had placeholders for failed page and buffer allocations.
Added alloc_rx_page and made sure the stats are properly updated and
exposed in ethtool.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Based on commit bd4171a5d4c2
("igb: update code to better handle incrementing page count")
Update the driver code so that we do bulk updates of the page reference
count instead of just incrementing it by one reference at a time. The
advantage to doing this is that we cut down on atomic operations and
this in turn should give us a slight improvement in cycles per packet.
In addition if we eventually move this over to using build_skb the gains
will be more noticeable.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
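The page-reuse trick can be modeled outside the kernel as follows (illustrative names, userspace atomics standing in for the page refcount): charge a large batch of references once and track how many the driver still owns in a local bias, so handing a buffer to the stack costs a plain decrement instead of an atomic operation per packet.

  #include <limits.h>
  #include <stdatomic.h>
  #include <stdbool.h>

  struct rx_page {
      atomic_int refcount;          /* shared reference count */
      unsigned short pagecnt_bias;  /* references the driver still owns */
  };

  /* Take a big batch of references up front instead of one per reuse. */
  static void rx_page_init(struct rx_page *p)
  {
      atomic_init(&p->refcount, 1);
      atomic_fetch_add(&p->refcount, USHRT_MAX - 1);
      p->pagecnt_bias = USHRT_MAX;
  }

  /* Handing a buffer to the stack now only consumes local bias. */
  static void rx_page_give_to_stack(struct rx_page *p)
  {
      p->pagecnt_bias--;
  }

  /* The page may be reused while nobody outside the driver holds it; when
   * the bias runs low, recharge it with a single atomic bulk update. */
  static bool rx_page_can_reuse(struct rx_page *p)
  {
      if (atomic_load(&p->refcount) != p->pagecnt_bias)
          return false;

      if (p->pagecnt_bias == 1) {
          atomic_fetch_add(&p->refcount, USHRT_MAX - 1);
          p->pagecnt_bias = USHRT_MAX;
      }
      return true;
  }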
Based on commit 5be5955425c2
("igb: update driver to make use of DMA_ATTR_SKIP_CPU_SYNC")
and
commit 7bd175928280 ("igb: Add support for DMA_ATTR_WEAK_ORDERING")
Convert the calls to dma_map/unmap_page() to the attributes version
and add DMA_ATTR_SKIP_CPU_SYNC/WEAK_ORDERING which should help
improve performance on some platforms.
Move sync_for_cpu call before we perform a prefetch to avoid
invalidating the first 128 bytes of the packet on architectures where
that call may invalidate the cache.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Based on:
commit 7ec0116c9131 ("igb: Use length to determine if descriptor is done")
This change makes it so that we use the length of the packet instead of the
DD status bit to determine if a new descriptor is ready to be processed.
The obvious advantage is that it cuts down on reads as we don't really even
need the DD bit if going from a 0 to a non-zero value on size is enough to
inform us that the packet has been completed.
In addition we only reset the Rx descriptor length for descriptor zero when
resetting a ring instead of having to do a memset with 0 over the entire
ring. By doing this we can save some time on initialization.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
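A minimal model of the descriptor-done test (the layout below is illustrative, not the real writeback descriptor): the length field starts at zero and only becomes non-zero once the hardware has written the descriptor back, so a single load, followed in real driver code by a read barrier, is all the check needs.

  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative writeback descriptor; hardware fills it in on completion. */
  struct rx_desc_wb {
      uint64_t addr;
      uint16_t length;   /* stays 0 until the descriptor is written back */
      uint16_t vlan;
      uint32_t status;   /* the DD bit lives here but no longer needs reading */
  };

  /* Returns the frame size, or 0 if the descriptor is not done yet.
   * In the driver a dma_rmb()-style barrier would follow the length load so
   * the rest of the descriptor is not read before it. */
  static size_t rx_desc_size_if_done(const volatile struct rx_desc_wb *desc)
  {
      return desc->length;
  }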
Based on commit 64f2525ca4e7 ("igb: Only DMA sync frame length")
On some architectures synching a buffer for DMA may be expensive.
Instead of the entire 2K receive buffer only synchronize the length of
the frame, which will typically be the MTU or smaller.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Introduce ixgbevf_can_reuse_page() similar to the change in ixgbe from
commit af43da0dba0b
("ixgbe: Add function for checking to see if we can reuse page")
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Ursula Braun says:
====================
net/smc: fixes 2018-01-26
here are some more smc patches. The first 4 patches take care of
different aspects of smc socket closing, and the 5th patch improves
coding style.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Return statements in functions returning bool should use
true/false instead of 1/0.
This issue was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
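The rule being enforced is simply that functions declared bool should return the stdbool constants; a before/after illustration with a made-up helper:

  #include <stdbool.h>

  /* Before: legal C, but mixes integer literals into a bool function. */
  static bool link_is_up_old(int carrier)
  {
      if (carrier)
          return 1;
      return 0;
  }

  /* After: return true/false, which is what the Coccinelle check expects. */
  static bool link_is_up(int carrier)
  {
      if (carrier)
          return true;
      return false;
  }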
Closing a listen socket may hit the warning
WARN_ON(sock_owned_by_user(sk)) of tcp_close(), if the wake up of
the smc_tcp_listen_worker has not yet finished.
This patch introduces smc_close_wait_listen_clcsock() making sure
the listening internal clcsock has been closed in smc_tcp_listen_work(),
before the listening external SMC socket finishes closing.
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Proper socket refcounting makes the sock_put worker obsolete.
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Increase the socket refcount during poll wait.
Take the socket lock before checking socket state.
For a listening socket return a mask independent of the SMC_ACTIVE state
and cover errors or closed state as well.
Get rid of the accept_q loop in smc_accept_poll().
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
RoCE device changes cause an IB event, processed in the global event
handler for the RoCE device. Problems for a certain Queue Pair cause a QP
event, processed in the QP event handler for this QP.
Among those events are port errors and other fatal device errors. All
link groups using such a port or device must be terminated in those cases.
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Steffen Klassert says:
====================
pull request (net-next): ipsec-next 2018-01-26
One last patch for this development cycle:
1) Add ESN support for IPSec HW offload.
From Yossef Efraim.
Please pull or let me know if there are problems.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes: 1280c0f8aafc ("sfc: support second + quarter ns time format for receive datapath")
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern says:
====================
net/ipv6: Add support for ONLINK flag
Add support for RTNH_F_ONLINK with ipv6 routes.
The first patch moves the existing gateway validation into a helper. The
onlink flag requires a different set of checks, and the existing validation
already makes ip6_route_info_create long enough.
The second patch makes the table id and lookup flags arguments to
ip6_nh_lookup_table. The onlink check needs to verify the gateway without
the RT6_LOOKUP_F_IFACE flag, and PBR with VRF means the table id can
vary between the table the route is inserted into and the VRF the egress
device is enslaved to.
The third patch adds support for RTNH_F_ONLINK.
I have a set of test cases in a format based on the framework Ido and
Jiri are working on. Once that goes in I will adapt the script and
submit.
v2
- removed table id check. Too constraining for PBR with VRF use cases
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Similar to IPv4 allow routes to be added with the RTNH_F_ONLINK flag.
The onlink option requires a gateway and a nexthop device. Any unicast
gateway is allowed (including IPv4-mapped addresses and unresolved
ones) as long as the gateway is not a local address, and if it resolves
it must match the given device.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
onlink verification needs to do a lookup in a potentially different
table than the one in fib6_config, and without the RT6_LOOKUP_F_IFACE
flag. Change ip6_nh_lookup_table to take the table id and flags as input
arguments. Both verifications want to ignore link state, so adding that
flag can stay in the lookup helper.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move the existing code to validate the nexthop into a helper. A follow-on
patch adds support for nexthops marked with onlink, and this helper keeps
the complexity of ip6_route_info_create in check.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Document "renesas,can-r8a7743" and "renesas,can-r8a7745" compatible
strings. Since the fallback compatible string ("renesas,rcar-gen2-can")
activates the right code in the driver, no driver change is needed.
Signed-off-by: Fabrizio Castro <fabrizio.castro@bp.renesas.com>
Reviewed-by: Biju Das <biju.das@bp.renesas.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The kernel documentation is now reStructuredText. Convert the SocketCAN
documentation and include it in the top-level kernel documentation.
This patch doesn't make any content changes.
All references to can.txt in the code are converted to can.rst.
Signed-off-by: Robert Schwebel <r.schwebel@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
This patch fixes the pointer to the location of the fsl,mpc5200-mscan
device tree node binding documentation.
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Apparently the hardware does not perform CCMP PN validation, so
we need to take care of this in the driver. This is important for
protecting against replay attacks.
Since validation of fragmented frames is more complex, the CCMP header
for those is preserved. To keep the counter in sync, the first fragment
is verified by both mt76 and mac80211, and all other fragments only by
mac80211.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
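The core of CCMP replay protection is that the 48-bit packet number carried in each frame must be strictly greater than the last accepted one for that key (and TID); anything else is dropped as a replay. A minimal sketch of that check, independent of the mt76 code:

  #include <stdbool.h>
  #include <stdint.h>

  /* Last accepted 48-bit CCMP packet number for one key/TID. */
  struct ccmp_rx_state {
      uint64_t last_pn;   /* only the low 48 bits are meaningful */
  };

  /* Accept the frame only if its PN strictly increases; otherwise drop it. */
  static bool ccmp_pn_ok(struct ccmp_rx_state *st, uint64_t pn)
  {
      pn &= 0xffffffffffffULL;    /* the PN is 48 bits on the air */

      if (pn <= st->last_pn)
          return false;           /* replayed or reordered frame */

      st->last_pn = pn;
      return true;
  }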
Add a separate function for processing frames after A-MPDU reordering
to reduce code duplication.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
This is required for performing CCMP PN validation in software
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Avoids the rhashtable lookup based on the MAC address inside mac80211
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Preparation for passing in more internal rx data via skb->cb
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Like beacons, probe responses need a hardware-generated TSF value. Set
the flag that causes the hardware to generate it.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Sending frames to mac80211 needs time, which could allow for more rx
packets to end up in the DMA ring. Retry polling until there are no more
frames left. Improves rx latency under load.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Message sends to the local broadcast address (255.255.255.255) require
uc_index or sk_bound_dev_if to be set to an egress device. However,
responses are only received if the socket is bound to the device. This
is overly constraining for processes running in an L3 domain. This
patch allows a socket bound to the VRF device to send to the local
broadcast address by using IP_UNICAST_IF to set the egress interface
with packet receipt handled by the VRF binding.
Similar to IP_MULTICAST_IF, relax the constraint on setting
IP_UNICAST_IF if a socket is bound to an L3 master device. In this
case allow uc_index to be set to an enslaved device if sk_bound_dev_if
is an L3 master device and is the master device for that ifindex.
In udp and raw sendmsg, allow uc_index to override the oif if the
uc_index master device is the oif (i.e., the oif is an L3 master and the
index is an L3 slave).
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
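From userspace, the intended usage looks roughly like the following: a datagram socket selects its egress device with IP_UNICAST_IF (which takes the interface index in network byte order) instead of binding to the device, and can then send to 255.255.255.255. The interface name here is a placeholder for a VRF slave or the VRF device itself.

  #include <arpa/inet.h>
  #include <net/if.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      int on = 1;
      /* Placeholder device; the index must be passed in network byte order. */
      int ifindex = htonl(if_nametoindex("eth0"));
      struct sockaddr_in dst = {
          .sin_family      = AF_INET,
          .sin_port        = htons(9),                 /* discard port */
          .sin_addr.s_addr = htonl(INADDR_BROADCAST),  /* 255.255.255.255 */
      };

      setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

      /* Pick the egress interface without binding the socket to the device. */
      if (setsockopt(fd, IPPROTO_IP, IP_UNICAST_IF, &ifindex, sizeof(ifindex)) < 0)
          perror("IP_UNICAST_IF");

      if (sendto(fd, "ping", 4, 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
          perror("sendto");

      close(fd);
      return 0;
  }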