This change is meant to improve performance by splitting the Tx and Rx
rings into 3 sections. The first is a primarily read-only section
containing basics such as the ring indexes and pointers to the dev and
netdev structures. The second section contains the stats and the
next_to_use and next_to_clean values. The third section primarily contains
values that are not used in the hot path and can simply be placed at the
end of the ring structure.
The adapter structure has several sections that are read in the hot path.
In order to improve performance there, I am combining the frequently read
hot-path items into a single cache line.
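As a rough illustration of the intended layout (field and struct names here
are placeholders, not the actual igb definitions), the read-mostly fields
come first, the frequently written hot-path state gets its own cache line,
and the rarely used fields go last:

  #include <linux/cache.h>
  #include <linux/netdevice.h>

  struct ring_sketch {
          /* section 1: read-mostly in the hot path */
          void *desc;                     /* descriptor ring memory */
          struct net_device *netdev;
          struct device *dev;
          u16 count;                      /* number of descriptors */
          u8 queue_index;

          /* section 2: frequently written hot-path state, own cache line */
          u16 next_to_use ____cacheline_aligned_in_smp;
          u16 next_to_clean;
          u64 packets;
          u64 bytes;

          /* section 3: rarely used values kept out of the hot path */
          u32 flags;
          dma_addr_t dma;
          unsigned int size;
  };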
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to streamline Rx buffer allocation and cleanup. This
is accomplished by reducing the number of writes: the Rx descriptor ring is
only written by software during allocation and is only read during cleanup.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change removes support for single buffer mode from igb and makes the
driver always function in packet split mode. The advantage to doing this is
that we can reduce total memory allocation overhead significantly, as we
will only need to allocate one 1K slab per packet and then make use of a
reusable half page instead of allocating a 2K slab per packet.
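A minimal sketch of the half-page reuse idea (illustrative names, not the
driver's exact code): each receive buffer tracks a page and an offset that
flips between the two halves, so a page services two packets before it is
released.

  #include <linux/mm.h>

  struct rx_buffer_sketch {
          struct page *page;
          unsigned int page_offset;       /* 0 or PAGE_SIZE / 2 */
  };

  static void reuse_half_page(struct rx_buffer_sketch *bi)
  {
          /* flip to the other half and take a reference; the page is
           * only freed once both halves are done with
           */
          bi->page_offset ^= PAGE_SIZE / 2;
          get_page(bi->page);
  }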
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch modifies the max_frame_size in order to account for an optional
VLAN tag. In order to support this we must also increase the
MAX_STD_JUMBO_FRAME_SIZE to account for the 4 extra bytes.
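The sizing arithmetic is simply the MTU plus the L2 header, the optional
VLAN tag and the FCS; a hedged sketch using the standard constants (the
helper name is illustrative):

  #include <linux/if_ether.h>     /* ETH_HLEN, ETH_FCS_LEN */
  #include <linux/if_vlan.h>      /* VLAN_HLEN */

  static unsigned int max_frame_for_mtu(unsigned int mtu)
  {
          /* worst case on the wire: L2 header + VLAN tag + payload + FCS */
          return mtu + ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN;
  }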
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change cleans up the RXDCTL and TXDCTL configurations and optimizes RX
performance by allowing write-backs on all hardware other than the 82576.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
tcp_md5sig_pool is currently an 'array' (a percpu object) of pointers to
struct tcp_md5sig_pool. Only the pointers are NUMA aware, but the objects
themselves are all allocated on a single node.
Remove this extra indirection to get proper percpu memory (NUMA aware)
and make the code simpler.
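After the change the pool entries themselves are percpu, roughly along
these lines (a simplified sketch, not the exact tcp code; the function name
is illustrative):

  #include <linux/percpu.h>
  #include <net/tcp.h>

  static struct tcp_md5sig_pool __percpu *md5_pool;

  static int sketch_alloc_md5_pool(void)
  {
          /* one struct per CPU, each allocated on that CPU's node */
          md5_pool = alloc_percpu(struct tcp_md5sig_pool);
          return md5_pool ? 0 : -ENOMEM;
  }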
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change details:
- In a continuous sequence of ifconfig up/down operations, there is a small
window of race between bnad_set_rx_mode() and bnad_cleanup_rx() while the
former tries to access rx_info->rx and the latter sets it to NULL. This race
could lead to bna_rx_mode_set() being called with a NULL (rx_info->rx)
pointer and a crash.
- Hold bnad->bna_lock while setting / unsetting rx_info->rx in bnad_setup_rx()
and bnad_cleanup_rx(), thereby eliminating the race described above (see the
sketch below).
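A simplified sketch of the locking pattern (types as declared in bnad.h;
the function names here are illustrative, not the actual driver functions):

  static void sketch_setup_rx(struct bnad *bnad,
                              struct bnad_rx_info *rx_info, struct bna_rx *rx)
  {
          unsigned long flags;

          spin_lock_irqsave(&bnad->bna_lock, flags);
          rx_info->rx = rx;               /* publish only under the lock */
          spin_unlock_irqrestore(&bnad->bna_lock, flags);
  }

  static void sketch_cleanup_rx(struct bnad *bnad,
                                struct bnad_rx_info *rx_info)
  {
          unsigned long flags;

          spin_lock_irqsave(&bnad->bna_lock, flags);
          rx_info->rx = NULL;             /* readers see either rx or NULL */
          spin_unlock_irqrestore(&bnad->bna_lock, flags);
  }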
Signed-off-by: Gurunatha Karaje <gkaraje@brocade.com>
Signed-off-by: Rasesh Mody <rmody@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the Rx queue size is changed, queues are torn down and set up with the new queue
size. During this operation, clear promiscuous mode and restore the original
VLAN filter.
Signed-off-by: Gurunatha Karaje <gkaraje@brocade.com>
Signed-off-by: Rasesh Mody <rmody@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove a BUG_ON() as it is not required.
Change the unconditional write that releases a semaphore into a read of the
sem first and then a write. This eliminates the possibility of the sem getting
locked while trying to release it in case the previous sem_get operation failed.
Signed-off-by: Gurunatha Karaje <gkaraje@brocade.com>
Signed-off-by: Rasesh Mody <rmody@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
netif_tx_start_all_queues() is already called in ixgbe_up_complete, no need
to do it twice.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Remove the duplicate increment of hwstats->ruc.
Introduce separate loops for 8 and 16 register reads.
Consolidate mac checks under one case.
Make sure registers are cleared on read.
Reported-by: Jonathan Lynch <jonathan.lynch@thenowfactory.com>
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
CC: Jonathan Lynch <jonathan.lynch@thenowfactory.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch improves the memory utilization with RSC when in one-buffer
mode. This is accomplished by making the default buffer sizes match up
with the standard memory allocation sizes minus 1K for shared info and
padding overhead. By doing this, CPU utilization when doing large receives
can be reduced by as much as 8%.
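The idea, sketched with illustrative macro names: size each buffer so that
the buffer plus roughly 1K of shared-info and padding overhead fills a
standard power-of-two allocation exactly.

  /* illustrative buffer lengths, each leaving ~1K of headroom so the
   * allocation lands exactly on a standard power-of-two slab size
   */
  #define RX_BUF_3K       (4 * 1024 - 1024)       /* fits a 4K allocation  */
  #define RX_BUF_7K       (8 * 1024 - 1024)       /* fits an 8K allocation */
  #define RX_BUF_15K      (16 * 1024 - 1024)      /* fits a 16K allocation */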
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The adapter structure was removed from the call so it can be dropped from
the ixgbe_fso documentation.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
One of the 82598 phys was not being correctly identified as being SFP.
This change corrects that.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change adds a small bit of missing code for enabling the overheat sensor.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe_up and ixgbe_up_complete will always return 0. Since this doesn't
provide any useful information we might as well just make them both void
and save ourselves from having to return an unused value.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change fixes an issue in which the incorrect amount of headroom was
being reserved for flow director filters.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change fixes a minor redundancy in that tx_sample_rate was set twice.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Private rx_csum flags are now a duplicate of netdev->features & NETIF_F_RXCSUM.
Removing this needs deeper surgery.
Things noticed:
- ixgb has RX csum disabled by default
- HW VLAN acceleration probably can be toggled, but it's left as is
- the resets on RX csum offload change can probably be avoided
- there is A LOT of copy-and-pasted code here
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This reverts commit 0856a30409.
As requested by Eric Dumazet, it has various ref-counting
problems and has introduced regressions. Eric will add
a more suitable version of this performance fix.
Signed-off-by: David S. Miller <davem@davemloft.net>
A user-space process must use ETHTOOL_GRXCLSRLCNT to find the number
of classification rules, then allocate a buffer of the right size,
then use ETHTOOL_GRXCLSRLALL to fill the buffer. If some other
process inserts or deletes a rule between those two operations,
the user buffer might turn out to be the wrong size.
If it's too small, the return value will be -EMSGSIZE. But if it's
too large, there is no indication of this. Fix this by updating
the rule_cnt field on return.
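For reference, a hedged user-space sketch of the two-step sequence described
above (error handling trimmed; the function name and interface handling are
illustrative):

  #include <string.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <net/if.h>
  #include <linux/ethtool.h>
  #include <linux/sockios.h>

  /* fd is any socket, e.g. socket(AF_INET, SOCK_DGRAM, 0) */
  static int sketch_get_rule_locs(int fd, const char *ifname)
  {
          struct ethtool_rxnfc cnt = { .cmd = ETHTOOL_GRXCLSRLCNT };
          struct ethtool_rxnfc *all;
          struct ifreq ifr;
          int ret;

          memset(&ifr, 0, sizeof(ifr));
          strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

          /* step 1: ask how many rules there are */
          ifr.ifr_data = (void *)&cnt;
          if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                  return -1;

          /* step 2: size the buffer from rule_cnt and fetch the locations */
          all = calloc(1, sizeof(*all) +
                          cnt.rule_cnt * sizeof(all->rule_locs[0]));
          if (!all)
                  return -1;
          all->cmd = ETHTOOL_GRXCLSRLALL;
          all->rule_cnt = cnt.rule_cnt;

          ifr.ifr_data = (void *)all;
          ret = ioctl(fd, SIOCETHTOOL, &ifr);
          /* with this fix, all->rule_cnt now reflects the rules returned,
           * even if fewer exist than were counted in step 1
           */
          free(all);
          return ret;
  }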
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Correct the description of ethtool_rxnfc::rule_locs; it is an array
of currently used locations, not all possible valid locations.
Add a note that drivers must not use ethtool_rxnfc::rule_locs.
The rule_locs argument to ethtool_ops::get_rxnfc is either NULL or a
pointer to an array of u32, so change the parameter type accordingly.
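In the ethtool_ops hook this amounts to a parameter type change along these
lines (a sketch based on the description above):

  /* before: the location array was passed as an untyped pointer */
  int (*get_rxnfc)(struct net_device *dev, struct ethtool_rxnfc *info,
                   void *rule_locs);

  /* after: explicitly an array of u32 locations (may be NULL) */
  int (*get_rxnfc)(struct net_device *dev, struct ethtool_rxnfc *info,
                   u32 *rule_locs);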
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The location of an RX flow classification rule is needed to identify
it for retrieval, replacement or deletion. However it also defines
the priority of the rule in the case that a flow is matched by
multiple rules. This is what I intended to imply by referring to the
use of a TCAM, commonly used to implement that behaviour.
However there are other ways this can be done, and it is better to
specify this explicitly. Further, I want to add the option for
automatic selection of rule locations.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Refer consistently to 'classification rules' or just 'rules' rather
than 'filter specifications' or 'filter rules'.
Refer consistently to rule 'locations' and not 'indices'.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
All tables of function pointers should be const.
The pre-existing code has lots of needless indirection...
Inspired by a similar change in PAX.
Compile tested only.
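Generically, the pattern is just declaring the ops table const so the
pointers end up in read-only data; a minimal sketch with made-up driver
names:

  #include <linux/ethtool.h>
  #include <linux/netdevice.h>

  static void example_get_drvinfo(struct net_device *dev,
                                  struct ethtool_drvinfo *info)
  {
          strlcpy(info->driver, "example", sizeof(info->driver));
  }

  /* const: the table of function pointers lands in .rodata and
   * cannot be overwritten at run time
   */
  static const struct ethtool_ops example_ethtool_ops = {
          .get_drvinfo    = example_get_drvinfo,
          .get_link       = ethtool_op_get_link,
  };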
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To prevent malicious usage, all tables of pointers must be const.
Compile tested only.
Gleaned from PAX.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Function tables need to be const to prevent malicious use.
This is compile tested only.
Gleaned from PAX.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is compile tested only.
Suggested by dumpster diving in PAX.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch improves the logic determining when to send ICMPv6 Router
Solicitations, so that they are 1) always sent when the kernel is
accepting Router Advertisements, and 2) never sent when the kernel is
not accepting RAs. In other words, the operational setting of the
"accept_ra" sysctl is used.
The change also makes the special "Hybrid Router" forwarding mode
("forwarding" sysctl set to 2) operate exactly the same as the standard
Router mode (forwarding=1). The only difference between the two was
that RSes were being sent only in the Hybrid Router mode. The sysctl
documentation describing the special Hybrid Router mode has therefore
been removed.
Rationale for the change:
Currently, the value of the forwarding sysctl is the only thing determining
whether or not to send RSes. If it has the value 0 or 2, they are sent,
otherwise they are not. This leads to inconsistent behaviour in the
following cases:
* accept_ra=0, forwarding=0
* accept_ra=0, forwarding=2
* accept_ra=1, forwarding=2
* accept_ra=2, forwarding=1
In the first three cases, the kernel will send RSes, even though it will
not accept any RAs received in reply. In the last case, it will not send
any RSes, even though it will accept and process any RAs received. (Most
routers will send unsolicited RAs periodically, so suppressing RSes in
the last case will merely delay auto-configuration, not prevent it.)
Also, it is my opinion that having the forwarding sysctl control RS
sending behaviour (completely independent of whether RAs are being
accepted or not) is simply not what most users would intuitively expect
to be the case.
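In code terms the decision boils down to a predicate like the following
(a sketch, not necessarily the exact helper introduced by the patch): send
RSes if and only if this returns true.

  #include <net/if_inet6.h>

  /* RAs are accepted when forwarding is off and accept_ra is non-zero,
   * or when forwarding is on and accept_ra is set to the special value 2
   */
  static inline bool sketch_accept_ra(const struct inet6_dev *idev)
  {
          return idev->cnf.forwarding ? idev->cnf.accept_ra == 2
                                      : idev->cnf.accept_ra;
  }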
Signed-off-by: Tore Anderson <tore@fud.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a CAN Gateway/Router to route (and modify) CAN frames.
It is based on the PF_CAN core infrastructure for msg filtering and msg
sending and can optionally modify routed CAN frames on the fly.
CAN frames can *only* be routed between CAN network interfaces (one hop).
They can be modified with AND/OR/XOR/SET operations as configured by the
netlink configuration interface known e.g. from iptables. From the netlink
view this can-gw implements RTM_{NEW|DEL|GET}ROUTE for PF_CAN.
The CAN-specific userspace tool to manage CAN routing entries can be found in
the CAN utils http://svn.berlios.de/wsvn/socketcan/trunk/can-utils/cangw.c
at the SocketCAN SVN on BerliOS.
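To illustrate just the modification step (not the actual can-gw code; the
function name is made up), the configured masks are applied to a routed
frame roughly like this, with SET simply overwriting the chosen bytes
instead:

  #include <linux/can.h>

  static void sketch_modify_frame(struct can_frame *cf, const u8 *and_mask,
                                  const u8 *or_mask, const u8 *xor_mask)
  {
          int i;

          /* apply the AND/OR/XOR modifications to the payload bytes */
          for (i = 0; i < cf->can_dlc; i++) {
                  cf->data[i] &= and_mask[i];
                  cf->data[i] |= or_mask[i];
                  cf->data[i] ^= xor_mask[i];
          }
  }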
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
An earlier developer misunderstood the meaning of the 'irq' fields and
the driver did not support the standard fields. To avoid invalidating
existing user documentation, we report and accept changes through
either the standard or 'irq' fields. If both are changed at the same
time, we prefer the standard field.
Also explain why we don't currently use the 'max_frames' fields.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a range check, and move the check that RX and TX are consistent
from efx_ethtool_set_coalesce().
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The reported TX IRQ moderation is generated in a completely crazy way.
Make it simple and correct.
When channels are shared between RX and TX, TX IRQ moderation must be
the same as RX IRQ moderation, but must be specified as 0! Allow it
to be either specified as the same, or left at its previous value,
in which case it will be quietly overridden.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It was possible to inadvertently add additional interrupt causes to the
MSI-X other interrupt. This occurred when things such as RX buffer overrun
events were being triggered at the same time as an event such as a Flow
Director table reinit request. In order to avoid this we should be
explicitly programming only the interrupts that we want enabled. In
addition I am renaming the ixgbe_msix_lsc function and interrupt to drop
any implied meaning of this being a link status only interrupt.
Unfortunately the patch is a bit ugly due to the fact that ixgbe_irq_enable
needed to be moved up before ixgbe_msix_other in order to have things
defined in the correct order.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to cleanup some of the code related to SR-IOV and the
interrupt registers. Specifically I am moving the EITRSEL configuration
into the MSI-X configuration section instead of the enablement path. Also I am
fixing the VF shutdown path since it had operations in the incorrect order.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The reset paths are overly complicated and are either missing steps or
contain extra unnecessary steps such as reading MAC address twice. This
change is meant to help clean up the reset paths and get things functioning
correctly.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change updates the TXDCTL configuration. The main goal is to be much
more explicit about the configuration and avoid a possible fake TX hang
when the interrupt throttle rate is set to 0.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch is a minor whitespace cleanup to compress the device ID
declaration and board type declaration onto the same line. It seems to
make sense since all of the combinations of the two are less than 80
characters and it makes the overall layout a bit more readable.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch drops a set of unnecessary dereferences to the hardware structure
since we already have a local copy of the hardware pointer.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch makes it so that the map_rings_to_vectors call will work with
all interrupt types. The advantage to this is that there will now be a
predictable mapping for all given interrupt types.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change switches us over to using the ring->dev pointer instead of
having to use the adapter->pdev->dev reference. The advantage to this is
that it is a much shorter route to get to the final needed value.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The allocation and freeing of the IRQ affinity hint needs some updates
since there are a number of spots where we run into possible issues with
the hint not being correctly updated.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change consolidates all of the MSI-X interrupt and polling routines
into two single functions, one for the interrupt handler and one for the
polling code. The main advantage to doing this is that the compiler can
optimize the routines into single monolithic functions, which should allow
each of them to occupy a single block of memory and as such avoid jumping
around.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change makes it so that the default Tx work limit is 256 buffers, or
1/2 of an entire ring instead of a full ring size, so that it is much more
likely that we will actually be able to reach the work limit value.
Previously, with the value set to an entire ring, it would not have been
possible for us to trigger an event because the Tx work is stopped at the
point where we cannot place one more buffer on the ring and is not restarted
until cleanup is complete.
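Conceptually the clean loop just stops once the per-poll limit is reached,
so a limit of half the ring is actually attainable; a hedged sketch with
placeholder types and helpers (none of these names are from the driver):

  #include <linux/types.h>

  #define SKETCH_TX_WORK_LIMIT    256     /* half of a default-sized ring */

  struct sketch_ring;                                     /* placeholder ring type */
  bool sketch_tx_desc_done(struct sketch_ring *ring);     /* placeholder helpers   */
  void sketch_free_tx_buffer(struct sketch_ring *ring);

  static bool sketch_clean_tx_irq(struct sketch_ring *ring)
  {
          unsigned int cleaned = 0;

          while (cleaned < SKETCH_TX_WORK_LIMIT && sketch_tx_desc_done(ring)) {
                  sketch_free_tx_buffer(ring);    /* unmap and free one buffer */
                  cleaned++;
          }

          /* hitting the limit signals more Tx work remains for the next poll */
          return cleaned == SKETCH_TX_WORK_LIMIT;
  }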
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>