path: root/drivers/net/ethernet/intel/ice/ice_base.h
2025-09-19  ice: add E830 Earliest TxTime First Offload support  (Paul Greenwalt)

E830 supports Earliest TxTime First (ETF) hardware offload, which is
configured via the ETF Qdisc on a per-queue basis (see tc-etf(8)). ETF
introduces a new Tx flow mechanism that utilizes a timestamp ring
(tstamp_ring) alongside the standard Tx ring. The timestamp ring is
used to indicate when hardware will transmit a packet.

Tx Time is supported on the first 2048 Tx queues of the device, and the
NVM image limits the device to a maximum of 2048 Tx queues. The
allocation and initialization of the timestamp ring occur when the
feature is enabled on a specific Tx queue via tc-etf. The requested
Tx Time queue index cannot be greater than the number of Tx queues
(vsi->num_txq).

To support ETF, the following flags and bitmap are introduced:

- ICE_F_TXTIME: Device feature flag set for E830 NICs, indicating ETF
  support.
- txtime_txqs: PF-level bitmap set when ETF is enabled and cleared when
  it is disabled for a specific Tx queue. It is used by
  ice_is_txtime_ena() to check whether ETF is allocated and configured
  on any Tx queue, which is checked during Tx ring allocation.
- ICE_TX_FLAGS_TXTIME: Per-Tx-ring flag set when ETF is allocated and
  configured for a specific Tx queue. It determines ETF status during
  packet transmission and is checked by ice_is_txtime_ena() to verify
  whether ETF is enabled on any Tx queue.

Due to a hardware issue that can result in a malicious driver detection
event, additional timestamp descriptors are required when wrapping
around the timestamp ring. Up to 64 additional timestamp descriptors
are reserved, reducing the available Tx descriptors. To accommodate
this, ICE_MAX_NUM_DESC_BY_MAC is introduced, defining:

- E830: maximum Tx descriptor count of 8096 (8K - 32 - 64 for timestamp
  fetch descriptors).
- E810 and E82X: maximum Tx descriptor count of 8160 (8K - 32).

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Co-developed-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
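A rough user-space model of the per-MAC descriptor limit and the
txtime_txqs bitmap check described above; all names, types, and the
64-bit-word bitmap layout are illustrative assumptions, not the
driver's actual definitions:

    #include <stdbool.h>
    #include <stdint.h>

    enum mac_type { MAC_E810, MAC_E82X, MAC_E830 };

    /* Model of ICE_MAX_NUM_DESC_BY_MAC: E830 reserves 64 extra
     * descriptors for timestamp fetches when the tstamp ring wraps. */
    static uint16_t max_tx_desc(enum mac_type mac)
    {
        return mac == MAC_E830 ? 8096 : 8160;
    }

    /* Model of the txtime_txqs check: ETF counts as enabled if any of
     * the first 2048 Tx queues has its bit set in the PF-level bitmap. */
    #define TXTIME_MAX_QUEUES 2048

    static bool txtime_enabled_anywhere(const uint64_t *bitmap)
    {
        for (int i = 0; i < TXTIME_MAX_QUEUES / 64; i++)
            if (bitmap[i])
                return true;
        return false;
    }

    int main(void)
    {
        uint64_t map[TXTIME_MAX_QUEUES / 64] = { 0 };

        map[0] |= 1ULL << 5;    /* pretend ETF was configured on queue 5 */
        return (txtime_enabled_anywhere(map) &&
                max_tx_desc(MAC_E830) == 8096) ? 0 : 1;
    }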
2025-09-19  ice: move ice_qp_[ena|dis] for reuse  (Paul Greenwalt)

Move ice_qp_[ena|dis] and related helper functions to ice_base.c to
allow reuse of these functions, which are currently used only by
ice_xsk.c.

Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-02-02  ice: make ice_vsi_cfg_txq() static  (Maciej Fijalkowski)

Currently, the XSK control path in the ice driver calls
ice_vsi_cfg_txq() directly, even though ice_vsi_cfg_single_txq() exists
for that purpose. Use the latter from the XSK side and make
ice_vsi_cfg_txq() static.

ice_vsi_cfg_txq() resides in ice_base.c and is rather big, so to reduce
the code churn let us move its callers from ice_lib.c to ice_base.c.

This change puts ice_qp_ena() on a nice diet due to the checks and
operations that ice_vsi_cfg_single_{r,t}xq() do internally:

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-182 (-182)
Function                 old     new   delta
ice_xsk_pool_setup      2165    1983    -182
Total: Before=472597, After=472415, chg -0.04%

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Chandan Kumar Rout <chandanx.rout@intel.com> (A Contingent Worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
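A minimal sketch of the resulting calling pattern, with placeholder
types and simplified signatures rather than the real ice prototypes:

    /* Placeholder types; the real driver uses struct ice_vsi and
     * struct ice_tx_ring. */
    struct tx_ring { int q_index; };
    struct vsi { struct tx_ring **tx_rings; int num_txq; };

    /* The heavyweight per-queue setup is now file-local to ice_base.c. */
    static int vsi_cfg_txq(struct vsi *vsi, struct tx_ring *ring)
    {
        /* ...Tx queue context and scheduler configuration... */
        (void)vsi;
        (void)ring;
        return 0;
    }

    /* Outside callers, such as the XSK pool setup path, use the
     * single-queue wrapper instead of calling vsi_cfg_txq() directly. */
    int vsi_cfg_single_txq(struct vsi *vsi, int q_idx)
    {
        if (q_idx >= vsi->num_txq)
            return -1;
        return vsi_cfg_txq(vsi, vsi->tx_rings[q_idx]);
    }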
2024-02-02  ice: make ice_vsi_cfg_rxq() static  (Maciej Fijalkowski)

Currently, the XSK control path in the ice driver calls
ice_vsi_cfg_rxq() directly, even though ice_vsi_cfg_single_rxq() exists
for that purpose. Use the latter from the XSK side and make
ice_vsi_cfg_rxq() static.

ice_vsi_cfg_rxq() resides in ice_base.c and is rather big, so to reduce
the code churn let us move two of its callers from ice_lib.c to
ice_base.c.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Chandan Kumar Rout <chandanx.rout@intel.com> (A Contingent Worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-01-02  ice: ice_base.c: Add const modifier to params and vars  (Jan Glaza)

Add the const modifier to function parameters and variables where
appropriate in ice_base.c and to the corresponding declarations in
ice_base.h.

Read-only pointers should be marked const when possible: it allows
smoother and more optimal code generation and optimization, and it lets
the compiler warn the developer about potentially unwanted
modifications, while carrying no noticeable negative impact.

Reviewed-by: Andrii Staikov <andrii.staikov@intel.com>
Reviewed-by: Sachin Bahadur <sachin.bahadur@intel.com>
Signed-off-by: Jan Glaza <jan.glaza@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
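For example, a parameter that is only read can be const-qualified so a
stray write fails to compile; this is a generic illustration, not a
prototype taken from ice_base.h:

    #include <stdint.h>

    struct rx_ring { uint16_t count; };

    /* The pointee is const, so 'ring->count = 0;' inside this function
     * would be rejected at compile time. */
    uint16_t ring_desc_count(const struct rx_ring *ring)
    {
        return ring->count;
    }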
2021-10-15  ice: split ice_ring onto Tx/Rx separate structs  (Maciej Fijalkowski)

While it was convenient to have a generic ring structure that served
both the Tx and Rx sides, upcoming commits will introduce several
Tx-specific fields, so in order to avoid hurting the Rx side, let's
pull the Tx ring out into new ice_tx_ring and ice_rx_ring structs.

The Rx ring could have kept the old ice_ring, which would reduce the
code churn within this patch, but that would make things asymmetric.

Make a union out of the ring container within ice_q_vector so that it
is possible to iterate over the newly introduced ice_tx_ring. Remove
the @size field, as it is only accessed from the control path and can
be calculated pretty easily.

Sizes of the Rx and Tx ring structs are 256 and 192 bytes,
respectively. In the Rx ring, xdp_rxq_info occupies its own cacheline,
so that is the major difference now.

Change the definitions of ice_update_ring_stats and
ice_fetch_u64_stats_per_ring so that they are ring agnostic and can be
used for both Rx and Tx rings.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
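Roughly, the ring container union described above lets the same
q_vector slot hold either ring type; the field names and struct shapes
here are illustrative, not copied from the driver:

    /* Distinct ring types so Tx-only fields no longer bloat the Rx ring. */
    struct ice_tx_ring { struct ice_tx_ring *next; /* Tx-specific fields */ };
    struct ice_rx_ring { struct ice_rx_ring *next; /* Rx-specific fields */ };

    /* Per-vector container: the anonymous union holds whichever ring
     * list this container tracks, alongside shared bookkeeping. */
    struct ring_container {
        union {
            struct ice_tx_ring *tx_ring;
            struct ice_rx_ring *rx_ring;
        };
        /* shared stats, ITR settings, ring count, ... */
    };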
2021-06-07  ice: Refactor ice_setup_rx_ctx  (Krzysztof Kazimierczak)

Move the AF_XDP logic and buffer allocation out of ice_setup_rx_ctx()
to a new function, ice_vsi_cfg_rxq(), so that ice_setup_rx_ctx()
actually sets up the Rx context.

Signed-off-by: Krzysztof Kazimierczak <krzysztof.kazimierczak@intel.com>
Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com>
2020-02-15  ice: Add support to enable/disable all Rx queues before waiting  (Brett Creeley)

Currently, when we enable/disable all Rx queues, we do the following
sequence for each Rx queue before moving to the next one:

1. Enable/disable the Rx queue via register write.
2. Read the configuration register to determine if the Rx queue was
   enabled/disabled successfully.

In some cases enabling/disabling queue 0 fails because of step 2 above.
Fix this by doing step 1 for all of the Rx queues and then step 2 for
all of the Rx queues.

Also, there are cases where we enable/disable a single queue (i.e.
SR-IOV and XDP), so add a new function that does steps 1 and 2 above
with a read flush in between.

This change also requires that a single Rx queue can be
enabled/disabled with or without waiting for the change to propagate
through hardware. Handle this by adding a boolean wait flag to the
necessary functions. Also, add the keywords "one" and "all" to
distinguish between enabling/disabling a single Rx queue and all Rx
queues, respectively.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
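A compilable toy model of the reordered sequence; register access is
stubbed out and every name here is a stand-in for the driver's real
wr32/rd32-based helpers, not its actual API:

    #include <stdbool.h>

    #define NUM_RXQ 4

    /* Stand-ins for the register write and the status read-back. */
    static void rxq_write_ena(int q, bool ena) { (void)q; (void)ena; /* wr32 */ }
    static bool rxq_read_ena(int q) { (void)q; return true; /* rd32 + poll */ }

    /* After the fix: one pass of register writes for every queue, then a
     * second pass that checks the status, giving queue 0 time to settle. */
    static int vsi_ctrl_all_rx_rings(bool ena)
    {
        for (int q = 0; q < NUM_RXQ; q++)
            rxq_write_ena(q, ena);

        for (int q = 0; q < NUM_RXQ; q++)
            if (rxq_read_ena(q) != ena)
                return -1;
        return 0;
    }

    int main(void)
    {
        return vsi_ctrl_all_rx_rings(true);
    }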
2019-11-04  ice: get rid of per-tc flow in Tx queue configuration routines  (Maciej Fijalkowski)

There's no reason to treat DCB as a first-class citizen when
configuring the Tx queues and going through TCs. Reverse the logic and
base the configuration on rings, which are the object of interest
anyway.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-11-04  ice: Introduce ice_base.c  (Anirudh Venkataramanan)

Remove a few uses of kernel configuration flags from ice_lib.c by
introducing a new source file ice_base.c. Also move corresponding
function prototypes from ice_lib.h to ice_base.h and include
ice_base.h where required.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>