path: root/drivers/net/ethernet/intel/idpf
13 days  idpf: correct queue index in Rx allocation error messages  (Alok Tiwari)
The error messages in idpf_rx_desc_alloc_all() used the group index i when reporting memory allocation failures for individual Rx and Rx buffer queues. This is incorrect. Update the messages to use the correct queue index j and include the queue group index i for clearer identification of the affected Rx and Rx buffer queues. Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://patch.msgid.link/20251125223632.1857532-10-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
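A minimal sketch of the fix, assuming variable names that mirror the description above (i for the queue group, j for the queue within it); the exact message strings in idpf_rx_desc_alloc_all() may differ:

    /* Before: only the group index i was printed, misidentifying the queue. */
    dev_err(dev, "Memory allocation for Rx Queue %u failed\n", i);

    /* After: report the queue index j and the queue group index i. */
    dev_err(dev, "Memory allocation for Rx Queue %u in group %u failed\n",
            j, i);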
13 days  idpf: use desc_ring when checking completion queue DMA allocation  (Alok Tiwari)
idpf_compl_queue uses a union for comp, comp_4b, and desc_ring. The release path should check complq->desc_ring to determine whether the DMA descriptor ring is allocated. The current check against comp works but is leftover from a previous commit and is misleading in this context. Switching the check to desc_ring improves readability and more directly reflects the intended meaning, since desc_ring is the field representing the allocated DMA-backed descriptor ring. No functional change. Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://patch.msgid.link/20251125223632.1857532-9-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
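A sketch of the union and the clarified check, with the struct layout abbreviated from the description above (field types and the owning device are illustrative):

    struct idpf_compl_queue {
            union {
                    struct idpf_splitq_tx_compl_desc *comp;       /* 8B completions */
                    struct idpf_splitq_4b_tx_compl_desc *comp_4b; /* 4B completions */
                    void *desc_ring;                              /* generic view */
            };
            /* ... */
    };

    static void idpf_compl_desc_rel(struct device *dev,
                                    struct idpf_compl_queue *complq)
    {
            /* All union members alias the same pointer, so checking comp
             * "worked", but desc_ring states the intent directly: is the
             * DMA-backed descriptor ring allocated? */
            if (!complq->desc_ring)
                    return;

            dma_free_coherent(dev, complq->size, complq->desc_ring,
                              complq->dma);
            complq->desc_ring = NULL;
    }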
13 days  idpf: convert vport state to bitmap  (Emil Tantilov)
Convert vport state to a bitmap and remove the DOWN state which is redundant in the existing logic. There are no functional changes aside from the use of bitwise operations when setting and checking the states. Removed the double underscore to be consistent with the naming of other bitmaps in the header and renamed current_state to vport_is_up to match the meaning of the new variable. Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Chittim Madhu <madhu.chittim@intel.com> Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://patch.msgid.link/20251125223632.1857532-6-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
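The conversion follows the standard kernel bitmap pattern; a minimal sketch under assumed flag names (the driver's actual enum and consumers may differ):

    #include <linux/bitops.h>

    enum idpf_vport_state {
            IDPF_VPORT_UP,          /* the redundant DOWN state is gone */
            IDPF_VPORT_STATE_NBITS,
    };

    DECLARE_BITMAP(state, IDPF_VPORT_STATE_NBITS);

    set_bit(IDPF_VPORT_UP, state);          /* bringing the vport up */
    if (test_bit(IDPF_VPORT_UP, state))     /* replaces current_state checks */
            netif_carrier_on(netdev);       /* illustrative consumer */
    clear_bit(IDPF_VPORT_UP, state);        /* taking the vport down */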
2025-11-26  idpf: extract GRXRINGS from .get_rxnfc  (Breno Leitao)
Commit 84eaf4359c36 ("net: ethtool: add get_rx_ring_count callback to optimize RX ring queries") added dedicated support for the GRXRINGS callback, simplifying .get_rxnfc. Remove the handling of GRXRINGS in .get_rxnfc() by moving it to the new .get_rx_ring_count(). This simplifies the RX ring count retrieval and aligns idpf with the new ethtool API for querying RX ring parameters. I was not totally convinced the lock was needed, but decided to be on the safe side and keep the exact same behaviour as before. Signed-off-by: Breno Leitao <leitao@debian.org> Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Link: https://patch.msgid.link/20251125-gxring_intel-v2-4-f55cd022d28b@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
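A sketch of the resulting callback, assuming the .get_rx_ring_count() op introduced by commit 84eaf4359c36 takes the netdev and returns the count, with the lock kept to mirror the "same behaviour as before" note (helper names follow the driver's existing conventions but are not verified here):

    static u32 idpf_get_rx_ring_count(struct net_device *netdev)
    {
            struct idpf_vport *vport;
            u32 num_rxq;

            idpf_vport_ctrl_lock(netdev);   /* kept to preserve old behaviour */
            vport = idpf_netdev_to_vport(netdev);
            num_rxq = vport->num_rxq;
            idpf_vport_ctrl_unlock(netdev);

            return num_rxq;
    }

    static const struct ethtool_ops idpf_ethtool_ops = {
            /* ... */
            .get_rx_ring_count = idpf_get_rx_ring_count,
    };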
2025-11-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR (net-6.18-rc7). No conflicts, adjacent changes:

tools/testing/selftests/net/af_unix/Makefile
  e1bb28bf13f4 ("selftest: af_unix: Add test for SO_PEEK_OFF.")
  45a1cd8346ca ("selftests: af_unix: Add tests for ECONNRESET and EOF semantics")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-18  idpf: fix possible vport_config NULL pointer deref in remove  (Emil Tantilov)
Attempting to remove the driver will cause a crash in cases where the vport failed to initialize. The following trace is from an instance where the driver failed during an attempt to create a VF:

[ 1661.543624] idpf 0000:84:00.7: Device HW Reset initiated
[ 1722.923726] idpf 0000:84:00.7: Transaction timed-out (op:1 cookie:2900 vc_op:1 salt:29 timeout:60000ms)
[ 1723.353263] BUG: kernel NULL pointer dereference, address: 0000000000000028
...
[ 1723.358472] RIP: 0010:idpf_remove+0x11c/0x200 [idpf]
...
[ 1723.364973] Call Trace:
[ 1723.365475] <TASK>
[ 1723.365972] pci_device_remove+0x42/0xb0
[ 1723.366481] device_release_driver_internal+0x1a9/0x210
[ 1723.366987] pci_stop_bus_device+0x6d/0x90
[ 1723.367488] pci_stop_and_remove_bus_device+0x12/0x20
[ 1723.367971] pci_iov_remove_virtfn+0xbd/0x120
[ 1723.368309] sriov_disable+0x34/0xe0
[ 1723.368643] idpf_sriov_configure+0x58/0x140 [idpf]
[ 1723.368982] sriov_numvfs_store+0xda/0x1c0

Avoid the NULL pointer dereference by adding a NULL pointer check for vport_config[i] before freeing user_config.q_coalesce.

Fixes: e1e3fec3e34b ("idpf: preserve coalescing settings across resets")
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Reviewed-by: Chittim Madhu <madhu.chittim@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Samuel Salin <Samuel.salin@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
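A minimal sketch of the guarded cleanup, assuming the field names visible in the description (the surrounding loop in idpf_remove() may differ):

    for (i = 0; i < adapter->max_vports; i++) {
            /* vport_config[i] may never have been allocated if vport
             * initialization failed, e.g. after a virtchnl timeout. */
            if (!adapter->vport_config[i])
                    continue;

            kfree(adapter->vport_config[i]->user_config.q_coalesce);
            adapter->vport_config[i]->user_config.q_coalesce = NULL;
    }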
2025-11-06  idpf: add support for IDPF PCI programming interface  (Pavan Kumar Linga)
At present IDPF supports only 0x1452 and 0x145C as PF and VF device IDs on our current generation hardware. Future hardware exposes a new set of device IDs for each generation. To avoid adding a new device ID for each generation and to make the driver forward and backward compatible, make use of the IDPF PCI programming interface to load the driver. Write and read the VF_ARQBAL mailbox register to find if the current device is a PF or a VF. PCI SIG allocated a new programming interface for the IDPF compliant ethernet network controller devices. It can be found at: https://members.pcisig.com/wg/PCI-SIG/document/20113 with the document titled as 'PCI Code and ID Assignment Revision 1.16' or any latest revisions. Tested this patch by doing a simple driver load/unload on Intel IPU E2000 hardware which supports 0x1452 and 0x145C device IDs and new hardware which supports the IDPF PCI programming interface. Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com> Signed-off-by: Madhu Chittim <madhu.chittim@intel.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Tested-by: Marek Landowski <marek.landowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://patch.msgid.link/20251103224631.595527-1-anthony.l.nguyen@intel.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
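A sketch of the probe described above, assuming the PF/VF decision is made by whether a write to the VF mailbox register sticks (the test value and exact semantics are illustrative, not taken from the datasheet):

    #define IDPF_PROBE_PATTERN 0xDEADBEEF   /* hypothetical test value */

    static bool idpf_device_is_vf(void __iomem *mbx_base)
    {
            u32 val;

            /* VF_ARQBAL exists only in the VF register space: if the
             * written value reads back, assume the function is a VF. */
            writel(IDPF_PROBE_PATTERN, mbx_base + VF_ARQBAL);
            val = readl(mbx_base + VF_ARQBAL);

            return val == IDPF_PROBE_PATTERN;
    }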
2025-10-29  idpf: remove duplicate defines in IDPF_CAP_RSS  (Sreedevi Joshi)
Remove duplicate defines from the OR operation defining IDPF_CAP_RSS. The duplicates were introduced when IDPF_CAP_RSS was originally defined, were left behind, and went unnoticed by a previous commit that renamed them. Review of the original out-of-tree code confirms these duplicates were the result of a typing error. Removing them cleans up the code and avoids potential confusion. Also verify that no other duplicate occurrences of these defines exist elsewhere in the codebase. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: Sreedevi Joshi <sreedevi.joshi@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-10-13  idpf: cleanup remaining SKBs in PTP flows  (Milena Olech)
When the driver requests a Tx timestamp value, one of the first steps is to take an extra reference on the SKB using skb_get(). This increases the reference counter for that SKB to prevent unexpected freeing by another component. However, there may be a case where the index is requested and the SKB is assigned but never consumed by the PTP flows - for example, due to a reset while PTP apps are running. Add a check in the timestamping release function to verify whether the SKB assigned to a Tx timestamp latch was ever freed, and release any remaining SKBs. Fixes: 4901e83a94ef ("idpf: add Tx timestamp capabilities negotiation") Signed-off-by: Milena Olech <milena.olech@intel.com> Signed-off-by: Anton Nadezhdin <anton.nadezhdin@intel.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/20251009-jk-iwl-net-2025-10-01-v3-1-ef32a425b92a@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-10-01  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Paolo Abeni)
Cross-merge networking fixes after downstream PR (net-6.17-rc8).

Conflicts:

tools/testing/selftests/drivers/net/bonding/Makefile
  87951b566446 selftests: bonding: add test for passive LACP mode
  c2377f1763e9 selftests: bonding: add test for LACP actor port priority

Adjacent changes:

drivers/net/ethernet/cadence/macb.h
  fca3dc859b20 net: macb: remove illusion about TBQPH/RBQPH being per-queue
  89934dbf169e net: macb: Add TAPRIO traffic scheduling support

drivers/net/ethernet/cadence/macb_main.c
  fca3dc859b20 net: macb: remove illusion about TBQPH/RBQPH being per-queue
  89934dbf169e net: macb: Add TAPRIO traffic scheduling support

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-09-29  idpf: fix mismatched free function for dma_alloc_coherent  (Alok Tiwari)
The mailbox receive path allocates coherent DMA memory with dma_alloc_coherent(), but frees it with dmam_free_coherent(). This is incorrect since dmam_free_coherent() is only valid for buffers allocated with dmam_alloc_coherent(). Fix the mismatch by using dma_free_coherent() instead of dmam_free_coherent(). Fixes: e54232da1238 ("idpf: refactor idpf_recv_mb_msg") Signed-off-by: Alok Tiwari <alok.a.tiwari@oracle.com> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Link: https://patch.msgid.link/20250925180212.415093-1-alok.a.tiwari@oracle.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
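The rule being enforced is that allocation and free must come from the same family: either both unmanaged (dma_*) or both managed (dmam_*). A minimal sketch, with illustrative helper names:

    #include <linux/dma-mapping.h>

    static void *idpf_mbx_buf_alloc(struct device *dev, size_t size,
                                    dma_addr_t *dma)
    {
            /* Unmanaged allocation, as in the mailbox receive path. */
            return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
    }

    static void idpf_mbx_buf_free(struct device *dev, size_t size,
                                  void *va, dma_addr_t dma)
    {
            /* The fix: dma_free_coherent(), not dmam_free_coherent(),
             * because the buffer did not come from dmam_alloc_coherent(). */
            dma_free_coherent(dev, size, va, dma);
    }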
2025-09-24  idpf: enable XSk features and ndo_xsk_wakeup  (Alexander Lobakin)
Now that AF_XDP functionality is fully implemented, advertise XSk XDP feature and add .ndo_xsk_wakeup() callback to be able to use it with this driver. Co-developed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-24  idpf: implement Rx path for AF_XDP  (Alexander Lobakin)
Implement Rx packet processing specific to AF_XDP ZC using the libeth XSk infra. Initialize queue registers before allocating buffers to avoid redundant ifs when updating the queue tail. Co-developed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-24  idpf: implement XSk xmit  (Alexander Lobakin)
Implement the XSk transmit path using the libeth (libeth_xdp) XSk infra. When the NAPI poll is called, XSk Tx queues are polled first, before regular Tx and Rx. They're generally faster to serve and have higher priority compared to regular traffic. Co-developed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-24  idpf: add XSk pool initialization  (Michal Kubiak)
Add functionality to set up an XSk buffer pool, including the ability to stop, reconfigure and start only selected queues rather than the whole device. Pool DMA mapping is managed by libeth_xdp. Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-24  idpf: add virtchnl functions to manage selected queues  (Michal Kubiak)
Implement VC functions dedicated to enabling, disabling and configuring not all but only selected queues. Also, refactor the existing implementation to make the code more modular. Introduce new generic functions for sending VC messages consisting of chunks, in order to isolate the sending algorithm and its implementation for specific VC messages. Finally, rewrite the function for mapping queues to q_vectors using the new modular approach to avoid copying the code that implements the VC message sending algorithm. Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Co-developed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-19  idpf: add HW timestamping statistics  (Milena Olech)
Add HW timestamping statistics support - through implementing get_ts_stats. Timestamp statistics include correctly timestamped packets, discarded, skipped and flushed during PTP release. Most of the stats are collected per vport, only requests skipped due to lack of free latch index are collected per Tx queue.

Statistics can be obtained using kernel ethtool since version 6.10 with:

ethtool -I -T <interface>

The output will include:

Statistics:
  tx_pkts: 15
  tx_lost: 0
  tx_err: 0

Signed-off-by: Milena Olech <milena.olech@intel.com>
Co-developed-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Signed-off-by: Anton Nadezhdin <anton.nadezhdin@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Samuel Salin <Samuel.salin@intel.com>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: add XDP RSS hash hint  (Alexander Lobakin)
Add &xdp_metadata_ops with a callback to get RSS hash hint from the descriptor. Declare the splitq 32-byte descriptor as 4 u64s to parse them more efficiently when possible. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: add support for .ndo_xdp_xmit()  (Alexander Lobakin)
Use libeth XDP infra to implement .ndo_xdp_xmit() in idpf. The Tx callbacks are reused from XDP_TX code. XDP redirect target feature is set/cleared depending on the XDP prog presence, as for now we still don't allocate XDP Tx queues when there's no program. Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: add support for XDP on Rx  (Alexander Lobakin)
Use the libeth XDP infra to support running an XDP program on Rx polling. This includes all of the possible verdicts/actions. XDP Tx queues are cleaned only in "lazy" mode, when fewer than 1/4 of the descriptors on the ring are free. libeth helper macros to define driver-specific XDP functions make sure the compiler can uninline them when needed. Use __LIBETH_WORD_ACCESS to parse descriptors more efficiently when applicable. It really gives some good boosts and code size reduction on x86_64:

XDP only: add/remove: 0/0 grow/shrink: 3/3 up/down: 5/-59 (-54)
with XSk: add/remove: 0/0 grow/shrink: 5/6 up/down: 23/-124 (-101)

with the most demanding workloads like XSk xmit differing by up to 5-8%.

Co-developed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Ramu R <ramu.r@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: use generic functions to build xdp_buff and skb  (Alexander Lobakin)
In preparation for XDP support, move from having the skb as the main frame container during Rx polling to &xdp_buff. This allows the use of generic and libeth helpers for building an XDP buffer and changes the logic: now we try to allocate an skb only once we have processed all the descriptors related to the frame. Store &libeth_xdp_stash instead of the skb pointer on the Rx queue. It's only 8 bytes wider, but contains everything we may need. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: implement XDP_SETUP_PROG in ndo_bpf for splitq  (Michal Kubiak)
Implement loading/removing XDP program using .ndo_bpf callback in the split queue mode. Reconfigure and restart the queues if needed (!!old_prog != !!new_prog), otherwise, just update the pointers. Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: prepare structures to support XDP  (Michal Kubiak)
Extend basic structures of the driver (e.g. 'idpf_vport', 'idpf_*_queue', 'idpf_vport_user_config_data') by adding members necessary to support XDP. Add extra XDP Tx queues needed to support XDP_TX and XDP_REDIRECT actions without interfering with regular Tx traffic. Also add functions dedicated to support XDP initialization for Rx and Tx queues and call those functions from the existing algorithms of queues configuration. Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Co-developed-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: add support for nointerrupt queues  (Alexander Lobakin)
Currently, queues are associated 1:1 with interrupt vectors as it's assumed queues are always interrupt-driven. For XDP, we want to use Tx queues without interrupts and only do "lazy" cleaning when the number of free elements is <= threshold (closest pow-2 to 1/4 of the ring). In order to use a queue without an interrupt, idpf still needs to have a vector assigned to it to flush descriptors. This vector can be global and only one for the whole vport to handle all its noirq queues. Always request one extra vector and configure it in non-interrupt mode right away when creating the vport, so that it can be used later by queues when needed (not only XDP ones). Co-developed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: remove SW marker handling from NAPI  (Michal Kubiak)
SW marker descriptors on completion queues are used only when a queue is about to be destroyed. It's far from hotpath and handling it in the hotpath NAPI poll makes no sense. Instead, run a simple poller after a virtchnl message for destroying the queue is sent and wait for the replies. If replies for all of the queues are received, this means the synchronization is done correctly and we can go forth with stopping the link. Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: add 4-byte completion descriptor definition  (Michal Kubiak)
In the queue-based scheduling mode, the Tx completion descriptor is 4 bytes, compared to 8 bytes in the flow-based mode. Add a definition for it and allocate the corresponding amount of memory for the descriptors during the completion queue creation. This does not include handling 4-byte completions during Tx polling, as for now, the only user of QB will be XDP, which has its own routines. Signed-off-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-08  idpf: link NAPIs to queues  (Alexander Lobakin)
Add the missing linking of NAPIs to netdev queues when enabling interrupt vectors in order to support NAPI configuration and interfaces requiring get_rx_queue()->napi to be set (like XSk busy polling).

Currently, idpf_vport_{start,stop}() is called from several flows with inconsistent RTNL locking, so we need to synchronize them to avoid runtime assertions. Notably:

* idpf_{open,stop}() -- regular NDOs, RTNL is always taken;
* idpf_initiate_soft_reset() -- usually called under RTNL;
* idpf_init_task -- called from the init work, needs RTNL;
* idpf_vport_dealloc -- called without RTNL taken, needs it.

Expand the common idpf_vport_{start,stop}() to take an additional bool telling whether we need to manually take the RTNL lock.

Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # helper
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Ramu R <ramu.r@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
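A sketch of the pattern, with a hypothetical wrapper shape (the real functions carry more state):

    static int idpf_vport_start(struct idpf_vport *vport, bool rtnl)
    {
            int err;

            if (rtnl)
                    rtnl_lock();    /* init work / dealloc paths need it */

            err = __idpf_vport_start(vport);  /* locked worker, illustrative */

            if (rtnl)
                    rtnl_unlock();

            return err;
    }

    /* NDO path: RTNL already held by the core. */
    err = idpf_vport_start(vport, false);

    /* init task / dealloc path: RTNL not held, take it here. */
    err = idpf_vport_start(vport, true);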
2025-09-08  idpf: use a saner limit for default number of queues to allocate  (Alexander Lobakin)
Currently, the maximum number of queues available for one vport is 16. This is hardcoded, but then the function calculating the optimal number of queues takes min(16, num_online_cpus()).

In order to be able to allocate more queues, which will then be used for XDP, stop hardcoding 16 and rely on what the device gives us[*]. Instead of num_online_cpus(), which has been considered suboptimal since at least 2013, use netif_get_num_default_rss_queues() to still have free queues in the pool.

[*] With the note: currently, idpf always allocates `IDPF_MAX_BUFQS_PER_RXQ_GRP` (== 2) buffer queues for each Rx queue and one completion queue for each Tx queue for best performance. But there was no check whether such a number is available, IOW the assumption was not backed by any "harmonizing" / actual checks. Fix this while at it.

nr_cpu_ids Tx queues are needed only for lockless XDP sending; the regular stack doesn't benefit from that anyhow. On a 128-thread Xeon, this now gives me 32 regular Tx queues and leaves 224 free for XDP (128 of which will handle XDP_TX, .ndo_xdp_xmit(), and XSk xmit when enabled).

Note 2: unfortunately, some CP/FW versions are not able to reconfigure/enable/disable a large number of queues within the minimum timeout (2 seconds). For now, fall back to the default timeout for every operation until this is resolved.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Ramu R <ramu.r@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
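A sketch of the sizing change under these assumptions (variable names illustrative; netif_get_num_default_rss_queues() is the real helper):

    u16 num_txq;

    /* Before: hardcoded cap tied to online CPUs:
     *     num_txq = min_t(u16, 16, num_online_cpus());
     */

    /* After: cap by what the device actually grants (dev_max_txq,
     * illustrative), and use the kernel's default RSS heuristic,
     * leaving spare queues in the pool for XDP. */
    num_txq = min_t(u16, dev_max_txq,
                    netif_get_num_default_rss_queues());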
2025-09-08  idpf: fix Rx descriptor ready check barrier in splitq  (Alexander Lobakin)
No idea what the current barrier position was meant for. At that point, nothing is read from the descriptor, only the pointer to the actual one is fetched. The correct barrier usage here is after the generation check, so that only the first qword is read if the descriptor is not yet ready and we need to stop polling. Debatable on coherent DMA as the Rx descriptor size is <= cacheline size, but anyway, the current barrier position only makes the codegen worse. Fixes: 3a8845af66ed ("idpf: add RX splitq napi poll support") Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Tested-by: Ramu R <ramu.r@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
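The corrected ordering, sketched with illustrative field and mask names: fetch the descriptor pointer, check the generation bit from the first qword, and only then issue the read barrier before parsing the rest:

    rx_desc = &rxq->rx[ntc];        /* pointer arithmetic only, no read */

    gen = le64_get_bits(rx_desc->qword0, IDPF_GEN_M);  /* first qword only */
    if (gen != rxq->gen_val)
            break;                  /* descriptor not written by HW yet */

    /* Make sure no other descriptor field is read before the
     * generation check confirms the descriptor is ready. */
    dma_rmb();

    len = le64_get_bits(rx_desc->qword0, IDPF_LEN_M);  /* safe to parse */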
2025-09-04  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR (net-6.17-rc5). No conflicts.

Adjacent changes:

include/net/sock.h
  c51613fa276f ("net: add sk->sk_drop_counters")
  5d6b58c932ec ("net: lockless sock_i_ino()")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-09-02  idpf: set mac type when adding and removing MAC filters  (Emil Tantilov)
On control planes that allow changing the MAC address of the interface, the driver must provide a MAC type to avoid errors such as:

idpf 0000:0a:00.0: Transaction failed (op 535)
idpf 0000:0a:00.0: Received invalid MAC filter payload (op 535) (len 0)
idpf 0000:0a:00.0: Transaction failed (op 536)

These errors occur during driver load or when changing the MAC via:

ip link set <iface> address <mac>

Add logic to set the MAC type when sending ADD/DEL (opcodes 535/536) to the control plane. Since only one primary MAC is supported per vport, the driver only needs to send an ADD opcode when setting it. Remove the old address by calling __idpf_del_mac_filter(), which skips the message and just clears the entry from the internal list. This avoids an error on DEL as it attempts to remove an address already cleared by the preceding ADD opcode.

Fixes: ce1b75d0635c ("idpf: add ptypes and MAC filter support")
Reported-by: Jian Liu <jianliu@redhat.com>
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Samuel Salin <Samuel.salin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-09-02  idpf: fix UAF in RDMA core aux dev deinitialization  (Joshua Hay)
Free the adev->id before auxiliary_device_uninit. The call to uninit triggers the release callback, which frees the iadev memory containing the adev. The previous flow results in a UAF during rmmod due to the adev->id access.

[264939.604077] ==================================================================
[264939.604093] BUG: KASAN: slab-use-after-free in idpf_idc_deinit_core_aux_device+0xe4/0x100 [idpf]
[264939.604134] Read of size 4 at addr ff1100109eb6eaf8 by task rmmod/17842
...
[264939.604635] Allocated by task 17597:
[264939.604643]  kasan_save_stack+0x20/0x40
[264939.604654]  kasan_save_track+0x14/0x30
[264939.604663]  __kasan_kmalloc+0x8f/0xa0
[264939.604672]  idpf_idc_init_aux_core_dev+0x4bd/0xb60 [idpf]
[264939.604700]  idpf_idc_init+0x55/0xd0 [idpf]
[264939.604726]  process_one_work+0x658/0xfe0
[264939.604742]  worker_thread+0x6e1/0xf10
[264939.604750]  kthread+0x382/0x740
[264939.604762]  ret_from_fork+0x23a/0x310
[264939.604772]  ret_from_fork_asm+0x1a/0x30

[264939.604785] Freed by task 17842:
[264939.604790]  kasan_save_stack+0x20/0x40
[264939.604799]  kasan_save_track+0x14/0x30
[264939.604808]  kasan_save_free_info+0x3b/0x60
[264939.604820]  __kasan_slab_free+0x37/0x50
[264939.604830]  kfree+0xf1/0x420
[264939.604840]  device_release+0x9c/0x210
[264939.604850]  kobject_put+0x17c/0x4b0
[264939.604860]  idpf_idc_deinit_core_aux_device+0x4f/0x100 [idpf]
[264939.604886]  idpf_vc_core_deinit+0xba/0x3a0 [idpf]
[264939.604915]  idpf_remove+0xb0/0x7c0 [idpf]
[264939.604944]  pci_device_remove+0xab/0x1e0
[264939.604955]  device_release_driver_internal+0x371/0x530
[264939.604969]  driver_detach+0xbf/0x180
[264939.604981]  bus_remove_driver+0x11b/0x2a0
[264939.604991]  pci_unregister_driver+0x2a/0x250
[264939.605005]  __do_sys_delete_module.constprop.0+0x2eb/0x540
[264939.605014]  do_syscall_64+0x64/0x2c0
[264939.605024]  entry_SYSCALL_64_after_hwframe+0x76/0x7e

Fixes: f4312e6bfa2a ("idpf: implement core RDMA auxiliary dev create, init, and destroy")
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Tested-by: Samuel Salin <Samuel.salin@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-08-29  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR (net-6.17-rc4). No conflicts.

Adjacent changes:

drivers/net/ethernet/intel/idpf/idpf_txrx.c
  02614eee26fb ("idpf: do not linearize big TSO packets")
  6c4e68480238 ("idpf: remove obsolete stashing code")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-22  idpf: do not linearize big TSO packets  (Eric Dumazet)
idpf has a limit on the number of scatter-gather frags that can be used per segment. Currently, idpf_tx_start() checks if the limit is hit and forces a linearization of the whole packet. This requires high-order allocations that can fail under memory pressure. A full-size BIG-TCP packet would require an order-7 allocation on x86_64 :/ We can instead perform the check earlier, in idpf_features_check(), and force GSO for such TSO packets, removing the cost of a big copy. This means that a linearization will eventually happen with sizes smaller than one MSS. __idpf_chk_linearize() is renamed to idpf_chk_tso_segment() and moved to idpf_lib.c. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com> Cc: Jacob Keller <jacob.e.keller@intel.com> Cc: Madhu Chittim <madhu.chittim@intel.com> Cc: Pavan Kumar Linga <pavan.kumar.linga@intel.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Andrew Lunn <andrew+netdev@lunn.ch> Reviewed-by: Joshua Hay <joshua.a.hay@intel.com> Tested-by: Brian Vazquez <brianvv@google.com> Acked-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://patch.msgid.link/20250818195934.757936-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-08-21  idpf: remove obsolete stashing code  (Joshua Hay)
With the new Tx buffer management scheme, there is no need for all of the stashing mechanisms, the hash table, the reserve buffer stack, etc. Remove all of that. Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-08-21  idpf: stop Tx if there are insufficient buffer resources  (Joshua Hay)
The Tx refillq logic will cause packets to be silently dropped if there are not enough buffer resources available to send a packet in flow scheduling mode. Instead, determine how many buffers are needed along with the number of descriptors. Make sure there are enough of both resources to send the packet, and stop the queue if not. Fixes: 7292af042bcf ("idpf: fix a race in txq wakeup") Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-08-21  idpf: replace flow scheduling buffer ring with buffer pool  (Joshua Hay)
Replace the TxQ buffer ring with one large pool/array of buffers (only for flow scheduling). This eliminates the tag generation and makes it impossible for a tag to be associated with more than one packet. The completion tag passed to HW through the descriptor is the index into the array. That same completion tag is posted back to the driver in the completion descriptor, and used to index into the array to quickly retrieve the buffer during cleaning. In this way, the tags are treated as a fixed-size resource. If all tags are in use, no more packets can be sent on that particular queue (until some are freed up). The tag pool size is 64K since the completion tag width is 16 bits. For each packet, the driver pulls a free tag from the refillq to get the next free buffer index. When cleaning is complete, the tag is posted back to the refillq. A multi-frag packet spans multiple buffers in the driver, therefore it uses multiple buffer indexes/tags from the pool. Each frag pulls from the refillq to get the next free buffer index. These are tracked in a next_buf field that replaces the completion tag field in the buffer struct. This chains the buffers together so that the packet can be cleaned from the starting completion tag taken from the completion descriptor, then from the next_buf field for each subsequent buffer. If a dma_mapping_error occurs or the refillq runs out of free buf_ids, the packet will take the rollback error path. This unmaps any buffers previously mapped for the packet. Since several free buf_ids could have already been pulled from the refillq, we need to restore its original state as well. Otherwise, the buf_ids/tags will be leaked and not used again until the queue is reallocated. Descriptor completions only advance the descriptor ring index to "clean" the descriptors. The packet completions only clean the buffers associated with the given packet completion tag and do not update the descriptor ring index. When operating in queue based scheduling mode, the array still acts as a ring and will only have TxQ descriptor count entries. The tx_bufs are still associated 1:1 with the descriptor ring entries and we can use the conventional indexing mechanisms. Fixes: c2d548cad150 ("idpf: add TX splitq napi poll support") Signed-off-by: Luigi Rizzo <lrizzo@google.com> Signed-off-by: Brian Vazquez <brianvv@google.com> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-08-21  idpf: simplify and fix splitq Tx packet rollback error path  (Joshua Hay)
Move (and rename) the existing rollback logic to singleq.c since that will be the only consumer. Create a simplified splitq specific rollback function to loop through and unmap tx_bufs based on the completion tag. This is critical before replacing the Tx buffer ring with the buffer pool since the previous rollback indexing will not work to unmap the chained buffers from the pool. Cache the next_to_use index before any portion of the packet is put on the descriptor ring. In case of an error, the rollback will bump tail to the correct next_to_use value. Because the splitq path now supports different types of context descriptors (and potentially multiple in the future), this will take care of rolling back any and all context descriptors encoded on the ring for the erroneous packet. The previous rollback logic was broken for PTP packets since it would not account for the PTP context descriptor. Fixes: 1a49cf814fe1 ("idpf: add Tx timestamp flows") Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-08-21  idpf: improve when to set RE bit logic  (Joshua Hay)
Track the gap between next_to_use and the last RE index. Set RE again if the gap is large enough to ensure RE bit is set frequently. This is critical before removing the stashing mechanisms because the opportunistic descriptor ring cleaning from the out-of-order completions will go away. Previously the descriptors would be "cleaned" by both the descriptor (RE) completion and the out-of-order completions. Without the latter, we must ensure the RE bit is set more frequently. Otherwise, it's theoretically possible for the descriptor ring next_to_clean to never advance. The previous implementation was dependent on the start of a packet falling on a 64th index in the descriptor ring, which is not guaranteed with large packets. Signed-off-by: Luigi Rizzo <lrizzo@google.com> Signed-off-by: Brian Vazquez <brianvv@google.com> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-08-21  idpf: add support for Tx refillqs in flow scheduling mode  (Joshua Hay)
In certain production environments, it is possible for completion tags to collide, meaning N packets with the same completion tag are in flight at the same time. In this environment, any given Tx queue is effectively used to send both slower traffic and higher throughput traffic simultaneously. This is the result of a customer's specific configuration in the device pipeline, the details of which Intel cannot provide. This configuration results in a small number of out-of-order completions, i.e., a small number of packets in flight. The existing guardrails in the driver only protect against a large number of packets in flight. The slower flow completions are delayed which causes the out-of-order completions. The fast flow will continue sending traffic and generating tags. Because tags are generated on the fly, the fast flow eventually uses the same tag for a packet that is still in flight from the slower flow. The driver has no idea which packet it should clean when it processes the completion with that tag, but it will look for the packet on the buffer ring before the hash table. If the slower flow packet completion is processed first, it will end up cleaning the fast flow packet on the ring prematurely. This leaves the descriptor ring in a bad state resulting in a crash or Tx timeout. In summary, generating a tag when a packet is sent can lead to the same tag being associated with multiple packets. This can lead to resource leaks, crashes, and/or Tx timeouts. Before we can replace the tag generation, we need a new mechanism for the send path to know what tag to use next. The driver will allocate and initialize a refillq for each TxQ with all of the possible free tag values. During send, the driver grabs the next free tag from the refillq from next_to_clean. While cleaning the packet, the clean routine posts the tag back to the refillq's next_to_use to indicate that it is now free to use. This mechanism works exactly the same way as the existing Rx refill queues, which post the cleaned buffer IDs back to the buffer queue to be reposted to HW. Since we're using the refillqs for both Rx and Tx now, genericize some of the existing refillq support. Note: the refillqs will not be used yet. This is only demonstrating how they will be used to pass free tags back to the send path. Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
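A sketch of the free-tag flow, with helper names modeled on the Rx refillq pattern described above (not verified against the final code):

    /* Send path: grab the next free completion tag; callers must
     * handle exhaustion (no free buf_ids means no send). */
    if (!idpf_tx_get_free_buf_id(txq->refillq, &buf_id))
            return -EBUSY;

    desc->compl_tag = cpu_to_le16(buf_id);  /* tag == buffer index */

    /* Clean path: packet completed, post the tag back so the send
     * path can reuse it, exactly like Rx buf_ids are reposted to HW. */
    idpf_post_buf_refill(txq->refillq, buf_id);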
2025-07-23  idpf: access ->pp through netmem_desc instead of page  (Byungchul Park)
To eliminate the use of struct page in page pool, the page pool users should use netmem descriptor and APIs instead. Make idpf access ->pp through netmem_desc instead of page. Signed-off-by: Byungchul Park <byungchul@sk.com> Link: https://patch.msgid.link/20250721021835.63939-10-byungchul@sk.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-18  idpf: preserve coalescing settings across resets  (Ahmed Zaki)
The IRQ coalescing config currently resides only inside struct idpf_q_vector. However, all idpf_q_vector structs are de-allocated and re-allocated during resets. This causes the user-set coalesce configuration to be lost. Add new fields to struct idpf_vport_user_config_data to save the user settings and re-apply them after reset. Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-07-18  idpf: add cross timestamping  (Milena Olech)
Add cross timestamp support through virtchnl mailbox messages and directly, through PCIe BAR registers. Cross timestamping assumes that both system time and device clock time values are cached simultaneously, which is triggered by HW. The feature is enabled for both ARM and x86 archs. Signed-off-by: Milena Olech <milena.olech@intel.com> Reviewed-by: Karol Kolacinski <karol.kolacinski@intel.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Tested-by: Samuel Salin <Samuel.salin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-07-18  idpf: add flow steering support  (Ahmed Zaki)
Use the new virtchnl2 OP codes to communicate with the Control Plane to add flow steering filters. We add the basic functionality for add/delete with TCP/UDP IPv4 only. Support for other OP codes and protocols will be added later. Standard 'ethtool -N|--config-ntuple' should be used, for example: # ethtool -N ens801f0d1 flow-type tcp4 src-ip 10.0.0.1 action 6 to route all IPv4/TCP traffic from IP 10.0.0.1 to queue 6. Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-07-18  virtchnl2: add flow steering support  (Sudheer Mogilappagari)
Add opcodes and corresponding message structure to add and delete flow steering rules. Flow steering enables configuration of rules to take an action or subset of actions based on a match criteria. Actions could be redirect to queue, redirect to queue group, drop packet or mark. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Co-developed-by: Dinesh Kumar <dinesh.kumar@intel.com> Signed-off-by: Dinesh Kumar <dinesh.kumar@intel.com> Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com> Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-07-18  virtchnl2: rename enum virtchnl2_cap_rss  (Ahmed Zaki)
The "enum virtchnl2_cap_rss" will be used for negotiating flow steering capabilities. Instead of adding a new enum, rename virtchnl2_cap_rss to virtchnl2_flow_types. Also rename the enum's constants. Flow steering will use this enum in the next patches. Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-07-17  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux  (Paolo Abeni)
Tony Nguyen says:
====================
Add RDMA support for Intel IPU E2000 in idpf

Tatyana Nikolova says:

This idpf patch series is the second part of the staged submission for introducing RDMA RoCEv2 support for the IPU E2000 line of products, referred to as GEN3.

To support RDMA GEN3 devices, the idpf driver uses common definitions of the IIDC interface and implements specific device functionality in iidc_rdma_idpf.h. The IPU model can host one or more logical network endpoints called vPorts per PCI function that are flexibly associated with a physical port or an internal communication port.

Other features as it pertains to GEN3 devices include:
* MMIO learning
* RDMA capability negotiation
* RDMA vectors discovery between idpf and control plane

These patches are split from the submission "Add RDMA support for Intel IPU E2000 (GEN3)" [1]. The patches have been tested on a range of hosts and platforms with a variety of general RDMA applications which include standalone verbs (rping, perftest, etc.), storage and HPC applications.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

[1] https://lore.kernel.org/all/20240724233917.704-1-tatyana.e.nikolova@intel.com/

IWL reviews:
v3: https://lore.kernel.org/all/20250708210554.1662-1-tatyana.e.nikolova@intel.com/
v2: https://lore.kernel.org/all/20250612220002.1120-1-tatyana.e.nikolova@intel.com/
v1 (split from previous series): https://lore.kernel.org/all/20250523170435.668-1-tatyana.e.nikolova@intel.com/
v3: https://lore.kernel.org/all/20250207194931.1569-1-tatyana.e.nikolova@intel.com/
RFC v2: https://lore.kernel.org/all/20240824031924.421-1-tatyana.e.nikolova@intel.com/
RFC: https://lore.kernel.org/all/20240724233917.704-1-tatyana.e.nikolova@intel.com/

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux:
  idpf: implement get LAN MMIO memory regions
  idpf: implement IDC vport aux driver MTU change handler
  idpf: implement remaining IDC RDMA core callbacks and handlers
  idpf: implement RDMA vport auxiliary dev create, init, and destroy
  idpf: implement core RDMA auxiliary dev create, init, and destroy
  idpf: use reserved RDMA vectors from control plane
====================

Link: https://patch.msgid.link/20250714181002.2865694-1-anthony.l.nguyen@intel.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-07-14  idpf: implement get LAN MMIO memory regions  (Joshua Hay)
The RDMA driver needs to map its own MMIO regions for the sake of performance, meaning the IDPF needs to avoid mapping portions of the BAR space. However, to be HW agnostic, the IDPF cannot assume where these are and must avoid mapping hard coded regions as much as possible. The IDPF maps the bare minimum to load and communicate with the control plane, i.e., the mailbox registers and the reset state registers. Because of how and when mailbox register offsets are initialized, it is easier to adjust the existing defines to be relative to the mailbox region starting address. Use a specific mailbox register write function that uses these relative offsets. The reset state register addresses are calculated the same way as for other registers, described below. The IDPF then calls a new virtchnl op to fetch a list of MMIO regions that it should map. The addresses for the registers in these regions are calculated by determining what region the register resides in, adjusting the offset to be relative to that region, and then adding the register's offset to that region's mapped address. If the new virtchnl op is not supported, the IDPF will fallback to mapping the whole bar. However, it will still map them as separate regions outside the mailbox and reset state registers. This way we can use the same logic in both cases to access the MMIO space. Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
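A sketch of the address calculation described above (region bookkeeping simplified; names illustrative):

    struct idpf_mmio_region {
            resource_size_t start;  /* BAR-relative start of the region */
            resource_size_t len;
            void __iomem *addr;     /* ioremap()ed base of this region */
    };

    /* Resolve an absolute BAR offset to a mapped virtual address: find
     * the containing region, rebase the offset, add the mapped base. */
    static void __iomem *idpf_reg_addr(const struct idpf_mmio_region *r,
                                       int num_regions, resource_size_t off)
    {
            for (int i = 0; i < num_regions; i++)
                    if (off >= r[i].start && off < r[i].start + r[i].len)
                            return r[i].addr + (off - r[i].start);

            return NULL;    /* offset falls outside all mapped regions */
    }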
2025-07-14  idpf: implement IDC vport aux driver MTU change handler  (Joshua Hay)
The only event an RDMA vport aux driver cares about right now is an MTU change on its underlying vport. Implement and plumb the handler to signal pre and post MTU change events to the RDMA vport aux driver. Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-07-14  idpf: implement remaining IDC RDMA core callbacks and handlers  (Joshua Hay)
Implement the idpf_idc_request_reset and idpf_idc_rdma_vc_send_sync callbacks for the RDMA core auxiliary driver to issue reset events to the idpf and send (synchronous) virtchnl messages to the control plane, respectively. Implement and plumb the reset handler for the opposite flow as well, i.e. when the idpf is resetting and needs to notify the RDMA core auxiliary driver. Reviewed-by: Madhu Chittim <madhu.chittim@intel.com> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>