6849c6d8
net/mlx5e: Rx, Update page pool numa node when changed
Saeed Mahameed authored
    
    
Once every NAPI poll cycle, check whether the current NUMA node is
different from the page pool's numa id, and update it using
page_pool_update_nid() when it is.
    
Alternatively, we could have registered an IRQ affinity change handler,
but page_pool_update_nid() must be called from NAPI context anyway, so
the handler would not actually help.
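
Below is a minimal sketch of that per-poll check, a possible shape rather
than the exact driver diff: the helper name mlx5e_rq_update_pool_nid(),
its placement, and the pool->p.nid field access are assumptions, while
page_pool_update_nid() and numa_mem_id() are the kernel APIs referenced
above.

    #include <linux/topology.h>  /* numa_mem_id() */
    #include <net/page_pool.h>   /* struct page_pool, page_pool_update_nid() */

    /* Illustrative helper (name is hypothetical); called once per NAPI
     * poll from the RX path, so the nid update runs in NAPI context.
     * struct mlx5e_rq is the driver-internal RX queue type.
     */
    static inline void mlx5e_rq_update_pool_nid(struct mlx5e_rq *rq)
    {
            int nid = numa_mem_id(); /* memory node of the polling CPU */

            /* The comparison is cheap enough to do every cycle; only
             * take the update path when the node actually changed.
             */
            if (rq->page_pool && unlikely(rq->page_pool->p.nid != nid))
                    page_pool_update_nid(rq->page_pool, nid);
    }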
    
    Performance testing:
XDP drop/TX rate and TCP single/multi-stream throughput on the mlx5
driver, while migrating the RX ring IRQ from the close to the far NUMA
node:
    
The mlx5 internal page cache was locally disabled to get pure page pool
results.
    
    CPU: Intel(R) Xeon(R) CPU E5-2603 v4 @ 1.70GHz
    NIC: Mellanox Technologies MT27700 Family [ConnectX-4] (100G)
    
    XDP Drop/TX single core:
NUMA  | XDP  | Before    | After
---------------------------------------
Close | Drop | 11   Mpps | 10.9 Mpps
Far   | Drop | 4.4  Mpps | 5.8  Mpps

Close | TX   | 6.5  Mpps | 6.5  Mpps
Far   | TX   | 3.5  Mpps | 4.0  Mpps
    
For the NUMA-far tests, the improvement is about 30% in drop packet rate
and about 15% in TX packet rate.
No degradation in the NUMA-close tests.
    
TCP single/multi CPU/stream:
    NUMA  | #cpu | Before  | After
    --------------------------------------
    Close | 1    | 18 Gbps | 18 Gbps
    Far   | 1    | 15 Gbps | 18 Gbps
    Close | 12   | 80 Gbps | 80 Gbps
    Far   | 12   | 68 Gbps | 80 Gbps
    
In all test cases we see an improvement in the NUMA-far case and no
impact in the NUMA-close case.
    
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>