    mm: page_counter: remove unneeded atomic ops for low/min · cfdab60b
    Shakeel Butt authored
    Patch series "memcg: optimize charge codepath", v2.
    
    The Linux networking stack has recently moved from very old per-socket
    pre-charge caching to per-cpu caching to avoid pre-charge fragmentation
    and unwarranted OOMs.  One impact of this change is that for network
    traffic workloads the memcg charging codepath can become a bottleneck.  The
    kernel test robot has also reported this regression[1].  This patch series
    tries to improve memcg charging for such workloads.
    
    This patch series implements three optimizations:
    (A) Reduce atomic ops in the page counter update path (a minimal sketch of
        the idea follows this list).
    (B) Change the layout of struct page_counter to eliminate false sharing
        between usage and high.
    (C) Increase the memcg charge batch to 64.
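
    The sketch below is not the kernel patch itself; it is a small userspace
    illustration of the idea behind (A), written with C11 atomics instead of the
    kernel's atomic_long_t API, and the names (struct counter, min_usage,
    children_min_usage, propagate_protected) are illustrative only.  The point
    is that when the protected value has not changed, a plain atomic load is
    enough to skip the far more expensive atomic exchange and atomic add on the
    hot charge path.

        /*
         * Userspace sketch only: C11 atomics stand in for the kernel's
         * atomic_long_t, and the names below are illustrative, not the
         * kernel's.  Fast path: if the protected value did not change,
         * a plain atomic load lets us skip the atomic exchange and the
         * atomic add to the parent entirely.
         */
        #include <stdatomic.h>
        #include <stdio.h>

        struct counter {
                atomic_long min_usage;          /* protected usage at this level */
                atomic_long children_min_usage; /* sum of children's protected usage */
                struct counter *parent;
        };

        static void propagate_protected(struct counter *c, long usage, long min)
        {
                long protected = usage < min ? usage : min;
                long old = atomic_load(&c->min_usage);

                if (protected == old)
                        return;         /* unchanged: no read-modify-write needed */

                old = atomic_exchange(&c->min_usage, protected);
                if (c->parent)
                        atomic_fetch_add(&c->parent->children_min_usage,
                                         protected - old);
        }

        int main(void)
        {
                struct counter root = { 0 };
                struct counter child = { .parent = &root };

                propagate_protected(&child, 4096, 8192); /* usage fully protected */
                propagate_protected(&child, 4096, 8192); /* unchanged: fast path */

                printf("children_min_usage = %ld\n",
                       atomic_load(&root.children_min_usage));
                return 0;
        }

    In steady-state traffic the protected value rarely changes, so most updates
    take the read-only fast path and the read-modify-write operations are
    confined to the slow path.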
    
    To evaluate the impact of these optimizations, on a 72-CPU machine we ran
    the following workload in the root memcg and then compared it with a
    scenario where the workload is run in a three-level cgroup hierarchy with
    the top level having min and low set up appropr...