From: Christoph Lameter
To: Pekka Enberg
Cc: Matt Mackall
Cc: Nick Piggin
Cc: linux-mm@kvack.org
Subject: slub: Patches to merge for 2.6.26
Subject-Prefix: [patch @num@/@total@]

Patches that may be merged for 2.6.26. The series contains the
generalization of the fallback logic, which allows SLUB to cope with
failing higher-order allocations. This is an advantage over SLAB,
which uses order-1 allocations and cannot fall back.

The way the number of objects per slab is determined was reworked
since the last release. The object count is now stored in the page
struct, so it is readily available and there is no chance of a cache
miss in __slab_alloc().

The logic for configuring slab orders was improved with the help of
Yanmin Zhang from Intel, so that hackbench results are fine on 16p
SMP boxes and similar machines.

The order and the number of objects in a slab of that order are
stored in a single word in order to avoid races when the order is
changed from userspace (yes, that works now too: the cpu slab size
can be tuned at runtime, like in SLAB).

The patches are also available via git pull from:

  git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slab-mm

Christoph Lameter (12):
  slub: Move map/flag clearing to __free_slab
  slub: Store max number of objects in the page struct.
  slub: for_each_object must be passed the number of objects in a slab
  slub: Add kmem_cache_order_objects struct
  slub: Update statistics handling for variable order slabs
  slub: Fallback to minimal order during slab page allocation
  slub: Drop fallback to page allocator method
  slub: Make the order configurable for each slab cache
  slub: Simplify any_slab_object checks
  slub: Drop DEFAULT_MAX_ORDER / DEFAULT_MIN_OBJECTS
  slub: Calculate min_objects based on number of processors.
  slub: pack objects denser

 Documentation/vm/slabinfo.c |   27 +--
 include/linux/mm_types.h    |    5 +-
 include/linux/slub_def.h    |   16 ++-
 mm/slub.c                   |  435 ++++++++++++++++++++++++-------------------
 4 files changed, 277 insertions(+), 206 deletions(-)

---
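For readers unfamiliar with the "one word" encoding mentioned above, here is
a minimal standalone sketch of the idea behind the kmem_cache_order_objects
patch: the page order and the resulting object count are packed into a single
unsigned long so that both change together in one store. The 16-bit split,
the helper names, and the example sizes are illustrative assumptions, not
necessarily the exact layout used by the series.

/*
 * Standalone sketch: pack a slab's page order and its object count
 * into one word so readers never see a mismatched order/count pair.
 * OO_SHIFT, the helper names and PAGE_SIZE below are assumptions
 * made for this example only.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define OO_SHIFT	16
#define OO_MASK		((1UL << OO_SHIFT) - 1)

struct kmem_cache_order_objects {
	unsigned long x;	/* order in the high bits, object count in the low bits */
};

static struct kmem_cache_order_objects oo_make(unsigned int order,
					       unsigned long size)
{
	struct kmem_cache_order_objects x = {
		((unsigned long)order << OO_SHIFT) +
			(PAGE_SIZE << order) / size
	};
	return x;
}

static unsigned int oo_order(struct kmem_cache_order_objects x)
{
	return x.x >> OO_SHIFT;
}

static unsigned int oo_objects(struct kmem_cache_order_objects x)
{
	return x.x & OO_MASK;
}

int main(void)
{
	/* An order-1 slab (two 4K pages) holding 192-byte objects. */
	struct kmem_cache_order_objects oo = oo_make(1, 192);

	printf("order=%u objects=%u\n", oo_order(oo), oo_objects(oo));
	return 0;
}

Because a reader always loads the whole word, a concurrent change of the
order from userspace can never be observed as a new order paired with a
stale object count, which is the race the cover letter refers to.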