Slub: Remove special casing for page sized slabs

After we have introduced quicklists so that arches can avoid using the
slab allocator to manage page table pages, we can now remove the
special casing from slub.

[Must have the quicklist patches applied. Otherwise this will break
i386 and x86_64.]

Signed-off-by: Christoph Lameter

Index: linux-2.6.21-rc4-mm1/mm/slub.c
===================================================================
--- linux-2.6.21-rc4-mm1.orig/mm/slub.c	2007-03-22 22:23:28.000000000 -0700
+++ linux-2.6.21-rc4-mm1/mm/slub.c	2007-03-22 22:23:29.000000000 -0700
@@ -1334,16 +1334,6 @@ static int calculate_order(int size)
 	int order;
 	int rem;
 
-	/*
-	 * If this is an order 0 page then there are no issues with
-	 * fragmentation. We can then create a slab with a single object.
-	 * We need this to support the i386 arch code that uses our
-	 * freelist field (index field) for a list pointer. We never
-	 * touch the freelist pointer if we just have one object
-	 */
-	if (size == PAGE_SIZE)
-		return 0;
-
 	for (order = max(slub_min_order, fls(size - 1) - PAGE_SHIFT);
 			order < MAX_ORDER; order++) {
 		unsigned long slab_size = PAGE_SIZE << order;
@@ -1474,15 +1464,6 @@ int calculate_sizes(struct kmem_cache *s
 
 	tentative_size = ALIGN(size, calculate_alignment(align, flags));
 
-	/*
-	 * PAGE_SIZE slabs are special in that they are passed through
-	 * to the page allocator. Do not do any debugging in order to avoid
-	 * increasing the size of the object.
-	 */
-	if (size == PAGE_SIZE)
-		flags &= ~(SLAB_RED_ZONE| SLAB_DEBUG_FREE | \
-			SLAB_STORE_USER | SLAB_POISON | __OBJECT_POISON);
-
 	size = ALIGN(size, sizeof(void *));
 
 	/*
@@ -1495,9 +1476,8 @@ int calculate_sizes(struct kmem_cache *s
 
 	s->inuse = size;
 
-	if (size * 2 < (PAGE_SIZE << calculate_order(size)) &&
-		((flags & (SLAB_DESTROY_BY_RCU | SLAB_POISON)) ||
-			s->ctor || s->dtor)) {
+	if ((flags & (SLAB_DESTROY_BY_RCU | SLAB_POISON)) ||
+			s->ctor || s->dtor) {
 		/*
 		 * Relocate free pointer after the object if it is not
 		 * permitted to overwrite the first word of the object on
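
For reference, here is a rough sketch (not part of this patch, and the
names are illustrative only) of how an arch could hand page table pages
to the quicklist layer instead of allocating them from a PAGE_SIZE
kmem_cache, which is what makes the special casing removed above
unnecessary. It assumes the quicklist_alloc()/quicklist_free()
interface from the quicklist patches this change depends on; the
EXAMPLE_QUICK_PT index and function names are made up for the example.

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/quicklist.h>
#include <asm/pgtable.h>

#define EXAMPLE_QUICK_PT 0	/* illustrative quicklist index, not from this patch */

static pte_t *example_pte_alloc_one_kernel(struct mm_struct *mm,
					unsigned long address)
{
	/* Page comes from a per-cpu quicklist, not from a slab cache. */
	return (pte_t *)quicklist_alloc(EXAMPLE_QUICK_PT, GFP_KERNEL, NULL);
}

static void example_pte_free_kernel(pte_t *pte)
{
	/* Page is parked on the quicklist for reuse rather than freed. */
	quicklist_free(EXAMPLE_QUICK_PT, NULL, pte);
}

With page table pages handled this way, no PAGE_SIZE slab ever carries a
list pointer in its freelist field, so slub no longer has to force such
slabs to order 0 or strip their debugging options.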