slab: Bypass free lists for __drain_alien_cache()

__drain_alien_cache() currently drains objects by freeing them to the
(remote) free lists of the original node. However, each node also has a
shared array cache containing objects to be used on any processor of
that node. We can avoid a number of remote node accesses by copying the
pointers to the free objects into the remote node's shared array
instead.

Depends on the earlier patch that introduces the transfer_objects()
function.

And while we are at it: skip alien draining if the alien cache spinlock
is already taken.

Signed-off-by: Christoph Lameter

Index: linux-2.6.16-rc6-mm2/mm/slab.c
===================================================================
--- linux-2.6.16-rc6-mm2.orig/mm/slab.c	2006-03-21 14:53:40.000000000 -0800
+++ linux-2.6.16-rc6-mm2/mm/slab.c	2006-03-21 14:55:09.000000000 -0800
@@ -974,6 +974,13 @@ static void __drain_alien_cache(struct k
 
 	if (ac->avail) {
 		spin_lock(&rl3->list_lock);
+		/*
+		 * Stuff objects into the remote nodes shared array first.
+		 * That way we could avoid the overhead of putting the objects
+		 * into the free lists and getting them back later.
+		 */
+		transfer_objects(rl3->shared, ac, ac->limit);
+
 		free_block(cachep, ac->entry, ac->avail, node);
 		ac->avail = 0;
 		spin_unlock(&rl3->list_lock);
@@ -989,8 +996,8 @@ static void reap_alien(struct kmem_cache
 
 	if (l3->alien) {
 		struct array_cache *ac = l3->alien[node];
-		if (ac && ac->avail) {
-			spin_lock_irq(&ac->lock);
+
+		if (ac && ac->avail && spin_trylock_irq(&ac->lock)) {
 			__drain_alien_cache(cachep, ac, node);
 			spin_unlock_irq(&ac->lock);
 		}
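
For readers who do not have the prerequisite patch at hand: the idea of
transfer_objects() is to move cached object pointers directly from one
array cache into the free space of another, so the objects never touch
the slab free lists. Below is a minimal standalone sketch of those
semantics. It is an illustration only: the struct array_cache_model and
the function transfer_objects_model are hypothetical names invented
here; the real struct array_cache and the actual transfer_objects()
implementation are defined in the earlier patch, not in this one.

	/* Standalone model: each cache holds an array of object
	 * pointers; we move as many as fit into the destination. */
	#include <stddef.h>
	#include <string.h>

	struct array_cache_model {		/* hypothetical stand-in */
		unsigned int avail;		/* pointers currently cached */
		unsigned int limit;		/* capacity of entry[] */
		void *entry[64];		/* cached object pointers */
	};

	/*
	 * Move up to max pointers from 'from' into 'to'; return how
	 * many were moved. In the kernel the caller would already hold
	 * the destination node's list_lock, as __drain_alien_cache()
	 * does above.
	 */
	static unsigned int
	transfer_objects_model(struct array_cache_model *to,
			       struct array_cache_model *from,
			       unsigned int max)
	{
		unsigned int nr = from->avail < max ? from->avail : max;
		unsigned int room = to->limit - to->avail;

		if (nr > room)
			nr = room;
		if (!nr)
			return 0;

		/* Take from the top of 'from' so the entries that
		 * remain there stay contiguous. */
		memcpy(&to->entry[to->avail],
		       &from->entry[from->avail - nr],
		       nr * sizeof(void *));
		to->avail += nr;
		from->avail -= nr;
		return nr;
	}

Whatever the destination cannot absorb (when nr is clamped to the
available room) stays in the source cache, which is why the hunk above
still calls free_block() afterwards to release any leftover objects to
the free lists.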