We're running single-blade 2400s, but without vCMP, and don't see what you're seeing. During peak production traffic for a busy VIP (SSL offload on both client and server side, X-Forwarded-For, several simple iRules) we push 1.3 Gbps client-side and 1.3 Gbps server-side without any CPU or memory pressure on the 2400. The 1.3 Gbps ceiling is probably an artifact of SNAT: with fewer source addresses in play, the upstream Cisco gear's trunk (port-channel) hashing doesn't spread traffic evenly across the members. In our experience, the 2400 is quite a beast.

For us, the 'hey, why is it slow when going through the F5 but not when going to the node directly' complaints are nearly always traced to frame buffer saturation (drops) on the Cisco equipment, which leaves devices spinning their wheels waiting on retransmissions, reassembly, etc.
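If you want to check for the same thing on your side, the usual first stop is the interface counters on the Cisco gear facing the F5. A sketch of what that looks like in IOS (the interface name here is just an example, adjust for your uplink):

```
! Hypothetical interface name; use whichever port faces the F5
show interfaces GigabitEthernet0/1 | include drops|overrun|ignored
! Non-zero and climbing "output drops" or "overrun" counters here
! generally mean the port's frame buffers are saturating under bursts
```

Clear the counters first (clear counters) and watch them during a slow period; if they climb while users are complaining, you've likely found your culprit rather than the F5 itself.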