Joan,
The best option, short of creating a second logical collector on the same physical collector, is to change the testing approach.
A short-term option is to optimize the code of the script zencommand is calling. At scale, this adds up dramatically. Chaining sed, awk, and other binaries is very common, but each process spawn costs precious milliseconds and causes thread contention. Every optimization you can make counts.
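As a hypothetical illustration of what "optimizing the chain" means in practice (the `df`-based check below is my own example, not from any particular ZenPack), most sed/awk/tail pipelines can be collapsed into a single awk invocation:

```shell
# Hypothetical check: pull the used-percentage for / out of `df`.
# Chained version: four processes per poll (df, tail, awk, sed).
df -P / | tail -1 | awk '{print $5}' | sed 's/%//'

# Consolidated version: one awk selects the line, strips the '%',
# and prints, halving the process count for the same result.
df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
```

Two fewer fork/exec cycles per datapoint sounds trivial, but multiplied across thousands of datasources per cycle it is exactly the kind of contention described above.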
A better option is to use PythonCollector (https://github.com/Hackman238/ZenPacks.zenoss.PythonCollector), as it's quite a bit more scalable than zencommand. The downside is that your test would need to be rewritten in Python.
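The plugin boilerplate itself is Zenoss-specific, but the actual work of a typical shell test usually translates to a few lines of plain Python. A sketch (the `df`-parsing logic and names here are my own illustration, not PythonCollector API):

```python
# Hypothetical example: the parsing half of a shell pipeline
# (df | tail | awk | sed) reduced to pure Python. Inside a real
# PythonCollector plugin, logic like this runs in-process, so no
# subprocesses are spawned per datapoint.

def used_percent(df_output):
    """Return the used-% column from the last line of `df -P` output."""
    last_line = df_output.strip().splitlines()[-1]
    fields = last_line.split()
    return int(fields[4].rstrip('%'))

sample = """Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 100000 42000 58000 42% /
"""
print(used_percent(sample))  # 42
```

The win isn't just raw speed; it's that the collector stops paying fork/exec and pipe-teardown costs for every single poll.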
The best option, in my opinion, is to use a similar test that fits under another zendaemon (like SNMP), or to craft a specialized daemon dedicated to the test that needs large scale. The latter is usually reserved for cases where you need very large scale, are producing a package for a customer whose scale is unknown, or need to create a custom datasource.
The big problem with zencommand isn't really zencommand at all: it's the nature of what it's designed to do, which is execute scripts and commands. The competition for execution resources adds up fast, especially with high parallelism settings. What's worse, the collectors are often subject to high IO wait from RRD writes to the disks. With that, it's very easy for a normally fast script to dramatically bloat in execution time while waiting for disk access.
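A quick way to confirm that IO wait (and not the script itself) is the culprit is to compare wall-clock time against CPU time; a large gap means the process spent most of its life blocked rather than computing. A minimal sketch, using `time.sleep` as a stand-in for blocking on a busy disk:

```python
import time

wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.2)  # stand-in for a script blocked on disk IO

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# wall is ~0.2s while cpu stays near zero: the time went to waiting,
# not to work. The same pattern shows up when RRD writes saturate disk.
print(f"wall={wall:.2f}s cpu={cpu:.2f}s waiting={wall - cpu:.2f}s")
```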
--Shane Scott (Hackman238)