This is why context optimization is going to be critical, and thank you so much for sharing this paper; it validates what we are trying to do. If we manage to keep the single-agent baseline below 40% through context optimization, then coordination might actually work well and help scale agentic systems.
I agree on measuring, and it is planned, especially once we integrate context optimization. I think the value of context optimization will go beyond avoiding compaction and reducing cost: it should also give us more reliable agents.
> coordination yields diminishing or negative returns once single-agent baselines exceed ~45%
This is going to be the big thing to overcome, and without actually measuring it, all we're doing is AI astrology.
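For what it's worth, the comparison doesn't need much machinery. A minimal sketch like the one below (Python; `run_single_agent`, `run_multi_agent`, and the task list are hypothetical stand-ins for whatever harness you already have) is enough to put a number on the coordination gain and check whether it actually shrinks as the baseline approaches that ~45% mark.

```python
# Hypothetical sketch: score a single-agent baseline and a coordinated
# multi-agent run on the same task set, then report the absolute gain.
# The runner callables and task list are placeholders, not a real API.

from statistics import mean
from typing import Callable, Iterable


def success_rate(run: Callable[[str], bool], tasks: Iterable[str]) -> float:
    """Fraction of tasks the given runner solves."""
    return mean(1.0 if run(task) else 0.0 for task in tasks)


def coordination_gain(
    run_single_agent: Callable[[str], bool],
    run_multi_agent: Callable[[str], bool],
    tasks: list[str],
) -> dict[str, float]:
    baseline = success_rate(run_single_agent, tasks)
    coordinated = success_rate(run_multi_agent, tasks)
    return {
        "single_agent_baseline": baseline,
        "multi_agent": coordinated,
        "absolute_gain": coordinated - baseline,
    }


# Usage (with your own runners and eval tasks):
# report = coordination_gain(my_single_runner, my_multi_runner, eval_tasks)
# print(report)  # e.g. does absolute_gain shrink once baseline nears 0.45?
```

Nothing fancy, but tracking that one number over time would separate a real coordination win from the astrology.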