Just personal experience using LLMs. I've optimized a lot of things in the past year that I wouldn't normally have had time to do, especially build systems/pipelines.
"Saving X% of RAM" isn't a thing because RAM is itself a cache of compressed swap space and/or mapped files.
The lesson here is that pointer-chasing data structures and trees are a lot more expensive than most programmers, and most programming languages, like to pretend they are.
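A minimal sketch of that cost, assuming nothing beyond standard C: summing the same number of integers through a contiguous array versus a singly linked list whose nodes are heap-allocated one at a time (so traversal chases a pointer to a potentially different cache line per element). The element count and struct names are illustrative, not from the original comment.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000L  /* 10 million elements, illustrative */

struct node { int value; struct node *next; };

int main(void) {
    /* Contiguous array: sequential memory, friendly to the prefetcher. */
    int *arr = malloc(N * sizeof *arr);
    for (long i = 0; i < N; i++) arr[i] = 1;

    /* Linked list: one heap allocation per node, so traversal follows
       pointers that may land on scattered cache lines. */
    struct node *head = NULL;
    for (long i = 0; i < N; i++) {
        struct node *nd = malloc(sizeof *nd);
        nd->value = 1;
        nd->next = head;
        head = nd;
    }

    clock_t t0 = clock();
    long sum_arr = 0;
    for (long i = 0; i < N; i++) sum_arr += arr[i];
    clock_t t1 = clock();

    long sum_list = 0;
    for (struct node *nd = head; nd; nd = nd->next) sum_list += nd->value;
    clock_t t2 = clock();

    printf("array sum %ld: %.3f s\n", sum_arr, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("list  sum %ld: %.3f s\n", sum_list, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```

On typical hardware the list traversal comes out several times slower for the same amount of "work", which is the hidden cost being described.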
It's more than that: performance is generally also much worse if you're using a shit ton of memory, because CPUs are bottlenecked by cache. A larger working set means cache lines get evicted more often and there are more misses, which can greatly impact performance.
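A rough way to see that effect, under assumed sizes and a simple xorshift index generator (both illustrative, not from the comment): do the same number of pseudo-random reads over buffers of increasing size and watch the average cost per read climb once the working set no longer fits in each cache level.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

/* Same number of random reads over progressively larger buffers;
   the per-read cost jumps as the working set outgrows L1, L2, L3. */
int main(void) {
    const long accesses = 50L * 1000 * 1000;
    for (long size = 1 << 12; size <= 1 << 28; size <<= 2) {  /* 4 KiB .. 256 MiB */
        long n = size / (long)sizeof(uint32_t);
        uint32_t *buf = malloc((size_t)size);
        for (long i = 0; i < n; i++) buf[i] = (uint32_t)i;

        uint32_t x = 123456789, sum = 0;
        clock_t t0 = clock();
        for (long i = 0; i < accesses; i++) {
            x ^= x << 13; x ^= x >> 17; x ^= x << 5;  /* xorshift: pseudo-random index */
            sum += buf[x % n];
        }
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%8ld KiB: %.2f ns/access (checksum %u)\n",
               size / 1024, secs * 1e9 / (double)accesses, sum);
        free(buf);
    }
    return 0;
}
```

The read count and loop body stay identical across iterations; only the footprint grows, so the slowdown you see at the larger sizes is the cache-miss cost, not extra work.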
But they do keep the active tab of each window in memory. Firefox even continues rendering the active tab of every window, including windows that are not visible.
Not sure if this 45 MB is per browser instance or per tab, but in the latter case, 10 windows would save 450 MB, which is more than 10% of RAM on a lower-end device.