
> uncompressing packages while they are still being downloaded

... but the archive directory is at the end of the file?

> no python VM startup overhead

This is about 20 milliseconds on my 11-year-old hardware.





HTTP range strikes again.
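Rough sketch of the trick, assuming the index/CDN honors Range requests (the URL is a placeholder): grab just the tail of the wheel, read the end-of-central-directory record out of it, and you know every member's offset before the full download finishes.

    import struct
    import urllib.request

    WHEEL_URL = "https://files.example.invalid/foo-1.0-py3-none-any.whl"  # placeholder

    # Ask only for the tail of the zip, where the end-of-central-directory record lives.
    req = urllib.request.Request(WHEEL_URL, headers={"Range": "bytes=-65536"})
    with urllib.request.urlopen(req) as resp:
        tail = resp.read()

    eocd = tail.rfind(b"PK\x05\x06")  # EOCD signature
    count, cd_size, cd_offset = struct.unpack("<HII", tail[eocd + 10:eocd + 20])
    print(f"{count} members, central directory at byte {cd_offset} ({cd_size} bytes)")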

As for 20 ms: if you spawn a worker process for each of 20 dependencies, that's 400 ms of startup overhead before any real work gets done.

Shaving half a second off lots of small operations is exactly what makes a tool feel fast.

Although, as we saw with zeeek in the other comment, you likely don't need multiprocessing, since the network stack and the unzip code in the stdlib release the GIL.

Threads are cheaper.

Maybe if you bundled pubgrub as a compiled extension, you could get pretty close to uv's perf.
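A minimal sketch of that threaded approach (the wheel URLs and destination path are made up): the socket reads and the zip inflate both drop the GIL, so a plain ThreadPoolExecutor overlaps them without paying interpreter startup per dependency.

    import io
    import urllib.request
    import zipfile
    from concurrent.futures import ThreadPoolExecutor

    WHEELS = [  # placeholder URLs; in practice these come from the resolver
        "https://files.example.invalid/pkg_a-1.0-py3-none-any.whl",
        "https://files.example.invalid/pkg_b-2.0-py3-none-any.whl",
    ]

    def fetch_and_extract(url, dest="./site-packages"):
        # Network reads and zlib decompression both release the GIL,
        # so these calls overlap across threads.
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            zf.extractall(dest)
        return url

    with ThreadPoolExecutor(max_workers=len(WHEELS)) as pool:
        for url in pool.map(fetch_and_extract, WHEELS):
            print("unpacked", url)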


Why are you starting a separate Python process for each dependency?

Real threads in Python are very recent and didn't exist when uv was created, so you needed multiple processes.

No, I mean why are you starting them for each dependency, rather than having a few workers pulling build requests from a queue?

At least one worker for each virtual CPU core you have. I've got 16 on my laptop; my servers have many more.

If I have 64 cores and 20 dependencies, I do want all 20 of them uncompressed in parallel. That's faster, and if I'm installing something, I wanna prioritize that workload.

But it doesn't have to be 20 workers. Even with, say, 5 workers pulling from a queue, that's 100 ms of startup. It adds up.
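For comparison, the queue variant being argued about looks roughly like this (install() is just a stub standing in for the download-and-unzip work, and the dependency names are made up):

    import os
    import queue
    import threading

    def install(dep):
        # Stand-in for the real fetch + unzip of one dependency.
        print("installing", dep)

    jobs = queue.Queue()
    for dep in ["pkg_a", "pkg_b", "pkg_c"]:  # hypothetical dependency names
        jobs.put(dep)

    def worker():
        while True:
            try:
                dep = jobs.get_nowait()
            except queue.Empty:
                return
            install(dep)

    # One worker per core instead of one per dependency.
    workers = [threading.Thread(target=worker) for _ in range(os.cpu_count() or 4)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()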


Using the -S flag (which skips importing the site module) can maybe cut startup in half.
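Easy enough to check on your own machine; the numbers vary a lot by hardware and Python version:

    import subprocess
    import sys
    import time

    def startup_ms(extra_flags=()):
        # Launch a fresh interpreter that does nothing and time it.
        t0 = time.perf_counter()
        subprocess.run([sys.executable, *extra_flags, "-c", "pass"], check=True)
        return (time.perf_counter() - t0) * 1000

    print(f"default : {startup_ms():6.1f} ms")
    print(f"with -S : {startup_ms(['-S']):6.1f} ms")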


