The most surprising part of uv's success to me isn't Rust at all, it's how much speed we "unlocked" just by finally treating Python packaging as a well-specified systems problem instead of a pile of historical accidents. If uv had been written in Go or even highly optimized CPython, but with the same design decisions (PEP 517/518/621/658 focus, HTTP range tricks, aggressive wheel-first strategy, ignoring obviously defensive upper bounds, etc.), I strongly suspect we'd be debating a 1.3× vs 1.5× speedup instead of a 10× headline — but the conversation here keeps collapsing back to "Rust rewrite good/bad." That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
It's not just greenfield-ness but the fact it's a commercial endeavor (even if the code is open-source).
Building a commercial product means you pay people money (or something they value equally) to do your bidding. You don't have to worry about politics, licensing, and all the usual FOSS-related drama. You pay them to set their opinions aside and build what you want, not what they want (and if that doesn't work, it just means you need to offer more money).
In this case it's a company that believes they can make a "good" package manager they can sell/monetize somehow and so built that "good" package manager. Turns out it's at least good enough that other people now like it too.
This would never work in a FOSS world, because the project would be stuck in endless planning: everyone will have an opinion on how it should be done and nothing will actually get done.
Similar story with systemd - all the bitching you hear about it (to this day!) is the stuff that would've happened during its development phase had it been developed as a typical FOSS project, and it would ultimately have gone nowhere - but instead it was one guy who just did what he wanted and shared it with the world, and enough other people liked it and started building upon it.
I don't know what you think "typical FOSS projects" are, but in my experience they are exactly like your systemd example: one person who does what they want and shares it with the world. The rest of your argument doesn't really make any sense with that in mind.
That's no longer as true as it once was. I get the feeling that quite a few people would consider "benevolent dictator for life" an outdated model for open source communities. For better or worse, there's a lot of push to transition popular projects towards being led by committee. Results are mixed (literally: I see both successes and failures), but that doesn't seem to have any effect on the trend.
Only a very, very small fraction of open source projects get to the point where they legitimately need committees and working groups and maintainer politics/drama.
> quite a few people would consider "benevolent dictator for life" an outdated model for open source communities.
I think what most people dislike are rugpulls and when commercial interests override what contributors/users/maintainers are trying to get out of a project.
For example, we use forgejo at my company because it was not clear to us to what extent gitea would play nicely with us if we externalized a hosted version/deployment of their open source software (which they somewhat recently formed a company around, and which led to forgejo forking it under the GPL). I'm also not a fan of what minio did recently to that effect, and am skeptical but hopeful that seaweedfs is not going to do something similar.
We ourselves are building out a community around our static site generator https://github.com/accretional/statue as FOSS with commercial backing. The difference is that we're open and transparent about it from the beginning, and static site generators/component libraries are probably some of the least painful to fork or take issue with their direction, vs critical infrastructure like distributed systems' storage layer.
Bottom line is, BDFL works when 1. you aren't asking people to bet their business on you staying benevolent 2. you remain benevolent.
> Only a very, very small fraction of open source projects get to the point where they legitimately need committees and working groups and maintainer politics/drama.
You’re not wrong, but those are the projects we’re talking about in this thread. uv has become large enough to enter this realm.
> Bottom line is, BDFL works when 1. you aren't asking people to bet their business on you staying benevolent 2. you remain benevolent.
That second point is doing a lot of heavy lifting. All of the BDFL models depend on that one person remaining aligned, interested, and open to new ideas. A lot of the small projects I’ve worked with have had BDFL models where even simple issues like the BDFL becoming busy or losing interest became the death knell of the project. On the other hand, I can think of a few committee-style projects where everything collapsed under infighting and drama from the committee.
More projects should push back against calls for "governance" and "steering committees" and such. As you noticed, they paralyze projects. It took JavaScript seven years to get a half-baked version of Python context managers, and Python itself has slowed down markedly.
The seemingly irresistible social pressure to committee-ize development is a paper tiger. It disappears if you stand your ground and state firmly "This is MY project".
It depends on governance, for want of a better word: if a project has a benevolent dictator then that project will likely be more productive than one that requires consensus building.
That's what I'm saying. Benevolent dictator is the rule, not the exception, in FOSS. Which is why GP's argument that private companies good, FOSS bad, makes no sense.
I think OP is directing their ire towards projects with multiple maintainers, which are thus more likely to be hamstrung by consensus building and thus less productive. It does seem like we've been swamped with drama posts about large open-source projects and their governance, notably with Rust itself, linux incorporating Rust, Pebble, etc. It's not hard to imagine this firehose of dev-drama (that's not even about actual code) overshadowing the fact that the overwhelming majority of code ever written has a benevolent dictator model.
The argument isn't about proprietary vs open, but that design by committee, whether that committee be a bunch of open source heads that we like, or some group that we've been told to other and hate, has limitations that have been exhibited here.
> You don't have to worry about politics, licensing, and all the usual FOSS-related drama. You pay them to set their opinions aside and build what you want, not what they want (and if that doesn't work, it just means you need to offer more money).
Money is indeed a great lubricator.
However, it's not black-and-white: office politics is a long standing term for a reason.
Office politics happen when people determine they can get more money by engaging in politics instead of working. This is just an indicator people aren't being paid enough money (since people politicking around is detrimental to the company, it is better off paying them whatever it takes for them not to engage in such behavior). "You get what you pay for" applies yet again.
> In large companies people engage in politics because it becomes necessary to accomplish large things.
At a large company, your job after a certain level depends on your “impact” and “value delivered”. The challenge is getting 20 other teams to work on your priorities and not their priorities. They too need to play to win to keep their job or get that promotion.
For software engineering, “impact” or “value delivered” are pretty much always your job unless you work somewhere really dysfunctional that’s measuring lines of code or some other nonsense. But that does become a lot about politics after some level.
I would not say it’s about getting other people aligned with your priorities instead of theirs but rather finding ways such that your priorities are aligned. There’s always the “your boss says it needs to help me” sort of priority alignment but much better is to find shared priorities. e.g. “We both need X; let’s work together.” “You need Foo which you could more easily achieve by investing your efforts into my platform Bar.”
If you are a fresh grad, you can mostly just chug along with your tickets and churn out code. Your boss (if you have a good boss) will help you make sure the other people work with you.
When you are higher up - when you become said good boss, or that boss's boss - the dynamics of the grandfather comment kick in fully.
Agree. A fresh grad is still measured on “impact” but that impact is generally localized. e.g. Quality of individual code and design vs ability to wrangle others to work with you.
Impact is a handwavy way of saying “is your work good for the company”.
Figuring out how to allocate scarce career resources at a company ("impact", recognition, promotions, etc) is fundamental to the job of getting stuff done in a large organization.
There's an old saying: politics began when two people in a cave found themselves with only one blanket.
It starts as soon as the population exceeds 1. Politics is the craft of influence. And, debatably, there's politics even at population size = 1, between your subconscious instinctive mind (eat the entire box of donuts) and your conscious mind (don't spike your blood sugar).
I think the "too many people" problem happens because a company would rather hire 10 "market rate" people than 3 well-compensated ones. Headcount inflation dilutes responsibility and rewards, so even if one of the "market rate" guys does the best work possible they won't get rewarded proportionally... so if hard work isn't going to get them adequate comp, maybe politics will.
Alternatively, companies hire multiple subject domain experts, and pay them handsomely.
The experts believe they've been hired for the value of their opinions, rather than for being 'yes-people', and have differing opinions to each other.
At a certain pay threshold, there are multiple people whose motivation is not "how do I maximise my compensation?" but instead "how do I do the best work I can?" Sometimes this presents as vocal disagreements between experts.
> a company would rather hire 10 "market rate" people than 3 well-compensated ones
The former is probably easier. They don't have to justify or determine the salaries, and don't have to figure out who's worth the money, and don't have to figure out how to figure that out.
It also follows that the well-compensated people are probably well-compensated because they know how to advocate for their worth, which usually includes a list of things they will tolerate and a list they will not, whereas the "market rate" hire is just happy to be there and more inclined to go along with, ya know, whatever.
I believe incompetence is the key. When someone cannot compete (or the office does not use a yardstick that can actually be measured), politics is the only way to get ahead.
Like when the Nobel prize goes to the man instead of the woman who did the work … sometimes. Take the credit and get the promotion.
It's a question of what you want to invest your time in. Everyone creates output, whether it's lines of code, a smoke screen to hide your social media time, or a set of ongoing conversations and perceptions that you are of use to the organization.
Sounds like you’re really down on FOSS and think FOSS projects don’t get stuff done and have no success? You might want to think about that a bit more.
FOSS can sometimes get stuff done but I'd argue it gets stuff done in spite of all the bickering, not because of it. If all the energy spent on arguments or "design by committee" was spent productively FOSS would go much farther (hell maybe we'd finally get that "year of the Linux desktop").
It doesn't have to make money now. But it's clearly pouring a commercial-project level of resources into uv, on the belief that they will somehow recoup that investment later on.
It doesn’t have to ever make money on us for it to be worth it to them.
If you’re a Python shop, compare
- writing uv and keeping it private makes package management easier for your own packages
- writing uv and opening it up, and getting all/most third party libs to use it makes package management easier for your own packages and third party packages you use
Keep in mind that "making money" doesn't have to be from people paying to use uv.
It could be that they calculate the existence of uv saves their team more time (and therefore expense) in their other work than it took to create. It could be that recognition for making the tool is worth the cost as a marketing expense. It could be that other companies donate money to them, either ahead of time in order to get uv made, or after it was made to encourage more useful tools to be made. etc
«« I don't want to charge people money to use our tools, and I don't want to create an incentive structure whereby our open source offerings are competing with any commercial offerings (which is what you see with a lot of hosted-open-source-SaaS business models).
What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.
An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. A lot of big companies use uv. We spend time talking to them. They all spend money on private package registries, and have issues with them. We could build a private registry that integrates well with uv, and sell it to those companies. [...]
But the core of what I want to do is this: build great tools, hopefully people like them, hopefully they grow, hopefully companies adopt them; then sell software to those companies that represents the natural next thing they need when building with Python. Hopefully we can build something better than the alternatives by playing well with our OSS, and hopefully we are the natural choice if they're already using our OSS. »»
nah, a lot of people working on `uv` have a massive amount of experience working on the rust ecosystem, including `cargo` the rust package manager. `uv` is even advertised as `cargo` for python. And what is `cargo`? a FLOSS project.
Lots of lessons from other FLOSS package managers helped `cargo` become great, and then this knowledge helped shape `uv`.
> This would never work in a FOSS world because the project will be stuck in endless planning as everyone will have an opinion on how it should be done and nothing will actually get done.
numpy is the de-facto foundation for data science in python, which is one of the main reasons, if not the main reason, why people use python
I largely agree but don't want to entirely discount the effect that using a compiled language had.
At least in my limited experience, the selling point with the most traction is that you don't already need a working python install to get UV. And once you have UV, you can just go!
If I had a dollar for every time I've helped somebody untangle the mess of python environment libraries created by an undocumented mix of python delivered through the distributions package management versus native pip versus manually installed...
At least on paper, both poetry and UV have a pretty similar feature set. You do, however, need a working python environment to install and use poetry.
> the selling point with the most traction is that you don't already need a working python install to get UV. And once you have UV, you can just go!
I still genuinely do not understand why this is a serious selling point. Linux systems commonly already provide (and heavily depend upon) a Python distribution which is perfectly suitable for creating virtual environments, and Python on Windows is provided by a traditional installer following the usual idioms for Windows end users. (To install uv on Windows I would be expected to use the PowerShell equivalent of a curl | sh trick; many people trying to learn to use Python on Windows have to be taught what cmd.exe is, never mind PowerShell.) If anything, new Python-on-Windows users are getting tripped up by the moving target of attempts to make it even easier (in part because of things Microsoft messed up when trying to coordinate with the CPython team; see for example https://stackoverflow.com/questions/58754860/cmd-opens-windo... when it originally happened in Python 3.7).
> If I had a dollar for every time I've helped somebody untangle the mess of python environment libraries created by an undocumented mix of python delivered through the distributions package management versus native pip versus manually installed...
Sure, but that has everything to do with not understanding (or caring about) virtual environments (which are fundamental, and used by uv under the hood because there is really no viable alternative), and nothing to do with getting Python in the first place. I also don't know what you mean about "native pip" here; it seems like you're conflating the Python installation process with the package installation process.
Linux systems commonly already provide an outdated system Python you don’t want to use, and it can’t be used to create a venv of a version you want to use. A single Python version for the entire system fundamentally doesn’t work for many people thanks to shitty compat story in the vast ecosystem.
Even languages with a great compat story are moving to support multiple toolchains natively. For instance, go 1.22 on Ubuntu 24.04 LTS is outdated, but it will automatically download the 1.25 toolchain when it sees go 1.25.0 in go.mod.
> Linux systems commonly already provide an outdated system Python you don’t want to use
They can be a bit long in the tooth, yes, but from past experience another Python version I don't want to use is anything ending in .0, so I can cope with them being a little older.
That's in quite a bit of contrast to something like Go, where I will happily update on the day a new version comes out. Some care is still needed - they allow security changes particularly to be breaking, but at least those tend to be deliberate changes.
> Linux systems commonly already provide an outdated system Python you don’t want to use
Even with LTS Ubuntu updated only at EOL, Python will not be EOL most of the time.
> A single Python version for the entire system fundamentally doesn’t work for many people thanks to shitty compat story in the vast ecosystem.
My experience has been radically different. Everyone is trying their hardest to provide wheels for a wide range of platforms, and all the most popular projects succeed. Try adding `--only-binary=:all:` to your pip invocations and let me know the next time that actually causes a failure.
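Concretely, that looks like this (package names here are just examples):

    $ python -m pip install --only-binary=:all: numpy pandas requests

If something in the set has no wheel for your platform, this fails loudly instead of silently falling back to a source build.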
Besides which, I was very specifically talking about the user story for people who are just learning to program and will use Python for it. Because otherwise this problem is trivially solved by anyone competent. In particular, building and installing Python from source is just the standard configure / make / make install dance, and it Just Works. I have done it many times and never needed any help to figure it out even though it was the first thing I tried to build from C source after switching to Linux.
For much of the ML/scientific ecosystem, you're lucky to get all your deps working with the latest minor version of Python six months to a year after its release. Random ML projects with hundreds to thousands of stars on GitHub may only work with a specific, rather ancient version of Python.
> Because otherwise this problem is trivially solved by anyone competent. In particular, building and installing Python from source is just the standard configure / make / make install dance, and it Just Works. I have done it many times and never needed any help to figure it out even though it was the first thing I tried to build from C source after switching to Linux.
I compiled the latest GCC many times with the standard configure / make / make install dance when I just started learning *nix command line. I even compiled gmp, mpfr, etc. many times. It Just Works. Do you compile your GCC every time before you compile your Python? Why not? It Just Works.
Time. CPython compiles in a few minutes on an underpowered laptop. I don't recall last time I compiled GCC, but I had to compile LLVM and Clang recently, and it took significantly longer than "a few minutes" on a high-end desktop.
Why not just use a Python container rather than rely on having the latest binary installed on the system? Then venv inside the container. That would get you the “venv of a version” that you are referring to
It's more complex and heavier than using uv. I see docker/vm/vagrant/etc as something I reach for when the environment I want is too big, too fancy or too nondeterministic to set up manually and locally; but the entire point is that "plain Python with some dependencies" really shouldn't qualify as any of these (just like the build environment for a random Rust library).
Also, what do you do when you want to locally test your codebase across many Python versions? Do you keep track of several different containers? If you start writing some tool to wrap that, you're back at square one.
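(For comparison, with uv this is roughly the following - flags from memory, so treat it as a sketch:

    $ uv python install 3.11 3.12 3.13
    $ uv run --python 3.11 pytest
    $ uv run --python 3.12 pytest

and the interpreters get cached once, rather than being baked into per-version images.)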
Our firm uses python extensively, and a virtual environment for every script is ... difficult. We have dozens of python scripts running for team research and in production, from small maintenance tools to rather complex daemons. Add to that the hundreds of Jupyter notebooks used by various people. Some have a handful of dependencies, some dozens of dependencies. While most of those scripts/notebooks are only used by a handful of people, many are used company-wide.
Further, we have a rather largish set of internal libraries most of our python programs rely on. And some of those rely on external 3rd party APIs (often REST). When we find a bug or something changes, more often than not we want to roll out the changed internal lib so that all programs that use it get the fix. Having to get everyone to rebuild and/or redeploy everything is a non-starter as many of the people involved are not primarily software developers.
We usually install into the system dirs and have a dependency problem maybe once a year. And it's usually trivially resolved (the biggest problem was with some google libs which had internally inconsistent dependencies at one point).
I can understand encouraging the use of virtual environments, but this movement towards requiring them ignores what, I think, is a very common use case. In short, no one way is suitable for everyone.
But in your case, if you had even just a standard, hardened RHEL image, then you could run as many container variations as you want and not be impacted by host changes. Actually the host can stay pretty static.
> Why not just use a Python container rather than rely on having the latest binary installed on the system?
Sometimes this is the right answer. Sometimes docker/podman/runc are not an option nor would the headache of volumes/mounts/permissions/hw-pass-through be worth the additional mess.
It is hard to overstate how delightful putting `uv` in the shebang is:
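Something like this, for instance (a sketch; the comment block is PEP 723 inline script metadata, and `-S` needs an `env` that supports it):

    #!/usr/bin/env -S uv run --script
    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["requests"]
    # ///
    import requests

    print(requests.get("https://example.com").status_code)

chmod +x it, run it, and the right interpreter and dependencies show up on first use.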
At no point did I have a detour to figure out why `python` is symlinked to `python3` unless I am in some random directory where there is a half-broken `conda` environment...
Yes, PATH-driven interpreter selection is the source of the detours. uv eliminates interpreter ambiguity but requires uv as a prerequisite. This improves portability inside environments that standardize uv; it’s not “portable to machines with nothing installed.”
Though, this isn’t about avoiding installs; it’s about making the one install (uv) the only thing you have to get right, instead of debugging whatever python means today.
I was advocating for containers as the “hard isolation / full stack” solution which eliminate host interpreter ambiguity and OS drift by running everything inside a pinned image. But you do need podman and have the permissions set right on it.
'we can't ship the Python version you want for your OS so we'll ship the whole OS' is a solution, but the 'we can't' part was embarrassing in 2015 already.
So basically, it avoids the whole chicken-and-egg problem. With UV you've simply always got "UV -> project Python 1.23 -> project". UV is your dependency manager, and your Python is just another dependency.
With other dependency managers you end up with "system Python 3.45 -> dep manager -> project Python 1.23 -> project". Or worse, "system Python 1.23 -> dep manager -> project Python 1.23 -> project". And of course there will be people who read about the problem and install their own Python manager, so they end up with a "system Python -> virtualenv Python -> poetry Python -> project" stack. Or the other way around, and they'll end up installing their project dependencies globally...
Sorry, but that is simply incorrect, on many levels.
Virtual environments are the fundamental way of setting up a Python project, whether or not you use uv, which creates and manages them for you. And these virtual environments can freely either use or not use the system environment, whether or not you use uv to create them. It's literally a single-line difference in the `pyvenv.cfg` file, which is a standard required part of the environment (see https://peps.python.org/pep-0405/), created whether or not you use uv.
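For reference, a typical `pyvenv.cfg` is only a few lines (the paths and version here are made up), and that one boolean is the switch in question:

    home = /usr/bin
    include-system-site-packages = false
    version = 3.12.3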
Most of the time you don't need a different Python version from the system one. When you do, uv can install one for you, but it doesn't change what your dependency chain actually is.
Python-native tools like Poetry, Hatch etc. also work by managing standards-defined virtual environments (which can be created using the standard library, and you don't even have to bootstrap pip into them if you don't want to) in fundamentally the same way that uv does. Some of them can even grab Python builds for you the same way that uv does (of course, uv doesn't need a "system Python" to exist first). "system Python -> virtualenv Python -> poetry Python -> project" is complete nonsense. The "virtualenv Python" is the system Python — either a symlink or a stub executable that launches that Python — and the project will be installed into that virtual environment. A tool like Poetry might use the system Python directly, or it might install into its own separate virtual environment; but either way it doesn't cause any actual complication.
Anyone who "ends up installing their project dependencies globally" has simply not read and understood Contemporary Python Development 101. In fact, anyone doing this on a reasonably new Linux has gone far out of the way to avoid learning that, by forcefully bypassing multiple warnings (such as described in https://peps.python.org/pep-0668/).
No matter what your tooling, the only sensible "stack" to end up with, for almost any project, is: base Python (usually the system Python but may be a separately installed Python) -> virtual environment (into which both the project and its dependencies are installed). The base Python provides the standard library; often there will be no third-party libraries, and even if there are they will usually be cut off intentionally. (If your Linux comes with pre-installed third-party libraries, they exist primarily to service tools that are part of your Linux distribution; you may be able to use them for some useful local hacking, but they are not appropriate for serious, publishable development.)
Your tooling sits parallel to, and isolated from, that as long as it is literally anything other than pip — and even with pip you can have that isolation (it's flawed but it works for common cases; see for example https://zahlman.github.io/posts/2025/02/28/python-packaging-... for how I set it up using a vendored copy of pip provided by Pipx), and have been able to for three years now.
> Most of the time you don't need a different Python version from the system one.
Except for literally anytime you’re collaborating with anyone, ever? I can’t even begin to imagine working on a project where folks just use whatever python version their OS happens to ship with. Do you also just ship the latest version of whatever container because most of the time nothing has changed?
> has simply not read and understood Contemporary Python Development 101.
They haven't. At the end of the day, they just want their program to work. You and I can design a utopian packaging system, but the physics PhD with a hand-me-down windows laptop and access to her university's Linux research cluster doesn't care about python other than that it has a PITA library situation that UV addresses.
You misunderstand. The physicists are developing their own software to analyze their experimental data. They typically have little software development experience, but there is seldom someone more knowledgeable available to support them. Making matters worse, they often are not at all interested in software development and thus also don't invest the time to learn more than the absolute minimum necessary to solve their current problem, even if it could save them a lot of time in the long run. (Even though I find the situation frustrating, I can't say I don't relate, given that I feel the same way about LaTeX.)
Conda has slowly but surely gone down the drain as well. It used to be bullet proof but there too you now get absolutely unsolvable circular dependencies.
Good question, I can't backtrack right now but it was apmplanner that I had to compile from source, and it contains some python that gets executed during the build process (I haven't seen it try to run it during normal execution yet).
Probably either python-serial or python-pexpect judging by the file dates, and neither of these is so exciting that there should have been any version conflicts at all.
And the only reason I had to rebuild it at all was due to another version conflict in the apm distribution that expects a particular version of pixbuf to be present on the system and all hell breaks loose if it isn't, and you can't install that version on a modern system because that breaks other packages.
It is insane how bad all this package management crap is. The GNU project and the linux kernel are the only ones that have never given me any trouble.
They're not applications developers, but they need to write code. That's the whole point. Python is popular within academia because it replaces R/Excel/VB.Net, not Java/C++.
> If I had a dollar for every time I've helped somebody untangle the mess of python environment libraries created by an undocumented mix of python delivered through the distributions package management versus native pip versus manually installed...
macos and linux usually come with a python installation out of the box. windows should be following suit but regardless, using uv vs venv is not that different for most users. in fact to use uv in a project, `uv venv` seems like a prerequisite.
> macos and linux usually come with a python installation out of the box
Yep. But it's either old or broken or both. Using a tool not dependent on the python ecosystem to manage the python ecosystem is the trick here that makes it so reliable and invulnerable to issues that characterize python / dependency hell.
imho the dependency hell is a product of the dependencies themselves (a la node), especially the lack of version pinning in the majority of projects.
conda already had the independence from python distribution, but it still had its own set of problems with overlap with pip (see mamba).
i personally use uv for projects at work, but for smaller projects, `requirements.txt` feels more readable than the `toml` and `uv.lock`. in the spirit of encouraging best practices, it is still probably simpler to do it with older tools. but larger projects definitely benefit, such as in building container images.
If I want to install Python on Windows and start using pip, I grab an installer from python.org and follow a wizard. On Linux, I almost certainly already have it anyway.
If I want to bootstrap from uv on Windows, the simplest option offered involves Powershell.
Either way, I can write quite a bit with just the standard library before I have to understand what uv really is (or what pip is). At that point, yes, the pip UX is quite a bit messier. But I already have Python, and pip itself was also trivially installable (e.g. via the standard library `ensurepip`, or from a Linux system package manager — yes, still using the command line, but this hypothetical is conditioned on being a Linux user).
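(And if pip is ever missing from an installation, bootstrapping it from the standard library is a one-liner:

    $ python -m ensurepip --upgrade

assuming your Python wasn't built with ensurepip stripped out, as some distros do.)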
Not many normal people want to install python. Instead, the author of the software they are trying to use wants them to install python. So they follow the readme, download the windows installer as you say, pip this, pipx that, conda this, requirements.txt that, and five minutes later they have a magic error telling them that the tensorflow version they are installing is not compatible with the pytorch version they are installing, or some such.
The aftertaste python leaves is lasting-disgusting.
Scenarios like that occur daily. I do quite a bit of software development and whenever I come across something that really needs python I mentally prepare for a day of battle with the various (all subtly broken) package managers, dependency hell and circular nonsense to the point that I am also ready to give up on it after a day of trying.
Just recently: a build of a piece of software that itself wasn't written in python, but that urgently needed a very particular version of it with a whole bunch of dependencies that refused to play nice with Anaconda for some reason (which, in spite of the fact that it too is becoming less reliable, is probably still the better one). The solution? Temporarily move anaconda to a backup directory, remove the venv activation code from .bashrc and compile the project, then restore everything to the way it was before (which I need it to be because I have some other stuff on the stove that is built using python, because there isn't anything else).
And let's not go into bluetooth device support in python, anything involving networking that is a little bit off the beaten path and so on.
> Scenarios like that occur daily. I do quite a bit of software development and whenever I come across something that really needs python I mentally prepare for a day of battle with the various (all subtly broken) package managers, dependency hell and circular nonsense to the point that I am also ready to give up on it after a day of trying.
Please name a set of common packages that causes this problem reliably.
You're getting a bit boring, and are not arguing in good faith. "Reliably"... as per your definition I guess. You have now made 60(!!!) comments in this thread questioning everything and everybody without ever once accepting that other people's experiences do not necessarily have to match your own. If you did some reading rather than just writing you'd have seen that I gave a very specific example right in this thread. You are now going on my blocklist because I really don't have time or energy to argue with language zealots.
The large majority of my comments ITT are not in fact "questioning everything and everybody". I checked your comment history and couldn't find other comments from you ITT, and the post I responded to does not contain anything like a "very specific example". Your accusations are entirely unfounded, and frankly inflammatory.
Traditional Windows install didn’t include things Microsoft doesn’t make. But, any PC distributor could always include Python as part of their base Windows install with all the other stuff that bloats the typical third party Windows installs. They don’t which indicates the market doesn’t want it. Your indictment of the lack of Python out of the box is less on Windows than on the “distro” served by PC manufacturers
I don't think this makes a meaningful difference. The installation is a `curl | sh`, which downloads a tarball, which gets extracted to some directory in $PATH.
It currently includes two executables, but having it contain two executables and a bunch of .so libraries would be a fairly trivial change. It only gets messy when you want it to make use of system-provided versions of the libraries, rather than simply vendoring them all yourself.
It gets messy not just in that way, but also because someone can have a weird LD_LIBRARY_PATH that starts to cause problems. Statically linking drastically simplifies distribution, and you'd have to have distributed zero software to end users to believe otherwise. The only platform where this isn't the case is Apple, because they natively supported app bundles. I don't know if Flatpak solves the distribution problem because I've not seen a whole lot of it in the ecosystem - most people seem to generally still rely on the system package manager, and commercial entities don't seem to really target Flatpak.
When you're shipping software, you have full control over LD_LIBRARY_PATH. Your entry point can be e.g. a shell script that sets it.
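Roughly this kind of thing (a sketch; `myapp` and the directory layout are placeholders):

    #!/bin/sh
    # Resolve the install directory relative to this launcher script.
    HERE="$(cd "$(dirname "$0")" && pwd)"
    # Prepend the bundled .so directory, preserving any existing value.
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"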
There is not so much difference between shipping a statically linked binary, and a dynamically linked binary that brings its own shared object files.
But if they are equivalent, static linking has the benefit of simplicity: Why create and ship N files that load each other in fancy ways, when you can do 1 that doesn't have this complexity?
That’s precisely my point. It’s insanely weird to have a shell script to set up the path for an executable binary that can’t do it for itself. I guess you could go the RPATH route, but boy have I only experienced pain from that.
> the conversation here keeps collapsing back to "Rust rewrite good/bad." That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
I think there's a few things going on here:
- If you're going to have a project that's obsessed with speed, you might as well use rust/c/c++/zig/etc to develop the project; otherwise you're always going to have python and the python ecosystem as a speed bottleneck. rust/c/c++/zig ecosystems generally care a lot about speed, so you can use a library and know that it's probably going to be fast.
- For example, the entire python ecosystem generally does not put much emphasis on startup time. I know there's been some recent work here on the interpreter itself, but even modules in the standard library will pre-compile regular expressions at import time, even if they're never used, like the "email" module.
- Because the python ecosystem doesn't generally optimize for speed (especially startup), the slowdowns end up being contagious. If you import a library that doesn't care about startup time, why should your library care about startup time? The same could maybe be said for memory usage.
- The bootstrapping problem is also mostly solved by using a compiled language like c/rust/go. If the package manager is written in python (or even node/javascript), you first have to have python+dependencies installed before you can install python and your dependencies. With uv, you copy/install a single binary file which can then install python + dependencies and automatically do the right thing.
- I think it's possible to write a pretty fast implementation using python, but you'd need to "greenfield" it by rewriting all of the dependencies yourself so you can optimize startup time and bootstrapping.
- Also, as the article mentions, there are _some_ improvements that have happened in the standards/PEPs that should eventually make their way into pip, though it probably won't be quite the gamechanger that uv is.
> the entire python ecosystem generally does not put much emphasis on startup time.
You'd think PyPy would be more popular, then.
> even modules in the standard library will pre-compile regular expressions at import time, even if they're never used, like the "email" module.
Hmm, that is slower than I realized (although still just a fraction of typical module import time):
$ python -m timeit --setup 'import re' 're.compile("foo.*bar"); re.purge()'
10000 loops, best of 5: 26.5 usec per loop
$ python -m timeit --setup 'import sys' 'import re; del sys.modules["re"]'
500 loops, best of 5: 428 usec per loop
I agree the email module is atrocious in general, which specifically matters because it's used by pip for parsing "compiled" metadata (PKG-INFO in sdists, when present, and METADATA in wheels). The format is intended to look like email headers and be parseable that way; but the RFC mandates all kinds of things that are irrelevant to package metadata, and despite the streaming interface it's hard to actually parse only the things you really need to know.
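For the curious, the parsing in question boils down to something like this (a sketch against a wheel's METADATA file; the field names come from the core metadata spec):

    from email.parser import Parser

    with open("METADATA", encoding="utf-8") as f:
        meta = Parser().parsestr(f.read())

    print(meta["Name"], meta["Version"])
    # Requires-Dist may appear many times, one header line per requirement.
    print(meta.get_all("Requires-Dist"))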
> Because the python ecosystem doesn't generally optimize for speed (especially startup), the slowdowns end up being contagious. If you import a library that doesn't care about startup time, why should your library care about startup time? The same could maybe be said for memory usage.
I'm trying to fight this, by raising awareness and by choosing my dependencies carefully.
> you first have to have python+dependencies installed before you can install python and your dependencies
It's unusual that you actually need to install Python again after initially having "python+dependencies installed". And pip vendors all its own dependencies except for what's in the standard library. (Which is highly relevant to Debian getting away with the repackaging that it does.)
> I think it's possible to write a pretty fast implementation using python, but you'd need to "greenfield" it by rewriting all of the dependencies yourself so you can optimize startup time and bootstrapping.
This is my current main project btw. (No, I don't really care that uv already exists. I'll have to blog about why.)
> there are _some_ improvements that have happened in the standards/PEPs that should eventually make their way into pip
Most of them already have, along with other changes. The 2025 pip experience is, believe it or not, much better than the ~2018 pip experience, notwithstanding higher expectations for ecosystem complexity.
PyPy is hamstrung by a limited (previously, a lack of) compatibility with compiled Python modules. If it had been a drop-in replacement for the equivalent Python versions, then it'd probably have been much more popular
> PyPy doesn't do anything to help startup time. In fact, it's typically a bit slower to start up than CPython.
Considerably slower on my machine. Yes, that was my point. If the community doesn't care about startup time, you'd expect more adoption of an implementation that sacrifices that startup time for later performance.
> I agree the email module is atrocious in general
Hah. Yes sounds like we are very much on the same page here. Python stdlib could really use a simple generic email/http header parser.
> It's unusual that you actually need to install Python again after initially having "python+dependencies installed".
I’m thinking about 3rd party installers like poetry, pip-tools, pdm, etc, where your installer needs python+dependencies installed before it can start installing.
> “write a pretty fast implementation using python”
> This is my current main project btw. (No, I don't really care that uv already exists. I'll have to blog about why.)
Do you have anything public yet? I’m totally curious. I started doing this for flake8 and pip back in 2021/2022, but when ruff+uv came along I figured it wasn’t worth my time any more.
The repo is https://github.com/zahlman/paper but it's not really usable and it's missing a bunch of local very unfinished stuff (and my README template definitely needs fixing). More of a "watch this space" but I would really like to push out a Show HN for the first chunk of functionality soon.
Note that the advantages of Rust are not just execution speed: it's also a good language for expressing one's thoughts, and thus makes it easier to find and unlock the algorithmic speedups that really increase speed.
But yeah. Python packaging has been dumb for decades and successive Python package managers recapitulated the same idiocies over and over. Anyone who had used both Python and a serious programming language knew it, the problem was getting anyone to do anything about it. I can't help thinking that maybe the main reason using Rust worked is that it forced anyone who wanted to contribute to it to experience what using a language with a non-awful package manager is like.
Cargo is not really good. The very much non-zero frequency of something with cargo not working for opaque reasons and then suddenly working again after "cargo clean", the "no, I invoke your binaries"-mentality (try running a benchmark without either ^C'ing out of bench to copy the binary name or parsing some internal JSON metadata) because "cargo build" is the only build system in the world which will never tell you what it built, the whole mess with features, default-features, no-default-features, of course bindgen/sys dependency conflicts, "I'll just use the wrong -L libpath for the bin crate but if I'm building tests I remember the ...64". cargo randomly deciding that it now has to rebuild everything or 50% of everything for reasons which are never to be known, builds being not reproducible, cargo just never cleaning garbage up and so on.
rustdoc has only slightly changed since the 2010s, it's still very hard to figure out generic/trait-oriented APIs, and it still only does API documentation in mostly the same basic 1:1 "list of items" style. Most projects end up with two totally disjointed sets of documentation, usually one somewhere on github pages and the rustdoc.
Rust is overall a good language, don't get me wrong. But it and its ecosystem also have a ton of issues (and that's without even mentioning async), and most of these have been sticking around since basically 1.0.
(However, the rules around initialization are just stupid and unsafe is no good. Rust also tends to favor a very allocation-heavy style of writing code, because avoiding allocations tends to be possible but often annoying and difficult in unique-to-rust ways. For somewhat related reasons, trivial things are at times really hard in Rust for no discernible reason. As a concrete, simplistic but also real-world example, Vec::push is an incredibly pessimistic method, but if you want to get around it, you either have to initialize the whole Vec, which is a complete waste of cycles, or you yolo it with reserve+set_len, which is invalid Rust because you didn't properly use MaybeUninit for locations which are only ever written.)
Cargo is fantastic... for building Rust code. Once you start trying to also use it to build C code, you're moving outside of Cargo's wheelhouse, using features that Cargo only supports begrudgingly (like build scripts). Cargo is definitely not intended to be an end-all be-all build system for all languages; it's specialized for Rust, and that's what it's great at. For multi-language projects, you want some sort of simple tool to orchestrate the builds (e.g. `just` https://just.systems/man/en/ ) that internally calls out to Cargo (and whatever other build systems you have for whatever other languages you're using). The overall mistake is thinking that Cargo is a replacement for `make`, when it isn't nearly so general.
I have empathy for anyone who was required to use cargo on an NFS-mounted fs. The number of files and the random IO cargo uses make any large project unusable.
I've had to tell people to stop syncing their cargo env over NFS so many times, but sometimes they have no choice.
> That feels like cargo-culting the toolchain [...]
Pun intended?
Jokes aside, what you describe is a common pattern. It's also why Google internally used to get decent speedups from rewriting some old C++ project in Go for a while: the magic was mostly in the rewrite-with-hindsight.
If you put effort into it, you can also get there via an incremental refactoring of an existing system. But the rewrite is probably easier to find motivation for, I guess.
I don't know the problem space and I'm sure that the language-agnostic algorithmic improvements are massive. But to me, there's just something about rust that promotes fast code. It's easy to avoid copies and pointer-chasing, for example. In python, you never have any idea when you're copying, when you're chasing a pointer, when you're allocating, and so on. (Or maybe you do, but I certainly don't.) You're so far from hardware that you start thinking more abstractly and not worrying about performance. For some things, that's probably perfect. But for writing fast code, it's not the right mindset.
The thing is that a lot of the bottlenecks in pip are entirely artificial, and a lot of the rest can't really be improved by rewriting in Rust per se, because they're already written in C (within the Python interpreter itself).
By definition, greenfield projects are literally free from constraints.
So the answer is in your question:
Why did it take a team unbound by constraints to try something new, as compared to a project with millions of existing stakeholders?
Single vision. Smaller team. What they landed on is a hit (no guarantee of that in advance!)
Conversely, with so many stakeholders, getting everyone to rally around a change (in advance) is hard.
In my experience this is about human nature/organisation and spans all types of organisations, not just python or open source etc.
It also looks like python would have got there, given the foundations put in place as noted in the article.
I can't find the quote for this, but I remember Python maintainers wanted package installing and management to be separate things. uv did the opposite, and instead it's more like npm.
> That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
This feels like a very unfair take to me. Uv didn’t happen in isolation, and wasn’t the first alternative to pip. It’s built on a lot of hard work by the community to put the standards in place, through the PEP process, that make it possible.
I suspect that the non-Rust improvements are vastly more important than you’re giving credit for. I think the go version would be 5x or 8x compared to the 10x, maybe closer. It’s not that the Rust parts are insignificant but the algorithmic changes eliminate huge bottlenecks.
Because it broke backwards compatibility? It's worth noting that setuptools is in a similar situation to pip, where any change has a high chance of breaking things (as can be seen by perusing the setuptools and pip bug trackers). PEP 517/518 removed the implementation-defined nature of the ecosystem (which had caused issues for at least a decade, see e.g. the failures of distutils2 and bento), instead replacing it with a system where users complain about which backend to use (which is at least an improvement on the previous situation)...
Poetry largely accomplished the same thing first with most of the speedups (except managing your python installations) and had the disadvantage of starting before the PEPs you mentioned were standardized.
It just has to do with values. If you value perf you aren't going to write it in Python. And if you value perf then everything else becomes a no brainer as well.
It's the same way in JS land. You can make a game in a few kilobytes, but most web pages are still many megabytes for what should have been no JS at all.
That's TensorRT-LLM in its entirety at 1.2.0rc6, locked to run on Ubuntu or NixOS with full MPI and `nvshmem`, the DGX container Jensen's Desk edition (I know because I also rip apart and `autopatchelf` NGC containers for repackaging on Grace/SBSA).
It's... arduous. And the benefit is what, exactly? A very mixed collection of maintainers have asserted that software behavior is monotonic along a single axis, most of which they can't see, and we ran a solver over those guesses?
I think the future is collections of wheels that have been through a process the consumer regards as credible.
> it's how much speed we "unlocked" just by finally treating Python packaging as a well-specified systems problem instead of a pile of historical accidents.
A lot of that, in turn, boils down to realizing that it could be fast, and then expecting that and caring enough about it.
> but with the same design decisions (PEP 517/518/621/658 focus, HTTP range tricks, aggressive wheel-first strategy, ignoring obviously defensive upper bounds, etc.), I strongly suspect we'd be debating a 1.3× vs 1.5× speedup instead of a 10× headline
I'm doing a project of this sort (although I'm hoping not to reinvent the wheel (heh) for the actual resolution algorithm). I fully expect that some things will be barely improved or even slower, but many things will be nearly as fast as with uv.
For example, installing from cache (the focus for the first round) mainly relies on tools in the standard library that are written in C and have to make system calls and interact with the filesystem; Rust can't do a whole lot to improve on that. On the other hand, a new project can improve by storing unpacked files in the cache (like uv) instead of just the artifact (I'm storing both; pip stores the artifact, but with a msgpack header) and hard-linking them instead of copying them (so that the system calls do less I/O). It can also improve by actually making the cached data accessible without a network call (pip's cache is an HTTP cache; contacting PyPI tells it what the original download URL is for the file it downloaded, which is then hashed to determine its path).
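A minimal sketch of the hard-link-vs-copy difference mentioned above (the function and paths are hypothetical):

    import os
    import shutil

    def place(src: str, dst: str, *, link: bool = True) -> None:
        # A hard link just adds another directory entry for the same inode,
        # so no file contents get re-read or re-written; a copy does both.
        # (Hard links require src and dst to be on the same filesystem.)
        if link:
            os.link(src, dst)
        else:
            shutil.copy2(src, dst)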
For another example, pre-compiling bytecode can be parallelized; there's even already code in the standard library for it. Pip hasn't been taking advantage of that all this time, but to my understanding it will soon feature its own logic (like uv does) to assign files to compile to worker processes. But Rust can't really help with the actual logic being parallelized, because that, too, is written purely in C (at least for CPython), within the interpreter.
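The standard-library piece in question is presumably `compileall`, which can already fan the work out across processes, roughly like this (the path is hypothetical; workers=0 means one worker per CPU):

    import compileall

    compileall.compile_dir(
        ".venv/lib/python3.12/site-packages",
        workers=0,  # 0 = use os.cpu_count() worker processes
        quiet=1,
    )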
> why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
(Zeroth, pip has been doing HTTP range tricks, or at least trying, for quite a while. And the exact point of PEP 658 is to obsolete them. It just doesn't really work for sdists with the current level of metadata expressive power, as in other PEPs like 440 and 508. Which is why we have more PEPs in the pipeline trying to fix that, like 725. And discussions and summaries like https://pypackaging-native.github.io/.)
First, you have to write the standards. People in the community expect interoperability. PEP 518 exists specifically so that people could start working on alternatives to Setuptools as a build backend, and PEP 517 exists so that such alternatives could have the option of providing just the build backend functionality. (But the people making things like Poetry and Hatch had grander ideas anyway.)
But also, consider the alternative: the only other viable way would have been for pip to totally rip apart established code paths and possibly break compatibility. And, well, if you used and talked about Python at any point between 2006 and 2020, you should have the first-hand experience required to complete that thought.