From a safety perspective there isn't a huge benefit to choosing Zig over C with the caveat, as others have pointed out, that you need to enable more tooling in C to get to a comparable level. You should be using -Wall and -fsanitize=address among others in your debug builds.
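To make "a comparable level" concrete, here's a rough sketch (hypothetical code, nobody's real project) of the kind of bug Zig's Debug/ReleaseSafe builds catch at runtime out of the box, and that C only catches once you turn on ASan or similar:

    const std = @import("std");

    pub fn main() void {
        var buf = [_]u8{ 1, 2, 3, 4 };
        var i: usize = 0;
        // Off-by-one loop: the last iteration writes past the end of buf.
        // Debug/ReleaseSafe builds panic with "index out of bounds" here;
        // the same bug in C needs -fsanitize=address to be caught at runtime.
        while (i <= buf.len) : (i += 1) {
            buf[i] = 0;
        }
        std.debug.print("{any}\n", .{buf});
    }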
You do get some creature comforts like slices (fat pointers) and defer (goto replacement). But you also get forced to write a lot of explicit conversions (I personally think this is a good thing).
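Roughly what those look like (a made-up example, API as of recent Zig versions; the function and the 1 MiB cap are just for illustration):

    const std = @import("std");

    // defer runs at scope exit on every path, replacing C's goto-cleanup idiom.
    fn readConfig(allocator: std.mem.Allocator, path: []const u8) ![]u8 {
        const file = try std.fs.cwd().openFile(path, .{});
        defer file.close(); // closed on success and on every error return

        // []u8 is a slice: a fat pointer that carries its length with the data.
        return try file.readToEndAlloc(allocator, 1024 * 1024);
    }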
The C interop is good but the compiler is doing a lot of work under the hood for you to make it happen. And if you export Zig code to C... well you're restricted by the ABI so you end up writing C-in-Zig which you may as well be writing C.
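Concretely, "restricted by the ABI" means anything marked export has to stick to C-compatible types: no slices, no error unions, so you end up passing pointer plus length just like the C API would (hypothetical functions, just a sketch):

    // Callable from C: export forces the C calling convention and C-ABI types.
    export fn add(a: c_int, b: c_int) c_int {
        return a + b;
    }

    // A Zig slice can't cross the boundary, so you pass a many-item pointer
    // plus a length, i.e. the same shape the plain C API would have had.
    export fn sum(ptr: [*]const c_int, len: usize) c_int {
        var total: c_int = 0;
        for (ptr[0..len]) |x| total += x;
        return total;
    }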
It might be an easier fit than Rust in terms of ergonomics for C developers, no doubt there.
But I think long-term things like the borrow checker could still prove useful for kernel code. Currently you have to specify invariants like that in a separate language from C, if at all, and it's difficult to verify. Bringing that into a language whose compiler can check it for you is very powerful. I wouldn't discount it.
There are a ton of extremely hard problems to solve there that we are not likely going to solve.
One: English is terribly non-prescriptive. Explaining an algorithm in spoken language is incredibly laborious and leaves plenty of room for ambiguity and error. Try reading Euclid’s Elements, or really any pre-algebra text, and reproducing its results.
Fortunately there’s a solution to that. Formal languages.
Now LLMs can somewhat bridge that gap due to how frequently we write about code. But it’s a non-deterministic process and hallucinations are by design. There’s no escaping the fact that an LLM is making up the code it generates. There’s nothing inside the machine that is understanding what any of the data it’s manipulating means or how it affects the system it’s generating code for.
And it’s not even a tool.
Worse, we can’t actually ship the code that gets generated without a human appendage to the machine to take the fall for it if there are any mistakes in it.
If you’re trying to vibe code an operating system and have no idea what good OS design is or what good code for such a system looks like… you’re going to be a bad appendage for the clanker. If it could ship code on its own the corporate powers that be absolutely would fire all the vibe coders and you’d never work again.
Vibe coding is turning people into indentured corporate servants. The last mile delivery driver of code. Every input surveilled and scrutinized. Output is your responsibility and something you have little control over. You learn nothing when the LLM gives you the answer because you’ll forget it tomorrow. There’s no joy in it either because there is no challenge and no difficulty.
I think what pron is leading to is that there’s no need to imagine what these machines could potentially do. I think we should be looking at what they actually do, who they’re doing it to, and who benefits from it.
The vast majority of the software that models are trained on has little to no standards and contains all kinds of errors and omissions.
And these are systems that require a human in the loop to verify the output because you are ultimately responsible for it when it makes a mistake. And it will.
It’s not fun because it’s not fun being an appendage to a machine that doesn’t know or care that you exist. It will generate 1200 lines of code. You have to try and make sure it doesn’t contain the subtle kinds of errors that could cost you your job.
At least if you made those errors you could own them and learn from it. Instead you gain nothing when the machine makes an error except the ability to detect them over time.
I think if you don't know C extremely well then there's no point vibe coding it. If you don't know anything about operating systems you're not going to find the security bugs or know if the scheduler you chose does the right thing. You won't be able to tell the difference between good code and bad.
If I gave you a gun without a safety could you be the one to blame when it goes off because you weren’t careful enough?
The problem with this analogy is that it makes no sense.
LLMs aren’t guns.
The problem with using them is that humans have to review the content for accuracy. And that gets tiresome because the whole point is that the LLM saves you time and effort doing it yourself. So naturally people will tend to stop checking and assume the output is correct, “because the LLM is so good.”
Then you get false citations and bogus claims everywhere.
> The problem with using them is that humans have to review the content for accuracy.
There are (at least) two humans in this equation. The publisher, and the reader. The publisher at least should do their due diligence, regardless of how "hard" it is (in this case, we literally just ask that you review your OWN CITATIONS that you insert into your paper). This is why we have accountability as a concept.
> The problem with using them is that humans have to review the content for accuracy.
How long are we going to push this same narrative we've been hearing since the introduction of these tools? When can we trust these tools to be accurate? For technology that is marketed as having superhuman intelligence, it sure seems dumb that it has to be fact-checked by less-intelligent humans.
> If I gave you a gun without a safety could you be the one to blame when it goes off because you weren’t careful enough?
Absolutely. Many guns don't have safeties. You don't load a round in the chamber unless you intend on using it.
A gun going off when you don't intend is a negligent discharge. No ifs, ands or buts. The person in possession of the gun is always responsible for it.
> A gun going off when you don't intend is a negligent discharge
False. A gun goes off when not intended too often to claim that. It has happened to me - I then took the gun to a qualified gunsmith for repairs.
A gun that fires and hits anything you didn't intend to hit is a negligent discharge, even if you intended to shoot. Gun safety is about assuming that any gun that could possibly fire will, and ensuring nothing bad can happen when it does. When looking at a gun in a store (one you might want to buy) you aim it at an upper corner, where even if it fires the odds of something bad happening are the lowest (it should be unloaded - and you may have checked, but you still aim there!)
Same with cat toy lasers - they should be safe to shine in an eye, but you still point them in a safe direction.
Yes. That is absolutely the case. One of the most popular handguns, the Glock series, does not have a safety switch that must be toggled before firing.
If someone performs a negligent discharge, they are responsible, not Glock. It does have other safety mechanisms to prevent accidental discharges that don't result from a trigger pull.
They are trained on code people had to make sacrifices for: deadlines, shortcuts, etc. And code people were simply too ignorant to be writing in the first place. Lots of code with hardly any coding standards.
So of course it’s going to generate code that has non-obvious bugs in it.
Ever play the Undefined Behaviour Game? Humans are bad at being compilers and catching mistakes.
I’d hoped… maybe I still do… that the future of programming isn’t a shrug and a “good enough.” I hope we’ll keep developing languages and tools that let us better specify programs and optimize them.
It's much worse than "designed for cars." It's more like "not survivable without a car." It's the same with apps on my phone. I don't want to use them, but sometimes there simply is no alternative in today's world.
We may end up building a world where AI is similarly necessary. The AI companies would certainly like that. But at the moment we still have a choice. The more people exercise their agency now the more likely we are to retain that agency in the future.
I lived in Prague, whose center is medieval and the neighbourhoods around it pre-1900, and even though what you say is true (fewer people drove everywhere), the streets were still saturated to their capacity.
It seemed to me that regardless of the city, many people will drive until the point where traffic jams and parking become a nightmare, and only then consider the alternatives. This point of pain is much lower in old European cities that weren't built as car-centric and much higher in the US, but the pattern seems to repeat itself.
Helsinki made a major push to reduce cars to get to Vision Zero and succeeded: no car fatalities in 2024. It’s now hard to get a taxi and you’re expected to walk or use other transport. It’s a little bit annoying, but worth it.
The comment explicitly mentioned "cities". Of course rural and suburban areas don't make it practical to be without a car, but many people in cities could use public transportation and instead handwave it as beneath them, or dangerous, or unreliable, when in reality it works just fine. Car travel has its own tradeoffs that could just as easily be exaggerated.
Hamming, in his book The Art of Doing Science and Engineering, also encourages this.
But the last edition was in 1994 and he was writing from the position of having worked at Bell Labs for most of his career. We don’t really have that these days.
It’s great if you can find a way to be a non-fungible developer. I think part of the strategy is taking the spotlight and managing perceptions of your work. You don’t get to choose what you work on most of the time but you can make sure that it’s visible and useful.
As the author suggests, and as I aspire to do myself, part of that is building good tools and libraries that people appreciate and depend on.
OSS developer and maintainer. Open to full-time work, not interested in blockchains, crypto, or AI. Prefer web application back-ends, systems level programming, databases, high perf/availability/reliability, etc. Experience as team lead, eng. manager, and IC.
We gotta gather ourselves and remind companies why they once paid handsomely to not let potential disruptors run rampant on the market. Long term, new teams will form once productivity is valued again and not this giant incestuous GDP-maximizing scheme.
I doubt things will recover to 2018 levels. Too many new software devs coming out each year, too much AI, too little big company growth once everyone already has an internet computer in their hands. The Wild West is over and now the digital economy has entered the boring phase.
The comparison to greenhouse gases doesn’t make sense. Corps pay a lot for developers right now because they get more value out of them than they cost. As long as that remains true, devs will be fine.
> I think we ought to be keeping people trained and employed
I never understood this sentiment. We don't have a massive manual weaving industry anymore; 95%+ of people used to be farmers in 1900. Tech comes and replaces humans, and the transition can be extremely painful, especially for the people replaced, but ultimately it's better than keeping people artificially employed in obsolete jobs.
(I don't think SWE will be obsolete, but even in this case I'd rather switch careers)
Most deindustrialized regions in the West haven't recovered to full prosperity and are quite depressing to live in, sometimes even 30-40 years later: US Rust Belt, Wallonia in Belgium, the French North East, etc.
At a large enough scale, most people don't really move on, their lives are wrecked and they just suffer through them.
This is predicated on the myth that people can re-skill and move into new industries. Sure the former can happen and people can learn new things. But we're talking about an economy where there are no new industries. And an economy where you have to work in order to live.
What's a software developer in their 30s, 40s, and 50s supposed to re-skill into? Take on debt for the rest of their lives to re-skill into a new profession (if they can even afford to take several years out of their lives to go back to school)? Into blue collar work, along with the salary cut, which they might not have the physical capabilities for?
There's no social system for providing the necessities for living.
The other side of it is skill. Human societies have lost knowledge before. We've had to rediscover various aspects of metallurgy before. We could lose the ability to understand the technology we've made if we trust everything to the LLMs. There are already vibe coders who aren't even able to review the code the LLM generates for them because they're starting to lose the critical faculties and skills to understand it.
a16z seems to view turning society into a dystopia as a goal, so that makes sense. Their portfolio includes:
- DoubleSpeed, a bot farm as a service provider, allowing customers to orchestrate social media activity across thousands of fake accounts to create artificial consensus on the topic of their choice. Never pay a human again!
- Cheddr, the TikTok of sports gambling, whose differentiating feature is allowing users under 21. Place live in-game bets with just a swipe!
- Coverd, a new type of credit card where you can wipe off bills by betting on your favorite gambling games in their app. No VPN required!
Wow, I just checked the doublespeed website and it is comically evil. The footer says — verbatim, and in huge letters — "never pay a human again." (I'm not selectively quoting; it's a full sentence, despite their weird capitalization.)
If Neal Stephenson tried to write a villain this on-the-nose, his editor would tell him to tone it down.
Can A16Z tell the difference? Insert that meme "At long last, we have created the Torment Nexus, from the classic sci-fi novel, Don't Create the Torment Nexus".
a16z and others like them never met a dystopian warning they didn't interpret as a titillating invitation to an uncomfortably exciting and inevitable future!