Seems pretty hinged to me. Grounded firmly in reality even.
The data centres used to run AI consume huge amounts of power and water, not to mention massive quantities of toxic raw materials in their manufacture and construction. The hardware itself has a shelf life measured in single-digit years, and many of its constituent components can't be recycled.
Tell me what I’m missing. What exactly is unhinged? Are you offended that he used the word “fuck” or something?
He is, very directly and in shorthand form I’ll grant you, expressing concerns that many people share about both AI and the oligarchs in control of it.
But if you find the language offensive consider the very real possibility that, if we don’t get ourselves onto a better, more sustainable, and more equitable path, people will eventually start expressing themselves with bullets as well as with words.
Many of us would like to avoid that, especially if we have families, so the harsh language is the least of our concerns.
Yeah, but the industry is a big part of the problem and most people working in it are complicit at this point (whether or not they are reluctantly complicit).
I think your head would have to be buried extremely deep in the sand to think that. Gamers Nexus has been doing extensive, well-researched videos on the fallout from RAM prices skyrocketing and other computing parts becoming prohibitively expensive.
And it isn't a $300 surcharge on DDR5. The RAM I bought in August (2x16 GB DDR5) cost me $90. That same product had crept up to around $200+ when I last checked a month or two ago, and is now either out of stock or $400+.
Most online "gamers" are teens or college students, just by nature of demographics. I feel like people who pay for their own RAM (likely 18 or older) would be more likely to feel this.
Verify what? I certainly don't have the capacity to thoroughly review my every dependency's source code in order to detect potentially hidden malware.
In this case, more realistic advice would probably be either to rely on a more popular package and benefit from swarm intelligence, or to write your own implementation.
also scrutinize every dependency you introduce. I have seen sooooo many dependencies over the years where a library was brought in for one or two things you could write yourself in 5 minutes (e.g. pulling in commons-lang just for a null-safe string equals or contains)
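To illustrate the point: the null-safe helpers above are the kind of one-liners commons-lang's `StringUtils` provides, hand-rolled here in plain Java. The class and method names are my own for this sketch, not commons-lang's API.

```java
// Hand-rolled null-safe string helpers, the kind of thing a whole
// library dependency often gets pulled in for.
public final class NullSafeStrings {
    private NullSafeStrings() {}

    // True if both are null, or both non-null and equal.
    public static boolean nullSafeEquals(String a, String b) {
        return a == null ? b == null : a.equals(b);
    }

    // Null never contains anything, and nothing contains null.
    public static boolean nullSafeContains(String haystack, String needle) {
        return haystack != null && needle != null && haystack.contains(needle);
    }

    public static void main(String[] args) {
        System.out.println(nullSafeEquals(null, null));   // true
        System.out.println(nullSafeEquals("a", null));    // false
        System.out.println(nullSafeContains("foo", "o")); // true
        System.out.println(nullSafeContains(null, "o"));  // false
    }
}
```

(On modern Java, `java.util.Objects.equals` already covers the first case in the standard library.)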
Sure, but you'd basically need a different ecosystem to bring in a popular package without ending up with these trivial libraries indirectly through some of its dependencies.
Said scrutinizing, on my side, consists of checking the download count and age of the package, and maybe, at best, a quick look at the GitHub repo.
Yes, I'm sure many dependencies aren't really necessary. However, in many (corporate) projects I've worked on that were still on the older Webpack/Babel/Jest stack, you can expect node_modules to exceed 1 GB. That ship sailed long ago.
But on the upside, most of those packages should be fairly popular. With pnpm's dependency cooldown and whitelisting of postinstall scripts, you're probably fine.
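For reference, here's roughly what those two protections look like in `pnpm-workspace.yaml`, assuming a recent pnpm 10.x (setting names may differ in older versions, and the `esbuild` entry is just a placeholder for whatever build scripts you actually trust):

```yaml
# pnpm-workspace.yaml (sketch, pnpm 10.x)

# Dependency cooldown: ignore versions published less than
# 7 days ago (value is in minutes).
minimumReleaseAge: 10080

# pnpm 10 blocks postinstall scripts by default; explicitly
# whitelist only the packages allowed to run them.
onlyBuiltDependencies:
  - esbuild
```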
So far it's rarely been the leading frontier model, but at least it's not full of dumb guardrails that block many legitimate use cases in order to prevent largely imagined harm.
You can also use Grok without signing in, in a private window, for sensitive queries where privacy matters.
A lot of liberals badmouth the model for obviously political reasons, but it's doing an important job.
AI works. This is evidenced by my side project, on which I've spent some 50 hours.
I'm not sure what your "empirical evidence and repeatable tests" is supposed to be. The AI failing to convert a 3000-line C program to Python, in a test you probably designed to fail, doesn't strike me as particularly relevant.
Also, I suspect AI could guess that 80 lines of Python don't correctly replicate 3000 lines of C, if you prompted it correctly.
For some definition of "works". This seems to be yours:
> I'd go further and say vibe coding it up, testing the green case, and deploying it straight into the testing environment is good enough.
> The rest we can figure out during testing, or maybe you even have users willing to beta-test for you.
> This way, while you're still on the understanding part and reasoning over the code, your competitor already shipped ten features, most of them working.
> Ok, that was a provocative scenario. Still, nowadays I am not sure you even have to understand the code anymore. Maybe having a reasonable belief that it does work will be sufficient in some circumstances.
Yes, as I said, it is working well in my side project.
The application works and I am happy with my results so far.
It's interesting how this workflow appears to almost offend some users here.
I get it, none of us likes sloppy code that doesn't work well or isn't maintainable.
I think some developers will need to learn to give control away rather than trying to understand every line of code in their project - depending of course on the environment and use case.
Also worth keeping in mind: even if you think you understand all the code in your project - as far as that is even possible in larger projects with multiple developers - there are still bugs anyway. And a few months later, your memory will be fuzzy in any case.
People seem to be divided between "AI doesn't work, I told it 'convert this program' and it failed" and "AI works, I guided it through converting this program and saved myself 30 hours of work".
Given my personal experience, and how much more productive AI has made me, it seems to me that some people are just using it wrong. Either that, or I'm delusional, and it doesn't actually work for me.
The models are good enough now that anyone who says AI doesn't work is either not acting in good faith or is staggeringly bad at learning a new skill.
It's not hard to spend a few hours testing out models / platforms and learning how to use them. I would argue this has been true for a long time, but it's so obviously true now that I think most of those people are not acting in good faith.
I have 22k karma and I think it's a trivial claim that LLMs work and that software is clearly on the cusp of being 100% solved within a couple years.
The naysaying seems to mostly come from people coping with the writing they see on the wall with their anecdote about some goalpost-moving challenge designed for the LLM to fail (which they never seem to share with us). And if their low effort attempt can't crack LLMs, then nobody can.
It reminds me of HN ten years ago where you'd still run into people claiming that Javascript is so bad that anybody who thinks they can create good software with it is wrong (trust them, they've supposedly tried). Acting like they're so preoccupied with good engineering when it's clearly something more emotional.
Meanwhile, I've barely had to touch code ever since Opus 4.5 dropped. I've started wondering if it's me or the machine that's the background agent. My job is clearly shifting into code review and project management while tabbing between many terminals.
As LLMs keep improving, there comes a moment where it's literally more work to find the three files you need to change than to just instruct an agent to do it. What changes the game is when you realize the output no longer even needs editing.
> It reminds me of HN ten years ago where you'd still run into people claiming that Javascript is so bad that anybody who thinks they can create good software with it is wrong (trust them, they've supposedly tried). Acting like they're so preoccupied with good engineering when it's clearly something more emotional.
Curiously enough, those people are still around and writing good software without javascript. And I say that as someone who generally enjoys modern JS.
> Meanwhile, I've barely had to touch code ever since Opus 4.5 dropped. I've started wondering if it's me or the machine that's the background agent. My job is clearly shifting into code review and project management while tabbing between many terminals.
Why not cut out the middleman and have Opus 4.5 do the code review and project management too?
> those people are still around and writing good software without javascript.
Sure, but their claim is about what others can do with javascript.
> Why not cut out the middleman
Sure.
Every month, the tech advances enough that the AI writes code requiring less and less of my intervention. So it seems obvious that the same will go for other roles like project management and code review.
Of course, whether I want this to be the case isn't really relevant. But trying to build a product in a competitive market is one way to accept certain realities.
If we're going to argue on that level: Maybe it's because accounts with 12k karma spend more time posting than working on side projects and trying new tools.
I'd go further and say vibe coding it up, testing the green case, and deploying it straight into the testing environment is good enough.
The rest we can figure out during testing, or maybe you even have users willing to beta-test for you.
This way, while you're still on the understanding part and reasoning over the code, your competitor already shipped ten features, most of them working.
Ok, that was a provocative scenario. Still, nowadays I am not sure you even have to understand the code anymore. Maybe having a reasonable belief that it does work will be sufficient in some circumstances.
This approach sounds like a great way to get a lot of security holes into your code.
Maybe your competitors will be faster at first, but it's probably better to be a bit slower and not leak all your users' data.
I’m assuming you work in a setting where there is a QA team?
I haven't been in such a setting since 2008, so you can ignore everything I said.
But I wouldn't want to be somewhere where people don't test their code, and where I have to write code on top of code that's never tested until the QA cycle.
No, in my day job I obsess over every line I add, although there is QA.
In my side project I'm building a frontend that is, if I may say so myself, the best-looking and most feature-rich option out there.
I find that I'm making great progress with it, even though I don't know every line in the project. I understand the architecture and roughly where each piece of functionality lives, and that is good enough for me.
If I see issues with some functionality during testing, I can first ask the model to summarize the implementation. I can then come up with a better approach and have the model make the change, or alternatively edit some values myself. So far I've rarely felt the need to write more than a few lines of code manually.
In any neighboring country, where punctuality sits somewhere around 74-99%, depending on the country.
DB is at 48.5% (Oct 2025) to 60% (2024 average).