Debian, at least up through bookworm, works perfectly without systemd. The easiest way to make this transition is to install Debian with nothing but 'standard system utilities' and 'SSH server' (if you want) selected during install:
In this sense, LLMs don’t really change anything. The same person operating the tool will continue to be the same person in either case.
I don't understand why more people don't get this. I've told everyone who will listen in my org that implementing LLMs isn't going to solve the problem of people wasting our time reaching out to us with questions that have already been answered in our KA system. If someone was going to type something into an LLM, they could have typed it into a search bar. People don't skip the documentation because they can't find it; they skip it because they don't want to read! They want to bug a live person (and make it their problem)!
I was correct. Now we have a costly LLM implementation and have our time wasted with questions that are already answered.
That is a wonderful question, but it's very hard to answer without essentially knowing me, and that may be a little bit ambitious for a comment.
I'm a software engineer. I consider this some of the most important work of our generation. The hardware we've built has unlocked a degree of control over the world that was, until now, impossible. We don't have to mechanically devise a way to make a clock that tracks the stars; we can just program it into a microchip, and it'll just do it. We don't have to manage untold thousands of people to calculate our taxes; we can write it into a computer and it can just do it. Forever and perfectly. We're just not applying it.
I've reached the point of despair. It's not an AI-doom kind of despair, where I believe that AI is going rogue or whatever. It's a much more pedestrian kind of despair. We have tremendous problems ahead of us, both when it comes to the climate and when it comes to just doing the things that society always has to do - and AI doesn't offer anything to any of the actual problems of society.
While people are dying of Ebola in Africa and Americans are dying because they can't pay for healthcare, we are talking about automating software development for ad-tech companies. It's embarrassing. This is my field, these are my people, and this is the best we have to offer.
I try to abstain from that despair by just not engaging with it. Either AI will happen and we'll take it from there, or it won't, and then we'll have wasted a lot of effort and will hopefully never have any credibility as an industry again. I can't make a difference in either of those outcomes, so I just want it to go away.
Let me make it clear though: I too love the math behind recent AI. I even love the engineering behind how we do fast GEMM on GPUs. The challenges are really fun technically. That just can't be what decides our direction.
I hope that somewhat answered it a little. It's a bit hard to get such a large topic rooted so deeply in me into a comment. Thinking about the future in relation to these billion dollar companies and what they make does actually make me emotional.
You might wonder why the state of affairs in Latin America drives people to hard lives in the US.
A certain large country to the north had a policy of economic destabilization, covert action and direct military intervention in place to save these nations from the horrors of socialism.
When you build empire, there’s always a pull of your subjects to the center.
I see a bunch of "nobody knows everything, this old man needs to appreciate modern technology stacks" comments, and in some ways I blame the post for this, because it meanders into that realm where it gets into abstractions being bad and kids not knowing how to make op-amp circuits (FTR, I am from the "you have to know op-amps!" generation, and I intentionally decided deep hardware hacking was not going to be my thing). But the actual core thing I think is important here is that working hard is being devalued: putting in the time to understand the general underpinnings of the software and the hardware, using trial and error to solve an engineering problem as opposed to "sticking LEDs on fruit" - the entire premise of knowing how things work and achieving some deep expertise is no longer what people assume they should be striving for, and LLMs, useful or not, are only accelerating this.
Just yesterday I used an LLM to write some docs for me, and for a little bit where I mistakenly thought the docs were fine as they were (they weren't, but I had to read them closely to see this) it felt like, "wow, if the LLM just writes all my docs now, I'm pretty much going to forget how to write docs. Is that something I should worry about?" The LLM almost fooled me. The docs sounded good. It's because they were documenting something I myself was too lazy to re-familiarize myself with, hoping the LLM would just do it for me. Fortunately the little bit of my brain that still wanted to be able to do things decided to really read the docs deeply, and they were wrong. I think this "the LLM made it convincing, we're done, let's go watch TV" mentality is a big danger spot at scale.
There's an actual problem forming here and it's that human society is becoming idiocracy all the way down. It might be completely unavoidable. It might be the reason for the Fermi paradox.
It's Google. The relationship is not consensual but adversarial. Google attempts to get free things from me. I attempt to get free things from Google.
It's like asking a lawyer why does he defend an obviously guilty client? Because it's adversarial system, his job is to protect his client, not to worry about the other side. The other side is trying to maximize their advantage too. Google has defined my relationship with it in such terms through its behavior.
If YouTube were still an independent operator I would be more amenable to your argument.
In any case, the fact I can recite an ad from memory shows that I am at least watching some of their ads, notably on mobile.
You can use network namespaces too. As a reference, here is my torrent setup:
# isolated namespace for the torrent client
ip netns add torrent
# create the wireguard interface, then move it into the namespace
ip link add wg1 type wireguard
ip link set wg1 netns torrent
# VPN-assigned address
ip -n torrent addr add 10.67.124.111/32 dev wg1
# keys and peers, applied inside the namespace
ip netns exec torrent wg setconf wg1 /etc/wireguard/wg1.conf
ip -n torrent link set wg1 up
# the only default route in the namespace goes through the tunnel
ip -n torrent route add default dev wg1
# loopback, so local RPC still works
ip netns exec torrent ip link set dev lo up
# finally, run the daemon inside the namespace
ip netns exec torrent transmission-daemon -f 2>&1
AFAIK it's pretty bulletproof. But for good measure I also have transmission configured to only listen on the wireguard address.
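For reference, the "listen only on the wireguard address" part lives in transmission's settings.json. A minimal sketch - the address mirrors the one assigned to wg1 in the commands above, and the port is an arbitrary example, not necessarily my real one:

```json
{
  "bind-address-ipv4": "10.67.124.111",
  "peer-port": 51413
}
```

With the bind address pinned to the tunnel, even a misconfigured namespace shouldn't let the daemon speak over the real interface.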
Ah, Zuck: having to actually run the company after Sheryl left, and after getting a free pass for "the managers hired too many people", the "Year of Efficiency", "people are lazy" and "people are not masculine enough", he discovers the Elon method of just paying to cover up incompetence.
What problem? The whole point of shareholder capitalism is to diffuse accountability across so many people that it ceases to exist: the employee points to management, management points to executives, executives point to the board, the board points to the shareholders, the shareholders are index funds managed by robots so "nobody" is to blame. The final phase of the administrative state works much the same way.
Viewed through that lens, LLMs and their total lack of accountability are a perfect match for the modern-day business; that's probably part of why so many executives are hell-bent on putting them everywhere as quickly as possible.
> So any and all human communication is divination in your book?
Words from an AI are just words.
Words in a human brain have more or less (depending on the individual's experiences) "stuff" attached to them: from direct sensory inputs to complex networks of experiences and thought. Human thought is mainly not based on words. Language is an add-on. (People without language - never learned, or temporarily disabled by drugs, or permanently by injury, as in transient or permanent aphasia - are still consciously thinking people.)
Words in a human brain are an expression of deeper structure in the brain.
Words from an AI have nothing behind them but word statistics, devoid of any real world, just words based on words.
Random example sentence: "The company needs to expand into a new country's market."
When an AI writes this, there is no real world meaning behind it whatsoever.
When a fresh out of college person writes this it's based on some shallow real world experience, and lots of hearsay.
When an experienced person actually having done such expansion in the past says it a huge network of their experience with people and impressions is behind it, a feeling for where the difficulties lie and what to expect IRL with a lot of real-world-experience based detail. When such a person expands on the original statement chances are highest that any follow-up statements will also represent real life quite well, because they are drawn not from text analysis, but from those deeper structures created by and during the process of the person actually performing and experiencing the task.
But the words can be exactly the same. Words from a human can be of the same (low) quality as those of an AI, if they just parrot something they read or heard somewhere - although even then the words will have more depth than the "zero" of AI words, because even the stupidest person has some degree of actual real life forming their neural network, and not solely analysis of others' texts.
Allow me to give you a different viewpoint. And this is coming from someone that has an _amazing instinct_ to be in the "Who The Fuck Cares" club. I use that instinct to protect my mental health but nothing more than that.
What I noticed when I checked out at work is that it also makes me check out in my personal life (PL). It bleeds in. Generally, in my personal life I'm not checked out. That bleeds into work.
So work bleeds into PL and PL into work. I found that it was painful for work to bleed into my PL like that since I'm switched on and I just had this hint of "ah... whatever who gives a fuck."
I give a fuck.
I give a fuck because it's my life. I do it for myself. I don't do it for my boss or my colleagues. I do it for me.
I've found that this attitude is way more helpful to me as two things happen:
1. I'm more productive at work so I don't have to cover my ass at all. When I was in the "Who The Fuck Cares" club, I needed to cover my ass once per month (read: I didn't do anything for like 3 days and people were expecting results on day 4).
2. Since it's in service for my personal life, I don't go too far. The moment I notice that work encroaches too much upon personal life, my instinct comes back immediately and I pay my visit to the "Who The Fuck Cares" club, and party as long as I want to.
What's more likely to be a problem is the request to be concise.
For some reason, this still seems not to be widely known even among technical users: token generation is where the computation/"thinking" in LLMs happens! By forcing the model to keep its answers short, you're starving it of compute, making each token do more work. There's a small, fixed amount of "thinking" an LLM can do per token, so the more you squeeze it, the less reliable it gets, until eventually it's not able to "spend" enough tokens to produce a reliable answer at all.
In other words: all those instructions to "be terse", "be concise", "don't be verbose", "just give answer, no explanation" - or even asking for answer first, then explanations - they're all just different ways to dumb down the model.
I wonder if this can explain, at least in part, why there are so many conflicting experiences with LLMs - in every other LLM thread, you'll see someone claim they're getting great results at some task, and then someone else say they're getting disastrously bad results with the same model on the same task. Perhaps the latter person is instructing the model to be concise and skip explanations, not realizing this degrades performance?
(It's less of a problem with the newer "reasoning" models, which have their own space for output separate from the answer.)
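The scaling intuition is just arithmetic. A sketch with made-up numbers - the ~2 * N_params FLOPs-per-token figure is a common rule of thumb, and 7B parameters is an arbitrary example, not a specific model:

```shell
# Rough model: each generated token costs a fixed ~2 * N_params FLOPs,
# so total "thinking" scales linearly with answer length.
PARAMS=7000000000   # hypothetical 7B-parameter model
TERSE_TOKENS=50
VERBOSE_TOKENS=500
terse_flops=$(( 2 * PARAMS * TERSE_TOKENS ))
verbose_flops=$(( 2 * PARAMS * VERBOSE_TOKENS ))
# The verbose answer gets 10x the forward-pass compute of the terse one:
echo $(( verbose_flops / terse_flops ))
```

Under this (simplified) model, asking for a ten-times-shorter answer really does hand the model a tenth of the compute to solve the same problem.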
The actual invocation is this huge hairy furball of an rsync command that has accreted seemingly every feature of rsync as I worked on my backup script over the years.
If you truly believe that, the fix is the opposite of what the GOP is proposing. Freeze spending at today’s levels, raise taxes (uncap SS, add higher tax brackets, add a wealth tax, etc…), then let the economy grow naturally.
The delusional conspiracy theory that Trump won the 2020 election, invoked to justify the January 6 violence and then the pardoning of those criminals as his first act as POTUS redux.
Trump sent a mob to assassinate the vice-president of the United States when he (Mike Pence) refused Trump’s order to overturn that election.
Trump’s longest-serving chief of staff said Trump is "a person who admires autocrats and murderous dictators; a person that has nothing but contempt for our democratic institutions, our Constitution, and the rule of law."
The bigotry of low expectations is thinking people are too stupid to know this, rather than understanding 1/3 are anti-American illiberal shitbirds.
Just because they failed to get it once before doesn’t mean they didn’t try and keep on trying.
A vote for a rapist, a felon, and vile insurrectionist is voting in support of abuse. And now we’re getting abused.
I have one very specific retort to the 'you are still responsible' point. High school kids write lots of notes. The notes frequently never get read, but the performance is worse without them: the act of writing them embeds them into your head. I allegedly know how to use a debugger, but I haven't in years: but for a number I could count on my fingers, nearly every bug report I have gotten I know exactly down to the line of code where it comes from, because I wrote it or something next to it (or can immediately ask someone who probably did). You don't get that with AI. The codebase is always new. Everything must be investigated carefully. When stuff slips through code review, even if it is a mistake you might have made, you would remember that you made it. When humans do not do the work, humans do not accrue the experience. (This may still be a good tradeoff, I haven't run any numbers. But it's not such an obvious tradeoff as TFA implies.)
A lot of less scrupulous crawlers just seem to imitate the big ones. I feel a lot of people make assumptions from the user agent alone - because the user agent has to be true, right?
My fave method is still just to have bait info in robots.txt that gzip bombs and autoblocks all further requests from them. Was real easy to configure in Caddy and tends to catch the worst offenders.
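A sketch of what that can look like in a Caddyfile - the site address, bait path, and bomb filename are all made up for illustration, not my actual config. The idea is to serve a pre-compressed file with a gzip Content-Encoding header, so a naive crawler inflates it client-side:

```caddyfile
example.com {
    # /admin-backup/ is a hypothetical bait path listed only in robots.txt,
    # so nothing legitimate should ever request it
    @bait path /admin-backup/*
    handle @bait {
        root * /srv/traps
        rewrite * /10G.gz              # pre-made gzip bomb on disk
        header Content-Encoding gzip   # client decompresses it itself
        file_server
    }
}
```

The autoblocking part can then be handled outside Caddy, e.g. by tailing the access log for hits on the bait path and feeding offender IPs to a firewall rule.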
Not excusing the bot behaviours but if a few bots blindly take down your site, then an intentionally malicious offender would have a field day.
I've found this prompt turns ChatGPT into a cold, blunt but effective psychopath. I like it a lot.
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered - no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
I ditched it for [silverbullet](https://silverbullet.md): MIT-licensed markdown editor with embedded Lua scripting. It's a PWA that works offline and syncs well.
https://www.bmj.com/content/349/bmj.g7257 is one analysis of oxygen/CO2 exchange being a major component of weight loss. As the authors say, it's really a question of chemistry; the CO2 you metabolize from your fat has to go somewhere.
To be clear, I think your instinctive reaction is correct. It would be extremely silly for someone to read this analysis and conclude that they'll lose weight if they learn some special breathing technique to maximize CO2 output. The point is that "calories in - calories out" as a diet strategy is the same kind of error.
Look at a specific microcosm: dating. Dating is "awful" now (according to people both young and old), in a particular way that it wasn't even ten years ago. And sure, this is in part because we do everything online these days, and online dating has a few inherent problems with it. But not as many as you'd think; online dating used to "work" at least alright, in a way that it very much doesn't today / with none of the particular pathologies that it has today.
Dating sites and apps used to do things that actually helped people meet — vaguely optimizing for relationships. So people increasingly gravitated toward using dating apps. And for a while (peaking, I'd say, around the early 2010s), this actually increased the number of people meeting and getting into relationships.
And then one company, Match Group, came along and gradually bought up every "good" dating site, and enshittified them all, in a particular way that maximizes user retention + profit margins (and thereby minimizes the chance of a successful, happy relationship being formed.) They made dating apps bad at being dating apps. But there are no good dating apps — so people now feel stuck/confused, flailing around trying to make "online dating" work when there are only bad options for doing so.
I posit that online social networking in general went through the same evolution. Not because of one asshole company buying up and enshittifying everything, mind you; more because of market consolidation under a few companies who were all willing to copy one-another's homework in advancing the frontier of enshittified social experiences.
Facebook (and Facebook-like experiences) used to be a place you'd turn in the expectation of seeing updates from your actual literal friends, and engaging with those updates. Now it's radioactive for that purpose — and so is abandoned to being a sea of advertisements (and memes from boomers too inattentive to realize when the people they're talking at have left the table.)
And Instagram and even Snapchat have just copied TikTok's enshittified-from-the-start model of "personalized TV but all programs are 10 seconds long."
I have many friends I met in the 2000s and 2010s, where I recall heavily relying on social media as a fit-to-purpose tool to maintain and deepen those friendships. But I can't imagine what social network I could lean on to serve as that kind of tool for me today.
---
Yes, IM and group-chat apps always existed and still exist today. But that's not what traditional social networks got you.
It's funny that I even feel the need to explain this, but here's what social-networks-as-tools had to offer:
1. profile pages — like dating profiles or LinkedIn profiles, but from a lens of "this is what I want potential friends to know about me"!
2. "walls" — a specific semi-public place, attached to a person's profile, to leave a message "performatively" for not only that person, but also anyone else who looked at that person's wall, to see (think: birthday wishes.) Critically, walls are owned and therefore moderated by the profile they're attached to — so, unlike a feed, you can't really (successfully) cyberbully someone on their own wall. They can just delete your message; block you (which will block you from posting to their wall); or disable non-friends from posting to their wall entirely.
3. a home page view, that is simply a dumb chronological view of anything your direct friends have posted to their own walls. Not including friends-of-friends content. It was a social norm, back in the heyday of social networking, that you'd always be caught up on everything your friends had posted — because it shouldn't add up to much. Nobody could "share" anything out of its originally intended broadcast audience (the poster's friends), and thus there was no benefit to "posting performatively, as if for a mass audience" — and therefore, posts were sparse and personal, making it practical to truly inbox-zero your feed in maybe 20 minutes per day.
Modern social networks don't have profile pages (at least, not that anyone populates with anything — Facebook has vestigial ones nobody uses), owner-moderated public walls, or non-re-shareable "just for mutuals" posts. They have none of the tools that we originally associated with the category of "a tool that makes it easier to network socially." And yet these apps that do not successfully accomplish social networking, are what we today refer to as "social networking apps." And are what everyone therefore thinks to turn to when trying to network socially online.
People are weirdly resistant to acknowledging the obvious implications of things if those implications seem big. There’s a broad “nothing ever happens” heuristic that people tend to mistake for wisdom.
Big food corporations profit from ultra-processed foods that manipulate our natural systems. They design products that override satiety signals using calculated combinations of sugar, fat and salt to activate brain dopamine pathways. Their priority is profit growth, not consumer health.
The consequences are significant health issues like obesity, diabetes and cardiovascular diseases. Healthcare systems struggle with preventable conditions while millions experience declining health and shorter lifespans. These corporations employ questionable strategies: marketing to children, lobbying against regulations, funding misleading research, and shifting responsibility to consumers.
Medications like Ozempic represent a threat to this model by reducing appetite and interrupting compulsive eating. Recent industry concerns about declining sales show how these medications could undermine their business approach. If consumers regain control over their eating habits, corporations may finally face consequences for practices that have profited from health problems for decades.
It feels like Musk is single-handedly making a great case for why unlimited accumulation of wealth is a bad idea.
It's pretty colorful language, but my mind immediately jumps to "financial terrorist". He's using his enormous amount of wealth and influence as a weapon to bludgeon anyone and anything in his way.
I've come out of self imposed retirement from the shitshow that this site has become just to point out that how long you have been here doesn't have any bearing whatsoever on the quality of your contribution.
OP is right in that HN uses the 'it's been flagged' excuse to avoid facing some very uncomfortable truths, and that is fractionally why the world is currently being destabilized to a degree that we have not seen in the last 90 years or so. We - all of us - in the tech world are part and parcel and in many ways instrumental in this, and dare I say guilty as well. We're chasing the $ but we're losing sight of our impact on the world.
Agree. There's a provocative word you don't mention: hypocrisy.
This has become my reflexive internal response when I hear complaining about small-beer animal cruelty. Why are we so quick to empathize with this sunfish, and so slow to do it for creatures whose products most of us eat every single day? The horror show of factory farming creates many orders of magnitude more suffering than all these little anecdotes put together.
Zoos are cruel? Hypocrisy. Circuses are cruel? Hypocrisy. Bullfighting is cruel? Hypocrisy. Euthanizing stray dogs is cruel? Hypocrisy. Using animals in a film shoot is cruel? Hypocrisy. Keeping a single guinea pig is cruel? Hypocrisy (but against the law in Germany). And so on.
People in developed countries have become very good at dealing with cognitive dissonance.
I feel this a lot - not so much from the perspective of someone who belongs to a formerly "protected" group, but as someone who came into tech at the height of the revenge-of-the-nerds style "zeitgeist" of the early 2010s to 2015, around the same time he mentions being involved in startups. My first job was a startup, with a bunch of students and a professor at my alma mater. We failed miserably - not in the way I had envisioned, but because of just basic VC-funded stuff. We were a $20 million company with half a dozen of us, which would have been great for any of us, even our founders - but the VCs wanted a $200 million company. Poof.
That put a bitter taste in my mouth that has gotten more bitter when the "promise" of a society led by technocrats has yielded a barrage of increasingly shitty and invasive products that don't provide any additional utility to anyone except the people who stand to profit from them. It's exhausting, extremely depressing, and if I had to do it again I probably would have avoided tech, as much as I like what I do - I feel a deep sense of shame sometimes at the state of how it's gone.
The secret is to add every meeting into your Jira as a task, and then close it once the meeting is done.
Equally, instead of talking about meetings as detracting from your work, start talking about them as the work.
When your manager asks about your milestones, or accomplishments, or success stories, make meeting attendance front and center.
When discussing software development, bug fixing, etc in the meetings, point out that you won't actually do any of it. Point out that 20+ hours of your week is in meetings, 10 hours of admin (reading, writing, updating tickets), 5 hours of testing etc.
"This task will take 40 hours. At 1 hour per week I expect to be done in October sometime. If all goes to plan'
Yes, it seems cynical, but actually it has real outcomes. Firstly your "productivity" goes up. (As evidenced by your ticket increase.)
Secondly, your mental state improves. By acknowledging (to yourself) that you are fundamentally paid to attend meetings, you can relax about your own productivity.
Thirdly by making your time allocations obvious to your manager, you place the burden for action on him.
If you convince your colleagues to do the same, you highlight the root problem, while moving the responsibility to fix it off your plate.
https://forum.qubes-os.org/uploads/db3820/original/2X/c/c774...
Once the install is done, log in and save this file:
After: Reboot. Then: There are a few edge cases - packages which require systemd - but I've been running thousands of systems, including desktops, this way for a decade. Yes, I also run thousands of systems with systemd too.
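The comment doesn't include the file's contents, so to be clear, this is my own illustration of the usual approach, not necessarily the author's file: an apt pin that refuses systemd as init (combined with installing sysvinit-core as the init system).

```
# /etc/apt/preferences.d/no-systemd (illustrative example)
# Block systemd as init; libsystemd0 typically has to stay installed.
Package: systemd-sysv
Pin: release *
Pin-Priority: -1

Package: systemd
Pin: release *
Pin-Priority: -1
```

With a pin priority below zero, apt will never install those packages, and dependency resolution fails loudly instead of silently pulling systemd back in.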