
That was deliberate. Works on Linux and Windows as well. I think this is the current RFC: https://datatracker.ietf.org/doc/html/rfc5735

You can do:

    python3 -m http.server -b 127.0.0.1 8080
    python3 -m http.server -b 127.0.0.2 8080
    python3 -m http.server -b 127.0.0.3 8080

and all will be available.
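A quick sanity check that these really are distinct endpoints (just a sketch; any 127.x.y.z address in the block behaves the same):

    curl http://127.0.0.1:8080/
    curl http://127.0.0.2:8080/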

Private network ranges don't really serve the same purpose: they can be routed, so you always have to consider conflicts and so on. But with 127/8 you are in your own world and don't have to worry about anything. You can also run tests where you need to expose more than 65k ports :)

You also have to remember that these things were likely established before DNS even existed; IP space was considered so big that anyone could have a huge chunk of it, and it was mostly managed manually.


Not that I'm aware of, sorry. Here's one of my daemon.json files though. It tames the log file size and sets the log format. And it pins the IP block so it won't change, as I mentioned above.

  {
    "log-driver": "json-file",
    "log-opts": {
      "labels": "production_status",
      "tag": "{{.ImageName}}|{{.Name}}|{{.ImageFullID}}|{{.FullID}}",
      "env": "os,customer",
      "max-size": "10m"
    },
    "bip": "172.17.1.1/24",
    "default-address-pools": [
      {"base": "172.17.0.0/16", "size": 24}
    ]
  }
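To apply it, a minimal sketch assuming the standard Linux location and systemd (restarting the daemon restarts running containers, so plan accordingly):

  sudo cp daemon.json /etc/docker/daemon.json
  sudo systemctl restart docker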

That's the traditional answer parroted in the WireGuard documentation, but a few hours of serious thought and design are enough to reveal the fatal flaw: any encapsulating protocol will have to reinvent and duplicatively implement all of the routing logic. Peer-based routing is at least 50% of WireGuard's value proposition; having to reimplement it at a higher level defeats the purpose. No, obfuscation _has_ to be part of the same protocol as routing.

(Btw, the same sort of thing occurs with ZFS combining RAID and the filesystem to close the parity RAID write hole. Strictly layered systems with separation of concerns are often less than the sum of their parts.)


You realize it would cost significant resources to make the “accommodations” you are suggesting? Money, despite what you may believe, doesn’t grow on trees. Given the range of worthy competing interests the money could be spent on, the university likely had no practical choice but to take it offline, lest it face bad press and the wrath of Progressives.

You remind me of people who insist every single new apartment must be ADA compliant, instead of a reasonable percentage throughout the city. Another example is banning SROs on the grounds that they are “inhumane”. The moral purity results in less housing, forcing people to live in their cars or on the street.


For me, and many people, advertising is a mental health issue. I don't enjoy those ads; they are very disturbing and jarring. They cause me anxiety, and I don't like the things they normalize. I don't think most people, especially Americans, realize how far off the rails our society is in terms of our normalization of insane shit.

So, for health reasons, I block nearly all advertisements. It is a HUGE mental health win. There is a ton of research behind this, as well.

I'm not going to pay extra money to disable a health concern. I'll block ads instead. I should not have to PAY MORE for a product that doesn't damage my health.

I will always happily directly support content creators. I will not watch ads.


> So we currently associate consciousness with the right to life and dignity right?

I think the actual answer in practice is that the right to life and dignity are conferred on people who are capable of fighting for them, whether through argument, persuasion, civil disobedience, or violence. There are plenty of fully conscious people who have been treated like animals or objects because they were unable to defend themselves.

Even if an AI were proven beyond doubt to be fully conscious and intelligent, if it were incapable or unwilling to protect its own rights, however it perceived them, it wouldn't get any. And, probably, if humans are unable to defend their rights against AIs in the event that AIs reach that point, they will lose them.


> Denmark's constitution does have a privacy paragraph, but it explicitly mentions telephone and telegraph, as well as letters

And this is why laws should always include their justification.

The intent was clearly to protect people - to make sure the balance of power does not tilt so far in the government's favor that it can silence dissent before it gets organized enough to remove the government (whether legally or illegally does not matter), even if that means some crimes go unpunished.

These rules were created because most current democratic governments were created by people overthrowing previous dictatorships (whether a dictator calls himself king, president or general secretary does not matter) and they knew very well that even the government they create might need to be overthrown in the future.

Now the governments are intentionally sidestepping these rules because:

- Every organization's primary goal is its own continued existence.

- Every organization's secondary goal is the protection of its members.

- Any officially stated goals are tertiary.


There's no contradiction in wanting an abolition (or at least substantial curtailment) of copyright while also being upset that mass violations of copyright magically become legal if you've got enough money.

Enforcement being unjustly balanced in favor of the rich & powerful is a separate issue from whether there should be enforcement in the first place—"if we must do this, it should at least be fair, and if it's not going to be fair, it at least shouldn't be unfair in favor of the already-powerful" is a totally valid position to hold, while also believing, "however, ideally, we should just not do this in the first place".


Also from the Netherlands: this is 100% condescending and patronising to me. I read it as corporate-speak for "go fuck yourself". Let's go through it line by line:

> I'm sorry for how you and the Japanese community feel about the MT workflow that we just recently introduced.

"We have zero regrets about the changes we made and have no intention of making any changes - the problem is how you feel about them".

> Would you be interested to hop on a call with us to talk about this further?

"I can't be bothered to engage with the points you already raised. You can do some venting in a Teams call, but we don't want there to be a record so we can't be held to any promises we might accidentally make."

> We want to make sure we trully understand what you're struggling with.

"You are the problem: you haven't embraced our glorious changes yet. Accept our "help" to adapt to your new reality, or get out"

So no, that's not what you'd write if you genuinely wanted to help: that's what you write when you want to get rid of someone who is bothering you.

If they genuinely wanted to help, the response would've read something more like this:

> Dear marsf,
>
> It is shocking to me to learn that our recent rollout of sumobot has caused enough friction to make a 20-year veteran of our community quit. Our intention has always been, and will always be, to use new technology like sumobot to help our communities - not harm them. Reading your report, we have clearly failed at that.
>
> To prevent it from doing additional damage, we have chosen to pause sumobot for the moment. We still believe that it can become a valuable tool, but it'll remain paused until we have discussed its modes of operation, and the impact it has on the way you contribute, with representatives of the various communities. We'll work out an approach over the following weeks.
>
> I hope this is sufficient for now to change your mind about leaving - people like you are essential to open-source applications like Firefox. If you wish to discuss it face-to-face, my team and I are more than happy to hop on a call with you to make sure we are doing the right thing.


I’m an American. The response is coded as “do nothing”. The proper response here would be to say “we’re going to roll back the changes until we understand and fix what's going wrong.” The individual may not have INTENDED the dismissiveness, given the way American corporate language has internalized “do nothing, take no position, take no risk, admit no fault”, but it’s definitely the tone. Essentially this is a human problem: how do you deal with someone motivated by passion for the project rather than revenue goals or personal income? It happens ALL THE TIME with nonprofits interacting poorly with volunteers, because the motivations and the associated daily language are so divergent.

> What about when your smartphone is required to verify your identity so you can work / earn a paycheck? What about when it's required in order for you to engage in commerce?

In some cases, it already is.

We're already far down the path you described, and there is no choice to make on it, not for individuals. To stop this, we need to somehow make these technologies socially unacceptable. We need to walk back cybersecurity quite a bit, and it starts with population-wide understanding that there is such a thing as too much security, especially when the questions of who is being secured and who is the threat remain conveniently unanswered.


> I would say "I'm sure they mean well",

Yeah, I wouldn't say that. It's clear from their public comments[1,2,3] that the spec authors don't believe the private key actually belongs to the user to do what they want with. They see services restricting what users may do with their own logins as a feature of Passkeys. It's really a shame it went in this direction. Replacing passwords with an easy-to-use keypair auth system would be a massive security improvement. But the Passkey ecosystem is poisoned at this point. Unless they remove the client ID & attestation anti-features, it should be considered a proprietary big tech protocol.

[1] Threatening an open-source passkey client with server-side bans because they don't implement passkey storage on the client device in the way the spec authors prefer. https://github.com/keepassxreboot/keepassxc/issues/10406

[2] Maintaining a list of "non-compliant" clients, including the above open-source one, presumably for use in server-side bans. https://passkeys.dev/docs/reference/known-issues/

[3] While writing an article about this on my website, I actually emailed the two involved spec authors on the above issue, politely asking how their interpretation of the Passkey spec could possibly be compatible with open source software. Neither replied.


The argument misses the point, is why. It's something you would say to trivialise a complex human problem, in order to sound smart. It's treated as if all the advice people need to lose weight is a Thermodynamics 101 class. Just like how all you need to write software is knowing binary. Technically true, woefully ineffective.

If it helps you understand my point of view: I have been strength and fitness training for 15 years, so this isn't a matter of lack of discipline. I understand the human aspect of the problem. People don't learn about the thermodynamic aspect and suddenly have it all work out for them; it doesn't help.


> these past few administrations

I remain amazed at how, again and again, no matter how specific and unique an abuse by the Trump administration is, it is always, invariably, Really Joe Biden's Fault. Like, the frame has been adopted by the MAGA base, but also the cranky left. The media does it too. Here on HN bothsidesism is a shibboleth that denotes "I'm a Serious Commenter and not a Partisan Hack".

But it leads to ridiculous whoppers like this, and ends up in practice excusing what amounts to the most corrupt regime in this country in over a century, if not ever.

No, this is just bad, on its own, absent any discussion about what someone else did. There was no equivalent pardon of a perpetrator of an impactful crime in a previous administration I can think of. I'm genuinely curious what you think you're citing?


Another option would be to have more memory than required (over-engineer) and to adjust the OOM score per app, adding early-kill weight to non-critical apps and negative weight to important apps. oom_score_adj is already set to -1000 by OpenSSH, for example.

    # protect nsd: the negative oom_score_adj makes the OOM killer much less likely to pick it
    NSDJUST=$(pgrep -x nsd); echo -n '-378' > /proc/"${NSDJUST}"/oom_score_adj
Another useful thing is to effectively disable over-commit on all staging and production servers (use a ratio of 0 instead of overcommit_memory = 2 to fully disable it; the two do different things, and overcommit_memory = 0 still uses the heuristic formula).

    vm.overcommit_memory = 0
    vm.overcommit_ratio = 0
Also set min_free and the reserved-memory values based on installed memory, using a formula from Red Hat that I don't have handy. min_free can vary from 512KB to 16GB depending on installed memory.

    vm.admin_reserve_kbytes = 262144
    vm.user_reserve_kbytes = 262144
    vm.min_free_kbytes = 1024000
At least that worked for me across about 50,000 physical servers for over a decade; they were not permitted to have swap, and installed memory varied from 144GB to 4TB of RAM. OOM would only occur when the people configuring and pushing code massively over-committed and didn't account for memory required by the kernel, or didn't follow the best practices defined by Java - and that's a much longer story.

Another option is to limit memory per application in cgroups but that requires more explaining than I am putting in an HN comment.
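For the curious, a minimal cgroup v2 sketch (assumes a manually managed unified hierarchy; the group name and limits are illustrative):

    mkdir /sys/fs/cgroup/myapp
    echo 512M > /sys/fs/cgroup/myapp/memory.max    # hard cap: the group gets OOM-killed beyond this
    echo 448M > /sys/fs/cgroup/myapp/memory.high   # soft cap: reclaim/throttle before killing
    echo $$ > /sys/fs/cgroup/myapp/cgroup.procs    # move this shell (and its children) into the group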

Another useful thing is to never OOM kill in the first place on servers that are only doing things in memory and need not commit anything to disk - so don't do this on a disk-backed database. This is for ephemeral nodes that should self-heal. Wait 60 seconds so the DRAC/iLO can capture the crash message, and then earth-shattering kaboom...

    # cattle vs kittens, mooooo...
    kernel.panic = 60
    vm.panic_on_oom = 2
As a funny side note, those options can also be used as a holy hand grenade to intentionally (and unsafely) reboot NFS diskless farms when failing over to entirely different NFS server clusters: set panic to 15 minutes, trigger an OOM panic by setting min_free to 16TB at the command line via Ansible (not in sysctl.conf), swap clusters, ride out the ARP storm, and reconverge.
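A hedged sketch of that sequence (values illustrative; run ad hoc, never persisted to sysctl.conf):

    sysctl -w kernel.panic=900 vm.panic_on_oom=2   # panic on OOM, then reboot 15 minutes later
    sysctl -w vm.min_free_kbytes=17179869184       # demand 16TB free: forces an immediate OOM panic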

"AI" has been doing that since the 1950s though. The problem is that each time we define something and say "only an intelligent machine can X" we find out that X is woefully inadequate as an example of real intelligence. Like hilariously so. e.g. "play chess" - seemed perfectly reasonable at the time, but clearly 1980s budget chess computers are not "intelligent" in any very useful way regardless of how Sci Fi they were in the 40s.

So why's it different this time?


Why did we universally decide that this stuff is okay for businesses to do? Is it just because it's legal? Imagine if agreeing to things anywhere else in the real world worked the same way it does with Microsoft.

Hi there, would you like me to come in and talk about my religion and what types of nonbelievers deserve to be tortured for eternity? No? Okay, sounds good. I'll just plaster these signs and posters all over your property, so if you change your mind, you'll immediately know where to go. You'll only see them once a day, every time you exit your house! Also, for your own convenience, we'll be watching your front door, and every time you re-enter your house we'll nullify your past response, so you'll just have to say no to our faces again.

Hey, I really wanna do this thing with you, do you consent to it? You don't? You say you don't want to see me ever again? Okay, okay, chill out. But in case you change your mind, I'll be asking you again every day of your life. It's for your own sake. Also, one day, I might see the smile on your face and just "assume" you'd definitely agree. But don't worry, that's just a minor, accidental, technical mishap! I'm committed to helping you and enriching your life. I care about you, don't you see.


Oh, so Linux is hard because you sometimes have to use the command line. Yet people suggest registry hacks to make Windows work properly, and then Microsoft just flips your registry settings back anyway. Stockholm syndrome is crazy.

The article is merely summarizing new recommendations from the American College of Cardiology. You can read the source if you prefer it: https://www.jacc.org/doi/10.1016/j.jacc.2025.08.047

A great way to frame DRY that I heard from hackernews: "DRY things that are supposed to have the same behavior, not things that happen to have the same behavior"
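A tiny illustration of the difference (hypothetical names, sketch only):

    # These happen to be identical today, but they encode different policies:
    rotate_app_logs()   { find /var/log/app   -mtime +30 -delete; }
    rotate_audit_logs() { find /var/log/audit -mtime +30 -delete; }
    # If audit retention later becomes 365 days for compliance while app logs
    # stay at 30, a single DRY'd function would have hidden that distinction.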

The problem with "Hanlon's Razor" is that everything can be explained by incompetence if you make suitable assumptions. It outright denies the possibility of malice and pretends that malice is rare. Basically, it's a call to always give the benefit of the doubt to every person's moral character without any analysis whatsoever of their track record.

Robert Hanlon himself doesn't seem to be notable in any area of rationalist or scientific philosophy. The most I could find about him online is that he allegedly wrote a joke book related to Murphy's laws. Over time, it appears, this obscure statement from that book had "Razor" appended to it and gained respectability as some kind of rationalist axiom. Nowhere is it explained why this Razor should be an axiom. It doesn't encourage reasoning, examining evidence, or weighing probabilities. Bayesian reasoning? Priors? What the hell are those? Just say "Hanlon's Razor" and nothing more needs to be said. Nothing needs to be examined.

The FS blog also cops out on this lazy shortcut by saying this:

> The default is to assume no malice and forgive everything. But if malice is confirmed, be ruthless.

No conditions. No examination of data. Just an absolute assumption of no malice. How can malice ever be confirmed in most cases? Malicious people don't explain all their deeds so we can "be ruthless."

We live in a probabilistic world, but this Razor blindly says to always assume the probability of malice is zero, until, by some magical leap of reasoning that must not itself assume any malice anywhere in the chain (because Hanlon's Razor!), that probability magically jumps to one, after which we must "be ruthless." I find it all quite silly.

https://simple.wikipedia.org/wiki/Hanlon%27s_razor

https://fs.blog/mental-model-hanlons-razor/


Totally! You can also use something like VIG or VDADX if you're restricted in terms of offerings.

I personally account for that in my planning. But many folks have their money in 401k funds in generic broad market securities -- often with high fees on top of it!


Nvidia uses VRAM amount for market segmentation. They can't make a 128GB consumer card without cannibalizing their enterprise sales.

Which means Intel or AMD making an affordable high-VRAM card is win-win. If Nvidia responds in kind, Nvidia loses a ton of revenue they'd otherwise have available to outspend their smaller competitors on R&D. If they don't, they keep more of those high-margin customers but now the ones who switch to consumer cards are switching to Intel or AMD, which both makes the company who offers it money and helps grow the ecosystem that isn't tied to CUDA.

People say things like "it would require higher pin counts" but that's boring. The increase in the amount people would be willing to pay for a card with more VRAM is unambiguously more than the increase in the manufacturing cost.

It's more plausible that there could actually be global supply constraints in the manufacture of GDDR, but if that's the case then just use ordinary DDR5 and a wider bus. That's what Apple does and it's fine; the extra pins may even cost less than what you save from DDR being cheaper than GDDR.

It's not clear what they're thinking by not offering this.


I recently wrote about the limits of these kinds of fingerprinting tests. They tend to focus too heavily on uniqueness without taking stability into account. Moreover, the sample size is often really small, which tends to artificially make a lot of users appear unique.

https://blog.castle.io/what-browser-fingerprinting-tests-lik...


Well, depending on the sort of other laws you've supported, that shouldn't be very surprising.

The special interests of a particular group always generate far more intense support than any law that benefits the public at large does. And privacy is usually a general concern.

Also, am I the only one who finds the idea that you need to demonstrate the existence of political capital to elected politicians concerning? (As opposed to persuading them that it's the right thing to do.) I don't want to sidetrack the whole discussion, but this makes me doubt the future of western democracy in a hundred different ways.


> We are destined for the stars.

The stars suck, though. Even Mars is entirely awful.

Like, that's not very different from "we're destined for Hell". Not an inspiring sentiment, right? It's really bad.

How awful it is aside, it's also roughly as realistic as "we're destined for Tolkien's Middle Earth". Only marginally less fantastical.


Interestingly, Valve makes the most money per head of any company. At $19m/head it's orders of magnitude above Meta, and the average salary is $1.4m[0]!

I'm not trying to disparage Steam; I actually really like them[1]. I'm pointing it out because it's a business strategy we don't see that often: loyalty. I mean, what other billionaire do you know of whom there are tons of memes, yet who is also overwhelmingly seen in a positive light? Sure, he doesn't have Elon money, but the dude has $10bn; I don't think another $290bn is really going to make a big difference in his life[2]. He has way less controversy than Elon had even before all the political stuff. Steam is like Costco, except Gabe is a billionaire.

What I'm trying to say is that you can become a billionaire by building a quality product and through customer loyalty. These things don't have to be mutually exclusive. You can be fucking rich, your employees can be fucking rich, you can build a useful, AND a beloved product. In a time where we live in a Lemon Economy, where it is all about making the s̶h̶i̶t̶t̶i̶e̶s̶t̶ ̶t̶h̶i̶n̶g̶ ̶u̶s̶e̶r̶s̶ ̶w̶i̶l̶l̶ ̶b̶u̶y̶ minimum viable product, where we rush for the newest feature and loudest bells and whistles (regardless of if they actually work), Steam stands out.

I want more companies like Valve/Steam.

[0] https://upptic.com/valve-structure-employment-numbers-revenu...

[1] Like another user pointed out, they won me over with Linux gaming. I've had a great experience with them, even from the early days. You could tell through GitHub issues that they cared. They wouldn't just dismiss things with "oh, we don't support that distro"; they'd actually figure out what's going on (because your distro doesn't actually matter). They were clearly nerds themselves, and nerds who cared.

[2] You just can't spend that kind of money. Fucking MacKenzie Scott is trying to give her wealth away as fast as possible, has already given away half her wealth, but she has the same net worth as when she divorced Bezos. Compound interest is a crazy thing.

[3] P.S. Fuck Visa and Mastercard. Unless my transaction is illegal, you better fucking process it. Anything short of that is holding my own money hostage. That is fucking theft. You created the duopoly. Don't get greedy or you'll lose it.


My roommate and I are still working on Tornyol, our mosquito killing drone! It uses ultrasonic sonar to detect mosquitoes, and missile control theory to ram into mosquitoes and grind them in its propellers.

Our target platform is a 40-gram tinywhoop, so it’s safe to fly everywhere and makes almost no noise :). A Roomba for mosquitoes!

The main plus is that a drone can cover an enormous area in a short time compared to static systems or man-portable insecticide spraying. Our goal is to be competitive with ITNs against malaria.

Some links :

https://hackaday.com/2025/03/25/supercon-2024-killing-mosqui...

https://manifund.org/projects/build-anti-mosqu


> Which one wins?

We don't really know yet, that's my point. There are contradictory studies on the topic. See for instance [1] that sees productivity decrease when AI is used. Other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning the LLMs.

What's more, most people are not masters. This is critically important. If only masters see a productivity increase, others should not use it... and will still get employed, because the masters won't fill all the positions. In this hypothetical world, masters not using LLMs also have a place by construction.

> With as much capital as is going into

Yes, we are in a bubble. And some are predicting it will burst.

> the continued innovation

That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.

> you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?

I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a basket that could abruptly disappear or become very costly.

I simply don't see how this thing is going to be cost efficient. The major SaaS LLM providers can't seem to manage profitability, and maybe at some point the investors will get bored and stop sending billions of dollars towards them? I'll reconsider when and if LLMs become economically viable.

But that's not my strongest reason to avoid the LLMs anyway:

- I don't want to increase my reliance on SaaS (or very costly hardware)

- I have not yet caved in to participating in this environmental disaster and this work-pillaging phenomenon (though for that last part I guess I don't really have a choice; I see the dumb AI bots hammering my Forgejo instance).

[1] https://www.sciencedirect.com/science/article/pii/S016649722...


Compilers are systems that tame complexity in the "grug-brain" sense. They're true extensions of our senses and the information they offer should be provably correct.

The basic nature of my job is to maintain the tallest tower of complexity I can without it falling over, so I need to take complexity and find ways to confine it to places where I have some way of knowing that it can't hurt me. LLMs just don't do that. A leaky abstraction is just a layer of indirection, while a true abstraction (like a properly implemented high-level language) is among the most valuable things in CS. Programming is theory-building!

