
When I say I want full transparency, I'm usually talking about how much pay they received and, in the case of elected representatives, their net worth at least once a year.

I wouldn't ask for a full health report to be made public by law. Maybe a summary for elected officials.


Why do you need to know how much they are paid and their net worth? What difference does it make to you? Public officials' pay is already available online. A quick Google search will tell you how much members of Congress get paid, and the DoD pay scale is available online as well.

Sorry but I don't understand

Isn't this a table?

    |sn|name|
    |--|--|
    |1|George|
    |2|John|
I feel like I've been doing this at least since about 2013?

Edit: I get it now. It was not a part of the original spec.

    |sn|name  |
    |--|------|
    | 1|George|
    | 2|John  |
It can look better if we use a fixed-width font and add padding, I guess?

It can. The gripe I don't have a good solution for: what happens when entry 3 is a 7-letter name?
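
For example, adding a 7-letter name (say, "Abigail") forces every existing row to be re-padded:

    |sn|name   |
    |--|-------|
    | 1|George |
    | 2|John   |
    | 3|Abigail|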

With LSP and Marksman [0], those tables get formatted automatically on save for me.

https://github.com/artempyanykh/marksman


Then you need to re-pad everything (clean looking git diff be damned). It's just the reality of dealing with bounding boxes. Maybe we don't notice it in HTML and such since the browser redraws them for us for free.

A reasonable format would not insist you lay out tables visually any more than it would insist you center your headers if you'd like them horizontally centered when rendered. For instance, Asciidoctor has syntax for table cells that requires no whitespace and lets you put any content at all in a cell.
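
For reference, here's roughly what that looks like in AsciiDoc (from memory, so treat the exact syntax as approximate); nothing needs to be padded or lined up:

    |===
    |sn |name
    |1 |George
    |2 |John
    |===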

I'm not aware of any table-capable markdown renderer that requires tables to be padded correctly. It's purely a source-text-readability concern.

No doubt some janky one exists somewhere, but nobody uses that.


IntelliJ repads the tables automatically when cells get bigger. I think you can even resize using the mouse!

You switch to org!

Honestly, updating tables in markdown has been my most successful use of AI :)

Because from what I understand, the whole premise of billionaires pouring all their money into AI is:

1. AI succeeds, where success is defined as being able to raise prices really high

2. AI fails and receives a bailout

But if there are alternatives I can run locally, they can't raise prices that high, because at some point I'd rather run stuff locally even if it's not state of the art. Personally, USD 200 a month is already too expensive if you ask me.

And the prices can only go up, right?


Not really. In a scenario where demand dies down, surplus GPU compute becomes so under-utilized that I would expect prices to drop even lower. Prices will of course go up if we keep needing more tokens to solve problems and demand keeps growing with no increase in efficiency or capacity, but again, that is not what we're seeing.

That's good. We can tell people that so they will submit us patches for free.

Maybe we could even have a neat website with a leaderboard of sorts where we honor top contributors like some kind of gamification.

I think we would really need about five highly opinionated people with good technical and people skills to step up as paid maintainers for Tailwind, or any OSS project, for it to succeed.


But we are idiots.

There's a reason why flour has iron and salt has iodine, right? Individual responsibility simply does not scale.


We are idiots who will bear the consequences of our own idiocy. The big issue with all transactions done under significant information asymmetry is moral hazard. The person performing the service has far less incentive to ensure a good outcome past the conclusion of the transaction than the person who lives with the outcome.

Applies doubly now that many health care interactions are transactional and you won't even see the same doctor again.

On a systemic level, the likely outcome is just that people who manage their health better will survive, while people who don't will die. Evolution in action. Managing your health means paying attention when something is wrong and seeking out the right specialist to fix it, while also discarding specialists who won't help you fix it.


> We are idiots who will bear the consequences of our own idiocy

This is just factually not true. Healthy people subsidize the unhealthy (even those made unhealthy by their own idiocy) to a truly absurd degree.


Well, the biggest consequences aren't financial, they're losing your quality of life, or your life itself.

But the effects aren't just financial; look in an ER. People who for one reason or another haven't been able to take care of themselves end up in the emergency room with things that aren't emergencies, and that means your standard of care is going to take a hit.

Ah yeah, good point.

Sure?

So they do end up bearing most of the brunt of their own decisions. But you're also right, it's not entirely on them.

Neither does collective responsibility, for the same reason, particularly in any sort of representative government. Or did you expect people to pause being idiots as soon as they stepped into the voting booth to choose the people they wanted to exercise that collective responsibility?

Let's say I have a public website with HTTPS. I allow anyone to post a message to an API endpoint. Could a server like this read the message? How?

They may not be able to decrypt it now, but it is well known that most encrypted Internet traffic is permanently stored in NSA data centers [1] in the hope of decrypting it once quantum computing makes that possible.

[1] https://en.wikipedia.org/wiki/Utah_Data_Center


> but it is well known that most encrypted Internet traffic is permanently stored in NSA data centers

It's "well known"? News to me.

I doubt the NSA has storage space for even one year's worth of "most encrypted Internet traffic", much less for permanently storing it.


They have a relationship with your cert provider and get a copy of your cert or the root so they can decrypt the traffic.

I thought the whole point of the ACME client was that the private key never leaves my server to go to Let's Encrypt's servers. Now yes, if I am using Cloudflare Tunnel, I understand the TLS terminates at Cloudflare and they can share with anyone, but it still has to be a targeted operation, right? It isn't like Cloudflare would simply share all the keys to the kingdom?

Yes. They could issue their own certificates, but we have Certificate Transparency (CT) to mitigate that, too.

No, the private keys are yours. The CA just 'signs' your public key in a wrapper saying it was "issued" by, e.g., Let's Encrypt, and Let's Encrypt has just one job: validating that you own the domain via ACME validation.
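
A rough sketch of the key material involved (using Python's `cryptography` package, not any actual ACME client code): the private key is generated locally, and only the CSR, which contains the public key and domain name, ever goes to the CA.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generated on your server; never transmitted anywhere.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The CSR holds only the public key plus the domain name.
    # This (plus proof of domain control) is all the CA ever sees.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
        .sign(key, hashes.SHA256())
    )

    print(csr.public_bytes(serialization.Encoding.PEM).decode())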

That is not how PKI works. Your cert provider does not have a copy of your private key to give out in the first place.

Having the private key of the root cert does not allow you to decrypt traffic either, since modern TLS uses ephemeral key exchange: the session keys are never derivable from certificate keys alone (forward secrecy).


They would just compromise wherever your TLS is terminated (if it is not end-to-end, which most of the time it is not). Taking a memory dump of your VM or hardware to grab the TLS keys, and then decrypting most future and past traffic, is also an option.

It's funny that people still have any expectation of privacy when using a VM hosted at a place like AWS or Azure... They'll give up any and every last bit you have, if the right people ask.

It isn't just AWS, though. You could say exactly the same about DigitalOcean or Linode.

Even if you have your own rack at a colocation, you could argue that if you don't have full disk encryption someone could simply copy your disk.

I am just trying to be practical. If someone is intent on reading what users specifically send me, they can probably find bad hygiene on my part and get it, but my concern is that they should not be able to do this wholesale, at scale, for everyone.


> if you don't have full disk encryption someone could simply copy your disk.

You can have full-disk encryption then. It can still possibly be compromised using more advanced methods like cold boot attacks, but those are relatively involved and quite detectable, since they cause downtime.


Actually, even the CTO of AWS couldn't hijack an abusive VM because legal did not allow them to, but when the government is asking, I guess that all flies out of the window.

Pretty much as you say. Legal exists within a system of laws. Hypothetically these laws might not have a carve-out for "CTO doesn't like the behavior" but they almost certainly do have a carve-out for "national security reasons". You'll pretty much never find a lawyer advising a client to break the law because it would be more ethical to do so.

Who knows how often or what kind of access is or can be given. Most likely we will never know, because National Security Letters are almost always accompanied by gag orders.

That's why I self host.

Yes, unless you pinned the public key.
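
A minimal sketch of what pinning looks like in practice (Python standard library only; the fingerprint value is a hypothetical placeholder). This pins the whole certificate rather than just the public key, but the idea is the same: a cert signed by a rogue CA won't match the pin.

    import hashlib
    import socket
    import ssl

    HOST = "example.com"
    EXPECTED_SHA256 = "0000...placeholder..."  # hypothetical pinned fingerprint

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            # Hash the DER-encoded certificate the server actually presented.
            der_cert = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != EXPECTED_SHA256:
                raise ssl.SSLError("certificate fingerprint mismatch, possible MITM")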

And if it is plugged into the wall, I'd be tempted to add a touch-screen display and a camera, just in case.

But really, my use case is as simple as:

1. Wake word, what time is it in ____

2. Wake word, how is the weather in ____

3. Wake word, will it rain/snow/?? in _____ today / tomorrow / ??

4. Wake word, what is ______

5. Wake word, when is the next new moon / full moon?

6. Wake word, when is sunrise / sunset?

And other things along those lines.


So you need a clock maybe? Plus something like wttr.in
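
Something like this would cover the weather cases (a sketch; wttr.in's format=3 returns a one-line summary, and the city name is just an example):

    import urllib.request

    def weather_one_liner(city: str) -> str:
        # wttr.in's format=3 gives a compact "city: condition temperature" line.
        url = f"https://wttr.in/{city}?format=3"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8").strip()

    print(weather_one_liner("Reykjavik"))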

The problem is that it should be accessible by voice for, say, a ninety-year-old person.

Everybody says how good Claude is, and then I go to my codebase and can't get it to correctly update one XAML file for me. It is quicker to make the changes myself than to explain exactly what I need or learn how to do "prompt engineering".

Disclaimer: I don't have access to Claude Code. My employer has only granted me Claude Teams. Supposedly they don't use my poopy code to train their models if I use Claude with my work email, so I am supposed to use that. If I'm not pasting code in (just asking general questions), I believe I'm allowed to use whatever.


What's even the point of this comment if you self-admittedly don't have access to the flagship tool that everyone has been using to make these big bold coding claims?

Isn't Claude Teams powerful? Does it not have access to Opus?

Pardon my ignorance.

I use GitHub Copilot, which has access to LLMs like Gemini 3, Sonnet/Opus 4.5, and GPT 5.2


Because the same claims of "AI tool does everything" are made over and over again.

The claims are being made for Claude Code, which you don't have access to.

I believe part of why Claude Code is so great is that it has the chance to catch its own mistakes. It can run compilers, linters, and browsers and check its own output. If it makes a mistake, it takes one or two extra iterations until it gets it right.

It's not "AI tool does everything", it's specifically Claude Code with Opus 4.5 is great at "it", for whatever "it" a given commenter is claiming.

SpiderMonkey being under 24k while V8 and JavaScriptCore kiss 60k is alarming. Why are we so far behind?

Firefox is optimized against Speedometer and similar "real world" benchmarks. This is a list of highly artificial benchmarks. Mozilla could attempt to optimize for them, but it wouldn't do much to make the web browsing experience better.

Basically, a lack of human resources to make it great. Note that Firefox is already at around 3% market share, and Mozilla keeps having other priorities.

Mozilla has its priorities ass backwards and does anything except what people actually want: developing Firefox.

+1 on this... the spend outside of development is kind of fascinating as a whole, IMO. They let Thunderbird languish for so long, pretty much nuked XULRunner, and dropped a bunch of pretty cool tech along the way.

No wonder my Firefox feels sluggish compared to Chrome on M3. Sigh.

From a web developer's perspective, I think the problem is that the people who built these systems were too nice.

Let me explain.

Web browsers have the option to reject syntax errors in HTML and immediately fail, but they don't. Similarly, I think compilers could be a bit slower, check a few more things, and refuse to produce an output if a pointer could be null.

I'm just a code monkey, so I don't have all the answers, but is it possible to have a rule like "when in doubt, throw an error"? The code might be legal, but we aren't here to play code golf. Why not deny such "clever" code with a vengeance and let us mortals live our lives? Can compilers do that by default?
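
Something like this already exists for the null-pointer case if you opt into a type checker. A sketch with Python and mypy (`find_user` is a made-up function):

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # Pretend this hits a database and may find nothing.
        return "George" if user_id == 1 else None

    name = find_user(2)

    # print(name.upper())    # mypy refuses this: "name" might be None

    if name is not None:
        print(name.upper())  # accepted: the None case has been ruled out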


Spoken like a web developer who doesn't know the full history.

Netscape Navigator did, in fact, reject invalid HTML. Then along came Internet Explorer and chose "render invalid HTML, do what I mean" as a strategy. People, my young naive self included, moaned about NN being too strict.

NN eventually switched to the tag soup approach.

XHTML 1.0 arrived in 2000, attempting to reform HTML by recasting it as an XML application. The idea was to impose XML’s strict parsing rules: well-formed documents only, close all your tags, lowercase element names, quote all attributes, and if the document is malformed, the parser must stop and display an error rather than guess. XHTML was abandoned in 2009.

When HTML5 was being drafted from 2004 onwards, the WHATWG actually had to formally specify how browsers should handle malformed markup, essentially codifying IE's error-recovery heuristics as the standard.

So your proposal has been attempted multiple times and been rejected by the market (free or otherwise that’s a different debate!).
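
The difference in miniature, using Python's standard-library parsers rather than any browser engine: the lenient HTML parser shrugs at tag soup, while an XML parser (the XHTML approach) stops with an error.

    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    tag_soup = "<ul><li>one<li>two</ul>"   # unclosed <li> elements

    # HTML parsing is lenient: it recovers and carries on.
    HTMLParser().feed(tag_soup)

    # XML parsing is strict: malformed input is an error, not a guess.
    try:
        ET.fromstring(tag_soup)
    except ET.ParseError as err:
        print("rejected:", err)            # e.g. "mismatched tag"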


We do that, and then people complain the type checker is too hard to satisfy, and go back to dynamic languages.

This... is why Rust exists, yes.
