
> Scalability is always going to be poor when writers attempt to modify the same object, no matter the solution you implement.

MVCC.


Well, yes, that's one way of avoiding mutating the same object of course.


Technically it isn't, because the object will eventually be mutated; rather, MVCC is one way of achieving scalability in a multiple-writers scenario.


Pretty cool. To make it scale, they are building their own deterministic hypervisor [0], and also a new distributed database to support their workloads more efficiently [1].

[0] https://antithesis.com/blog/deterministic_hypervisor

[1] https://antithesis.com/blog/2025/testing_pangolin


Yeah, I think so too now that I've read some documentation about it. It appears that the main issue with the spinlock pattern is that it incurs "a severe performance penalty when exiting the [spinlock] loop because it [the CPU] detects a possible memory order violation." [0]

~10 years ago, on Haswell, PAUSE took ~9 cycles to retire; from Skylake onward, with some exceptions, it takes an order of magnitude more, ~140 cycles.

These numbers alone suggest that it really disrupts the CPU pipeline, perhaps the branch predictor (?) or speculative execution (?) or both (?), such that it basically forces the CPU to flush the whole pipeline. That's at least how I read it. I'll remember this as the "damage control" instruction from now on.

[0] https://www.felixcloutier.com/x86/pause


Some things from the article are debatable for sure, and some are maybe missing, like the PAUSE instruction detail you mention, which I also hadn't been aware of, but generally speaking I thought it was really good content. Lean systems-engineering skills applied to real-world problems. I especially appreciated the examples of large-scale infra codebases doing it in practice.


> There is a reason default lock implementations from OS don't spin even a little bit.

glibc's pthread mutex takes a user-space atomic fast path (and the adaptive variant even spins briefly) precisely to avoid the syscall cost in the uncontended case.


WDYM? Vector is an abstraction over dynamically sized arrays, so of course it uses the heap to store its elements.


I think usefulcat interpreted "std::vector<int> allocated and freed on the stack" as creating a default std::vector<int> and then destroying it without pushing elements to it. That's what their godbolt link shows, at least, though to be fair MSVC seems to match the described GCC/Clang behavior these days.


Codegen from Matlab/Simulink/whatever is good for proof-of-concept design. It largely helps engineers who are not very good at coding to hypothesize about different algorithmic approaches. The engineers who actually implement that algorithm in a system that will be deployed come from a different group with different domain expertise.


You do realize that there's a handful of literally the same people here on HN continuously evangelizing one technology by constantly dissing the other? Because of the pervasiveness of such accounts/comments, it invites other people, myself included, to counter-argue, because most of the time the reality they're trying to portray is misrepresented or simply wrong. This is harmful and obviously invites a flame war, so how is that not, by the same principle you applied to the above account, a guideline breach too?


We act on what we see, and we see what people make us aware of via flags and emails.

Comments like yours are difficult because they’re not actionable or able to be responded to in a way you’ll find satisfying if you don’t link to the comments that you mean.

Programming language flamewars have always been lame on HN and we have no problem taking action against perpetrators when we’re alerted to them.


So none of the functions you implement have in/out parameters?


If you use few, you can perhaps have them all in registers (not sure what arch they're on?)


Right? If it's really true that some random person without compiler-engineering experience implemented a completely new feature in the OCaml compiler by prompting an LLM to produce the code for him, then I think it really is remarkable.


Oh wow, is that what you got from this?

It seems more like an inexperienced guy asked the LLM to implement something and the LLM just output what an experienced guy had done before, and it even gave him the credit.


Copyright notices and signatures in generative AI output are generally a result of the expectation created by the training data that such things exist, and are generally unrelated to how much the output corresponds to any particular piece of training data, and especially to who exactly produced that work.

(It is, of course, exceptionally lazy to leave such things in if you are using the LLM to assist you with a task, and it can cause problems of false attribution. Especially in this case, where it seems to have just picked the name of one of the project's maintainers.)


Did you take a look at the code? Given your response, I figure you did not, because if you had you would see that the code was _not_ cloned but genuinely produced by the LLM.


It’s one thing for you (yes, you, the user using the tool) to generate code you don’t understand for a side project or one off tool. It’s another thing to expect your code to be upstreamed into a large project and let others take on the maintenance burden, not to mention review code you haven’t even reviewed yourself!

Note: I, myself, am guilty of forking projects, adding some simple feature I need with an LLM quickly because I don’t want to take the time to understand the codebase, and using it personally. I don’t attempt to upstream changes like this and waste maintainers’ time until I actually take the time myself to understand the project, the issue, and the solution.


What are you talking about? It was a ridiculously useful debugging feature that nobody in their right mind would block over "added maintenance". The MR was rejected purely for political/social reasons.

