Hacker News | new | past | comments | ask | show | jobs | submit | foltik's comments | login

They seem to have replaced Mega Bloks with Legos, not skyscraper building materials.

It’s a shame this isn’t the default.

Yet more slop that amusingly tries to rebrand low-pass filtering and dynamic feature selection as “strategic ignorance”

I understand — the reviewers clearly see it differently, which is why they’ve been carefully evaluating my paper for the past 15 days.

who are the reviewers? Statler and Waldorf?

Agree that just being hand-written doesn’t imply quality, but based on my priors, if something obviously looks like vibe-code it’s probably low quality.

Most of the vibe-code I’ve seen so far appears functional to the point that people will defend it, but if you take a closer look it’s a massively overcomplicated rat’s nest that would be difficult for a human to extend or maintain. Of course you could just use more AI, but that would only amplify these problems further.


Why doesn’t Linux just add a kconfig option that enables TCP_NODELAY system-wide? It could be enabled by default on modern distros.

Looks like there’s a sysctl option on BSD/macOS, but on Linux it must be done at the application level?
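For reference, at the application level it’s a one-line setsockopt per socket. A minimal Python sketch:

```python
import socket

# Disable Nagle's algorithm on a single socket. This is per-socket,
# per-application; as far as I know Linux has no system-wide sysctl
# equivalent to the BSD/macOS one mentioned above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# The option can be read back to confirm it took effect.
assert s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
s.close()
```

The option can be set before connect() and is inherited through the connection’s lifetime.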

Perhaps you can set up an iptables rule to add the bit.

I have a python script [0] which builds and statically links my toolbox (fish, neovim, tmux, rg/fd/sd, etc.) into a self-contained --prefix which can be rsynced to any machine.

It has an activate script which sets PATH, XDG_CONFIG_HOME, XDG_DATA_HOME, and friends. This way everything runs out of that single dir and doesn’t pollute the remote.

My ssh RemoteCommand then just checks for and calls the activate script if it exists. I get dropped into a nice shell with all my config and tools wherever I go, without disturbing others’ configs or system packages.

[0] https://github.com/foltik/dots
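For illustration, the environment such an activate script exports might look like this (a minimal Python sketch; the prefix layout and the `activate_env` helper are hypothetical, not taken from the linked repo):

```python
import os

def activate_env(prefix: str) -> dict:
    """Build env vars that confine tools and their config/data to
    `prefix`, leaving the remote machine's own dotfiles untouched.
    (Hypothetical directory layout; the real script may differ.)"""
    return {
        # Put the toolbox's binaries first on PATH.
        "PATH": os.path.join(prefix, "bin") + os.pathsep + os.environ.get("PATH", ""),
        # Redirect XDG lookups so tools read config from the prefix.
        "XDG_CONFIG_HOME": os.path.join(prefix, "config"),
        "XDG_DATA_HOME": os.path.join(prefix, "share"),
        "XDG_STATE_HOME": os.path.join(prefix, "state"),
    }

env = activate_env("/tmp/toolbox")
```

Because everything resolves under the prefix, deleting that one directory removes every trace of the toolbox from the remote.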


Is this available somewhere? I'm curious to see how this works.

Published a minimal version and added a link! This implements everything I mentioned except for static linking, so YMMV depending on your C/CXX toolchain and installed packages.

Thank you!

Oh the irony.

27us and 1us are both an eternity and definitely not SOTA for IPC. The fastest possible way to do IPC is with a shared memory resident SPSC queue.

The actual (one-way cross-core) latency on modern CPUs varies by quite a lot [0], but a good rule of thumb is 100ns + 0.1ns per byte.

This measures the time for core A to write one or more cache lines to a shared memory region, and core B to read them. The latency is determined by the time it takes for the cache coherence protocol to transfer the cache lines between cores, which shows up as a number of L3 cache misses.

Interestingly, at the hardware level, in-process vs inter-process is irrelevant. What matters is the physical location of the cores which are communicating. This repo has some great visualizations and latency numbers for many different CPUs, as well as a benchmark you can run yourself:

[0] https://github.com/nviennot/core-to-core-latency
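To make the shared-memory SPSC idea concrete, here’s a hedged sketch of the data layout (names and layout are my own, not from any particular library). A real low-latency queue would use cache-line-aligned atomic head/tail indices and busy-polling; CPython has neither real atomics nor the speed, so this only illustrates the structure, not the performance:

```python
from multiprocessing import shared_memory
import struct

SLOT = 64      # one slot per 64-byte cache line (assumed line size)
NSLOTS = 1024
HDR = 16       # 8-byte head index + 8-byte tail index

class SpscQueue:
    """Single-producer/single-consumer ring buffer over shared memory."""
    def __init__(self, name=None):
        size = HDR + SLOT * NSLOTS
        if name is None:  # producer creates; consumer attaches by name
            self.shm = shared_memory.SharedMemory(create=True, size=size)
        else:
            self.shm = shared_memory.SharedMemory(name=name)

    def push(self, data: bytes) -> bool:
        assert len(data) <= SLOT  # each message must fit in one slot
        head, tail = struct.unpack_from("QQ", self.shm.buf, 0)
        if head - tail == NSLOTS:
            return False  # full
        off = HDR + (head % NSLOTS) * SLOT
        self.shm.buf[off:off + len(data)] = data
        struct.pack_into("Q", self.shm.buf, 0, head + 1)  # publish slot
        return True

    def pop(self):
        head, tail = struct.unpack_from("QQ", self.shm.buf, 0)
        if head == tail:
            return None  # empty
        off = HDR + (tail % NSLOTS) * SLOT
        data = bytes(self.shm.buf[off:off + SLOT])
        struct.pack_into("Q", self.shm.buf, 8, tail + 1)  # release slot
        return data
```

The consumer opens the same region with `SpscQueue(name=...)` using the producer’s `shm.name`; each one-way transfer then costs roughly the cross-core cache-line handoff described above.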


I was really asking what "IPC" means in this context. If you can just share a mapping, yes it's going to be quite fast. If you need to wait for approval to come back, it's going to take more time. If you can't share a memory segment, even more time.


No idea what this vibe code is doing, but two processes on the same machine can always share a mapping, though your PL of choice may make it hard, and there aren’t many libraries that make it easy either. If it’s not two processes on the same machine, I wouldn’t really call it IPC.

Of course a round trip will take more time, but it’s not meaningfully different from two one-way transfers; you can just multiply the numbers I gave by two. Generally it’s better to organize a system as a pipeline if you can, though, rather than ping-ponging cache lines back and forth doing a bunch of RPC.


> space stops being rare air — and becomes infrastructure

This reeks of AI slop. Plenty of “it’s not just X, it’s Y” in there too.


Could you say more about which extensions you’re referring to? I’ve often heard this take, but found details vague and practical comparisons hard to find.


Dynamic rendering, timeline semaphores, upcoming guaranteed optimality of general image layouts, just to name a few.

The last one has profound effects for concurrency, because it means you don’t have to serialize texture reads between SAMPLED and STORAGE.


Not the same commenter, but I’d guess: enabling some features for bindless textures and also vk 1.3 dynamic rendering to skip renderpass and framebuffer juggling

