Agree that just being hand-written doesn’t imply quality, but based on my priors, if something obviously looks like vibe-code it’s probably low quality.
Most of the vibe-code I’ve seen so far appears functional to the point that people will defend it, but if you take a closer look it’s a massively overcomplicated rat’s nest that would be difficult for a human to extend or maintain. Of course you could just use more AI, but that would only further amplify these problems.
I have a python script [0] which builds and statically links my toolbox (fish, neovim, tmux, rg/fd/sd, etc.) into a self-contained --prefix which can be rsynced to any machine.
It has an activate script which sets PATH, XDG_CONFIG_HOME, XDG_DATA_HOME, and friends. This way everything runs out of that single dir and doesn’t pollute the remote.
My ssh RemoteCommand then just checks for and calls the activate script if it exists. I get dropped into a nice shell with all my config and tools wherever I go, without disturbing others’ configs or system packages.
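The ssh side is just a couple of lines of config. Roughly like this (the ~/.toolbox path and the fish fallback are placeholders for whatever your prefix and shell are; the activate script is the part that exports PATH, XDG_CONFIG_HOME, XDG_DATA_HOME into the prefix):

    # ~/.ssh/config -- "~/.toolbox" is a made-up prefix location
    Host *
        # RemoteCommand replaces the login shell, so request a TTY explicitly
        RequestTTY yes
        # Source the activate script if the synced prefix exists and exec the
        # toolbox shell; otherwise fall back to the normal login shell
        RemoteCommand sh -c '[ -r ~/.toolbox/activate ] && . ~/.toolbox/activate && exec fish; exec "$SHELL" -l'

One caveat: a blanket Host * RemoteCommand may also interfere with non-interactive ssh invocations (scp, git over ssh, etc.), so you may want to scope it to specific hosts or a dedicated Host alias.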
Published a minimal version and added a link! This implements everything I mentioned except for static linking, so YMMV depending on your C/CXX toolchain and installed packages.
27us and 1us are both an eternity and definitely not SOTA for IPC. The fastest possible way to do IPC is a single-producer, single-consumer (SPSC) queue resident in shared memory.
The actual (one-way, cross-core) latency on modern CPUs varies quite a lot [0], but a good rule of thumb is 100ns + 0.1ns per byte: a bit over 100ns for a single 64-byte cache line, around 200ns for a 1KB message.
This measures the time for core A to write one or more cache lines to a shared memory region, and core B to read them. The latency is determined by the time it takes for the cache coherence protocol to transfer the cache lines between cores, which shows up as a number of L3 cache misses.
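To make that concrete, here’s a minimal sketch of the kind of shared-memory SPSC ring I mean, using POSIX shm and C11 atomics. The /spsc_demo name, the slot count, and the message size are all arbitrary; a real implementation would add error handling, cleanup via shm_unlink, batching, and smarter waiting than a raw spin.

    /* Minimal shared-memory SPSC ring: one writer process, one reader process.
     * Compile: cc -O2 spsc.c -o spsc   (add -lrt on older glibc)
     * Run:     ./spsc reader &  ./spsc writer
     * Error handling and shm_unlink cleanup omitted for brevity. */
    #include <fcntl.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SLOTS 1024            /* power of two so we can mask instead of mod */
    #define MSG   64              /* one cache line per message */

    typedef struct {
        _Atomic uint64_t head;    /* written by producer, read by consumer */
        char pad1[64 - sizeof(_Atomic uint64_t)];
        _Atomic uint64_t tail;    /* written by consumer, read by producer */
        char pad2[64 - sizeof(_Atomic uint64_t)];
        char slots[SLOTS][MSG];
    } ring_t;

    int main(int argc, char **argv) {
        int writer = argc > 1 && strcmp(argv[1], "writer") == 0;
        /* "/spsc_demo" is an arbitrary name for this sketch */
        int fd = shm_open("/spsc_demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(ring_t));
        ring_t *r = mmap(NULL, sizeof(ring_t), PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        if (writer) {
            for (uint64_t i = 0; i < 1000000; i++) {
                uint64_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
                /* spin while the ring is full */
                while (head - atomic_load_explicit(&r->tail, memory_order_acquire) == SLOTS)
                    ;
                memcpy(r->slots[head & (SLOTS - 1)], &i, sizeof i);
                /* release: publish the slot contents before bumping head */
                atomic_store_explicit(&r->head, head + 1, memory_order_release);
            }
        } else {
            for (uint64_t i = 0; i < 1000000; i++) {
                uint64_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
                /* spin until the producer has published the next slot */
                while (atomic_load_explicit(&r->head, memory_order_acquire) == tail)
                    ;
                uint64_t v;
                memcpy(&v, r->slots[tail & (SLOTS - 1)], sizeof v);
                atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
            }
            printf("consumed 1M messages\n");
        }
        return 0;
    }

Each published slot crossing from the writer’s core to the reader’s core is exactly the cache-line handoff the 100ns figure describes.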
Interestingly, at the hardware level, in-process vs inter-process is irrelevant. What matters is the physical location of the cores which are communicating. This repo has some great visualizations and latency numbers for many different CPUs, as well as a benchmark you can run yourself:
I was really asking what "IPC" means in this context. If you can just share a mapping, yes it's going to be quite fast. If you need to wait for approval to come back, it's going to take more time. If you can't share a memory segment, even more time.
No idea what this vibe code is doing, but two processes on the same machine can always share a mapping, though maybe your PL of choice is incapable. There aren’t many libraries that make it easy either. If it’s not two processes on the same machine I wouldn’t really call it IPC.
Of course a round trip will take more time, but it’s not meaningfully different from two one-way transfers; just double the numbers above. Generally, though, it’s better to organize a system as a pipeline if you can, rather than ping-ponging cache lines back and forth doing a bunch of RPC.
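On the “can always share a mapping” point, the plumbing really is tiny. Here’s the related-process version with an anonymous MAP_SHARED mapping; two unrelated processes would use shm_open or memfd plus fd passing instead, as in the ring sketch above.

    /* Parent and child sharing one page through an anonymous MAP_SHARED mapping. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        if (fork() == 0) {           /* child writes into the shared page */
            strcpy(buf, "hello from the child");
            return 0;
        }
        wait(NULL);                  /* parent waits, then reads the same page */
        printf("parent sees: %s\n", buf);
        return 0;
    }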
Could you say more about which extensions you’re referring to? I’ve often heard this take, but found details vague and practical comparisons hard to find.
Not the same commenter, but I’d guess: enabling some features for bindless textures, plus Vulkan 1.3 dynamic rendering to skip the render pass and framebuffer juggling.
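For the curious, this is roughly what enabling those looks like at device creation against Vulkan 1.3 core. A sketch only: the exact set of descriptor-indexing bits depends on your bindless scheme, and the feature-support query via vkGetPhysicalDeviceFeatures2 is omitted.

    /* Sketch: enabling descriptor indexing ("bindless") and dynamic rendering
     * when creating a Vulkan 1.3 device. Error handling omitted. */
    #include <vulkan/vulkan.h>

    VkDevice create_device(VkPhysicalDevice phys, uint32_t queue_family) {
        float prio = 1.0f;
        VkDeviceQueueCreateInfo queue = {
            .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
            .queueFamilyIndex = queue_family,
            .queueCount = 1,
            .pQueuePriorities = &prio,
        };

        /* Descriptor indexing: the usual "bindless textures" feature set. */
        VkPhysicalDeviceVulkan12Features feat12 = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES,
            .runtimeDescriptorArray = VK_TRUE,
            .descriptorBindingPartiallyBound = VK_TRUE,
            .descriptorBindingSampledImageUpdateAfterBind = VK_TRUE,
            .shaderSampledImageArrayNonUniformIndexing = VK_TRUE,
        };

        /* Dynamic rendering: no VkRenderPass / VkFramebuffer objects needed. */
        VkPhysicalDeviceVulkan13Features feat13 = {
            .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_3_FEATURES,
            .pNext = &feat12,
            .dynamicRendering = VK_TRUE,
        };

        VkDeviceCreateInfo info = {
            .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
            .pNext = &feat13,
            .queueCreateInfoCount = 1,
            .pQueueCreateInfos = &queue,
        };

        VkDevice device = VK_NULL_HANDLE;
        vkCreateDevice(phys, &info, NULL, &device);
        return device;
    }

With dynamicRendering on, you record vkCmdBeginRendering/vkCmdEndRendering with a VkRenderingInfo instead of beginning a render pass, and pipelines take a VkPipelineRenderingCreateInfo in their pNext chain rather than a VkRenderPass handle.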