mikepurvis's comments | Hacker News

The nix CLI almost exclusively pulls GitHub repos as zipballs. Not perfect, but certainly far faster than a real git clone.

That it supports fetching via Git as well as via various forge-specific tarballs, even for flakes, is pretty nice. It means that if your org uses Nix, you can fall back to distribution via Git, a solution that doesn't require you to stand up any new infra or tie you to any particular vendor, but once you get rolling it's an easy optimization to switch to downloading snapshots.

Most of the pain probably just comes from the hugeness of Nixpkgs, but I remain an advocate for the huge monorepo of build recipes.


Yes agreed. It’s possible to imagine some kind of cached-deltas scheme to get faster/smaller updates, but I suspect the folks who would have to build and maintain that are all on gigabit internet connections and don’t feel the complexity is worth it.

> It’s possible to imagine some kind of cached-deltas scheme to get faster/smaller updates

I think the snix¹ folks are working on something like this for the binary caches: the greater granularity of the content-addressing offers morally the same kind of optimization as delta RPMs, in that you can skip re-downloading the parts you already have.

But I'm not aware of any current efforts to let people download the Nixpkgs tree itself more efficiently. Somehow caching Git deltas would be cool. But I'd expect that kind of optimization to come from a company that runs a Git forge, if it's generally viable, and to benefit many projects other than Nix and Nixpkgs.

--

1: https://snix.dev/


Yes indeed. That said, Nix typically throws away the .git dir, so it would take some work to adapt a solution that operates at the Git repo level.

The ideal for Nix would be “I have all content at commit X and need the deltas for content at commit Y”, and I suspect Nix would be fairly unique in being able to benefit from that. To the point that it might actually make sense to just do full git repo syncs locally and have a local client serve those tarballs to the nix daemon.


Notably the games you listed are all f2p/esports games, and that does matter in terms of how much budget developers have to polish a realistic look vs ship a cartoon and call it the "art style".

I just upgraded to a 9700 XT to play ARC Raiders, and it's absolutely a feast for the eyes while also pioneering on several fronts, especially around bot movement and intelligence.


They're not targeting high-end PCs. They're targeting current-generation consoles, specifically the PS5 at 1080p. It just turns out that when you take those system requirements and put them on a PC, especially one with a 1440p or 2160p ultrawide, they translate into pretty top-of-the-line hardware. Particularly if, as a PC gamer, you expect to run it at 90fps and not the 30-40 that is typical for consoles.

Without disagreeing with the broad strokes of your comment, it feels like 4K should be considered standard for consoles nowadays - a very usable 4K HDR TV can be had for $150-500.

That's a waste of image quality for most people. You have to sit very close to a 4K display to be able to perceive the full resolution. On PC you could be 2 feet from a huge gaming monitor, but an extremely small percentage of console players have the TV size to viewing distance ratio where they would get much out of full 4K. Much better to spend the compute on a higher framerate or higher detail settings.

I think higher detail is where most of it goes. A lower resolution, upscaled image of a detailed scene, at medium framerate reads to most normal people as "better" than a less-detailed scene rendered at native 4k, especially when it's in motion.

> You have to sit very close to a 4k display to be able to perceive the full resolution.

Wait, are you sure you don't have that backward? IIUC, you don't[*] notice the difference between a 2K display and a 4K display until you get up to larger screen sizes (say 60+ inches, give or take a dozen inches; I don't have exact numbers :) ), and with those the optimal viewing range is like 4-8 feet away (depending on the screen size).

Either that, or am I missing something?

[*] Generally, anyway. A 4K resolution should definitely be visible at 1-2 feet away as noticeably crisper, but only slightly.


My first 4K screen was a 24" computer display and let me tell you, the difference between that and a 24" 1080p display is night and day from 1-2 feet away. Those pixels were gloriously dense. Smoothest text rendering you've ever seen.

I didn't use it for gaming though, and I've "downgraded" resolution to 2x 1440p (and much higher refresh rates) since then. But more pixels is great if you can afford it.

It's one thing to say you don't need higher resolution and fewer pixels works fine, but all the people in the comments acting like you can't see the difference makes me wonder if they've ever seen a 4K TV before.


I still use 4K@24", but unfortunately they're getting scarce; 4K@27" is where it's at now. I'll never go back to normal DPI, though. Every time I'm at the office it bugs me how bad regular DPI is.

That's fair, but it makes me wonder if perhaps it's not the resolution that makes it crisper but other factors that come along with that price point, such as refresh rate, HDR, LCD layer quality, etc.

For example, I have two 1920x1080 monitors, but one is 160 Hz and the other is only 60 Hz, and the difference is night and day between them.


It’s best to think about this as angular resolution. Even a very small screen could take up an optimal amount of your field of view if held close enough. You get the max benefit from a 4K display when it is about 80% of the screen's diagonal away from your eyes. So for a 28 inch monitor, that's a little less than 2 feet, a pretty typical desk setup.
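As a rough sanity check of that 80% rule (my arithmetic, taking the rule at face value rather than deriving it):

    0.8 × 28 in ≈ 22.4 in ≈ 1.9 ft   (28" desktop monitor)
    0.8 × 55 in ≈ 44.0 in ≈ 3.7 ft   (55" TV)
    0.8 × 77 in ≈ 61.6 in ≈ 5.1 ft   (77" TV)

So the desk case works out on its own, while a typical living-room couch is usually sitting well past the corresponding distance for the TV.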

Hmm... I still have my doubts, but you make a good point. I'll have to think on this a bit. Thanks for clarifying!

Assuming you can render natively at high FPS, 4k makes a bigger difference on rendered images than live action because it essentially brute forces antialiasing.

I think you're underestimating the computing power required to render (natively) at 4K. Some modern games can't even natively render at 1440p on high-end PCs.

1440p and 2160p are a total waste of pixels when 1080p is already at the level of human visual acuity. You can argue that 1440p is a genuine (slight) improvement for super crisp text, but not for a game. HDR and more ray tracing/path tracing, etc. are more sensible ways of pushing quality higher.

You sound like someone who doesn't have a 1440p or 2160p display.

I have a 77" S95D and my 1080p Switch looked horrible on it. Try it yourself with a 1080p screen bigger than 27 inches.


I also have a 77" OLED, and there's no question that 4K content looks noticeably better on it than 1080p content.

> 1440p and 2160p is a total waste of pixels, when 1080p is already at the level of human visual acuity.

Wow, what a load of bullshit. I bet you also think the human eye can't see more than 30 fps?

If you're sitting 15+ feet away from your screen, yeah, you can't tell the difference. But for most people, with their eyes only being 2-3 feet away from their monitor, the difference is absolutely noticeable.

> HDR and more ray tracing/path tracing, etc. are more sensible ways of pushing quality higher.

HDR is an absolute game-changer, for sure. Ray-tracing is as well, especially once you learn to notice the artifacts created by shortcuts required to get reflections in raster-based rendering. It's like bad kerning. Something you never noticed before will suddenly stick out like a sore thumb and will bother the hell out of you.


Text rendering alone makes it worthwhile. 1080p densities are not high enough to render text accurately without artefacts. If you double the pixel density, then it becomes (mostly) possible to render text weight accurately, and things like "rhythm" and "density", which real typographers concerned themselves with, start to become apparent.

I'm sorry, you need to go to an optician. I can see the pixels at a comfortable distance at 1440p.

Alternatively, you play modern games with incredibly blurry AA solutions. Try looking at something older from when AA actually worked.


You're probably looking up close at a small portion of the screen - you'll always be able to "see the pixels" in that situation. If you sit far back enough to keep the whole of the screen comfortably in your visual field, the argument applies.

Oh goodness no, I sit way further away from the screen than most people. People always drag my screens closer when they have to borrow my desk

You are absolutely wrong on this subject. Importantly, what matters is PPI, not resolution. 1080p would look like crap in a movie theater or on a 55" TV, for example, while it'll look amazing on a 7" monitor.
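To put numbers on that (my arithmetic, using the usual diagonal formula):

    PPI = sqrt(1920^2 + 1080^2) / diagonal ≈ 2203 / diagonal

    7" screen:   ~315 PPI
    24" monitor:  ~92 PPI
    55" TV:       ~40 PPI

Same 1080p resolution, wildly different pixel densities, which is why the distance at which it stops looking sharp varies so much.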

I'm the owner of a 2020 Volvo V60 that has been at Waterloo Volvo since March of this year, racking up an increasingly terrifying bill of various parts and wiring harnesses all ordered one after the other from Sweden.

Despite my frustrations with their shop, they have been very good about keeping me in a revolving door of 2025 and 2026 loaner cars, especially the XC40 and XC60. Even with the occasional audio glitch or freezing bug, I think they really have done a good job with the Android Automotive integration. It's nice having the car itself logged in and able to see my Google Maps search history, without having to actually have my phone on me or plugged in for CarPlay. For example, if another family member borrows the car, all that stuff just works for them too, without them having to separately configure their phone.

I would be nervous about how well it all will be supported over the long term, especially once these cars are >4yrs old and off lease. But at that point you can always fall back to projection.


But most "apps" are just webviews running overcomplicated websites in them, many of which are using all the crazy features that the GP post wants to strip out.

You have to do extra work to get the hardlink deduplication in the store though:

https://nix.dev/manual/nix/2.20/command-ref/nix-store/optimi...

Unlike in FHS distros where you get some of the separation for free with usr/lib vs usr/share, most nix packages don't have separate store paths for binary vs non-binary files. At most you'll get the headers and build scripts split off in a separate dev path.


Building for an FPGA shouldn't be any harder than building for Cortex MCUs, and there are lots of free/OSS toolchains and configurations for those.

Compiling RTL to run on an FPGA is way more complicated than compiling code to run on a CPU. Typically it has to meet timing, which requires detailed knowledge of logic placement. I'm not saying that's impossible, just that it's more complicated.

> shouldn’t

Is doing so much heavy lifting here that I need to ask: how much FPGA configuration have you done before?


Very little, just student projects in undergrad.

So yes, in that sense I'm talking out of my ass. But perhaps you can help enlighten me as to what it is that makes building FPGA firmware different from building MCU firmware.


Right, but the switch has internal buffers and the ability to queue those packets or apply backpressure. Resolving contention at that level is a very different matter from an electrical collision at L1.

Not as far as TCP is concerned it isn't. You sent the network a packet and it had to throw it away because something else sent packets at the same time. It doesn't care whether the reason was an electrical collision or not. A buffer is just a funny looking wire.

I think it’s more that the production lines that existed to build them in volume have all been long dismantled, so it would be prohibitively expensive and all the people involved would be doing it for the first time.

And even if you found the money to resurrect the production lines, modern regulations probably wouldn't look too kindly on making new consumer goods with several pounds of lead in each unit. Better set aside your morals and enough money to buy some politicians while you're at it.

At least in C++ there's also the matter that the type switching happens at compile time, not runtime. You get a new instance of the function for each type that it's called with.

That's why C++ libraries with generic interfaces need to have their full implementations in the header, or else provide explicit instantiations for known supported types. If you compile the library first and the project later, there's no way of knowing at library-compile time which instantiations of the template will later be needed, and the types might not even exist yet at that point.
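To make that concrete, here's a minimal single-file sketch of the two approaches; the names (square, the square.h/square.cpp split) are hypothetical, not from any real library:

    // Sketch only: the file split is indicated in comments.

    // --- square.h: option 1, ship the full template body in the header ---
    // Any translation unit that includes this can instantiate square<T>
    // for whatever T it needs, at its own compile time.
    template <typename T>
    T square(T x) {
        return x * x;
    }

    // --- square.cpp: option 2, explicit instantiation inside the library ---
    // Only these instantiations end up in the compiled library, so users
    // of a prebuilt binary are limited to this list of types.
    template int square<int>(int);
    template double square<double>(double);

    int main() {
        int a = square(3);       // instantiates (or reuses) square<int>
        double b = square(2.5);  // instantiates (or reuses) square<double>
        // square(SomeUserType{}) would only work with option 1, or by
        // adding another explicit instantiation before the library is built.
        return (a == 9 && b == 6.25) ? 0 : 1;
    }

Header-only is the flexible default; explicit instantiation is the opt-in route when you want to hide the implementation or pin down a closed set of supported types.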


You're right, the C++ generics implementation is different from Java's, for example. That's why it was hard for me to nail down the features generics should have in general... C++ is more like duck typing with compile-time checking, i.e. even with no traits you still get integers summed, for example, because the code generated from the headers compiles. In Java it's different: generics all have their types erased and casts are inserted later, unless they are bounded by other types.

I'm not a Java person, but my understanding is that in Java everything is boxed anyway, so the implementation of the "generic" logic is always just dealing with a pointer-sized entity and it's a matter of whether the dereferenced object on the other end of that has the required fields/methods.

> Java everything is boxed anyway

Not true, there's int and Integer.


Right, but you can't use them as type arguments to Java's generics, so they're not really relevant in this context.

