You are missing that each update takes AGES while it tortures your disk patching the files (on my machine it takes 15 minutes or so, and that's on an SSD). So I agree that this is careless, and it reminds me of the GTA5 startup time that was fixed by a dedicated player who finally had enough and reverse-engineered the problem (see https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...). I still find these things hard to accept.
Steam update durations depend on compression + CPU performance + SSD I/O. Things get harder when the disk is almost full and live defragmentation kicks in to free up space for contiguous files. Some SSDs are fast enough to keep up with such a load, but a lot of them will quickly hit their DRAM limits, and suddenly that advertised gigabyte-per-second write speed isn't all that fast. Bonus points for when your SSD has no heatsink and no air moving over it, making the controller throttle hard.
Patching 150GiB with a compressed 15GiB download just takes a lot of I/O. The alternative is downloading a fresh copy of the 150GiB install, but those playing on DSL will probably rather let their SSD whizz a few minutes longer than spend another day downloading updates.
If your SSD is slower than your internet capacity, deleting install files and re-downloading the entire game will probably save you some time.
11% still play HD2 with a spinning drive? I would've never guessed that. There's probably some vicious circle thing going on: because the install size is so big, people need to install it on their secondary, spinning drive...
Even though I have two SSDs in my main machine, I still use a hard drive as overflow for games that I judge are not SSD-worthy.
Because it's a recent 20TB HDD, read speeds approach 250MB/s. I've also specifically partitioned the beginning of the disk just for games, so it can sustain full transfer speed without files falling onto the slower tracks; the rest of the disk is partitioned for media files that won't care much about the speed loss. It's honestly fine for the vast majority of games.
> It's honestly fine for the vast majority of games.
Yes, because they apparently still duplicate data so that the terrible IOPS of spinning disks doesn't factor in as much. You people need to stop with this so that we can all have smaller games again! ;-) <--- (IT'S A JOKE)
PrimoCache is awesome, highly recommended. I'd only say to make sure your computer is rock stable before installing it; in my limited experience it exponentially increases the risk of filesystem corruption if your computer is unstable.
It is no surprise to me that people still have to use HDD for storage. SSD stopped getting bigger a decade plus ago.
SSD sizes are still only equal to the HDD sizes that were available and common in 2010 (a couple of TB). SSD size increases (availability + price decreases) for consumer form factors have entirely stopped. There is no more progress for SSDs because quad-level cells are as far as the charge-trap tech can be pushed, and most people no longer own computers: they have tablets or phones, or if they have a laptop it has 256GB of storage and everything is done in the cloud or with an octopus of (small) externals.
SSDs did not "stop getting bigger a decade plus ago." The largest SSD announced in 2015 was 16TB. You can get 128-256TB SSDs today.
You can buy 16-32TB consumer SSDs on NewEgg today, or 8TB in the M.2 form factor. In 2015, the largest M.2 SSDs were around 1TB. That's merely a decade. Go a decade "plus" back, to 15 years ago, and SSDs were still tiny.
Perhaps my searching skills aren't great, but I don't see any consumer SSDs over 8TB. Can you share a link?
It was my understanding that SSDs have plateaued due to wattage restrictions across SATA and M.2 connections. I've only seen large SSDs in U.3 and E[13].[SL] form factors, which I would not call consumer.
The mainstream drives are heavily focused on lowering the price. Back in the 2010s, SSDs in the TB range were hundreds of dollars; today you can find them for $80 without breaking a sweat[1]. If you're still willing to spend $500, you can get 8TB drives[2].
I bought 4x the storage (1TB -> 4TB) for half the price after my SSD died after 5 years (thanks, Samsung). What do you mean, they 'stopped getting bigger'?
Sure, there are some form-factor limitations, you can only shove so many chips onto an M.2 stick, but you can get U.2 drives that are bigger than the biggest HDD (though the price is pretty eye-watering).
By "stopped getting bigger" I mean people still think 4TB is big in 2025, just like in 2010 when 3-4TB was the max size for consumer storage devices. U.2/U.3 is not consumer yet, unfortunately; I have to use M.2 NVMe to U.2 adapters, which are not great. And as you say, the low number of PCIe lanes on consumer CPUs and motherboards has restricted the number of disks until just recently. At least in 2025 we can have more than two NVMe storage disks again without disabling a PCIe slot.
I think this is more a symptom of data bloat decelerating than anything else. Consumers just don't have TBs of data. The biggest files most consumers have will be photos and videos that largely live on their phones anyway. Gaming is relatively niche and there just isn't that much demand for huge capacity there, either -- it's relatively easy to live with only ~8 100GB games installed at the same time. Local storage is just acting as a cache in front of Steam, and modern internet connections are fast enough that downloading 100GB isn't that slow (~14 minutes at gigabit speeds).
So when consumers don't have (much) more data on their PCs than they had in 2015, why would they buy bigger devices than they did in 2015? Instead, as a sibling commenter has pointed out, prices have improved dramatically, and device performance has also improved quite a bit.
(But it's also true that the absolute maximum sized devices available are significantly larger than 2015, contradicting your initial claim.)
I read that SSDs don't actually guarantee to keep your data if powered off for an extended period of time, so I actually still do my backups on HDDs. Someone please correct me if this is wrong.
A disk that is powered off is not holding your data, regardless of whether it is an HDD or SSD, or whether it sits in a redundant RAID. Disks are fundamentally a disposable medium. If you don't have them powered on, you have no way to monitor for failures and replace a drive when something goes wrong; the data will just disappear someday without you noticing.
Tape, M-DISC, microfilm, and etched quartz are the only modern media that are meant to be left in storage without needing to be babysat, and even then only in climate-controlled warehousing.
Do you power off your backup HDDs for extended periods of time (months+)? That's a relatively infrequent backup interval. If not, the power-off issue isn't relevant to you.
(More relevant might be that backups are a largely sequential workload and HDDs are still marginally cheaper per TB than QLC flash.)
Which doesn't matter at all in the case of Helldivers 2, as it's only available for PC, PS5, and XBS/X. That's a good part of why PC players were so irritated, actually: when all this blew up a few months ago, the PC install size was ~133 GB vs the consoles' 36 GB.
Helldivers 2 is only on current gen consoles so older ones are beside the point, the current ones use NVMe SSDs exclusively. PC is the only platform where HDDs or SATA SSDs might still come up.
The nice thing is that Emacs 30.1 now has much better support for touchscreen events. It will take some time for packages to make use of that, but at least it is now possible. For instance, you should now be able to increase/decrease text size by pinching.
> In fact, for many applications malfunctioning is better than crashing — particularly in the embedded world where Rust wants to be present.
Not a fact. Particularly in the embedded world, crashing is preferable to malfunctioning, as many embedded devices control things that might hurt people, directly or indirectly.
> If a pacemaker stops — telling a victim “but the memory was not corrupted in the crash” is a weak consolation.
If a pacemaker suddenly starts firing at 200Hz, telling a victim "but at least it didn't crash" is a weak consolation. A stopping pacemaker is almost always preferable to a malfunctioning one, as most people with pacemakers still have sufficient natural rhythm to survive this for long enough to get help.
> We actually had a recent Cloudflare outage caused by a crash on unwrap() function
Please read the whole article. If the unwrap hadn't caused an exit, the process would've run out of memory, leading to a much less deterministic behavior which is much harder to diagnose and fix. I always prefer an early exit with a clear error instead of getting killed by the OOM reaper.
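To make that concrete, here's a minimal Rust sketch (hypothetical service and config path, not from the article) of what "an early exit with a clear error" looks like, as opposed to limping along until the OOM killer picks a victim at random:

```rust
use std::fs;

fn main() {
    // Hypothetical config path, just to illustrate the point: fail loudly and
    // early with a message that names the cause, instead of continuing in a
    // broken state until the OOM reaper produces a far less diagnosable crash.
    let path = "/etc/myservice/features.toml";
    let config = fs::read_to_string(path)
        .unwrap_or_else(|e| panic!("refusing to start: cannot read {path}: {e}"));
    println!("loaded {} bytes of config", config.len());
}
```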
Something else I don't usually see mentioned: a system hitting a fail-safe is a lot easier to detect and handle from the outside than one that just enters an unknown, invalid state.
Like, if the rule were "always keep running", then hospital equipment power supplies wouldn't have circuit breakers that cut the power when something is wrong. But cutting power is a lot easier for the backup power supply to detect, so it can fully take over.
It's funny, because I have seen the opposite.
Engineer: "it crashed because it dereferenced a null pointer"
Boss: "add null pointer checks everywhere!"
... and because it used "if" instead of "assert", the null pointer became a valid argument and a tolerable state of the running software, which displaced the locus of crashes far from the source of the issue. Moral of the story: use "assert" to make it crash as early as possible and debug THAT. You want to restrict the representable states in the software, not expand them by adding null checks everywhere.
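A rough Rust transposition of that moral (hypothetical Widget type and render functions; the original story is about C null pointers, so Option stands in for the pointer here):

```rust
struct Widget { id: u32 }

// The "add null checks everywhere" style: a missing widget silently becomes
// a no-op, so the caller's bug surfaces somewhere far away (or never).
fn render_tolerant(w: Option<&Widget>) {
    if let Some(w) = w {
        println!("rendering widget {}", w.id);
    }
}

// The "crash early" style: the precondition is part of the contract, and a
// violation fails right where the bad call happened, which is what you debug.
fn render_strict(w: Option<&Widget>) {
    let w = w.expect("render_strict called without a widget (caller bug)");
    println!("rendering widget {}", w.id);
}

fn main() {
    let w = Widget { id: 7 };
    render_tolerant(Some(&w));
    render_strict(Some(&w));
}
```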
Yeah, the blog post is a very confused write-up. I saw lots of similar posts on LinkedIn recently, with quite a lot of likes and echo chamber comments. It’s just hilarious how a narrative emerges that reinforces biases due to ignorance. There must be a name for that sort of fallacy.
I love writing Rust precisely because I can express failures more explicitly; it's the transparency that wins here.
I'd frame Cloudflare's issue more as a PR review and QA problem, maybe some AI complacency. But it's not a problem with Rust.
> If the unwrap hadn't caused an exit, the process would've run out of memory
It was trying to push an element into a full ArrayVec. The options are:
- Blindly write off the end of the array. Obviously no one wants this, despite the decades of tradition...
- Panic and unwind, as the program actually did in this case.
- Return an error.
Some folks assume that returning an error instead of unwinding would've been better. But my assumption is that the outcome would've been the same. I think the issue came up when loading configs, which isn't usually a recoverable situation. If you have an "invalid config error", you're probably just going to return that all the way up, which is effectively the same outcome as unwinding: your process exits with an error code. There are cases where the difference matters a lot, but I don't think this was one of them.
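For what it's worth, here's a minimal sketch of those options using the `arrayvec` crate (add `arrayvec = "0.7"` to Cargo.toml). This is not the actual Cloudflare code, just an illustration of how the error-returning path ends up in the same place as the panic when the caller can't recover:

```rust
use arrayvec::ArrayVec;

fn main() {
    let mut features: ArrayVec<u32, 2> = ArrayVec::new();
    features.push(1);
    features.push(2);

    // Returning an error: the caller decides, but if "config over capacity"
    // just propagates up to main, the process still exits with an error code,
    // much like unwinding would.
    if let Err(e) = features.try_push(3) {
        eprintln!("invalid config: {e}");
        std::process::exit(1);
    }

    // Panicking: this is what plain `push` does on a full ArrayVec, and it is
    // roughly what the unwrap in the incident amounted to.
    // features.push(3);
}
```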
The real gap seems to be why it took hours for folks to notice that this service was crash looping. That should normally be really prominent in alerts and dashboards. (Probably part of the story is that alerts were firing all over the place. Tough day at the office.)
>Not a fact. Particularly in the embedded world, crashing is preferable to malfunctioning, as many embedded devices control things that might hurt people, directly or indirectly.
It really depends on how deeply "Turing" your mechanism is. By "Turing" I mean "the behavior is totally dependent on every single bit of previous information". For a reliable system, Turing-completeness is unacceptable for individual functions, i.e. each should produce a correct result in a finite amount of time no matter what happened in the past. In particular, that's why modern real-time systems cannot be fit into a Turing machine: a Turing machine has no interrupts.
>If a pacemaker suddenly starts firing at 200Hz, telling a victim "but at least it didn't crash" is a weak consolation. A stopping pacemaker is almost always preferable to a malfunctioning one
You're almost making an excuse for the general unreliability of programs. Mainstream C is unreliable, C++ is unreliable, Rust is unreliable. I can agree that Rust is no less reliable than C/C++, but it is definitely less reliable than some other languages, e.g. BEAM-based ones. I mean, some time ago in the Rust standard library documentation I actually read "under these and these conditions the following code will deadlock. But deadlock is not undefined behavior, so it's ok". The designers of Rust did not really try to support any kind of "recover and continue" mode of operation. Yes, you can catch the panic, but it will irreversibly poison some data.
Also, stopping is a state which can definitely happen anyway and must be planned for. The pacemaker's wires can be dislodged or damaged, and power sources fail; these things will stop the pacemaker regardless of how much you love the "just keep going" approach to software engineering. Which means medics have already thought about what they're going to do when this happens to a patient, and "the software failed" goes on the same list as "the 10-year battery only lasted 8 years": undesirable but hardly impossible scenarios that constitute a medical emergency.
> Not a fact. Particularly in the embedded world, crashing is preferable to malfunctioning, as many embedded devices control things that might hurt people, directly or indirectly.
Strong agree on this one. Embedded systems are usually designed to "fail safe" on a crash: the watchdog will trip, and a hardware reset will put everything into a deterministic, known-safe state.
What you want, above all else, is not to fall into undefined behavior. That's the beauty of `unsafe`: it bounds UB into small boxes you know to test the hell out of.
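As a small illustration of what "bounding UB into small boxes" can look like (an illustrative function, not from any particular code base): the `unsafe` block is tiny, sits behind a checked precondition, and is the only part you need to audit and test hard.

```rust
// The function exposed to the rest of the program is safe to call with any
// input; the UB hazard lives inside one small, auditable block.
fn first_or_zero(data: &[u32]) -> u32 {
    if data.is_empty() {
        return 0;
    }
    // SAFETY: the slice is non-empty, so index 0 is in bounds.
    unsafe { *data.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_or_zero(&[]), 0);
    assert_eq!(first_or_zero(&[42, 7]), 42);
    println!("ok");
}
```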
A pacemaker that crashes can restart, log a diagnostic, continue operating, and report the diagnostic remotely. (Yes, really: Bluetooth to a phone or to a dedicated relay gadget.)
A pacemaker in an unknown state goes forever undiagnosed.
It's honestly mind-boggling how people react to this. Rust turns unknown failures in C and C++ into known failures, and suddenly the C/C++ people start caring about the failures but attribute them to the new language, even though the same failures are secretly lurking in their C/C++ code bases. It's kind of like trying to silence a whistleblower.
>Please read the whole article. If the unwrap hadn't caused an exit, the process would've run out of memory, leading to a much less deterministic behavior which is much harder to diagnose and fix. I always prefer an early exit with a clear error instead of getting killed by the OOM reaper.
I am running into an undiagnosable CUDA "illegal memory access" problem in vLLM, a code base that is a mix of Python and CUDA (via PyTorch). At a certain load, something appears to either overflow or corrupt memory, and vLLM restarts, which takes a minute because it has to reload several dozen GBs into memory and then rerun the CUDA graph optimizations.
The pacemaker argument is complete nonsense, because the pacemaker must keep working even if it crashes. You can forcibly induce crashes in the pacemaker during testing and engineer it to restart fast enough that it hits its timing deadline anyway. Meanwhile, a silent memory corruption could cause the pacemaker to enter an unknown state where the code that runs the pacemaker algorithm is overwritten and it simply stops working altogether. Having a known failure state is a thousand times preferable to an unknown number of unknown failure states. Critical sections (mutexes) and unsafe code have to be panic-free (or at least panic-safe) in Rust, so the concept of writing code without panics isn't exactly a niche concept in Rust. For every panic-based feature, there is usually a panic-free equivalent.
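A few of those panic-free counterparts, as an illustrative (not exhaustive) sketch:

```rust
fn main() {
    let xs = [10u32, 20, 30];

    // xs[99] would panic; slice::get returns an Option instead.
    assert_eq!(xs.get(99), None);

    // u32::MAX + 1 panics in debug builds; checked_add reports the overflow.
    assert_eq!(u32::MAX.checked_add(1), None);

    // Vec::push aborts the process if allocation fails; try_reserve surfaces
    // the failure as a Result the caller can handle.
    let mut buf: Vec<u8> = Vec::new();
    if buf.try_reserve(1024).is_err() {
        eprintln!("allocation failed, degrading gracefully instead of panicking");
    }
    println!("ok");
}
```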
>Rust turns unknown failures in C and C++ into known failures and suddenly the C/C++ people start caring about the failures
I'm actually the one who promotes paranoid asserts everywhere. I do agree the original statement from the article is ambiguous; I probably should have written something like "memory safety in Rust does not increase reliability".
>The pacemaker argument is complete nonsense, because the pacemaker must keep working even if it crashes. You can forcibly induce crashes into the pacemaker during testing and engineer it to restart fast enough that it hits its timing deadline anyway.
I'm not sure whether there is a deadlock-free variant of Rust; deadlock is not considered undefined behavior in Rust.
The Arduino was $30 when it came out. The Raspberry Pi was $35.
I'd be astonished if they manage to get the Steam Machine down to $800 (bundled with a controller). Knowing how Valve loves their margins, it's probably closer to $1000 or even more. This is not something you spontaneously buy to play around with.
In one of the interviews that came out when the Steam Machine embargo ended, someone from Valve said that, unlike with the Steam Deck, they can't afford to sell at a loss, because the form factor and OS of the Machine make it possible to buy it just for general compute, which would be devastating with negative margins. So, unfortunately, I guess it will be $800-1000 in the end.
That is still cheaper than the Index when it came out, and it sounds like a general improvement in all areas. Flagship VR for less than the cost of the latest smartphone seems pretty reasonable given how low adoption is.
Moore's Law is Dead (a chip analyst YouTuber) believes in a $300 bill of materials and a $600 price, maybe even $450 for the lower SKU.
The CPU and GPU in this are last generation, and he believes Valve got a bulk discount on unsold RDNA3 mobile GPUs. They did something similar with the Steam Deck, riding off a Magic Leap custom design. People predicted that would be pricey too, but it launched with (and still has) a $399 model.
You have to take MLID with several large grains of salt. He gets a lot wrong, and when he does, he deletes the corresponding videos. There are Reddit subs that won't allow links to his videos for that reason. Valve has also said that it won't be console-type pricing, more like the pricing of a decent SFF PC.
Having said that, it would be great to see the GabeCube come in around the prices he's guesstimating. I hope it finds great success.
A $300 BOM sounds incredibly low to me, especially now with the exploding prices for storage and RAM. Maybe they have already stocked up on it, but I doubt it. The huge heat sink also cannot be cheap. And then of course there's the whole tariff uncertainty.
The lower SKUs of the Steam Deck are sold with pretty much zero margin, or even at a loss; as Newell said, this was a strategic decision to enter the mobile gaming market. For PC gaming, however, Valve already has a monopoly, and selling general-purpose hardware with little or no margin sounds like a recipe for losing money quickly, which is not what Valve is known for...
But hey, we don't have to argue. Let's meet on HN again when the price is announced and I'll happily eat my words. :-)
The hardware does not compare favorably to a 2024 (current, in other words) Mac Mini.
Consider what the Steam Machine requires sacrificing:
- fewer USB-C connectors
- larger physical footprint
- can't buy and take it home today
It's hard to do an apples-to-apples comparison between the chips, but where one is better than the other on some dimension, it's offset by some other. They're approximately comparable.
Now, what's the price of the Mac Mini? $598.92
When people are talking about price points that have a higher margin than even Apple's devices, you have to stop and consider whether the people tossing around numbers like $750 with a straight face are actually trying to be rational but failing, or whether they are just getting caught up in the hype.
The Mac Mini for $600 comes with 16GB of unified memory and 256GB of storage. To make it comparable, you'd need to configure it with 24GB and 512GB respectively, and voilà, $999. You are surely aware that the lowest SKUs from Apple have way lower margins, since hardly anybody buys them anyway, because who wants to live with 256GB of storage? They exist to put that sweet "starting at $599" on the store front page.
Now, of course Apple charges ridiculous money for additional RAM and storage, but this is exactly how they achieve their famous margins (everything's soldered, so you cannot upgrade it yourself). Thanks to the AI hype, DDR5 RAM is very expensive at the moment, as is GDDR6, as is SSD storage. Nobody here, including you, knows what kind of deal Valve will be able to get. There's a reason they are not announcing any prices yet: there are so many uncertainties at the moment (tariffs, anyone?) that most probably Valve themselves do not know yet.
Depends what people are buying a Steam Machine for and what the alternatives are. Can you use a Mac Mini for pretty much the same purpose? Genuine question, since I'm neither a gamer nor a Mac user.
The Mac Mini does not have the sustained thermal capacity of the big 6-inch fan in the Steam Machine, so it'll throttle. The Mac Studio probably has better thermals than what Valve will ship, but it's far more expensive.
The Mac Mini can play many games, but it cannot play most games the way this Steam Machine can. Developers barely supported the Mac before the ARM switch, and now it's somehow even worse.
Gaming on a Mac is an exercise in zen enlightenment.
AMD is in dire straits - their GPU market share is basically nothing right now, I wouldn't be surp
Plus the 7600M (which is suspected to be the Steam Machine GPU) is an existing design on a legacy node, and they don't have to worry about it threatening their current lineup. They can go pretty low with the price.
They might get something out of it: considering the modest hardware, devs will have to optimize for it, which might get them a couple of extra percent on more modern hardware as well.
The APU from AMD they have is pretty much magic. You will not find anything comparable; any Ryzen APU you can actually buy is pretty much trash for anything beyond very lightweight gaming. You absolutely need a separate GPU, and even a low-end one will set you back at least $250. The only way to build something comparable for cheap would be to buy used.
The APU in the Steam Deck isn't anything too special (the 740M is comparable, but is RDNA3). For the GabeCube they are using a customized 7600M, which has previously been used in many eGPUs for Chinese handhelds (at a very high price).
AMD does have some pretty powerful APUs right now, but I don't think we'll see them in many mass-market devices.
How customized it is, I guess we'll find out closer to release, but based on the dimensions I'm guessing it's customized specifically for the case, for space and cooling reasons.
A similar PC built from consumer parts, without the fit and finish, comes in at around $900.
Curious how much pull Valve has with AMD to get this into people's hands.
> Steam Machines can become an existential crisis for PlayStation and Xbox.
Xbox, as a console, already is in an existential crisis.
I think people have weird expectations about what the Steam Machine will cost. From what Valve has said so far (cheaper than if you build it yourself from parts), it will still cost significantly more than a PS5, and probably also more than a PS5 Pro, while having less performance than both. You will not beat the PS5 in terms of performance per dollar. Yes, games are more expensive on the PS5, but most people don't think that way; they just want to know whether they'll be able to play GTA6 on day one.
As someone old enough to remember OSS, the amount of Stockholm syndrome going on here is impressive. The fun times fiddling with 'vmix' to get different applications to play sounds at the same time, yeah, that was awesome. MIDI? USB audio? Resampling? JACK? There's also this wireless thing, what's it called, redlip or something, but who cares about that modern stuff. I still have my C64 here, it's awesome, everything just works and it boots in a second.
Not sure what exactly you mean by "open source OS", and whether Lineage counts as one in your book: it supports quite a few cheap and also fairly recent Motorola phones, which are also easy to unlock:
For family, I just got a used Edge 30 Neo for ~$100 and put LineageOS on it, and it works like a charm. Phones like the Moto g84 go for even less and can still be bought new for a decent price.
Xiaomi would be even cheaper, but I would highly discourage getting one because the unlock process is plain ridiculous nowadays.
And as others have already noted, if you don't mind getting a phone that's a few years old, a used Pixel 5 is not expensive (I'm still happily using a Pixel 4a and don't see why I would need to upgrade).
> Had Ruby Central acquiesced the logs would've been parsed and sold.
Which the privacy policy of Ruby Central allows, so I don't get why they suddenly have ethical problems with it, apart of course from throwing shade on Andre. Parsing logs to see which companies access your service is what basically everyone does, and frankly, I don't see the problem with getting leads from data like this. That has nothing to do with "selling PII".
Yes. While I personally don't like this practice, it is so widespread and there is so much demand for it that it's not unusual, given that their privacy policy makes explicit mention of it.
The best argument you could make is that gem owners should be able to see “who” downloads their gems. If they were self-hosting the packages, they would have that data. Of course, charging for it is the ookier part.
Say you provide a service for free and are desperate for corporate sponsorship. Who wouldn't look at what companies are using your service and contact them with "Hey, I'm seeing you are using our service, can we have a chat"? You basically have no other means of contacting companies nowadays without getting into trouble for cold-calling/spamming.