Yeah, I know Rust isn’t everyone’s favorite, but I’d expect at least some awareness that a language which isn’t focused on FP has delivered a lot of reliability improvements using many of these same ideas. I ended up closing the tab at the TypeScript example pretending the fix was result types rather than validation. That idea could have been framed as a stylistic preference, or as an argument that it makes oversights less likely, but simply ignoring decades of prior art suggests the author either isn’t very experienced or is mostly motivated by evangelism (e.g. COBOL didn’t suffer from the example problem before the first FP language existed, so a far more interesting discussion would demonstrate awareness of the alternatives and explain why this one is better).
Sure, my point was simply that it’s not as simple as the author assumes. This is a common failure mode in FP advocacy, and it’s disappointing because it usually means the more interesting conversation doesn’t happen: most readers just disengage.
I get why it reads like FP evangelism, but I don’t think it’s “ignoring decades of prior art.” I’m not claiming these ideas are exclusive to FP. I’m claiming FP ecosystems systematized a bundle of practices (ADTs/state machines, exhaustiveness, immutability, explicit effects) that consistently reduce a specific class of failures: invalid state transitions and breakage during refactoring.
Rust is actually aligned with the point: it delivers major reliability wins by making invalid states harder to represent (enums, ownership/borrowing, pattern matching). That’s not “FP-first,” but it’s very compatible with functional style and the same story about invariants.
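To make that concrete, here’s a minimal TypeScript sketch of the pattern I mean (the order workflow and its fields are made up purely for illustration). It’s the same discriminated-union-plus-exhaustive-match shape that Rust’s enums and `match` give you:

```typescript
// Hypothetical order workflow, purely for illustration.
// Each state carries only the data that is valid in that state,
// so "shipped but no tracking number" is unrepresentable.
type Order =
  | { status: "draft"; items: string[] }
  | { status: "paid"; items: string[]; paymentId: string }
  | { status: "shipped"; items: string[]; paymentId: string; trackingId: string };

function describe(order: Order): string {
  // The exhaustive switch is the refactor-breakage guard: adding a
  // new status to the union turns every unhandled call site into a
  // compile error instead of a silent fall-through.
  switch (order.status) {
    case "draft":
      return `Draft with ${order.items.length} item(s)`;
    case "paid":
      return `Paid (${order.paymentId})`;
    case "shipped":
      return `Shipped, tracking ${order.trackingId}`;
    default: {
      // Exhaustiveness check: this only compiles if every variant
      // above has been handled.
      const _exhaustive: never = order;
      return _exhaustive;
    }
  }
}
```

Nothing in that sketch is FP-exclusive; it’s just the bundle of practices spelled out in one place.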
If the TS example came off as “types instead of validation,” that’s on me to phrase better. The point wasn’t “types eliminate validation”; it’s “types make the shape explicit so validation becomes harder to forget and easier to review.”
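Something like this (a hand-rolled `Result` and a made-up `parseEmail`, not the article’s actual code) is what I had in mind: the validation is still ordinary code, it just lives behind a type so callers can’t skip it without the compiler noticing.

```typescript
// A hand-rolled Result type; nothing library-specific is assumed here.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

// "Email" only exists on the other side of parseEmail, so any function
// that accepts an Email can assume validation already happened.
type Email = { readonly kind: "email"; readonly address: string };

function parseEmail(raw: string): Result<Email, string> {
  const trimmed = raw.trim();
  // Deliberately simplistic check, just to keep the sketch short.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(trimmed)) {
    return { ok: false, error: `not a valid email: ${raw}` };
  }
  return { ok: true, value: { kind: "email", address: trimmed } };
}

// Callers have to look at the failure case before they can touch the
// value, which is the "harder to forget, easier to review" part.
const result = parseEmail("someone@example.com");
if (result.ok) {
  console.log(result.value.address);
} else {
  console.error(result.error);
}
```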