Cylindrical straw not included. Limited time offer. Warranty may be void if spaceship uses any reaction wheel or propulsion system. Other exclusions and limitations apply, see ...
There are two things one might care about when computing an SDF .. the isosurface, or the SDF itself.
If you only care about the isosurface (i.e. where the function is 0), you can do any ridiculous operations you can think of, and it'll work just fine. Add, sub, multiply, exp .. whatever you want. Voxel engines do this trick a lot. Then it becomes more of a density field, as opposed to a distance field.
If you care about having a correct SDF, for something like raymarching, then you have to be somewhat more careful. Adding two SDFs does not result in a valid SDF, but taking the min or max of two SDFs does. Additionally, computing an analytical derivative of an SDF breaks if you add them, but you can use the analytical derivative if you take a min or max. Same applies for smooth min/max.
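To make the min/max point concrete, here's a minimal sketch in TypeScript (the Vec3 type, sdSphere, and the op* names are illustrative stand-ins, not anything from the thread):

```typescript
// Two exact SDFs and the standard min/max combinators.
type Vec3 = [number, number, number];

const length3 = (p: Vec3) => Math.hypot(p[0], p[1], p[2]);

// Exact SDF of a sphere of radius r centred at `c`.
const sdSphere = (c: Vec3, r: number) => (p: Vec3): number =>
  length3([p[0] - c[0], p[1] - c[1], p[2] - c[2]]) - r;

// Union: min keeps whichever surface is nearer.
const opUnion = (a: number, b: number) => Math.min(a, b);

// Intersection: max. (Exact inside and on the boundary; only a lower
// bound outside -- see the follow-up comment below.)
const opIntersection = (a: number, b: number) => Math.max(a, b);

// Adding the raw distances, by contrast, does not give a valid SDF.
```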
To add some more detail, the max of two SDFs is a correct SDF of the intersection of the two volumes represented by the two SDFs, but only on the inside and at the boundary. On the outside it's actually a lower bound.
This is good enough for rendering via sphere tracing, where you want the sphere radius to never intersect the geometry, and converge to zero at the boundary.
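As a rough illustration of why a lower bound is enough, here's a sketch of a sphere tracer in the same vein (reusing the Vec3 type from the sketch above; the names are made up):

```typescript
// Sphere tracing: step by the field value, which never exceeds the true
// distance, so the ray can never tunnel through the surface.
function sphereTrace(
  scene: (p: Vec3) => number, // e.g. p => opIntersection(f(p), g(p))
  origin: Vec3,
  dir: Vec3,                  // assumed to be normalized
  maxSteps = 128,
  epsilon = 1e-4,
): number | null {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p: Vec3 = [
      origin[0] + dir[0] * t,
      origin[1] + dir[1] * t,
      origin[2] + dir[2] * t,
    ];
    const d = scene(p);
    if (d < epsilon) return t; // close enough: call it a hit
    t += d;                    // safe step thanks to the lower-bound property
  }
  return null;                 // gave up: no hit within the step budget
}
```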
A particular class of fields that has this property is fields whose gradient magnitude never exceeds one (i.e. 1-Lipschitz fields).
For example, linear blends of SDFs: given SDFs f and g you can actually do (f(pos)+g(pos))/2 and get something you can render out the other side. Not sure what it will look like, or whether it has a clean geometric interpretation, though.
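In code that blend is a one-liner, with the same caveats as the sketches above (it sphere-traces safely, but it isn't an exact distance anymore):

```typescript
// The average of two 1-Lipschitz fields is still 1-Lipschitz, so it is
// still a safe lower bound for sphere tracing.
const blend = (f: (p: Vec3) => number, g: (p: Vec3) => number) =>
  (p: Vec3): number => (f(p) + g(p)) / 2;
```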
Note that speed of convergence suffers if you do too many shenanigans.
I did some simple experiments and fairly swiftly discovered where I went wrong. I'm still not totally convinced that there isn't something clever that can be done for more operations.
My next thought is that maybe you can do some interesting shenanigans by jumping to the nearest point on one surface and then calculating a modulation that adjusts the distance by some amount. I can certainly see how difficult it would become if you start making convex shapes like that, though. There must be a way to take the min of a few candidates within the radius of a less precise envelope surface.
No, I was thinking of a hard min, but one that first finds a greedy but inaccurate distance, and then a refinement step takes some samples that measure the nearest distance within a radius. This would handle modulations of the shape where it folds back upon itself, as long as it doesn't fold within the subsample radius.
It's multi-sample, but selective rather than weighted.
I owe iq so much; a living legend. Inigo, if you happen to ever read this, thanks so much for all the work you've published. Your YouTube videos (not to mention Shadertoy) sparked an interest in graphics I never knew I had.
For anyone who's unfamiliar, his YouTube videos are extremely well put together and well worth the handful of hours to watch.
> "tagged" unions of ADT languages like Haskell are arguably pretty clearly inferior to the "untagged" unions of TypeScript
dude .. wut?? Explain to me exactly how this is true, with a real world example.
From where I stand, untagged unions are useful in an extremely narrow set of circumstances. Tagged unions, on the other hand, are incredibly useful in a wide variety of applications.
Example: Option<> types. Maybe a function returns an optional string, but then you are able to improve the guarantee such that it always returns a string. With untagged unions you can just change the return type of the function from String|Null to String. No other changes necessary. For the tagged case you would have to change all(!) the call sites, which expect an Option<String>, to instead expect a String. Completely unnecessary for untagged unions.
A similar case applies to function parameters: In case of relaxed parameter requirements, changing a parameter from String to String|Null is trivial, but a change from String to Option<String> would necessitate changing all the call sites.
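For concreteness, here's roughly what the return-type case looks like in TypeScript (findName and its callers are hypothetical):

```typescript
// Before: the function could fail, so its type was `string | null`,
// and callers had to narrow before using the result.
// function findName(id: number): string | null { ... }

// After: the contract is strengthened by changing only the return type.
function findName(id: number): string {
  return `user-${id}`; // placeholder body for the sketch
}

// Existing call sites written against `string | null` keep compiling
// unchanged, because `string` is assignable to `string | null`.
const banner: string | null = findName(42);

// With Option<string>, the same strengthening would force every caller
// that unwraps the Option to be rewritten.
```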
> From where I stand, untagged unions are useful in an extremely narrow set of circumstances. Tagged unions, on the other hand, are incredibly useful in a wide variety of applications.
I think your Option/String example is a real-world tradeoff, but it’s not a slam-dunk “untagged > tagged.”
For API evolution, T | null can be a pragmatic “relax/strengthen contract” knob with less mechanical churn than Option<T> (because many call sites don’t care and just pass values through). That said, it also makes it easier to accidentally reintroduce nullability and harder to enforce handling consistently; the failure mode is “it compiles, but someone forgot the check.”
In practice, once the union has more than “nullable vs present”, people converge to discriminated unions ({ kind: "ok", ... } | { kind: "err", ... }) because the explicit tag buys exhaustiveness and avoids ambiguous narrowing. So I’d frame untagged unions as great for very narrow cases (nullability / simple widening), and tagged/discriminated unions as the reliability default for domain states.
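A minimal TypeScript sketch of that shape (Result and handle are illustrative, not from any particular library):

```typescript
type Result =
  | { kind: "ok"; value: string }
  | { kind: "err"; message: string };

function handle(r: Result): string {
  switch (r.kind) {
    case "ok":
      return r.value;          // narrowed by the explicit tag
    case "err":
      return `failed: ${r.message}`;
    default: {
      // Exhaustiveness: adding a third variant to Result makes this
      // assignment a compile error until the switch is updated.
      const unreachable: never = r;
      return unreachable;
    }
  }
}
```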
For reliability, I’d rather pay the mechanical churn of Option<T> during API evolution than pay the ongoing risk tax of “nullable everywhere.”
My post argues for paying costs that are one-time and compiler-enforced (refactors) vs costs that are ongoing and human-enforced (remembering null checks).
I believe there is a misunderstanding. The compiler can check untagged unions just as much as it can check tagged unions. I don't think there is any problem with "ambiguous narrowing", or "reliability". There is also no risk of "nullable everywhere": If the type of x is Foo|Null, the compiler forces you to write a null check before you can access x.bar(). If the type of x is Foo, x is not nullable. So you don't have to remember null checks (or checks for other types): the compiler will remember them. There is no difference to tagged unions in this regard.
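A tiny illustration of that point, assuming TypeScript with strictNullChecks (Foo and use are made-up names):

```typescript
interface Foo {
  bar(): string;
}

function use(x: Foo | null): string {
  // Calling x.bar() up here would be a compile error: "x is possibly 'null'".
  if (x === null) return "no foo";
  return x.bar(); // x is narrowed to Foo; no explicit tag involved
}
```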
I think we mostly agree for the nullable case in a sound-enough type system: if Foo | null is tracked precisely and the compiler forces a check before x.bar, then yes, you’re not “remembering” checks manually, the compiler is.
Two places where I still see tagged/discriminated unions win in practice:
1. Scaling beyond nullability. Once the union has multiple variants with overlapping structure, “untagged” narrowing becomes either ambiguous or ends up reintroducing an implicit tag anyway (some sentinel field / predicate ladder). An explicit tag gives stable, intention-revealing narrowing + exhaustiveness.
2. Boundary reality. In languages like TypeScript (even with strictNullChecks), unions are routinely weakened by any, assertions, JSON boundaries, or library types. Tagged unions make the “which case is this?” explicit at the value level, so the invariant survives serialization/deserialization and cross-module boundaries more reliably (a small sketch follows this list).
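Sketch for point 2 (DomainEvent and parseEvent are invented for illustration): the explicit kind field is what survives the JSON round trip and lets you recover the variant.

```typescript
type DomainEvent =
  | { kind: "created"; id: string }
  | { kind: "deleted"; id: string; reason: string };

function parseEvent(json: string): DomainEvent | null {
  const raw: unknown = JSON.parse(json);
  if (typeof raw !== "object" || raw === null) return null;
  const kind = (raw as { kind?: unknown }).kind;
  // The tag is checked at the value level; a purely structural union
  // would need a ladder of field-presence predicates here instead.
  if (kind === "created" || kind === "deleted") return raw as DomainEvent;
  return null;
}
```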
So I’d summarize it as: T | null is a great ergonomic tool for one axis (presence/absence) when the type system is enforced end-to-end. For domain states, I still prefer explicit tags because they keep exhaustiveness and intent robust as the system grows.
If you’re thinking Scala 3 / a sound type system end-to-end, your point is stronger; my caution is mostly from TS-in-the-wild + messy boundaries.
I think the real promise of "set-theoretic type systems" comes when you don't just have (untagged) unions, but also intersections (Foo & Bar) and complements/negations (!Foo). Currently there is no such language with negations, but once you have them, the type system is "functionally complete", and you can represent arbitrary Boolean combinations of types, e.g. "Foo | (Bar & !Baz)". Which sounds pretty powerful, although the practical use is not yet quite clear.
> For the tagged case you would have to change all(!) the call sites
Yeah, that's exactly why I want a tagged union; so when I make a change, the compiler tells me where I need to go to do updates to my system, instead of manually hunting around for all the sites.
---
The only time an untagged union is appropriate is when the tag accounts for an appreciable amount of memory in a system that churns through a shit-ton of data, and has a soft or hard realtime performance constraint. Other than that, there's just no reason to not use a tagged union, except "I'm lazy and don't want to", which, sometimes, is also a valid reason. But it'll probably come back to bite you, if it stays in there too long.
> > For the tagged case you would have to change all(!) the call sites
> Yeah, that's exactly why I want a tagged union; so when I make a change, the compiler tells me where I need to go to do updates to my system, instead of manually hunting around for all the sites.
You don't have to do anything manually. There is nothing to do. Changing the return type of a function from String|Null to String is completely safe, and the compiler knows that, so you don't have to do any "manual hunting" at call sites.
I believe that, by the description provided, most languages that you're talking about must actually represent 'untagged unions' as tagged unions under the hood. See my sibling comment; I'm curious.
C doesn't support any untagged unions (or intersections) in the modern sense. In a set-theoretic type system, if you want to call a method of Foo, and the type of your variable is Foo|Bar|Baz, you have to do a type check for Bar and Baz first, otherwise the code won't compile.
If I have an untagged union in <language_of_your_choice>, and I'm iterating over an array of elements of type `Foo|Bar|Baz`, and I have to do a dynamic cast before accessing the element (a runtime type check) .. I believe that must actually be a tagged union under the hood, whether or not you call it a tagged union... right? I.e. how would the program possibly know at runtime what the type of a heterogeneous set of elements is without a tag value to tell it?
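That intuition matches how TypeScript behaves: the static types are erased, so narrowing only works off something observable at runtime, which plays the role of a tag whether you call it one or not. A made-up sketch:

```typescript
class Foo { foo() { return "foo"; } }
class Bar { bar() { return "bar"; } }
class Baz { baz() { return "baz"; } }

function describe(x: Foo | Bar | Baz): string {
  // instanceof consults the prototype chain: runtime type information
  // that serves the same purpose an explicit tag would.
  if (x instanceof Foo) return x.foo();
  if (x instanceof Bar) return x.bar();
  return x.baz();
}

// For plain object shapes there is no prototype to inspect, which is
// exactly why people end up adding a discriminant field by hand.
```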
The ligatures part of this article gets me every time I re-read it. I think reading this article may have been the first time I realized that even large, well-funded projects are still done by people who are just regular humans, and sometimes settle for something that's good enough.
By the author's definition, I've been writing perfect software for over a decade.
It's never required LLMs. In fact, I think the idea that "LLMs allow us to write software for ourselves" borders on missing the point, for me at least. I write software for myself because I like the exploratory process .. figuring out how to do something such that it works with as little friction as possible from the side of the user, who is of course myself, in the future.
I like nitpicking the details, getting totally side-tracked on seemingly frivolous minutiae. Frequently enough, coming to the end of a month-long yak-shave actually contributes meaningful insight to the problem at hand.
I guess what I'm trying to say is "you're allowed to just program .. for no other reason than the fun of it".
As evidence for my claims: a few of my 'perfect' projects
I get what you're saying - I personally scratch that itch by doing woodworking and hobby electronics; I just love doing it, and the end product is often just a means to an end: crafting something and enjoying the process of it.
But programming doesn't give me that same feeling, and honestly, the scope of doing and learning everything needed to make my projects without LLMs is just way out of reach. Learning these things would not be relevant to my career or my other hobbies. So, for me, I use LLMs the way a person who's not into carpentry might buy the services of a carpenter, despite the possibility of doing the project themselves after investing tons of time into learning how.
These days, I spend my personal coding time on building personal interfaces, either as shell scripts or as Emacs packages. So many tools and applications hinder power usage.
Some people enjoy cooking. Some people enjoy eating great food. Some people enjoy both. Some people enjoy cooking certain things, and also like eating things they would never bother cooking themselves.
There is nothing wrong with any of these perspectives.
> the key to productive software development is more and more libraries
You had me until this statement. The idea that "more and more libraries" is going to solve the (rather large) quality problems we have in the software industry is .. misguided.
Don’t use a library unless you really need it. Someone recently recommended I add Zod to a project where I am only validating two different JSON objects in the entire project. I like Zod, but I already wrote the functions to progressively prove out the type in vanilla JS.
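For scale, the kind of hand-written guard being described is only a few lines. This is a hypothetical sketch (Config and isConfig are made up), not the commenter's actual code:

```typescript
interface Config {
  name: string;
  retries: number;
}

// Progressively prove out the shape of an unknown parsed value.
function isConfig(x: unknown): x is Config {
  if (typeof x !== "object" || x === null) return false;
  const r = x as Record<string, unknown>;
  return typeof r.name === "string" && typeof r.retries === "number";
}

// Usage:
const parsed: unknown = JSON.parse('{"name":"demo","retries":3}');
if (isConfig(parsed)) {
  console.log(parsed.name, parsed.retries); // typed as Config here
}
```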
100% agree. This actually makes AI-aided development a big improvement (as long as you’re careful). You can have an LLM write you a little function, or extract the correct one from a big library, and inline it into your module.
I'm talking great libraries in great languages. Like how the kmettverse revolutionized writing Haskell. Libraries that make you completely reconsider what it is you're trying to do.
Most people use shit libraries in shit languages. NPM slopfests have no bearing on what I'm talking about.