Hacker News

Owner of Tonbo here. This critique makes sense in a classic web-app model.

What's shifting is the workload. More and more compute runs in short-lived sandboxes: WASM runtimes (browser, edge), Firecracker microVMs, etc. These are edge environments, but they're no longer just serving web applications.

We're exploring a different architecture for these workloads: ephemeral, stateless compute with storage treated as a format rather than a service.
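To make "storage as a format rather than a service" concrete, here's a minimal sketch using SQLite (not Tonbo; just the most familiar database-as-a-file): each short-lived worker opens the file, does transactional work, and exits. Durability lives in the file, not in a running server process.

```python
import os
import sqlite3
import tempfile

# A database that is a file format, not a service: each ephemeral
# worker opens the file, transacts, and exits. No server stays up.
path = os.path.join(tempfile.mkdtemp(), "workspace.db")

def ephemeral_worker(event):
    con = sqlite3.connect(path)          # "connecting" = opening the file
    with con:                            # one atomic transaction
        con.execute("CREATE TABLE IF NOT EXISTS events (body TEXT)")
        con.execute("INSERT INTO events VALUES (?)", (event,))
    con.close()                          # worker dies; the file remains

for e in ("start", "stop"):
    ephemeral_worker(e)

# A later, unrelated process can pick the state back up from the file.
con = sqlite3.connect(path)
rows = con.execute("SELECT body FROM events").fetchall()
print(rows)  # [('start',), ('stop',)]
```

The per-user/per-workspace isolation mentioned below falls out of this model almost for free: one file per user or workspace, rather than one always-on server per tenant.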

This also maps to how many AI agent services want per-user or per-workspace isolation at large scale, without operating millions of always-on database servers.

If you're happy running a long-lived Postgres service, Neon or Supabase are great choices.





This makes no sense. DB connections have been part of the "short-lived sandbox" since the very beginning. CGI, PHP, ... all use database connections, and that's far faster and more correct (with proper transactions) than this approach.

And you use Rust ... so you care about speed and correctness. This seems like a very wrong approach.


CGI/PHP treated database connections as something that's always available. That pushes a lot of hidden complexity onto the database platform: it has to be reachable from anywhere, handle massive fan-out, survive bursty short-lived clients, and remain correct under constant connect/disconnect.

That model worked when you had a small number of stable app servers. It becomes much harder when compute fans out into thousands or millions of short-lived sandboxes.

We're already seeing parts of the data ecosystem move away from this assumption. Projects like Iceberg and DuckDB decouple storage from long-running database services, treating data as durable formats that many ephemeral compute instances can operate on. That's the direction we're exploring as well.
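A minimal sketch of that decoupling, using only the standard library (Iceberg and DuckDB are far richer; this only shows the shape of the idea): writers emit immutable data files and "commit" by updating a small manifest, and any ephemeral reader can reconstruct the table by reading the manifest, with no database process in between. The file layout here is invented for illustration, not Iceberg's actual format.

```python
import json
import os
import tempfile
import uuid

root = tempfile.mkdtemp()
manifest = os.path.join(root, "manifest.json")

def commit(rows):
    """Writer: emit an immutable data file, then record it in the manifest."""
    data_file = os.path.join(root, f"{uuid.uuid4().hex}.json")
    with open(data_file, "w") as f:
        json.dump(rows, f)               # data files are never rewritten
    files = json.load(open(manifest)) if os.path.exists(manifest) else []
    files.append(os.path.basename(data_file))
    with open(manifest, "w") as f:
        json.dump(files, f)              # a tiny metadata swap is the "commit"
                                         # (real table formats make this atomic)

def scan():
    """Reader: any ephemeral process rebuilds the table from the manifest."""
    out = []
    for name in json.load(open(manifest)):
        out.extend(json.load(open(os.path.join(root, name))))
    return out

commit([{"user": "a", "n": 1}])
commit([{"user": "b", "n": 2}])
print(scan())  # [{'user': 'a', 'n': 1}, {'user': 'b', 'n': 2}]
```

The point of the pattern is that compute holds no long-lived state: writers and readers can be spawned and killed freely, because correctness hangs off the durable files and the manifest, not off a connection to a server.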



