Eikon's comments | Hacker News

What I often do on ZeroFS [0] is convert issues to discussions; it's a one-click operation on GitHub and helps reduce the noise from ill-defined issues.

[0] https://github.com/Barre/ZeroFS


Most Cloudflare products are very slow / offer very poor performance. I was surprised by this, but that's just how it is. It basically negates any claimed performance advantage.

Durable Objects, R2, as well as Tunnel have performed particularly poorly in my experience. Workers hasn't been a great experience either.

R2 in particular has been the slowest / highest-latency S3 alternative I've ever used, falling behind Backblaze B2, Wasabi, and even Hetzner's object storage.


Got 12/31ms and 6/13ms cached, so Cloudflare must not be that slow in Europe after all... ;)

SlateDB is awesome, it's ZeroFS's [0] storage backend and it's been great!

[0] https://github.com/Barre/ZeroFS


Unfortunately, this doesn't support conditional writes through If-Match and If-None-Match [0] and thus isn't compatible with ZeroFS [1].

[0] https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1052

[1] https://github.com/Barre/ZeroFS
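
For context, this is roughly what those conditional writes look like from the client side, sketched with boto3 (recent versions expose If-Match / If-None-Match as put_object parameters; the endpoint, bucket and key below are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3", endpoint_url="https://s3.example.com")  # placeholder endpoint

    # Create-only write: the store must reject this with 412 if the key already exists.
    try:
        s3.put_object(Bucket="zerofs-data", Key="manifest", Body=b"v1", IfNoneMatch="*")
    except ClientError as e:
        if e.response["Error"]["Code"] == "PreconditionFailed":
            print("lost the race, someone created it first")

    # Compare-and-swap overwrite: only succeeds if the object still has the ETag we last saw.
    etag = s3.head_object(Bucket="zerofs-data", Key="manifest")["ETag"]
    s3.put_object(Bucket="zerofs-data", Key="manifest", Body=b"v2", IfMatch=etag)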


I work on SeaweedFS. It has support for these if conditions, and a lot more.


> Keep in mind that votes aren't supposed to be about whether you agree or disagree

That’s not what happens in practice.


On ZeroFS [0] I'm doing around 80,000 CI minutes a month.

A lot of it is wasted on build time though, due to the lack of appropriate caching facilities in GitHub Actions.

[0] https://github.com/Barre/ZeroFS/tree/main/.github/workflows


I found that implementing a local cache on the runners has been helpful. Ingress/egress on the local network is hella slow, especially when each build has ~10-20GB of artifacts to manage.


What do you use for the local cache?


Just wrote about my approach yesterday: https://jeffverkoeyen.com/blog/2025/12/15/SlotWarmedCaching/

tl;dr: it uses a local slot-based cache that's pre-warmed after every merge to main, taking Sidecar builds from ~10-15 minutes to <60 seconds.
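
Not the exact Sidecar setup, but the slot idea boils down to something like this (paths and build command are simplified placeholders):

    import fcntl, os, subprocess

    SLOT_ROOT = "/var/cache/ci-slots"   # hypothetical location on the runner
    NUM_SLOTS = 4

    def acquire_slot():
        """Lock and return the first free pre-warmed slot directory."""
        for i in range(NUM_SLOTS):
            path = os.path.join(SLOT_ROOT, f"slot-{i}")
            os.makedirs(path, exist_ok=True)
            lock = open(os.path.join(path, ".lock"), "w")
            try:
                fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
                return path, lock  # keep the file object alive to hold the lock
            except BlockingIOError:
                lock.close()
        raise RuntimeError("all slots busy")

    slot, lock = acquire_slot()
    # The build reuses whatever state the post-merge warm-up job left behind,
    # so most work is incremental instead of starting from a cold checkout.
    subprocess.run(["make", "build"], cwd=slot, check=True)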


ZeroFS looks really good. I know a bit about this design space but hadn't run across ZeroFS yet. Do you do testing of the error recovery behavior (connectivity etc)?


This has been mostly manual testing for now. ZeroFS currently lacks automatic fault injection and proper crash tests, and it’s an area I plan to focus on.

SlateDB, the lower layer, already does DST (deterministic simulation testing) as well as fault injection, though.


Wow, that's a very cool project.


Thank you!


Is there a great open-source CI system that integrates nicely with GitHub repos?



Shameless plug :)

https://www.merklemap.com/search?query=ycombinator.com&page=...

Entries are indexed by subdomain instead of by certificate (click an entry to see all certificates for that subdomain).

Also, you can search for any substring (making that fast enough across almost 5B entries was quite the journey):

https://www.merklemap.com/search?query=ycombi&page=0


Not 100% related but not 100% unrelated either: I've got a script that generates variations of the domain names I use the most... all the most common typos/misspellings, all the "1337" variations, everything at Levenshtein edit distance 1, quite a few at distance 2, etc.

For example for "lillybank.com", I'll generate:

    llllybank.com
    liliybank.com
    ...
and countless others.

Hundreds of thousands of entries. They are then null-routed by my Unbound DNS resolver.
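
A toy version of the generation + Unbound side of it, heavily simplified (the real script covers far more variation types, and redirecting to 0.0.0.0 instead of NXDOMAIN works too):

    import itertools

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-"
    LEET = {"l": "1", "i": "1", "e": "3", "o": "0", "a": "4", "s": "5"}

    def distance_one(label):
        """All Levenshtein-distance-1 variants of a domain label."""
        for i in range(len(label)):
            yield label[:i] + label[i + 1:]              # deletion
            for c in ALPHABET:
                yield label[:i] + c + label[i + 1:]      # substitution
                yield label[:i] + c + label[i:]          # insertion
        for c in ALPHABET:
            yield label + c                              # trailing insertion

    def leet(label):
        """A few crude 1337-style substitutions."""
        for src, dst in LEET.items():
            if src in label:
                yield label.replace(src, dst)

    def variants(domain):
        label, _, rest = domain.partition(".")
        seen = set()
        for v in itertools.chain(distance_one(label), leet(label)):
            candidate = f"{v}.{rest}"
            if candidate != domain and candidate not in seen:
                seen.add(candidate)
                yield candidate

    for name in variants("lillybank.com"):
        # always_nxdomain makes Unbound answer NXDOMAIN for the whole zone.
        print(f'local-zone: "{name}." always_nxdomain')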

My browsers are forced into "corporate" settings where they cannot use DoH/DoT: everything between my browsers and my Unbound resolver is in the clear.

All DNS UDP traffic that contains any Unicode domain name is blocked by the firewall. No DNS over TCP is allowed (and, no, I don't care).

I also block entire countries' TLDs as well as entire countries' IP blocks.

Been running a setup like that (plus many killfiles, and DNS resolvers known to block all known porn and malware sites, etc.) for years now. The Internet keeps working fine.


Any insights you can share on how you made search so fast? What kind of resources does it take to implement it?


Most of MerkleMap is stored on ZeroFS [0], which makes it possible to scale IO resources quite crazily :)

[0] https://github.com/Barre/ZeroFS


> Watch Ubuntu boot from ZeroFS

Love it


How does ZeroFS handle consistency with writes?


If you use 9P or NBD it handles fsync as expected. With NFS, it's time based.

https://github.com/Barre/ZeroFS#9p-recommended-for-better-pe...
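
Concretely, from an application's point of view it's something like this (the mount path below is just an example):

    import os

    fd = os.open("/mnt/zerofs/journal.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, b"record\n")
        # On 9P/NBD mounts, fsync is honored: once it returns, the write is durable
        # in the S3 backend. With NFS, flushing happens on a timer instead.
        os.fsync(fd)
    finally:
        os.close(fd)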


Oh awesome! I was searching for consistency, but I guess durability is the word used for filesystems. Thanks!


The first page of results doesn't include ycombinator.com. I get `app.baby-ycombinator.com`, `ycombinator.comchat.com`, and everything in between.

Substring doesn't seem like what I'd want in a subdomain search.


> Substring doesn't seem like what I'd want in a subdomain search.

Well, if you want only subdomains search for *.ycombinator.com.

https://www.merklemap.com/search?query=*.ycombinator.com&pag...


Thank you!!! Needed exactly this at work.


Glad it was helpful!


I am working on ZeroFS [0], a POSIX filesystem that works on top of S3.

[0] https://github.com/Barre/ZeroFS

