Cloudflare-sync – Tool for using Cloudflare as a dynamic DNS provider (github.com/mxplusb)
146 points by mxplusb on Sept 5, 2019 | 42 comments


This is not very efficient:

- It updates the records every 30 seconds even if it doesn't have to.

- It has to be run as a daemon, eating memory and CPU time constantly.

- It only checks one IP service (ipify).

I have a more advanced version, which:

- can be run as a cron job

- only updates the records if the IP address actually changed

- checks multiple IP services (so even if some of them go down, it will still work)

- has a better command-line interface

https://github.com/kissgyorgy/cloudflare-dyndns
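
In outline, the whole pattern fits in a few lines of shell; this is just a sketch (the cache path, service list, and the ZONE_ID/RECORD_ID/CF_API_TOKEN/DOMAIN variables are placeholders, not my actual implementation):

    #!/bin/sh
    CACHE=/var/cache/ddns/last_ip

    # Try several IP services in order; use the first one that answers.
    for svc in https://api.ipify.org https://icanhazip.com https://ifconfig.me/ip; do
        ip=$(curl -4 -sf --max-time 5 "$svc") && break
    done
    [ -n "$ip" ] || exit 1

    # Only hit the Cloudflare API if the address actually changed.
    if [ "$ip" != "$(cat "$CACHE" 2>/dev/null)" ]; then
        curl -sf -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
            -H "Authorization: Bearer $CF_API_TOKEN" \
            -H "Content-Type: application/json" \
            --data "{\"type\":\"A\",\"name\":\"$DOMAIN\",\"content\":\"$ip\"}" \
        && echo "$ip" > "$CACHE"
    fi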


It would also probably be good to use Cloudflare's own IP lookup service, available on every CF website's "/cdn-cgi/trace" page. The only catch is that you need to find an IPv4-only CF website to get the v4 address instead of the v6.


Accessing the Cloudflare IP address directly works.

For example http://1.1.1.1/cdn-cgi/trace
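
The trace output is plain key=value lines, so pulling the address out is a one-liner:

    curl -s http://1.1.1.1/cdn-cgi/trace | awk -F= '/^ip=/{print $2}'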


IIRC that's never been a public API; more of an artifact for troubleshooting/support. Relying on it / parsing it directly was never recommended when I was there.


Is this documented somewhere? Are there other endpoints available by default under /cdn-cgi/? I looked around a bit but couldn't find any official documentation of this.


This is actually the fastest; it always responds in under 100ms! I put this in, thanks!


very groovy, thanks!


Huh, didn't know this existed! Thanks! Is there a list of other pages Cloudflare serves besides this one?


Don't think there's a list; the only other pages that I see regularly are for email protection[0], CF apps[1] (the ones that inject scripts), expect-ct (CT monitoring), and image resizing[2].

0: https://blog.cloudflare.com/introducing-scrapeshield-discove...

1: https://www.cloudflare.com/apps/

2: https://blog.cloudflare.com/announcing-cloudflare-image-resi...


Or just have the tool open an ipv4 socket when accessing it.
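
With curl that's just the -4 flag, which forces the connection over IPv4 even against a dual-stack hostname:

    # -4 forces an IPv4 socket, so the trace reports your v4 address
    curl -4 -s https://www.cloudflare.com/cdn-cgi/trace | grep '^ip='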


That only works if you're not behind carrier grade NAT.


The goal is to lookup your own public IP address to store it in a DNS record. If you're behind a carrier grade NAT, there isn't much use in that exercise, right?


If you're behind a carrier NAT, would dyndns even have a purpose?


I'm using this one with Google Cloud, although it only checks IPs using a single provider (otherwise it's pretty much the same as what you're describing):

https://github.com/andreycizov/ddns-gcloud


I'm using this one: https://github.com/adrienbrignon/cloudflare-ddns, and it's working like a charm :)

(made by me ahah)


I like the approach of the program managing its own scheduling much better than cron. You have an endpoint that is constantly available to scrape metrics from. You can send it an RPC to say "hey, do the sync now". The behavior when updating the code is well-defined. This particular daemon doesn't really do any of these things, but when they're needed it's trivial to add.

I also don't think it's a bad idea to do the update at the desired interval. Someone could go into Cloudflare's web interface and mess up the configuration; with this design, the mistake is fixed within 30 seconds. With a design where an update request is only issued if an IP change is detected, that mistake will probably never be fixed, and inconsistent behavior will result. The underlying problem is assuming that your sync program knows the entire state of the Universe and can properly detect a change. It can't, and you shouldn't design as though it can. All you want to do is push your known-good state to a configuration repository. So that's what the code does.

Strange behavior results when a sync process does different things on each iteration. If Cloudflare's API changes for some reason and your program only applies an update when there's a diff, you only detect the failure at the moment an IP address change is actually needed. Now your IP address is wrong in Cloudflare while you rewrite your program. If you just apply the change periodically even though there is no diff, you get a nice alert that the API is broken before you need the change. That increases the probability that you'll have a working program at the time when you actually need to change the IP. Instead of stressing out about the problem, you can take time to fix it properly, and no outage results.

Furthermore, this program properly maintains a global rate limit on API calls across all configurations. That is way easier to do in a daemon (limiter.Take() whenever you feel like making an update) than in a cron job, which has to make a different number of API calls each time it runs, and you don't control when the job runs. The rate limit is likely high enough that you will never notice, but technically this approach is more correct than the "yolo, do whatever" cron job.

Programs sitting in a sleep(30 seconds) call do not use much memory or CPU. They can be trivially swapped out if the system is under memory pressure. A Go application like this is going to use less than 10M of RAM all in. In 1962 that's a lot. Today that's nothing.

Given the existence of a kubernetes configuration in the repository, I think they made the right decision with the design here. Kubernetes does have cron, but cron is a high-overhead thing in Kubernetes. It creates a new job and schedules the associated pod every time the scheduling interval is reached. That's one of the most expensive things you can do, and while no harm will be caused running this every 30 seconds, the maintainability is lower and the probability of something unusual happening is higher. Speaking from experience, "kubectl get events" also becomes completely unusable when you use cron to run frequent jobs.

To be fair, I have no idea what sort of environment involves production traffic in a Kubernetes cluster going to a dynamic IP address, and that sounds like something I'd look into before writing a sync script like this. But overall, I think the design is solid. It has the potential to be properly observable. It has consistent behavior. You can detect problems before they become an outage. It properly rate-limits calls to Cloudflare no matter how many records you have configured it to sync. That, to me, sounds like a good design.

I would personally have used a statefulset instead of a deployment. A deployment is designed to be 100% available. A statefulset is designed to have the exact number of replicas you specify running at once. For a sync job like this, you probably only ever want 1 running. When you upgrade the software, you don't want the old version to be syncing stuff while the new version is waiting to pass its first health check. But in this case, I doubt anything bad would happen, so it doesn't really matter.


> I like the approach of the program managing its own scheduling much better than cron.

If you write your own scheduler, you have reinvented the wheel. Running a cron job is much simpler and everybody is familiar with it. Nobody is familiar with your own custom scheduler.
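
For example, the entire "scheduler" is one crontab line (the binary path here is hypothetical):

    # Run the DDNS update every 5 minutes; cron handles scheduling and restarts
    */5 * * * * /usr/local/bin/cloudflare-dyndns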

> You have an endpoint that is constantly available to scrape metrics from.

I don't think you need metrics for this service at all. It has to "just work".

> You can send it an RPC to say "hey, do the sync now".

You don't need any of that. Keeping a service alive is unnecessary work, an RPC interface is overengineering. This is a very simple problem, don't overcomplicate it.

> Someone could go into CloudFlare's web interface and mess up the configuration; with this design, the mistake is fixed within 30 seconds.

My script has a --force switch, so you can do the same with it. Still, you shouldn't manage an IP manually when using a DDNS script. The cache is to "be nice" to Cloudflare (don't hit them unnecessarily) and to your bandwidth. ddclient also had this feature.

> If you just apply the change periodically even though there is no diff, you get a nice alert that the API is broken before you need to change.

That's a valid point, I agree, but I don't think it's a big problem. REST API endpoints should rarely change; you should have years before Cloudflare deprecates an endpoint, so in the breaking-API regard, I think it's a non-issue.

> Furthermore, this program properly maintains a global rate limit of API calls across all configurations.

I did not even think about rate limits, because my script hits the Cloudflare API so rarely that it should never reach them. If it were needed, it could be implemented with the cache, which I have anyway. You can also side-step the issue by increasing the time between updates to minutes. Availability should be a non-issue, because if your service needs high availability, you should get a fixed IP anyway.

> Programs that are in a sleep(30 seconds) call do not use much memory or CPU. They can be trivially swapped out if the system is under memory pressure. A go application like this is going to use less than 10M of RAM all in. In 1962 that's a lot. Today that's nothing.

Agreed, but 10 MB is still infinitely more than zero. :)

> Given the existence of a kubernetes configuration in the repository

A dynamic DNS script like this should be nowhere near a Kubernetes cluster. If you think there is a use-case for Kubernetes with this script, you have huge problems anyway (e.g. not understanding what Kubernetes is and how to operate it properly).

> To be fair, I have no idea what sort of environment involves production traffic in a kubernetes cluster going to a dynamic IP address, and that sounds like something I'd look into before writing a sync script like this.

So we are on the same page about Kubernetes. :)


I wrote a similar script that supports updating any record type, not just A; 7 different public IP providers over multiple methods; config via file, env variables, or CLI arguments; and both DigitalOcean and Cloudflare, individually or simultaneously. I use it across my entire fleet of servers, even servers with static public IPs, so that I can snapshot them to new boxes and have all my DNS records update automatically.

https://github.com/pirate/bash-utils/blob/master/dns.sh

    Helper script to update a DNS record on multiple providers.
    Usage: ./dns.sh --domain=domain.example.com [--get|--set=value] [...options]

Options:

    [domain]               The DNS domain you want to get or set (required)

    --domain=example.com   Same as passing [domain] directly as an argument
    -t=|--type=A           The DNS record type, e.g. A, CNAME, etc. (default: A)

    -g|--get               Get the record value (the default)
    -s=|--set=value        Set the record value, e.g. 123.235.324.234 or the
                             special value 'pubip' to use current public ip


    -l=|--ttl=n            Set the record TTL to n seconds (overrides api default)
    -p=|--proxied          Set the record to be proxied through CDN (Cloudflare only)
    -a=|--api=cf,do        List of DNS providers to use, e.g. all (default) or cf,do
    -r=|--refresh=n        Run continuously every n seconds in a loop
    -w=|--timeout=n        Wait n seconds before aborting and retrying
    
    -c=|--config=file      Path to a dotenv-formatted config file to load
    -e=|--config-prefix=X  Load config vars with prefix X e.g. X_VERBOSE=1

    ...
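
For example, to set an A record to the current public IP using Cloudflare only:

    ./dns.sh --domain=home.example.com --set=pubip --api=cf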


Another little-known Cloudflare tip is to use their Argo product's TryCloudflare feature when you want to open a localhost port to the world for experiments and demos.

It enables painless NAT traversal like ngrok, with the added benefit of Cloudflare's network acceleration and security.

https://developers.cloudflare.com/argo-tunnel/trycloudflare/
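
It's a single command, no account needed, and you get a random throwaway hostname back (the port here is just an example):

    # Expose a local dev server at a random *.trycloudflare.com hostname
    cloudflared tunnel --url http://localhost:8000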


Nice! FWIW, it doesn't seem to support streaming, so large payloads can take a while (the entire response needs to be returned to Cloudflare first, and only then does Argo send the payload back to the client, so you get a double-latency hit).


I definitely recommend anyone check out Argo. I use it in production and it's great.


Having Cloudflare in front is even better than plain DDNS: you can hide your origin IP and avoid being DoSed.

I then use Chrome's SSH extension with nassh-relay[1] to SSH in, so I never need to know the IP of my machine.

I have a modified version of docker-cloudflare[2] that also emails me when a re-IP occurs (~once a year), just in case.

[1] https://github.com/zyclonite/nassh-relay

[2] https://github.com/joshuaavalon/docker-cloudflare


The good news is that these tools can stop using the global API key on Cloudflare and use a scoped token just for this purpose: https://blog.cloudflare.com/api-tokens-general-availability/
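
In practice that means sending a single Bearer token (scopable down to DNS edits on one zone) instead of your account email plus the all-powerful X-Auth-Key header. You can sanity-check a token against the verify endpoint:

    # Scoped tokens replace the old X-Auth-Email/X-Auth-Key header pair
    curl -s https://api.cloudflare.com/client/v4/user/tokens/verify \
         -H "Authorization: Bearer $CF_API_TOKEN"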


Indeed, "but" the minimum token you can create is one that allows its wielder to edit a whole zone's DNS, not just (say) one specific record.


This is great news! I've always wanted to use the API with fail2ban but didn't want to run the risk of using the global API key.


That's cool.

No discredit to the author, but Cloudflare posted about this a few days ago along with a couple of other tools: https://support.cloudflare.com/hc/en-us/articles/36002052451...

See also https://dnsomatic.com/ if you'd prefer to share your info with a third party and not deal with running your own tool.


Somewhat related question: when I added a domain to Cloudflare, it did a great job of importing all of my existing DNS records. Since `dig any` doesn't seem to work anymore, how could they be doing this?


They just have a long list of likely host / service names. It misses things all the time for me, but gets an awful lot correct.
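
You can approximate it yourself with dig (this name list is just a guess at what they probe):

    # Probe common hostnames the way an importer would
    for h in www mail ftp smtp webmail api dev staging; do
        dig +short "$h.example.com" A
    done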


Here's another tool that does this, this one written in Rust: https://github.com/nickbabcock/dness


I wrote a little Go tool a few years back that can use your Digital Ocean account as a Dynamic DNS provider. It has been running on my EdgeRouter ever since with a cron job.

https://github.com/2bytes/dodns


Nice! Very useful, didn't know this was possible.


I have been using https://github.com/oznu/docker-cloudflare-ddns

It supports IPv6, only updates if the IP changes, has multiple IP providers, and a Docker image is available for amd64/arm (Alpine).


Those who don't learn about ddclient are doomed to re-implement it.


And that's despite CF themselves advocating it: https://www.cloudflare.com/technical-resources/#ddclient


I’d be down for ddclient reimplemented in Go


After using a handful of dynamic DNS services, my gut feeling is that you should really be running your own DNS server on a static IP and using that for dynamic DNS. It's pretty much the only way you can be sure something isn't just going to disappear overnight. I guess that's more or less what's going on here, but why bother letting someone else control it?


The point of using a dynamic DNS service is that you don't have a static IP, isn't it?

Services that specifically provide free DDNS may not stick around long, but professional DNS providers with REST APIs change very rarely. I don't think Cloudflare's DNS service is going away any time soon.


Also contributing yet another Cloudflare DDNS updater[0]! It was a neat experiment to play with the API. I've run it in cron and as a systemd timer.

[0]: https://github.com/wyattjoh/cloudflare-ddns


A bash script for the same thing (which I thought originated from Cloudflare, but maybe not):

https://gist.github.com/benkulbertis/fff10759c2391b6618dd


In high school, I had a bash script + cron job that did this exact thing so that I could host my own Minecraft server on a junky old Dell desktop. I learned a ton from that setup about Linux and the web in general.


Or just install the ddns-Cloudflare package on OpenWrt and call it a day.


This is very handy! Bonus points for having it all dockerized too!




