What to do with an AWS account, a domain of my own and the desire to fiddle with some AWS services? One obvious thing is to abandon dynamic DNS services like FreeDNS and host the names of two DSL routers (a Fritz!Box 9490 and a Fritz!Box 3270) in Route 53. So what are the options?
My search turned up three types of options. I’ll discuss them in non-canonical order though: the Bad, the Good and the Ugly.
The Bad – local programs
The simplest solutions crop up in the Google query route53 dyndns: most of them are programs running on a computer behind the router which update Route 53 periodically via the Route 53 API. The simplest ones are shell scripts using the aws(1) CLI commands, triggered periodically by cron.
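A minimal sketch of such a script in Python instead of shell, assuming boto3; the zone ID, record name and the use of checkip.amazonaws.com for IP discovery are my own choices for illustration:

```python
# Sketch of the "local program" approach: run periodically (e.g. by cron)
# on a machine behind the router. Zone ID and record name are hypothetical.
import urllib.request

def change_batch(fqdn, ip, ttl=300):
    """Build the Route 53 ChangeBatch that UPSERTs a single A record."""
    return {"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": fqdn, "Type": "A", "TTL": ttl,
            "ResourceRecords": [{"Value": ip}],
        },
    }]}

def update(zone_id, fqdn):
    import boto3  # the IAM credentials have to live on this client
    ip = urllib.request.urlopen(
        "https://checkip.amazonaws.com").read().decode().strip()
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch=change_batch(fqdn, ip))
```

A crontab line like `*/5 * * * * /usr/local/bin/update-dns` would then poll every five minutes – exactly the kind of busywork criticized below.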
This approach falls short in several ways:
- I don’t want to run additional computers 24/7 just to update the DNS. Fewer moving parts are better.
- I want to access both routers even when no computers are running behind them. The use cases are remote administration, connecting the routers via VPN and accessing files stored directly on them.
- The router knows best when its public IP changes and also the value of the new IP1. There’s no need to poll every five minutes or so.
- The AWS credentials must be on the clients – in hostile territory.
It’s generally a bad idea to distribute IAM credentials, and especially so for Route 53: there is currently no way to allow one client to change only one specific DNS record. The IAM permissions only work on the complete DNS zone:
Not all Route 53 resources support permissions. You can’t grant or deny access to the following resources: […] Individual records […] — Amazon Route 53 - Developer Guide
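The smallest grant one can hand out therefore covers the whole zone. A policy sketch (the zone ID is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "route53:ChangeResourceRecordSets",
    "Resource": "arn:aws:route53:::hostedzone/Z1D633PJN98FT9"
  }]
}
```

A client holding these credentials can rewrite every record in the zone, not just its own.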
Fortunately both Fritz!Boxes support “Dynamic DNS” via some well-known providers but also allow specifying custom URLs with placeholders. This eliminates the additional computer – and with it this whole class of solutions.
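For illustration, a custom update URL could look roughly like this – the placeholder names (<domain>, <ipaddr>, <username>, <pass>) are the ones my boxes’ firmware offers and may differ between firmware versions:

```text
https://dyndns.example.com/nic/update?hostname=<domain>&myip=<ipaddr>&user=<username>&pass=<pass>
```

The box substitutes the placeholders and fires the request whenever its public IP changes.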
So the search goes on.
The Good – “Building a Serverless Dynamic DNS System with AWS”
This great article explains how to set up two Lambdas, the API Gateway and Route 53 for dynamic DNS updates. Configuration is stored in S3 or in DynamoDB (the GitHub version). The article goes the extra mile to explain everything with nice diagrams, and the GitHub repository contains a CloudFormation template to automate the setup.
The bad part (for my use case) is: the protocol is not compatible with anything I’ve encountered so far. The article contains an example request for updating the IP. It looks like this (reformatted for clarity):
https://MY_API_ID.execute-api.us-west-2.amazonaws.com/prod ?mode=set &hostname=host1.dyn.example.com &hash=96772404892f24ada64bbc4b92a0949b25ccc703270b1f6a51602a1059815535
The hash part is the SHA-256 digest of the IP, the FQDN and the secret password. It serves as an authentication token. But the Fritz!Box currently supports only Base64 encoding via the <b64>mydata</b64> placeholder – no hashing.
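For reference, the token is easy to compute with Python’s hashlib – though the concatenation order of the three parts is an assumption here; the article defines the exact scheme:

```python
import hashlib

def auth_hash(ip, fqdn, secret):
    # SHA-256 over IP + FQDN + shared secret (order assumed, see the article)
    return hashlib.sha256((ip + fqdn + secret).encode("utf-8")).hexdigest()
```

Easy in Python – but there is no way to express this in a Fritz!Box placeholder URL.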
The second gotcha: the example request does not feature a query parameter for the IP address. It cannot be recovered from the hash because a hash is not reversible. So it is presumably determined automatically from the source address of the HTTP connection.
The third gotcha pertains to the response: the result of the Lambda is a JSON document, which the Fritz!Box cannot handle.
Another nitpick: the CloudFormation template contains everything in one big file. While this may be convenient for distribution, it is quite bad for development because the code cannot be edited or debugged in an IDE with the appropriate language tooling.
So while the article is great and good for learning some AWS things, this solution requires considerable work upfront to get off the ground with standard clients.
The Ugly – DynDNS53
DynDNS53 is a project by Scott Armitage (aka sTywin) on GitHub. The supported protocol looks good – a subset of the DynDNS protocol also used by Google Domains. This means:
- The request is a plain HTTP GET request; the query parameters contain the IP address and the hostname.
- Authentication is done with Basic Authentication.
- The response is plain text – basically a keyword with at most one parameter.
This is exactly what the Fritz!Boxes support using custom URLs!
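To illustrate the protocol, here is a sketch of a compatible client. The classic /nic/update path is an assumption – DynDNS53 may expose a different path via the API Gateway:

```python
import base64

def update_request(server, hostname, ip, user, password):
    """Build URL and headers for a DynDNS2-style update (plain GET + Basic auth)."""
    url = f"https://{server}/nic/update?hostname={hostname}&myip={ip}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return url, {"Authorization": "Basic " + token}

def parse_response(body):
    """The plain-text response is a keyword with at most one parameter."""
    parts = body.strip().split(None, 1)
    return parts[0], (parts[1] if len(parts) > 1 else None)
```

A successful update answers with something like "good 192.0.2.1", an unchanged IP with "nochg 192.0.2.1" – trivially parseable for router firmware.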
The repository features one AWS Lambda written in Python and instructions to set up the AWS API Gateway, IAM and so on.
This project ticked most of my boxes:
- Serverless – no additional server to manage2
- No additional client code.
- No additional dependencies.
Contrary to the Greathouse project the code is …minimal… Indeed it is so minimal that the configuration is part of the code and the setup instructions are more than twice the size of the code.
Fortunately there are some open pull requests which
- provide a Makefile for setting up and updating the stuff (#5),
- fix a bug (#9),
- update to Python 3 and PEP 8 (#10).
The Makefile is great for the initial setup – setting up the API Gateway using the AWS Console is a duty not for humans but for condemned sinners in hell.
It is good that the pull requests are there and that they make life much easier. But the fact that they are not merged shows that the original author doesn’t maintain the project any more :-( Therefore I have forked it, merged the pull requests and improved the code in other ways.
Some might complain about missing features, some might value the minimalism. I thought it was a good start with room for improvement while learning more AWS stuff.
… and some odd ends
The three solutions above cover most of the ground for providing a few routers with dynamic DNS services. The following solutions didn’t make the cut but are notable in some way:
This one makes a point of not using a Lambda function but only API Gateway features – especially the feature to call an AWS API directly. Technically interesting, but it has limitations regarding authentication and the request format. For example the zone ID must be part of the request URL – clients should not need to know internal IDs like these.
A really small Lambda function! The cons: it is unmaintained and it uses another framework (Chalice) which is from AWS but requires fiddling with Python’s
The README.md features a notable warning to Fritz!Box users that the boxes support only HTTP (no encryption) while API Gateway supports only HTTPS (encrypted).
The point here seems to be management of EC2 instances, which are watched via CloudWatch events. These events trigger the Lambda functions, so the EC2 instances don’t need DynDNS clients at all. That’s an interesting approach but doesn’t cover my use case.
Although these projects didn’t make the cut for various reasons, they show interesting approaches and hence deserve an honourable mention.
So – in the end – did DynDNS53 do the job at hand? As always in life: yes, but…
The plan was to have no additional server to manage. AWS Lambda and API Gateway take care of that. But the API Gateway accepts only contemporary HTTPS, which the older Fritz!Box (the 3270) does not support. The newer one has no problem with that. Therefore a small NGINX server forwards the HTTP requests of the older box to the API Gateway. Which means – a managed server :-(
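The forwarder itself is only a few lines of NGINX configuration – a sketch using the endpoint name from the example above; the server_name is hypothetical, and proxy_ssl_server_name is set so the TLS handshake towards AWS carries the right SNI:

```nginx
server {
    listen 80;
    server_name dyndns.example.com;   # the name the old box calls via plain HTTP

    location / {
        # terminate plain HTTP here, forward via modern TLS to API Gateway
        proxy_pass https://MY_API_ID.execute-api.us-west-2.amazonaws.com/prod/;
        proxy_set_header Host MY_API_ID.execute-api.us-west-2.amazonaws.com;
        proxy_ssl_server_name on;
    }
}
```

Small, but still a server that wants patching and monitoring.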
Programs running behind the router have to do some acrobatics to get the public IP of the router. And getting both the IPv4 and IPv6 address is a small stunt. ↩︎
It is quite interesting to setup an empty webserver as a simple honeypot and – without telling anyone the IP or URL – to watch the constant attacks pelting on every open port. At least for a while. After that it gets quite annoying. ↩︎