TL;DR
- Use Route 53 to register your domain and handle your DNS.
- Use S3 to make two buckets: example.com and www.example.com. The former redirects to the latter.
- Use ACM (AWS Certificate Manager) to manage your HTTPS/SSL cert.
- Use Cloudfront to serve your site over HTTPS.
- Use Hugo / Jekyll or whatever you like to create your site.
- Use awscli to upload your site with your preferred cache settings.
- Laugh all the way to the bank as your blog is safe from the occasional ‘slashdotting’ (high-traffic day) for pennies a month.
Domain registration
First, register your domain using Route 53.
This will automatically create a Route 53 ‘hosted zone’ for your domain, helpfully.
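As a quick optional sanity check (assuming you already have the AWS CLI configured - that’s covered in the Uploading section at the end), you can confirm the hosted zone exists and grab its ID for later:
aws route53 list-hosted-zones --query "HostedZones[].[Name,Id]" --output text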
S3
Next you need to make two S3 buckets.
One bucket will be your ‘apex’ domain (eg example.com). The other bucket will be your www domain (eg www.example.com). The apex will redirect readers to the www subdomain. Some may prefer vice-versa, but it’s trickier to configure CNAME-style aliases on an apex DNS A record, so I’ll leave that to smarter folk than me and stick with www.
To create the buckets, follow these steps:
- Go to the S3 console
- Select whichever region you think is closest to your target audience
- Click ‘Create bucket’
- Bucket name: example.com (this will be your domain name, not literally ‘example’)
- Click ‘Next’ 3 times, leaving the options and permissions to their standard values
- Click ‘Create bucket’
- Repeat the above steps for www.example.com
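If you’d rather script this, a rough CLI equivalent of the two bucket creations looks like the following. The region here is just an example (and note that us-east-1 is special: it doesn’t take a LocationConstraint at all):
aws s3api create-bucket --bucket example.com --region ap-southeast-2 \
  --create-bucket-configuration LocationConstraint=ap-southeast-2
aws s3api create-bucket --bucket www.example.com --region ap-southeast-2 \
  --create-bucket-configuration LocationConstraint=ap-southeast-2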
Next configure the apex bucket (you could have done this while creating the buckets, to be honest; a CLI equivalent is sketched after these steps):
- Click the example.com bucket name (not the checkbox next to it)
- Click Properties > Static website hosting
- Select ‘Redirect requests’
- For ‘Target bucket’ enter www.example.com
- For ‘Protocol’ enter https
- Click Save
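Here’s a sketch of the same redirect configuration done from the command line, if that’s more your thing:
aws s3api put-bucket-website --bucket example.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.example.com","Protocol":"https"}}'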
Next configure the www bucket, enabling static website hosting so it gets a ‘website endpoint’ (we’ll need that endpoint for Cloudfront shortly):
- Click the www.example.com bucket name
- Click Properties > Static website hosting
- Select ‘Use this bucket to host a website’
- For ‘Index document’ enter index.html
- Click Save
You may as well put in some test content at this point while you’re in the S3 console:
- Click the www.example.com bucket
- Click ‘Upload’
- Drag in a simple index.html file, click Next
- Leave the permissions as-is, click Next
- Scroll down to ‘Metadata’
- For ‘Header’ select Cache-Control with the value max-age=300
- Save that row, click Next > Upload
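The same www-bucket setup and test upload can be done from the CLI; a rough sketch (the index.html here is just whatever local test page you have handy):
aws s3 website s3://www.example.com/ --index-document index.html
aws s3 cp index.html s3://www.example.com/index.html --cache-control max-age=300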
Certificate Manager aka ACM
- Go to the ACM console
- Select the ‘N. Virginia’ (us-east-1) region (this is ESSENTIAL: Cloudfront can only use certificates issued in us-east-1)
- If this is your first cert, click Provision > Get started > Request a public cert
- If you have other certs already, click Request a certificate > select Request a public certificate > click Request a certificate
- You should now be on the ‘Request a certificate’ screen with 5 steps listed in the left column
- For ‘Domain name’ enter example.com (not www.example.com)
- Click ‘Add another name’
- For the new domain row enter *.example.com
- Click ‘Next’
- Select ‘DNS validation’, which is very easy because you’re using Route 53
- Skip tags and click ‘Review’
- Click ‘Confirm and Request’
- You should now be in the ‘Validation’ step
- Allow it to add validation CNAMEs to Route 53, by expanding each of the domains and clicking ‘Create record in Route 53’ for each.
- It should show ‘Success: The DNS record was written to your Route 53 hosted zone. It may take up to 30 minutes for the changes to propagate, and for AWS to validate the domain’
- Click ‘Continue’
- It should say “Validation not complete. The status of this certificate request is ‘Pending validation’. No further action is needed from you. Amazon is validating your domain name.”
- Go and do something else for 30 minutes or an hour, get a coffee or something while ACM and Route 53 talk to each other.
- After a while the certificate’s status should display Issued and you can proceed to configuring Cloudfront.
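For reference, here’s roughly what the same request looks like from the CLI; the --region us-east-1 part is the important bit. Note that going this route the handy ‘Create record in Route 53’ button isn’t available, so you’d have to add the validation CNAMEs to Route 53 yourself (the console flow above is easier):
aws acm request-certificate --region us-east-1 \
  --domain-name example.com \
  --subject-alternative-names "*.example.com" \
  --validation-method DNS
# check progress later, using the CertificateArn returned by the previous command
aws acm describe-certificate --region us-east-1 \
  --certificate-arn YOUR_CERT_ARN --query "Certificate.Status"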
Cloudfront
We need to create 2 Cloudfront distributions: one for the ‘apex’ domain, one for www.example.com.
First, create the www distribution:
- Go to the Cloudfront console
- Click Create distribution
- Select Web > Get started
- For ‘Origin domain name’, grab it from S3:
- Open a new browser tab and go to the S3 console
- Click on www.example.com > Properties > Static website hosting and copy the ‘Endpoint’ address without the protocol
- Eg you’ll have www.example.com.s3-website-ap-southeast-2.amazonaws.com
- Careful not to use the REST endpoint that it may suggest in the dropdown box, eg www.example.com.s3.amazonaws.com, as this makes permissions more difficult to configure
- Leave ‘Origin path’ empty
- ‘Origin id’ should be autofilled, leave it as-is.
- For ‘Viewer protocol policy’ select Redirect HTTP to HTTPS
- Leave the caching settings as default
- We’ll configure this in S3 so that the browser gets a cache-control header
- If you try to configure your caching in Cloudfront here instead of in S3, it doesn’t send a cache-control header to the client, so you won’t get reliable/fast client-side caching.
- For ‘Alternate domain names’ enter www.example.com
- Select SSL Certificate > Custom SSL Certificate
- Click in the empty box just beneath and select the appropriate ACM cert which should appear
- This particular interface is broken in Firefox for me; you may need to use Chrome
- Click ‘Create Distribution’
Next, create the apex distribution:
- Click Create distribution
- Select Web > Get started
- For ‘Origin domain name’, grab it from the apex S3 bucket as described above
- It’ll likely be exactly the same, just without the www. prefix
- Eg example.com.s3-website-ap-southeast-2.amazonaws.com
- For ‘Viewer protocol policy’ select Redirect HTTP to HTTPS
- For ‘Alternate domain names’ enter example.com
- Select SSL Certificate > Custom SSL Certificate
- Select your certificate in the box below, as with the other distribution
- Click ‘Create Distribution’
It will take a while for Cloudfront to spin up the distributions; for me it took 18 mins. Maybe go for another coffee break. Wait for their State column to show Enabled.
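You can keep an eye on progress from the CLI while you wait, with something like this (the API’s Status field flips from InProgress to Deployed when a distribution is ready):
aws cloudfront list-distributions \
  --query "DistributionList.Items[].{Domain:DomainName,Status:Status,Alias:Aliases.Items[0]}" \
  --output table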
Route 53 DNS
Next we need to configure Route 53 so that your DNS entries point to Cloudfront.
This has been updated mid-2020 for the new Route53 interface.
- First we need to get the domains from Cloudfront
- Go into the cloudfront distributions list
- Look for the columns ‘Domain Name’ and ‘Origin’.
- For the row where the origin starts with ‘www.’, copy the domain name. I will call this your ‘www cloudfront domain’; it looks like abcdefghijklmn.cloudfront.net
- Do the same for the row where the origin doesn’t start with ‘www’; this is your ‘apex cloudfront domain’
- Go to the Route 53 console
- Select Hosted zones
- Click on the ‘example.com’ link (not the radio button to its left)
- Create the apex IPv4 record set:
- Click ‘Create record’ under the ‘Records’ subheading.
- Select ‘Simple routing’ and click ‘Next’
- Click ‘Define simple record’
- Leave ‘Record name’ empty
- For ‘Value/Route traffic to’ select ‘Alias to Cloudfront distribution’
- Two new fields will now appear beneath, for location and distribution.
- The first field is the location, which can only be left as ‘us-east-1’.
- The second field is ‘Choose distribution’ - here paste the ‘apex cloudfront domain’ from earlier, eg abcdefghijklmn.cloudfront.net
- For ‘Record type’ select A
- Click ‘Define simple record’
- Click ‘Create records’
- Create the apex IPv6 record set:
- Click ‘Create record’
- Select ‘Simple routing’
- Click ‘Define simple record’
- Leave ‘Record name’ empty
- For ‘Value/Route traffic to’ select ‘Alias to Cloudfront distribution’
- In the ‘Choose distribution’ field, paste the ‘apex cloudfront domain’ from earlier, eg abcdefghijklmn.cloudfront.net
- For ‘Record type’ select AAAA
- Click ‘Define simple record’
- Click ‘Create records’
- Create the WWW record set:
- Click ‘Create record’
- Select simple routing
- Click ‘Define simple record’
- For ‘Record name’ enter www
- For ‘Value/Route traffic to’ select ‘IP Address or another value depending on the record type’
- In the box that appears underneath, paste the ‘www cloudfront domain’ from earlier, eg abcdefghijklmn.cloudfront.net
- For ‘Record type’ select CNAME
- For ‘TTL’ select 300 - this is a 5 min cache lifetime. Feel free to adjust this later once everything’s working, if you like.
- Click ‘Define simple record’
- Click ‘Create records’
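For completeness, the same records can also be created from the CLI. Here’s a sketch of the apex A record; YOUR_ZONE_ID is a placeholder for your hosted zone’s ID (aws route53 list-hosted-zones shows it), and Z2FDTNDATAQYW2 is the fixed hosted zone ID that all Cloudfront aliases use. Repeat with Type AAAA for the IPv6 record, and use a plain CNAME record for www:
aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "abcdefghijklmn.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'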
Test all the things
First, test the apex domain:
- Run curl --verbose http://example.com/
- You should see < Location: https://example.com/ to show it’s upgrading you from HTTP to HTTPS
- Run curl --verbose https://example.com/
- You should see < location: https://www.example.com/ to show it’s redirecting you to the www subdomain
- Run curl --verbose http://www.example.com/
- You should see < Location: https://www.example.com/ to show it’s upgrading you from HTTP to HTTPS
- Run curl --verbose https://www.example.com/
- You should see < cache-control: max-age=300 which tells web browsers to cache things for 5 mins
- You should also see < x-cache: Miss from cloudfront on the first run, followed by ‘Hit’ on subsequent runs
- Finally open it in a browser and you should see the index.html you uploaded earlier.
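If you’d rather not eyeball the full curl output, a small loop like this (just a convenience, assuming bash and curl) prints the interesting headers for all four URLs:
for url in http://example.com/ https://example.com/ \
           http://www.example.com/ https://www.example.com/; do
  echo "== $url"
  curl -sI "$url" | grep -iE '^(HTTP|location|cache-control|x-cache)'
done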
Uploading
Once you’ve generated your site with Hugo or Jekyll or Notepad or whatever, you’ll want to upload it. Here’s how I like to make this convenient and cache-friendly:
- Install the AWS command line utilities
- macOS: brew install awscli after installing Homebrew
- Windows/Linux: I’m not sure, sorry
- API keys can be found in the console by clicking your name (top right) > My Security Credentials > Access keys
- Run aws configure to enter your API keys
- Careful not to expire an existing key or it’ll potentially break things for your colleagues!
- Copy your files up:
- Run this command when the current working directory is one up from your site’s root index.html (ie the directory that contains the public folder)
- public (as specified below) is what Hugo calls the folder containing the files to upload; your static site generator’s may be different, use whatever is appropriate
- aws s3 sync public s3://www.example.com --cache-control max-age=300 --exclude ".*"
- The above uses --exclude to skip files like .DS_Store
- The above sets the cache to 300s aka 5 mins to ensure your site is cached and snappy but also quick to update when needed. Adjust if you like.
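Two optional sanity checks I find handy before the real sync: confirm which account and keys the CLI is actually using, and do a dry run to see what would be uploaded without touching the bucket:
aws sts get-caller-identity
aws s3 sync public s3://www.example.com --cache-control max-age=300 --exclude ".*" --dryrun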
You can also manually update cache settings in the S3 console like so:
- Open the S3 console
- Click your www.example.com link
- Select all files
- Click Actions > Change Metadata > Key: Cache-Control; Value: max-age=300 > Save
If you’ve already uploaded your site but forgot to set the cache header, here’s a trick you can use:
aws s3 cp s3://www.example.com/ s3://www.example.com/ --recursive --metadata-directive REPLACE --cache-control max-age=300
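To tie it all together, here’s a rough deploy script along the lines of what I use. It assumes Hugo (swap in your generator’s build command), and YOUR_WWW_DISTRIBUTION_ID is a placeholder for the ID of the www distribution from the Cloudfront console. The invalidation at the end is optional given the 5-minute cache, but it makes updates visible immediately:
#!/usr/bin/env bash
set -euo pipefail

# build the site into ./public (Hugo's default output folder)
hugo

# upload, skipping dotfiles and setting the 5-minute cache header
aws s3 sync public s3://www.example.com \
  --cache-control max-age=300 --exclude ".*"

# optional: tell Cloudfront to drop its cached copies straight away
aws cloudfront create-invalidation \
  --distribution-id YOUR_WWW_DISTRIBUTION_ID --paths "/*"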
Phew, that was long! I sincerely hope I didn’t miss anything.
Thanks for reading, I hope this helps someone, and have a great week!
Photo by NASA on Unsplash