
Going serverless: Converting Yield.IO to AWS and Lambda

Yield.IO is a simple web app I wrote primarily to test new technologies. The US Treasury publishes bond yields daily, and Yield.IO polls for those updates and charts the latest bond yields. It started out as a Node.js app with a jQuery frontend, but I recently migrated the UI to React.

I wanted to add SSL support and looked into using Let’s Encrypt (which provides free certificates) with Node.js, but since AWS also provides free SSL certificates for applications, I decided to migrate the app to Lambda and create a “serverless” application.

Yield.IO is a relatively straightforward application to migrate to Lambda because the data is public and there are no authenticated interfaces. In my (albeit limited) experience, authentication with Cognito adds a significant amount of complexity to the API layer.

In the Node.js app, yield data is stored in a JSON file and served directly from memory. The server periodically checks for data updates; when new yields are found, it writes the changes to local storage and then updates a Twitter feed with the new daily data.
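
In rough outline, the original server looked something like the sketch below, where fetchTreasuryYields() and postToTwitter() are hypothetical stand-ins for the actual polling and Twitter code:

    const fs = require('fs');

    const DATA_FILE = './yields.json';      // local storage for the published yields
    const POLL_INTERVAL = 60 * 60 * 1000;   // re-check the Treasury roughly hourly

    let yields = JSON.parse(fs.readFileSync(DATA_FILE, 'utf8')); // served from memory

    async function poll() {
      const latest = await fetchTreasuryYields();   // hypothetical: pull the latest daily rates
      if (latest.date !== yields.date) {
        yields = latest;                                      // update the in-memory copy
        fs.writeFileSync(DATA_FILE, JSON.stringify(yields));  // persist to local storage
        await postToTwitter(latest);                // hypothetical: announce the new daily data
      }
      setTimeout(poll, POLL_INTERVAL);              // schedule the next check
    }

    poll();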

Plan

To convert to Lambda, I planned to make the following changes:

  • Serve client and static files from S3

All HTML/CSS/JS for the UI will be served from an S3 bucket.

  • Store and serve JSON from S3

Along with the UI application code, the application data will also be served from S3.

  • Replace the custom webpack configuration with create-react-app

Since the front-end and back-end are both served with Node.js, I am using a custom webpack configuration. I will replace this with a standard create-react-app configuration.

  • Generate wildcard SSL certificates with Certificate Manager

The whole point of this exercise is to implement SSL for Yield.IO. Certificate Manager provides free certificates.

The only way to serve an S3 bucket over SSL on your own domain is to deploy the certificate to CloudFront and proxy to S3. This creates a new HTTP endpoint called a distribution.

  • Use Route 53 for DNS

This allows AWS to manage DNS for the CloudFront distribution.

  • Convert the server code to a Lambda function that is triggered by a Scheduled Event to check for data updates

This is the most involved step, as it requires code changes to convert a Node.js service that reads and writes local storage into a Lambda function that reads and writes S3.

Implementation

Converting my custom webpack configuration to create-react-app turned out to be painless. I only had to add custom build steps to compile my SCSS files, as per the excellent documentation.
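
For reference, the relevant package.json scripts look roughly like the example in the create-react-app documentation; this sketch assumes the node-sass-chokidar and npm-run-all packages that the docs recommend:

    {
      "scripts": {
        "build-css": "node-sass-chokidar src/ -o src/",
        "watch-css": "npm run build-css && node-sass-chokidar src/ -o src/ --watch --recursive",
        "start-js": "react-scripts start",
        "start": "npm-run-all -p watch-css start-js",
        "build-js": "react-scripts build",
        "build": "npm-run-all build-css build-js"
      }
    }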

Getting S3, Certificate Manager, CloudFront, Route 53, and Lambda to all work together took a little more effort. I initially failed to create an SSL certificate that covers the bare domain plus all subdomains. I have made this mistake before, and it can be time-consuming to fix (more on that later). You have to add *.domain.com plus domain.com to the certificate, otherwise browsers will generate errors when using the bare domain.

I found that CloudFront has a few peculiarities. First, there is a bug: when you select an S3 endpoint in the UI that is not in the US-EAST-1 region, the URL it fills in is wrong. I had to copy my endpoint URL (yield.io.s3-website-us-west-2.amazonaws.com) from S3 and set it manually.

CloudFront also does not integrate with S3 to determine when its cache should be invalidated, so after S3 is updated, the cache needs to be invalidated manually with the API. I found that even after I instructed CloudFront to clear its cache, it still held onto previous versions.
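
For reference, a minimal sketch of the manual invalidation call with the Node.js SDK (the distribution ID is a placeholder):

    const AWS = require('aws-sdk');
    const cloudfront = new AWS.CloudFront();

    async function invalidateCache() {
      await cloudfront.createInvalidation({
        DistributionId: 'EXXXXXXXXXXXXX',          // placeholder distribution ID
        InvalidationBatch: {
          CallerReference: Date.now().toString(),  // must be unique per invalidation request
          Paths: { Quantity: 1, Items: ['/*'] }    // invalidate every cached path
        }
      }).promise();
    }

    invalidateCache().catch(console.error);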

On top of that, CloudFront takes 20 minutes or more to deploy changes to what it calls "distributions." If you are testing out a new configuration, this significantly slows down your progress. For instance, I deployed a distribution with an SSL certificate only to find out I had generated the certificate incorrectly. Fixing it required two changes to the CloudFront distribution, plus the time to generate and validate a new certificate. That was one of the lengthier steps in the process.

But with CloudFront you get great performance and features like HTTP/2 support for no extra work. I haven't done exact measurements, but the site feels like it loads much faster from CloudFront than it did from my previous DigitalOcean instance running Node.js.

The final piece of the hosting puzzle was to create an alias record in Route 53 pointing to my CloudFront distribution.
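
In case it is useful, here is a rough sketch of the same alias record created through the SDK. The hosted zone ID and distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets:

    const AWS = require('aws-sdk');
    const route53 = new AWS.Route53();

    async function createAlias() {
      await route53.changeResourceRecordSets({
        HostedZoneId: 'ZXXXXXXXXXXXXX',                  // placeholder: the yield.io hosted zone
        ChangeBatch: {
          Changes: [{
            Action: 'UPSERT',
            ResourceRecordSet: {
              Name: 'yield.io.',
              Type: 'A',
              AliasTarget: {
                DNSName: 'dxxxxxxxxxxxx.cloudfront.net', // placeholder: the distribution's domain
                HostedZoneId: 'Z2FDTNDATAQYW2',          // CloudFront's fixed alias hosted zone ID
                EvaluateTargetHealth: false
              }
            }
          }]
        }
      }).promise();
    }

    createAlias().catch(console.error);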

My Node.js app polled the Treasury for data updates using a setTimeout() call. I converted this to a Scheduled Event in Lambda. This was fairly straightforward: I had to convert local storage calls to calls to the S3 API, and then make sure my Lambda function had the proper permissions. I also had to invalidate CloudFront after the JSON file was updated, or else the stale version would remain cached.
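
Stripped down, the resulting function looks something like the sketch below. fetchTreasuryYields() is again a hypothetical stand-in for the actual Treasury polling code, and the bucket, key, and distribution ID are placeholders:

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();
    const cloudfront = new AWS.CloudFront();

    const BUCKET = 'yield.io';          // placeholder bucket name
    const KEY = 'yields.json';          // placeholder key for the published JSON

    exports.handler = async () => {
      const latest = await fetchTreasuryYields();   // hypothetical: pull the latest daily rates

      // Read the currently published yields from S3 instead of local storage.
      const current = JSON.parse(
        (await s3.getObject({ Bucket: BUCKET, Key: KEY }).promise()).Body.toString()
      );
      if (latest.date === current.date) return;     // no new daily data

      // Write the updated JSON back to S3.
      await s3.putObject({
        Bucket: BUCKET,
        Key: KEY,
        Body: JSON.stringify(latest),
        ContentType: 'application/json'
      }).promise();

      // Invalidate the cached copy in CloudFront (same call as shown earlier).
      await cloudfront.createInvalidation({
        DistributionId: 'EXXXXXXXXXXXXX',           // placeholder distribution ID
        InvalidationBatch: {
          CallerReference: Date.now().toString(),
          Paths: { Quantity: 1, Items: ['/' + KEY] }
        }
      }).promise();
    };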

The edit, run, debug cycle is a bit tedious. You can approximate the execution of Lambda functions locally using Node.js, but testing the function in the full environment requires zipping the changes and dependencies and re-running the function in the AWS console.
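
A minimal local harness, assuming the handler is exported from index.js, is just a script that invokes it with an empty event the way the Scheduled Event trigger would:

    const { handler } = require('./index');   // assumes the handler is exported from index.js

    handler({})
      .then(() => console.log('done'))
      .catch(err => console.error(err));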

In the end, it took one long day to convert from Node.js to Lambda.

What I like

  • Free SSL certificates
  • Performance. Serving from CloudFront is fast.
  • No shell accounts to maintain and upgrade.
  • Cost. The pay-as-you-go model is awesome for low-volume side projects like Yield.IO. I expect the costs will be only a few dollars per month.

What I don’t like

  • The sea of AWS console browser tabs

It took six different services to configure a simple one-page web app. AWS feels like a sea of disparate tools rather than a unified whole.

  • When using the S3 upload UI you have to constantly configure permissions

There needs to be an easier way to automate deployments. I ended up using the UI, but it required changing permissions every time I uploaded a new version to S3.
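
One way to script this with the SDK is to set the object ACL at upload time, so no follow-up permission changes are needed (bucket name and paths are placeholders):

    const fs = require('fs');
    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    async function deploy(localPath, key, contentType) {
      await s3.putObject({
        Bucket: 'yield.io',               // placeholder bucket name
        Key: key,
        Body: fs.readFileSync(localPath),
        ContentType: contentType,
        ACL: 'public-read'                // set permissions at upload time
      }).promise();
    }

    // e.g. deploy('build/index.html', 'index.html', 'text/html');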

  • Takes a long time to deploy CloudFront distributions

I understand the complexity of updating CloudFront, but the long deployment times make testing a slow process.

  • Edit/Upload/Test cycle with Lambda

It would be beneficial if there was a better way to edit and test Lambda functions locally. Again, like S3, the deployment process needs better automation.

Conclusion

I think serverless is the future of application development, but it is still early days. AWS is a powerful environment, but it feels like a bunch of services that are loosely held together rather than a coherent whole. With that said, the performance and cost of the final product may be worth dealing with the idiosyncrasies of the platform in the short term.

Postscript

After deploying the changes, I realized that CORS was not working properly on the hosted JSON file through CloudFront even though I had enabled CORS on the S3 bucket. Debugging CloudFront distributions is a tedious process, as every change requires about 20 minutes to deploy.

I eventually narrowed the problem down to the following: in the "Behaviors" for the distribution, the OPTIONS method must be enabled. This allows OPTIONS calls to be passed through to S3.

Post-Postscript

I thought I had solved my CORS issues by enabling the OPTIONS call in CloudFront, but a couple of days later CORS stopped working. I was a bit confused by the documentation, which recommends whitelisting CORS headers, but I eventually found the setting.

In the "Behaviors" for a Distribution, there is a setting called: "Cache Based on Selected Request Headers." This has to be set to "whitelist." I then added the CORS specific headers to the whitelist including: "Access-Control-Request-Headers", "Access-Control-Request-Method", and "Origin," which, again, resolved the CORS issue.
