Our team started using the Amplify framework in the autumn of 2020, when we kicked off development of a new internal web portal. The portal is a cloud-deployed, fully serverless website intended to help our system engineers and IT operations folks automate their daily routines.
It was something of an experiment: no one on our team or the neighboring teams had any hands-on experience with Amplify. The experiment didn't go well, and a month ago we ended up removing all of the framework's code from our Git repo. The main reason was the loss of dozens and dozens of development man-days spent solely on struggling with Amplify: its bugs, instability, verbosity, and boilerplate code.
This article is a summary of our team's experience with the Amplify framework and something of a warning for other developers. I'll start with a brief description of our portal's architecture, continue with a detailed account of the difficulties we faced while using Amplify, and finish with a short discussion of its alternatives.
Amplify and Its Main Features
AWS Amplify is a mix of tools, frameworks, and services intended to speed up the development, deployment, and testing of cloud web apps and services. It has a ton of features and capabilities right out of the box. Some of them may be useful for your app, some may not. The decisive factors in our case were:
1) Amplify's ability to generate a lot of boilerplate code for standard things, like adding a new Lambda function or a new API Gateway to your project and quickly wiring them to DynamoDB. It's just a matter of running a couple of console commands.
2) Amplify's environment-focused design. When you start a new Amplify project, it doesn't create an environment with cloud resources in your AWS account; it creates a CloudFormation environment template instead. So any developer on your team can spin up their own development environment in just a few console commands and get their own dedicated set of cloud resources: their own Lambdas, their own APIs, their own DynamoDB tables, etc. In practice, you operate Amplify environments in much the same way as you operate Git branches in your repository.
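The workflow behind both points looks roughly like this (a sketch, not a full transcript: each `amplify` command walks you through an interactive wizard and needs an AWS account configured):

```shell
# Create a personal development environment backed by its own
# CloudFormation stack (the CLI prompts for a name, e.g. dev-alice)
amplify env add

# Generate boilerplate for a new Lambda function and a REST API,
# wired together through the interactive prompts
amplify add function
amplify add api

# Provision the generated resources in your AWS account
amplify push
```

Each developer who runs `amplify env add` gets an isolated stack, which is what makes the Git-branch analogy work.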
We decided to go with a typical architecture for a fully serverless AWS-hosted website:
- A single-table DynamoDB design for storing non-sensitive data
- AWS Secrets Manager for storing credentials, keys, hashes, etc.
- Lambda functions for running API request handlers, business logic, and data-access code
- AWS API Gateway for hosting API endpoints
- S3 storage for static website content
- A ReactJS frontend
- Amazon Cognito for user management, authentication, and authorization.
The good part about Amplify is that it can quickly generate all the boilerplate code for this kind of architecture. The bad part is the rest of this article :)
Amplify uses CloudFormation template files to provision your project's cloud resources, and they are VERY verbose. Just check this JSON template of a Lambda function. It only listens to an SQS queue and makes a couple of DynamoDB requests per message received.
A very simple thing, yet the function template is 300 lines of code. Add the source code of the function itself, plus a few other auxiliary files generated by Amplify (like parameters.json or event.json). Now imagine you have 20 such functions: that's over 6,000 lines of JSON template code in your source files. And what if your project is big and runs a couple of hundred Lambdas?
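The arithmetic behind that estimate is simple; a quick sketch, assuming ~300 lines per generated template, as in our project:

```shell
# Back-of-the-envelope estimate of generated template bulk:
# ~300 lines of CloudFormation JSON per function template
FUNCTIONS=20
LINES_PER_TEMPLATE=300
TOTAL=$((FUNCTIONS * LINES_PER_TEMPLATE))
echo "$TOTAL lines of template JSON for $FUNCTIONS functions"
```

At a couple of hundred Lambdas, the same math lands you at 60,000+ lines of templates alone.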
You may say that Amplify generates the major parts of these template files for you. I agree, but you still need to thoroughly examine and carefully modify the generated code to integrate it into your project or to add custom parts. Just check this example to see how many steps a developer needs to take to create such an SQS-handler Lambda in their project.
Rigid folder structure, too many files
Every Lambda function you add with "amplify add function my-func-name" generates two folders (the function folder and a nested /src) and nine files inside them, plus your custom source code files. Now multiply that by the number of Lambdas in a typical serverless project. Even 10 functions (which is very few) will bury you in 200+ files and 40+ directories.
The next issue is Amplify's rigid folder structure and the fixed file and parameter names of your templates. Just try renaming the CloudFormation file from "myfunctionname-cloudformation-template.json" to something shorter and more readable, and Amplify will immediately break with an error message that has little to do with the real source of the problem.
The official Amplify GitHub project has over 2,000 open issues across its repositories. And that's because it is really, really buggy. Here are just a few of my favorite errors we faced while using Amplify:
- How do you like a CORS error that starts happening during local debugging because Amplify silently upgraded its minor version after an npm install? In our case it was somewhere between versions 4.46.1 and 4.50. Every call from the local browser to your development environment's API failed with a CORS error. We couldn't find the source of the error, so we forcibly downgraded the framework to the last stable version, 4.46.1, and the error went away.
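A mitigation that would have saved us here (a general npm practice, not official Amplify guidance) is pinning the CLI to an exact version so an `npm install` can never silently drift to a newer minor release:

```shell
# --save-exact writes "4.46.1" (no ^ range) into package.json, so a
# later npm install cannot pull in a newer minor version
npm install --save-dev --save-exact @aws-amplify/cli@4.46.1
```
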
- Somewhere around versions 4.48–4.50, Amplify changed its template file format, so the first developer who unluckily updated their local Amplify got 6,000 added and 6,000 removed configuration lines and broke the CI/CD build pipeline. Our GitHub PRs were hooked up to the Amplify cloud build environment, and Amazon's build servers hit an error while processing the new configuration files. The result was another loss of several work hours and a forced revert to the stable 4.46.1. In my view, issues like these should never be caused by a minor version upgrade.
- Somewhere deep within Amplify's auto-generated CLI code, this weird guy emerges: “ENOENT: no such file or directory, stat ‘/codebuild/output/src183976200/src/projectname/amplify/.temp/#current-cloud-backend/function/shared/lib/nodejs/node_modules/.bin/gp12-pem’ “. Sometimes it happens on the build server, sometimes on a local machine. Google knows nothing about it. We lost several hours trying to resolve the issue and ended up killing the Amplify environment and pushing the code changes into a brand-new one.
- The last bug example is cleaning up an Amplify environment's resources after you delete the environment. The CLI command “amplify env remove <env name>” reports success; you see no errors and think all is good. Yes? No. Amplify doesn't delete the S3 bucket with the deployment zip archives, and it sometimes silently fails to delete other resources such as API Gateway endpoints, IAM roles, and nested CloudFormation stacks. So you have to go to the AWS console and clean them up manually. These manual interventions become especially annoying if your AWS account has S3 bucket or API Gateway endpoint limits.
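At least the leftovers can be removed from the command line instead of the web console. A rough sketch using the AWS CLI (the bucket and stack names below are illustrative; check what `amplify env remove` actually left behind in your account):

```shell
# Find buckets and stacks the removed environment may have left behind
aws s3 ls | grep amplify
aws cloudformation list-stacks --stack-status-filter DELETE_FAILED

# Force-delete a leftover deployment bucket (empties it first)
aws s3 rb "s3://amplify-myproject-dev-12345-deployment" --force

# Retry deletion of a stuck CloudFormation stack
aws cloudformation delete-stack --stack-name "amplify-myproject-dev-12345"
```
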
OK, let's say I've convinced you to avoid Amplify for your next AWS serverless project. What are the alternatives?
- serverless.com — a very convenient and simple framework with great documentation. It works best when you have many independently deployed microservices accessed over a REST API. The main drawback of serverless.com is its limited support for advanced scenarios. For example, imagine you have a Kinesis stream, SNS and SQS publishers/subscribers automatically wired to it, Lambda message handlers triggering a cascade of Step Functions workflows, multiple DynamoDB tables, S3 Glacier, etc. The Serverless Framework won't help you much here. But the next guy definitely will.
- Pulumi — I don't have hands-on experience with it, so I can't tell you much here.
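To give a feel for the Serverless Framework's brevity: here is a minimal sketch of a serverless.yml that deploys a single HTTP-triggered Lambda (the service and handler names are illustrative):

```yaml
service: portal-api

provider:
  name: aws
  runtime: nodejs14.x

functions:
  hello:
    handler: handler.hello   # exports.hello in handler.js
    events:
      - http:               # creates an API Gateway endpoint
          path: hello
          method: get
```

Compare these ~14 lines with the 300-line CloudFormation template Amplify generates for a comparable function.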
You may want to check this nice article for a detailed discussion of choosing a framework for your cloud provider.
Don't get me wrong: I like the Amazon cloud platform, but not all of its parts are equally great, and AWS Amplify is definitely one of the weaker ones. Hopefully I've warned you enough to be cautious about it and to avoid the mistakes my team made. Thanks for reading!