Testing is crucial in any software project. When shifting to a serverless world, we need to accept and embrace multiple paradigm shifts, which also affect how we test our applications. By testing on multiple layers, we can drastically increase our confidence that releasing code will have minimal impact on the availability and stability of the software we develop.
This workshop consists of multiple independent modules which can be done in any order. The modules are:

- Unit tests (`unit-tests`)
- Local testing (`local-testing`)
- Integration tests (`integration-tests`)
- End-to-end tests (`e2e-tests`)
- Testing in production (`testing-in-production`)
For some exercises, you need to have certain tools installed. These will be highlighted at the beginning of the respective exercise.
In the Function-as-a-Service (FaaS) realm, unit tests are rather straightforward. The function as the unit under test has a clear interface consisting of inputs (function parameters) and outputs (function return value). We can therefore easily mock any dependencies and assert the expected outputs for the respective input values.
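To illustrate the pattern, here is a minimal sketch of such a handler and a Jest test for it. The file names and the `jokes` data-access module are hypothetical stand-ins, not the workshop's actual code; only the error shape mirrors the exercise below.

```js
// handler.js — a hypothetical FaaS handler with a clear input/output contract
const jokes = require("./jokes"); // data-access dependency, easy to mock in tests

exports.handler = async (event) => {
  // Inputs arrive as function parameters...
  if (!event.jokeID) {
    // ...and outputs are plain return values, so error cases are easy to assert
    return { Error: "no jokeID provided" };
  }
  return { Joke: await jokes.getByID(event.jokeID) };
};
```

```js
// handler.test.js — the dependency is mocked away, so only the unit is exercised
jest.mock("./jokes", () => ({
  getByID: jest.fn().mockResolvedValue("Hello funny world"),
}));
const { handler } = require("./handler");

test("returns the joke for a given jokeID", async () => {
  const result = await handler({ jokeID: "1" });
  expect(result.Joke).toBe("Hello funny world");
});
```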
Requirements: You need to have Node.js installed.
- Take a look at the function defined in `unit-tests` and understand what it does.
- Investigate and run the unit tests in the directory by first running `npm install` and then `npm test`.
- Add a unit test that checks correct error handling of the function in case no `jokeID` is provided.
Solution

```js
test("Input errors are handled", async () => {
  const result = await handler({});
  expect(result).toBeDefined();
  expect(result.Error).toBe("no jokeID provided");
});
```
Local development for more complex applications can be tedious if we don't have access to the tools we know and love. For web applications, for example, it's useful if we can use cURL or similar HTTP clients to directly hit our application running locally and verify different scenarios. A nice way to achieve this locally and gain the benefits of being able to develop with our favorite tools is to use wrappers which run our code as a normal web application locally and as a function which understands API Gateway requests when it's running in a serverless context (e.g., AWS Lambda).
In Node.js, Express is a popular framework for building web applications. It can easily be wrapped using a third-party library and thus function transparently in a Lambda context.
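As a sketch of what such a wrapper looks like with `serverless-http` (the file layout here is illustrative; the exercise's actual application may be structured differently):

```js
// app.js — a plain Express application, unaware of Lambda
const express = require("express");
const app = express();
app.use(express.json());
app.get("/", (req, res) =>
  res.json({ message: `Hello ${req.body?.name || "world"}` })
);
module.exports = app;
```

```js
// lambda.js — the same app exported as a Lambda handler via serverless-http
const serverless = require("serverless-http");
const app = require("./app");
module.exports.handler = serverless(app);
```

```js
// server.js — the same app run locally, reachable with curl on port 8080
const app = require("./app");
app.listen(8080, () => console.log("listening on :8080"));
```

Locally we develop against `server.js` with our usual HTTP tooling; in AWS, the wrapper translates API Gateway events into ordinary Express requests.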
Requirements: You need to have Node.js installed.
- Read up on `serverless-http` and understand how it works
- Check out the example application in `local-testing` and investigate how it uses the serverless-http framework
- Run the application locally by running `npm install` and then `npm start`
- Send an HTTP request to the app: `curl -X GET localhost:8080 -H 'Content-Type:application/json' -d '{"name":"Alice"}'`
- Deploy the app to AWS Lambda and hook it up with API Gateway.
- Research how you could do something similar with the web framework and programming language of your choice
Integration testing is crucial to being confident that your application behaves as expected towards its peripheral systems and environments. When working with serverless services, this is usually not so easy: those services are highly abstracted and mostly closed source, so we cannot just spin them up on our local computer or in a CI environment. A good alternative is LocalStack, as it provides high-quality emulations of the APIs of many serverless services. By using it, we can spin up a dummy environment in almost no time, run tests against it, and delete it again. Even though these tests don't give us 100% certainty, because the emulation may be faulty, they can drastically increase our confidence before deploying to actual infrastructure.
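As a sketch of what such a test can look like, the AWS SDK is simply pointed at the LocalStack endpoint instead of real AWS. The client options follow the AWS SDK for JavaScript v3; the table name and item match the exercise below, everything else is illustrative:

```js
// dynamodb.test.js — an integration test running against LocalStack
const {
  DynamoDBClient,
  PutItemCommand,
  GetItemCommand,
} = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({
  endpoint: "http://localhost:4566", // LocalStack edge endpoint
  region: "us-east-1",
  credentials: { accessKeyId: "test", secretAccessKey: "test" }, // dummy credentials
});

test("jokes can be written and read back", async () => {
  await client.send(new PutItemCommand({
    TableName: "jokes",
    Item: { ID: { S: "1" }, Text: { S: "Hello funny world" } },
  }));
  const { Item } = await client.send(new GetItemCommand({
    TableName: "jokes",
    Key: { ID: { S: "1" } },
  }));
  expect(Item.Text.S).toBe("Hello funny world");
});
```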
Requirements: You need to have either Docker (including docker-compose) or Podman (including podman-compose) installed.
- Take a look at the introduction to LocalStack by reading their overview documentation.
- Investigate the `docker-compose.yml` file in the `integration-tests` directory and understand how it's set up
- Run `docker compose up -d` (or `podman compose up -d` for Podman) and visit localhost:4566/_localstack/health to verify all services are available.
- Run `aws --endpoint-url http://localhost:4566 dynamodb create-table --table-name jokes --attribute-definitions AttributeName=ID,AttributeType=S --key-schema AttributeName=ID,KeyType=HASH --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1` to create the `jokes` table locally.
- Run `aws --endpoint-url http://localhost:4566 dynamodb list-tables` to verify it has been created.
- Run `aws --endpoint-url http://localhost:4566 dynamodb put-item --table-name jokes --item '{"ID":{"S":"1"},"Text":{"S":"Hello funny world"}}'` to insert a joke into the newly created table.
- Run `aws --endpoint-url http://localhost:4566 dynamodb scan --table-name jokes` to verify it has been inserted.
End-to-end tests require a whole environment to be present. That environment should be as similar as possible to the final production environment the application will run in. Infrastructure as Code allows us to do so by having a clearly declared definition of what an environment looks like. Using that definition, we can spin up ephemeral environments, run our end-to-end tests, and then tear them down again. This can usually be done at very low cost, as almost all serverless services are billed on a pay-as-you-go model.
As soon as we have our infrastructure defined cleanly as code, we can use a tool like Terratest to apply the Terraform code in an automated way using random resource suffixes to prevent name clashes. Terratest then checks certain assertions on the provisioned infrastructure and afterward tears it down again. This can be achieved by using known tools and a mature environment with Go and Terraform as its backbones.
Requirements: You need to have Go and either Terraform or OpenTofu installed.
- Take a look at the infrastructure code present in `e2e-tests` and understand what infrastructure gets provisioned.
- Investigate the Terratest tests and run them by running `make test`.
- Add another assertion that sends an HTTP request to our function and checks if it gets a response with the status code `200`. Note that Terratest provides an `http-helper` package to facilitate that.
Solution

```go
// Read the deployed API's URL from the Terraform outputs
invokeURL := terraform.Output(t, terraformOptions, "invoke_url")
expectedStatusCode := http.StatusOK
statusCode, _ := httphelper.HttpGet(t, invokeURL+"jokes/1", nil)
if statusCode != expectedStatusCode {
	t.Errorf("Expected status code to be %v, got %v", expectedStatusCode, statusCode)
}
```
Many FaaS platforms allow performing canary deployments. With these, we don't release a new version of our software to all users at once. Rather, we first release it to a small percentage of them and then gradually increase that percentage. This is a very controlled process that allows us to roll back on failures or increased error rates, and to identify regressions which have slipped through our net of automated testing before they reach too many clients. This can give us a final boost of confidence when releasing and deploying new versions of our software.
- Get familiar with how AWS CodeDeploy works by reading through their How it works guide.
- Investigate the Terraform resources defined in testing-in-production and understand what they do.
- Navigate to the function code
- Install the function's dependencies with `npm install`
- Navigate to your Terraform module
- Init and apply the infrastructure code
- Change something about the function code and apply again to publish a new version (notice the `publish: true` flag in `function.tf`)
- Visit the CodeDeploy UI
- Choose your application
- Click "Create deployment" and pick "Use AppSpec editor" with "YAML"
- Enter the following code into the text field (replacing `RESOURCE_SUFFIX` with the suffix you chose):

  ```yaml
  version: 0.0
  Resources:
    - my-function:
        Type: AWS::Lambda::Function
        Properties:
          Name: "canaryRESOURCE_SUFFIX"
          Alias: "production"
          CurrentVersion: "1"
          TargetVersion: "2"
  ```

- Click "Create deployment"
- You can now observe in real time how your `production` alias gets switched gradually from version 1 to version 2 using a canary deployment
- Implement a CodeDeploy deployment for one of the functions you created (a programmatic sketch follows after this list). You can follow this tutorial if you get stuck.
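The steps above drive the canary deployment through the CodeDeploy console. As a sketch of how the same deployment could be triggered programmatically with the AWS SDK for JavaScript v3 (the application and deployment group names are placeholders for whatever your Terraform module actually created):

```js
// create-deployment.js — trigger a canary deployment of a Lambda alias via CodeDeploy
const {
  CodeDeployClient,
  CreateDeploymentCommand,
} = require("@aws-sdk/client-codedeploy");

const client = new CodeDeployClient({ region: "us-east-1" });

// The same AppSpec as in the console step, expressed as JSON
const appSpec = JSON.stringify({
  version: 0.0,
  Resources: [{
    "my-function": {
      Type: "AWS::Lambda::Function",
      Properties: {
        Name: "canaryRESOURCE_SUFFIX", // replace with your function name
        Alias: "production",
        CurrentVersion: "1",
        TargetVersion: "2",
      },
    },
  }],
});

async function main() {
  const { deploymentId } = await client.send(new CreateDeploymentCommand({
    applicationName: "my-codedeploy-app",       // placeholder
    deploymentGroupName: "my-deployment-group", // placeholder
    revision: {
      revisionType: "AppSpecContent",
      appSpecContent: { content: appSpec },
    },
  }));
  console.log("started deployment", deploymentId);
}

main();
```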