
Deploying Next to Lambda

A summary of knowledge on deploying Next.js to AWS Lambda.

TL;DR: Next.js in standalone mode + the Lambda Web Adapter (LWA) layer = deployment behind a Function URL.

Problems

Network adapter

Next.js expects ordinary HTTP request and response objects, whereas AWS Lambda (together with other AWS services) provides an event object. This mismatch makes translating incoming requests and outgoing responses problematic.

Resources and solutions:

Solution

Based on AWS's Rust adapter solution.

  • A Rust-based wrapper bundled into a Layer, which taps into the Runtime API and translates HTTP traffic from/to Lambda events.
  • On Lambda init, the Runtime extension fires up the Next server and waits for the server to be ready before forwarding Lambda events.

See: lambda-server-adapter
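To illustrate what the adapter layer does, here is a minimal sketch of mapping a Function URL event onto options for a local HTTP request. The real adapter is written in Rust and ships as a layer; the function name and the port below are assumptions for illustration only.

```typescript
// Sketch of the translation the adapter performs. `eventToRequestOptions`
// and port 3000 are hypothetical; the real work happens inside the layer.
interface FunctionUrlEvent {
  rawPath: string;
  rawQueryString: string;
  headers: Record<string, string>;
  requestContext: { http: { method: string } };
}

// Map a Lambda Function URL event onto http.request() options targeting the
// Next standalone server that the extension started on localhost.
function eventToRequestOptions(event: FunctionUrlEvent) {
  const path = event.rawQueryString
    ? `${event.rawPath}?${event.rawQueryString}`
    : event.rawPath;
  return {
    hostname: "127.0.0.1",
    port: 3000, // assumed port of the standalone Next server
    method: event.requestContext.http.method,
    path,
    headers: event.headers,
  };
}
```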

Cold starts

The Next server takes quite a bit of time to fire up at the beginning. Vercel is working

  • TBD describe solution,

Caching

Next uses a variety of caches and mechanisms to improve performance. This results in a rather over-complicated CloudFront config. An additional problem comes from Lambda's non-writable storage: Next will sometimes try to write to its cache, and on Lambda this write can fail, which does not crash the app but costs performance.

EFS is complicated to set up and requires a VPC; S3 is the better option. One way to handle this globally is to patch the FS functions with custom ones. There are multiple places where Next uses memory and/or the filesystem to save data:

  • next/image uses it to save optimized images,
  • ISR uses it to invalidate/cache data.

To ensure things work properly, we do the following:

  • experimental.isrMemoryCacheSize is set to 0 to turn off the in-memory cache for ISR,
  • the entrypoint JS file patches FS before initializing the Next server (see: https://github.com/sladg/doc-next-lambda/blob/master/s3fs.ts),
  • we set the CACHE_BUCKET_NAME env var on the Lambda, pointing to a bucket with read-write access that the function can use as cache.
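The first of those settings translates to a config along the following lines, a sketch that only shows the two options discussed here (everything else is left at defaults):

```typescript
// next.config.js sketch. `output: "standalone"` produces the self-contained
// server used in the deployment; `isrMemoryCacheSize: 0` disables the
// in-memory ISR cache so the S3-backed cache is the only source of truth.
module.exports = {
  output: "standalone",
  experimental: {
    isrMemoryCacheSize: 0,
  },
};
```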

Patching node:fs is wrong, as it hugely affects performance; unfortunately, it is not a sustainable solution. After experimenting with different FS-based options (patching, EFS, mount points), I've concluded that this is currently not doable until Lambda allows tapping into the kernel (to support FUSE) or Next allows customizing the directory used for caching (to support EFS). An additional note on caching: EFS as well as Memcached require a VPC, which results in complexity and additional costs.

  • TBD describe solution. Most likely incrementalCacheHandlerPath with S3. Images cached by CloudFront / using a custom optimizing solution.
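A possible shape for that incrementalCacheHandlerPath module is sketched below. The class and the ObjectStore interface are hypothetical; a stand-in store is injected instead of a real S3 client so the get/set round-trip can be exercised without AWS credentials, but in deployment the store would wrap GetObject/PutObject calls against CACHE_BUCKET_NAME.

```typescript
// Hypothetical cache handler for Next's experimental.incrementalCacheHandlerPath.
// ObjectStore is a stand-in for an S3 client (GetObject/PutObject).
interface ObjectStore {
  get(key: string): Promise<string | undefined>;
  put(key: string, body: string): Promise<void>;
}

class S3CacheHandler {
  constructor(private store: ObjectStore) {}

  // Returning null signals a cache miss, so Next regenerates the page.
  async get(key: string) {
    const body = await this.store.get(key);
    return body ? JSON.parse(body) : null;
  }

  async set(key: string, data: unknown) {
    await this.store.put(
      key,
      JSON.stringify({ value: data, lastModified: Date.now() })
    );
  }
}
```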

Binaries

Lambda's runtime (if we ignore the containerized option) uses Amazon Linux (in multiple versions). As a result, some binaries may be incompatible at runtime because build time used a different OS/architecture. This is most notable with Prisma, as its binaries take up quite a lot of space.

  • TBD describe solution. Possibly solvable with a Docker-based Lambda: if we use the container type of deployment, dependencies are installed and the build runs on the target OS, so this problem is avoided.

NODE_OPTIONS

For some reason, when using the Runtime API, Next's server is unreachable if NODE_OPTIONS is specified as an env var on the function. Logs show it starts normally; however, the --require flag causes the server to not be reachable.

Size

Assets, chunks, bundles, and other artifacts generated by Next can take up huge amounts of space. On a medium CRM-style project, we can expect node_modules to take 100 MB, the standalone server around 5 MB, and static assets (typically located in .next/static) 25 MB; additionally, the public folder can contain a rather large amount of assets as well. This means we cannot feasibly store statically generated assets inside the Lambda, as we would cross the size limit rather quickly.

With this known, we need to route part of the traffic away from Next's server. To serve these files we can use S3; however, that adds complexity with a new service type. A possible middle ground would be using a Docker container, which can be up to 10 GB.
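The split can be expressed as a simple routing rule, sketched below. The `/assets/` prefix for files from the public folder is an assumption, and in practice this mapping would live in CloudFront behaviors rather than code:

```typescript
// Sketch of the traffic split. `/_next/static/` is where Next emits hashed
// build assets; the `/assets/` prefix for public/ files is hypothetical.
type Target = "s3" | "lambda";

function routeTarget(path: string): Target {
  if (path.startsWith("/_next/static/")) return "s3"; // hashed build output
  if (path.startsWith("/assets/")) return "s3"; // files copied from public/
  return "lambda"; // everything else hits the Next server
}
```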

  • TBD describe solution

Images

Next's image optimization is slow compared to other, non-JS solutions. The approach here should keep simplicity in mind while allowing for extension. There are three options to choose from:

  • use Next's image optimization,
  • use Next's image optimization with Sharp layer,
  • use a separate Lambda (preferably in Python, Rust, or similar) to handle image optimization; this option allows for using S3 as a cache on an opt-in basis.

Next does not support customizing the caching behaviour for images. It relies on the FS and cannot be easily changed/overwritten.

See: imaginex-lambda
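One way to wire the separate-Lambda option in is a custom image loader module (pointed at via Next's images.loader: "custom" and images.loaderFile settings). The `/image` endpoint name below is an assumption; it would be whatever path the optimizer Lambda is exposed on:

```typescript
// Hypothetical image-loader.ts. Rewrites <Image> URLs to a separate
// optimizer Lambda instead of Next's built-in /_next/image optimizer.
interface LoaderProps {
  src: string;
  width: number;
  quality?: number;
}

export default function lambdaImageLoader({ src, width, quality }: LoaderProps): string {
  const q = quality ?? 75; // fall back to Next's default quality
  return `/image?src=${encodeURIComponent(src)}&w=${width}&q=${q}`;
}
```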

  • TBD describe solution. This will most likely be distributed as a public layer for easy plug-and-play.

Knowledge

Testing and benchmarks

Next 13.5 takes 800-900 ms to initialize in Lambda's native Node environment (very similar results for an Alpine container on Lambda). This happens once per instance, meaning the instance can serve multiple requests without restarting. Increased load on the application will spin up multiple new instances, each taking this long to start.


Next 14.0.3 takes 200-800 ms to initialize in Lambda's container environment. This is a big improvement over v13; however, the speed depends on the size of the project.

An additional note: pinging / takes an extra 300+ ms to respond, depending on your _app.tsx and getServerSideProps configuration. A better way to check whether the server is running is a /api/ping (or similar) endpoint with no dependencies in it. This still results in approx. 250 ms of overhead in terms of waiting for Next to truly start.
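A dependency-free health endpoint of that kind can be as small as the sketch below. The types are minimal stand-ins so the handler stands alone; in a real project you would use NextApiRequest/NextApiResponse from next:

```typescript
// pages/api/ping.ts sketch: no imports from _app, no data fetching, so it
// answers as soon as the Next server is actually accepting connections.
interface MinimalRes {
  status(code: number): MinimalRes;
  send(body: string): void;
}

export default function handler(_req: unknown, res: MinimalRes): void {
  res.status(200).send("pong");
}
```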
