Replies: 5 comments 11 replies
-
Use-case: e2e tests - stays the same as now.

Note: the "Don't include nitric membrane by default in docker images" point seems like a separate thing.
-
Also, would `nitric start` support more than one app?
-
Generally, I'm on board with the proposals. The start-up performance of … Also, for larger projects, it means you can pick and choose which functions to start. That'll be good, particularly on lower-spec dev machines.
-
One more question: does `nitric start-local-services` block? If not, I assume we need a `nitric stop-local-services`?
-
Thoughts on allowing the server to run on a specified port and/or base path, so it can talk to one (or more) locally running servers with fewer config changes? e.g. `nitric start -port 9009 -path /app1`
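A minimal sketch of what that could look like, assuming hypothetical `-port` and `-path` flags on `nitric start` (neither exists today; they are the suggestion above):

```sh
# Hypothetical flags from the suggestion above: each app gets its own local
# server port and base path without editing project config.
nitric start -port 9009 -path /app1
nitric start -port 9010 -path /app2
```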
-
Some ideas on ways we could improve and speed up the local developer experience:
**Remove hot reload from `nitric run`**

`nitric run` would be intended for e2e testing of the final container images, running against the emulated nitric environment and local services. It would be slower to start, but would give a much better idea of the finished product once the relevant docker images are built.
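A minimal sketch of that e2e flow, assuming a Node project whose e2e suite is pointed at the locally exposed endpoints (`npm run test:e2e` is an assumed script name, not a nitric convention):

```sh
# Build the final container images and start them against the emulated
# nitric environment and local services (slower, but production-like).
nitric run

# In another terminal, run the e2e suite against the locally running stack.
npm run test:e2e
```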
**Add a `nitric start` command**

`nitric start` would exclusively start the local nitric services and nothing else. Developers could then run their nitric applications independently and connect them to these services, using tools like `nodemon` on their local machines for hot reload, and attaching debuggers would be much easier. This would give devs much more choice and control over their tooling for debugging and hot reloading, and we could still provide opinions on that tooling via our examples and templates.
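A minimal sketch of that workflow, assuming a Node service with its entrypoint at `services/api.js` (the path and nodemon flags are illustrative, not a nitric convention):

```sh
# Terminal 1: start only the local nitric services (storage, queues, gateway, etc.).
nitric start

# Terminal 2: run the application directly with hot reload and an inspector port
# open, connecting to the services started above; attach a debugger as usual.
nodemon --inspect services/api.js
```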
**Don't include the nitric membrane by default in docker images**

Production docker images should be built without the membrane. Those images would then be wrapped with the membrane when we're ready to push them to a nitric-supported environment: we can extract both the `entrypoint` and `cmd` of the original image and wrap them for membrane invocation when pushing to the cloud. This would also allow us to cache identical base images for multi-cloud support in future.
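As a rough sketch of the extraction step, assuming the app's production image is tagged `my-app:latest` (the tag is an assumption), the original start command can be read back from the image metadata and then re-wrapped for membrane invocation at push time:

```sh
# Read the ENTRYPOINT and CMD baked into the membrane-free production image;
# these are what the membrane wrapper would be configured to exec as a child process.
docker inspect --format '{{json .Config.Entrypoint}}' my-app:latest
docker inspect --format '{{json .Config.Cmd}}' my-app:latest
```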
**Remove config-as-code collection prior to `nitric run`**

The CLI-embedded membrane should be set up to dynamically provision resources on request via the existing declaration endpoints, covering the things we currently do this for, e.g. provisioning folders for minio buckets.
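A minimal sketch of the difference, assuming the local minio-backed buckets live under a `./.nitric/buckets` directory (the path is an assumption):

```sh
# Today: the CLI collects config-as-code up front and pre-creates a folder per
# declared bucket before `nitric run`, roughly equivalent to:
mkdir -p ./.nitric/buckets/images

# Proposed: the CLI-embedded membrane performs the same provisioning lazily, at the
# moment the running application declares the bucket via the existing declaration
# endpoint, so no pre-run collection step is needed.
```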