This is a starter application, built on Sails v1, React, Bootstrap, and Webpack. It is designed so that multiple front-ends (a customer front-end, and an admin panel perhaps; more if need be) can live side-by-side, and use the same API. It even has built-in Ngrok support. A virtual start-up in a box!
NOTE: You will need access to a MySQL / MariaDB database for the quick setup. If you want to use a different datastore, you'll need to configure it manually.
Aiven.io has FREE (no CC required) secure MySQL (5 GB) and Redis (1 GB) instances. Both require the use of SSL, and can be restricted to specified IPs. (If you are having trouble finding the FREE instances, you need to select DigitalOcean as the cloud provider.) Use my referral link to sign up, and you'll get $100 extra when you start a trial (a trial is NOT needed for the free servers).
```
npx drfg neonexus/sails-react-bootstrap-webpack my-new-site
cd my-new-site
npm run setup
npm run start # OR npm run ngrok
```
NOTE: `drfg` is a secondary, standalone script I've been working on, which can be used for your own projects: Download Release From GitHub. It downloads / extracts / installs a release from a GitHub repo. (Currently only supports public repos...) `npx` downloads / runs NPM packages; it comes standard with `npm` (at least, as of v5.2.0).
- Main Features
- Branch Warning
- Current Dependencies
- How to Use
- Configuration
- Custom Security Policies
- Scripts Built Into `package.json`
- Sails Scripts
- Request Logging
- Using Webpack
- Building with React
- Schema Validation and Enforcement
- PwnedPasswords.com Integration
- Working With Ngrok
- Support for `sails-hook-autoreload`
- Getting Setup Remotely
- What About SEO?
- Useful Links
- Online in a single command, thanks to included Ngrok support.
- Automatic (incoming) request logging (manual outgoing), via Sails models / hooks.
- Setup for Webpack auto-reload dev server. Build; save; auto-reload.
- Setup so Sails will serve Webpack-built bundles as separate apps (so, a marketing site, and an admin site can live side-by-side).
- More than a few custom API helper functions to make life a little easier.
- Includes react-bootstrap to make using Bootstrap styles / features with React easier.
- Schema validation and enforcement for PRODUCTION. See schema validation and enforcement.
- New passwords are checked against the PwnedPasswords API. If there is even a single hit for the password, an error is returned, and the user is forced to choose another. See PwnedPasswords integration for more info.
- Google Authenticator-style OTP (One-Time Password) functionality; also known as 2FA (2-Factor Authentication). SMS is costly and vulnerable to attack. Please don't use SMS as a primary 2FA method!
- Made with 10% more LOVE than the next leading brand!
The `master` branch is experimental; the release branch (or the releases section) is where one should base their use of this template. `master` is volatile and likely to change at any time, for any reason; this includes `git push --force` updates.

FINAL WARNING: DO NOT RELY ON THE MASTER BRANCH!
- Sails v1
- React v18
- React Router v6
- Bootstrap v5
- React-Bootstrap v2
- Webpack v5
See the `package.json` for full details.

All dependencies in `package.json` are "version locked". This means explicit version numbers are used; no fuzzy matches like `^` or `*`.
A couple of reasons / advantages for why this is done:
- Package poisoning is a serious threat that should NOT be taken lightly. You are relying on someone else's package, and a fuzzy match for a dependency leaves the door wide open for bad actors to do bad things. If the author of the package you depend on gets hacked, and the hacker decides to manipulate the package for nefarious purposes, all they have to do is release a minor version, and your fuzzy match will download it, no questions asked.
- If a release falls within a fuzzy match, and it just so happens to be published after testing is completed but before PRODUCTION runs `npm install`, there is a possibility of bugs being introduced into PRODUCTION, and not being caught until well after customers become extremely irate.
- Version locking helps prevent "works on my machine" syndrome. Because it is generally habit to `npm install` after a `git pull` (or at least it should be, if not a Git hook), keeping versions explicit with commits prevents a LOT of weirdness.
- It's just easier to see when dependency versions change in commit history. Helps prevent headaches.
In the end, DON'T BE FUZZY! BE EXPLICIT! Use a tool like `npm-check-updates` to make dependency updates easier to audit / update.
You can quickly download / install dependencies using `drfg` (Download Release From GitHub) via NPX (if you have Node.js installed, you have NPX):

```
npx drfg neonexus/sails-react-bootstrap-webpack
```

This will download this repo's latest version, extract it, then install the dependencies into the folder `sails-react-bootstrap-webpack` in the current working directory.

If you want to install in a different location, just supply the new folder name as the second parameter:

```
npx drfg neonexus/sails-react-bootstrap-webpack my-new-site
```

Or, GitHub provides a handy "Use this template" (green) button at the top of this page. That will create a special clone of this repo (so there is a single init commit, instead of the commit history from this repo).

Or, you can download a copy of the latest release manually.
See the scripts section for the various ways to build the frontend and run the backend. See the working with Ngrok section on how to spin up an instance with Ngrok attached.
`npm run setup` OR `./setup.js`

The setup.js script will walk you through interactive questions, and create a `config/local.js` for you, based on the contents of `config/local.js.sample`.

If you already have a `config/local.js`, the setup script will use its existing configuration options as defaults (including passwords), and rebuild it.

After you're all configured, you'll likely want an admin user:

```
npm run create:admin
```

The create admin script is designed to allow only a single admin user to be created in this manner. After that point, the API must be used.
In the `config` folder, there is the `local.js.sample` file, which is meant to be copied to `local.js`. This file (`local.js`, not the sample) is ignored by Git, and intended for use in local development, NOT on remote servers. Generally, one would use environment variables for remote server configuration (and this repo is already set up to handle environment variable configuration for both DEV and PROD). See Environment Variables for more.
These options are NOT part of the Sails Configuration Options, but are ones built for this custom repo. All of these options can be overridden in `config/local.js`, just like every other option. If the option path is `sails.config.security.checkPwnedPasswords`, then you would add:

```javascript
{
    security: {
        checkPwnedPasswords: false
    }
}
```

...to your `config/local.js` to override the option on your local machine only.
Option Name (`sails.config.`) | Found In (`config/`) | Default | Description |
---|---|---|---|
`appName` | `local.js` `env/development.js` `env/production.js` | `My App (LOCAL)` `My App (DEV)` `My App` | The general name to use for this app. |
`log.captureRequests` | `log.js` | `true` | When enabled, all incoming requests will automatically be logged via the `RequestLog` model, by the `request-logger` hook, and the `finalize-request-log` helper. See Request Logging for more info. |
`log.ignoreAssets` | `log.js` | `true` | When enabled (and `captureRequests` is `true`), this will force the logger to skip over assets (things like `.js` / `.css`, etc.). |
`models.validateOnBootstrap` | `models.js` | `true` | When enabled, and `models.migrate === 'safe'` (aka PRODUCTION), the SQL schemas of the default datastore will be validated against the model definitions. See schema validation and enforcement for more info. |
`models.enforceForeignKeys` | `models.js` | `true` | A modifier for the `validateOnBootstrap` option. When both are `true`, the schema validation and enforcement will also enforce foreign key relationships. It can be useful to disable this option when testing PRODUCTION configuration locally. |
`security.checkPwnedPasswords` | `security.js` | `true` | When enabled, `sails.helpers.isPasswordValid()` will run its normal checks, then check with the PwnedPasswords.com API to verify the password has not been found in a known security breach. If it has, the password is considered invalid. |
`security.requestLogger.logSensitiveData` | `security.js` `env/development.js` | `false` | If enabled, and NOT a PRODUCTION environment, the request logger will log sensitive info, such as passwords. This will ALWAYS be `false` on PRODUCTION. It is in the PRODUCTION configuration file only as a reminder. |
Sails.js has middleware (akin to Express.js middleware; Sails is built on Express.js, after all...). Inside of `config/http.js`, we create our own `X-Powered-By` header using Express.js-style middleware.
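For reference, that sort of middleware entry looks roughly like this (a minimal sketch; the actual header value and wiring in this repo's `config/http.js` may differ):

```javascript
// config/http.js (minimal sketch)
module.exports.http = {
    middleware: {
        // Referenced by name in the middleware `order` array.
        customPoweredBy: (req, res, next) => {
            // Replace the default "X-Powered-By: Express" header.
            res.set('X-Powered-By', 'My App'); // the header value here is a placeholder

            return next();
        }
    }
};
```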
There are a few environment variables that the remote configuration files are set up for. There are currently 3 variables that change names between DEV and PROD; this is intentional, and has proven very useful in my experience. DEV has shorter names like `DB_HOST`, where PROD has fuller names like `DB_HOSTNAME`. This helps with ensuring you are configuring the correct remote server, and has prevented accidental DEV deployments to PROD.

If you DO NOT like this behavior, and would prefer the variables stay the same across your environments, feel free to change them in `config/env/development.js` and `config/env/production.js`.
Variable | Default | Description |
---|---|---|
`ASSETS_URL` | `""` (empty string) | Webpack is configured to modify static asset URLs to point to a CDN, like CloudFront. MUST end with a slash `/`, or be empty. |
`BASE_URL` | `https://myapi.app` | The address of the Sails instance. |
`DATA_ENCRYPTION_KEY` | `""` (empty string) | The data encryption key to use when encrypting / decrypting data in the datastore. |
DEV: `DB_HOST` PROD: `DB_HOSTNAME` | `localhost` | The hostname of the datastore. |
DEV: `DB_USER` PROD: `DB_USERNAME` | DEV: `root` PROD: `produser` | Username of the datastore. |
DEV: `DB_PASS` PROD: `DB_PASSWORD` | DEV: `mypass` PROD: `prodpass` | Password of the datastore. |
`DB_NAME` | DEV: `myapp` PROD: `prod` | The name of the database inside the datastore. |
`DB_PORT` | `3306` | The port number for the datastore. |
`DB_SSL` | `true` | If the datastore requires SSL, set this to `"true"`. |
`NGROK_AUTHTOKEN` | `""` (empty string) | Ngrok auth token used in the `ngrok.js` script. |
`NGROK_BASIC` | `""` (empty string) | The `user:pass` combo to use for basic authentication with `ngrok.js`. |
`NGROK_DOMAIN` | `""` (empty string) | The domain to tunnel Sails to. Used in `ngrok.js`. |
`SESSION_SECRET` | `""` (empty string) | Used to sign cookies. If changed, will invalidate all sessions. |
Security policies that are responsible for protecting API endpoints live in the `api/policies` folder, and are configured in the `config/policies.js` file.

The most important policy, in terms of this repo, is the `is-logged-in` policy. It determines if the request is being made from a valid session, and if so, passes the session data down to controllers (and other policies). Past that, there is currently only a second policy: `is-admin`. It uses the session data from `is-logged-in` to determine if the user is an admin; if they aren't, the request is rejected.
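A Sails policy is just a function that either calls `proceed()` or rejects the request. An admin check in the spirit of `is-admin` might look like this (an illustrative sketch, not the repo's exact implementation; the `role` property is an assumption):

```javascript
// api/policies/is-admin.js (illustrative sketch)
module.exports = async (req, res, proceed) => {
    // `req.session.user` is assumed to have been attached by the is-logged-in policy.
    if (req.session && req.session.user && req.session.user.role === 'admin') {
        return proceed();
    }

    // Not an admin; reject the request.
    return res.forbidden();
};
```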
Read more about Sails' security policies: https://sailsjs.com/documentation/concepts/policies
Scripts built into `package.json`:

Command | Description |
---|---|
`build` | Will run `npm run clean`, then `npm run build:prod`. |
`build:dev` | Same thing as `npm run build:prod`, except that it will not optimize the files, retaining newlines and empty spaces. Will run `npm run clean`, then `npm run build:dev:webpack`. |
`clean` | Will delete everything in the `.tmp` folder. |
 | Command to run tests, generate code coverage, and upload said coverage to Codecov. Designed to be run by CI test runners like Travis CI. |
 | Runs NYC coverage reporting of the Mocha tests, which generates HTML in `test/coverage`. |
`create:admin` | Will run the Sails script `sails run create-admin` (scripts/create-admin.js). See Sails Scripts for more info. |
 | Alias for `node --inspect app.js`. |
 | Generate a DEK (Data Encryption Key). |
 | Generate a 64-character token. |
 | Generate a v4 UUID. |
`lift` | The same thing as `node app.js` or `./app.js`; will "lift our Sails" instance (aka starting the API). |
`lift:prod` | The same thing as `NODE_ENV=production node app.js`. |
 | Will count the lines of code in the project, minus `.gitignore`'d files, for funzies. There are currently about 7k custom lines in this repo (views, controllers, helpers, hooks, etc.); a small drop in the bucket, compared to what it's built on. |
`setup` | Same thing as `node setup.js` or `./setup.js`. The setup script will interactively ask questions, and create a `config/local.js` based on the contents of `config/local.js.sample`. |
`start` | Will run both `npm run lift` and `npm run webpack` in parallel. |
 | Run Mocha tests. Everything starts in the `test/startTests.js` file. |
`webpack` | Will run the Webpack Dev Server and open a browser tab / window. |
These scripts generally require access to working models or helpers, so a quick Sails instance is spun up to run them. Currently, `create-admin` is the only script in the `scripts` folder.
See the Sails Docs for more info on Sails scripts.
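For reference, a script in the `scripts` folder follows the standard Sails script shape, roughly like this (a hypothetical example, not the actual `create-admin.js`):

```javascript
// scripts/example-script.js (hypothetical; run with `sails run example-script`)
module.exports = {
    friendlyName: 'Example script',
    description: 'Shows the general shape of a Sails script.',

    fn: async function() {
        // Models and helpers are available here, because Sails spins up an instance first.
        const userCount = await User.count();

        sails.log.info(`Current user count: ${userCount}`);
    }
};
```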
Automatic incoming request logging is a 2-part process. First, the `request-logger` hook gathers info from the request, and creates a new `RequestLog` record, making sure to mask anything that may be sensitive, such as passwords. Then, a custom response gathers information from the response, again scrubbing sensitive data (using the `customToJSON` feature of Sails models) to prevent leaking of password hashes, or anything else that should never be publicly accessible. The `keepModelsSafe` helper and the custom responses (such as `ok` or `serverError`) are responsible for the final leg of request logs.
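For reference, the `customToJSON` mechanism on a Sails model works like this (an illustrative sketch, not a copy of this repo's models):

```javascript
// api/models/User.js (illustrative sketch)
module.exports = {
    attributes: {
        email: {type: 'string', required: true},
        password: {type: 'string', required: true}
    },

    customToJSON: function() {
        // Strip the password hash (and anything else sensitive) before the record
        // is serialized into a response or a request log.
        return _.omit(this, ['password']);
    }
};
```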
You can easily disable request logging by setting `sails.config.log.captureRequests = false`. See custom configuration options for more.
The script `npm run webpack` will start the auto-reloading Webpack development server, and open a browser window. When you save changes to assets (React files mainly), it will auto-compile the update, then refresh the browser automatically.

The script `npm run build` will make Webpack build all the proper assets into the `.tmp/public` folder. Sails will serve assets from this folder.

If you want to build assets, but retain spaces / tabs for debugging, you can use `npm run build:dev`.
The Webpack configuration can be found in the `webpack` folder. The majority of the configuration lives in `common.config.js`. The other 3 files, such as `dev.config.js`, extend the `common.config.js` file.
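A common way to wire up that kind of extension is `webpack-merge`; whether this repo uses that exact package or another approach, the idea is the same (a conceptual sketch):

```javascript
// webpack/dev.config.js (conceptual sketch)
const { merge } = require('webpack-merge');
const common = require('./common.config');

module.exports = merge(common, {
    mode: 'development',
    devtool: 'inline-source-map' // keep readable output for debugging
});
```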
React source files live in the `assets/src` folder. It is structured in such a way that `index.jsx` is really only used for local development (to help Webpack serve up the correct "app"). Then, there are the individual "apps": main and admin. These files are used as Webpack "entry points", to create 2 separate application bundles.
In a remote environment, Sails will look at the first subdirectory requested, and use that to determine which `index.html` file it needs to return. So, in this case, the "main" application will get built into `.tmp/public/main`, where the CSS is `.tmp/public/main/bundle.css`, the JavaScript is `.tmp/public/main/bundle.js`, and the HTML is `.tmp/public/main/index.html`.
Sails is currently set up (see config/routes.js) to automatically serve compiled files from `.tmp/public`. If Sails needs to return the initial HTML, it will take the first subdirectory of the request (`/admin` from `/admin/dashboard`), and return the matching `index.html` from inside `.tmp/public`.
Example: a user requests `/admin/dashboard`; Sails will serve `.tmp/public/admin/index.html`.
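Conceptually, that catch-all behavior boils down to something like this (a simplified sketch of the idea only; see `config/routes.js` for the real implementation):

```javascript
// config/routes.js (simplified sketch of the idea)
module.exports.routes = {
    'GET /*': {
        skipAssets: true, // let compiled assets (bundle.js / bundle.css) be served normally
        fn: (req, res) => {
            const path = require('path');
            // The first URL segment decides which built app's index.html to return.
            const app = req.path.split('/')[1] || 'main';

            return res.sendFile(path.resolve(sails.config.appPath, '.tmp/public', app, 'index.html'));
        }
    }
};
```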
I recommend using a content CDN, something like AWS CloudFront, to help ease the burden of serving static files, making fewer calls to your Sails instance(s). It may also be a good idea to consider using something like Nginx to handle serving of compiled assets, leaving Sails to handle only API requests.
This feature is designed for MySQL (it can LIKELY be used with most, if not all, other SQL-based datastores [I have not tried]). If you plan to use a different datastore, you will likely want to disable this feature.
Inside `config/bootstrap.js` is a bit of logic (HEAVILY ROOTED IN NATIVE MySQL QUERIES), which validates column types in the PRODUCTION database (aka `sails.config.models.migrate === 'safe'`), then validates foreign key indexes. If there are too many columns, a missing index, or an incorrect column type, the logic will `console.error` any issues, then `process.exit(1)` (kill) the Sails server. The idea here is that if anything is out of alignment, Sails will fail to lift, which means failure to deploy on PRODUCTION, preventing accidental, invalid live deployments; a final safety net, if you will.
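Conceptually, the bootstrap check amounts to comparing `information_schema` against the model definitions, along these lines (a heavily simplified sketch, not the actual logic in `config/bootstrap.js`; the table name and comparison are illustrative):

```javascript
// Heavily simplified sketch of the idea; the real logic lives in config/bootstrap.js.
async function checkTable(tableName, modelAttributes) {
    const result = await sails.getDatastore().sendNativeQuery(
        'SELECT COLUMN_NAME, COLUMN_TYPE FROM information_schema.COLUMNS '
        + 'WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = $1',
        [tableName]
    );

    // The raw result shape depends on the adapter; fall back to the result itself if there is no `rows` key.
    const columns = result.rows || result;

    // Compare what MySQL reports against the model definition; refuse to lift on a mismatch.
    if (columns.length !== Object.keys(modelAttributes).length) {
        console.error('Schema mismatch detected for table `' + tableName + '`. Refusing to lift.');
        process.exit(1);
    }
}
```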
While yes, Sails (rather, Waterline) does not actually require foreign keys to handle relationships, a PRODUCTION environment generally has more tools at play that DO require these relationships to work properly. So, by default, this repo is designed to validate that the keys are set up correctly. This feature can be turned off by setting `sails.config.models.enforceForeignKeys = false` in `config/local.js` (or `config/models.js`).
If you want to disable the schema validation feature entirely, you can set `sails.config.models.validateOnBootstrap = false` at the bottom of `config/models.js`.
When a new password is being created, it is checked with the PwnedPasswords.com API. This API uses a k-anonymity model, so the password that is searched for is never exposed to the API. Basically, the password is hashed, then the first 5 characters of the hash are sent to the API, and the API returns all known hashes that start with those 5 characters, including the number of times each hash (aka password) has been found in known security breaches.
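To illustrate the k-anonymity model, a range lookup against the PwnedPasswords API works roughly like this (a standalone sketch assuming Node 18+ for the global `fetch`; this is not the helper the repo actually ships):

```javascript
const crypto = require('crypto');

// Count how many times a password appears in known breaches, without ever
// sending the full password (or even the full hash) to the API.
async function pwnedCount(password) {
    const hash = crypto.createHash('sha1').update(password).digest('hex').toUpperCase();
    const prefix = hash.slice(0, 5); // only these 5 characters ever leave the server
    const suffix = hash.slice(5);

    const response = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
    const body = await response.text();

    // The API returns "HASH_SUFFIX:COUNT" lines for every known hash sharing the prefix.
    const match = body.split('\n').find((line) => line.startsWith(suffix));

    return match ? parseInt(match.split(':')[1], 10) : 0;
}
```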
This functionality is turned on by default, and can be shut off per-use, or globally throughout the app. `sails.helpers.isPasswordValid` can be used with the `skipPwned` option set to `true` to disable the check per-use (see `api/controllers/common/login.js` for an example). Inside of `config/security.js`, the variable `checkPwnedPasswords` can be set to `false` to disable it globally.
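Per-use, the call looks something like this (`skipPwned` is the option named above; the surrounding parameter names are assumptions):

```javascript
// Sketch: validate a password, but skip the PwnedPasswords lookup for this one call.
const passwordIsValid = await sails.helpers.isPasswordValid.with({
    password: inputs.password, // hypothetical input name
    skipPwned: true
});
```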
This repo has a custom script (`ngrok.js`), which will start an Ngrok tunnel (using the official Ngrok NPM package `@ngrok/ngrok`), build assets, and start Sails.

You will want to get an auth token (and create an account if you haven't already): https://dashboard.ngrok.com/tunnels/authtokens

You will need to `npm i @ngrok/ngrok --save-dev` before you can do anything. I've opted not to have it pre-installed, as it adds a bit of bloat, and not everyone is going to use it.

After you have it installed, you can run `ngrok.js` with node: `node ngrok`, or just directly: `./ngrok.js`.

If you prefer to configure Ngrok using Sails'-style configuration, you can do so with `config/ngrok.js` or `config/local.js`. Additionally, the setup script will help you configure / install Ngrok.

These are the current configuration flags. Order does not matter.

An example: `node ngrok.js nobuild token=S1T2A3Y4I5N6G7A8L9I0V1E`
Option | Description |
---|---|
`auth=USER:PASS` | This will protect the Ngrok tunnel with HTTP Basic Auth, using the USER / PASS you supply. You can also use the `NGROK_BASIC` environment variable. |
`build` | Adding this flag will force asset building. |
`nobuild` | Adding this flag will skip asset building. |
`domain=MYDOMAIN` | The domain to connect the tunnel from Sails to. You can also use the `NGROK_DOMAIN` environment variable. |
`port=SAILSPORT` | The port to use internally for Sails. Useful if you want to run multiple instances on the same machine. The `PORT` environment variable or `sails.config.port` option is used as a fall-back if the script option isn't set. |
`region=MYREGION` | The region to use for connection to the Ngrok services. One of the Ngrok regions (`us`, `eu`, `au`, `ap`, `sa`, `jp`, `in`). You can also use the `NGROK_REGION` environment variable. Defaults to `global`. |
`token=AUTHTOKEN` | Adding this flag will set your Ngrok auth token. You can also use the `NGROK_AUTHTOKEN` or `NGROK_TOKEN` environment variables. |
NOTE: For each option, the script flag will take precedence if a corresponding environment variable (or Sails configuration) is set. For example: `./ngrok.js token=AUTHTOKEN1` will win over `NGROK_AUTHTOKEN=AUTHTOKEN2 ./ngrok.js`.
If you would like to use `sails-hook-autoreload`, just install it: `npm i sails-hook-autoreload --save-dev`. The config file `config/autoreload.js` is already pre-configured for this repo.
There are a lot of ways to go about remote deployments; many automated, some not so much. For the sake of argument, let's say you want to set up a remote server by hand. It would be nice if said server (or servers, if behind a load-balancer) could do a `git pull`, `npm install`, and if need be, `npm run build`. It would also be great if you could see the progress, or even just the console of the Node server.

That's what the `self-update.sh` and `tmux.sh` shell scripts are for. Note, they are both written for `bash`, but should work just fine in `zsh`.
In simplest terms, TMUX is a "terminal multiplexer". It lets you switch between programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal.
In other words, it adds a lot of magic to the terminal. One of the most useful things is being able to run programs in the background, but still be able to see the console output later (as it is still running). It runs (on the remote server) on most Linux-y distros, as well as macOS.
TMUXCheatSheet.com has a great "how to install and use" that you can find here: https://tmuxcheatsheet.com/how-to-install-tmux/
Once installed, just run `tmux a` (or `tmux attach`), which will attach to the last open session. If there isn't one, it'll open one. Running just `tmux` will create a whole new session, and you generally don't want that.
`Ctrl + b` is the command to start a TMUX shortcut. It is how you navigate around inside TMUX.
A couple shortcuts you'll want to know:
Description | Command |
---|---|
Detach TMUX | Ctrl + b then d |
Next Window | Ctrl + b then n |
Previous Window | Ctrl + b then p |
Create Window | Ctrl + b then c |
Close Window | Ctrl + b then & (aka Shift + 7 ) OR just exit |
Window Preview | Ctrl + b then w |
Rename Window | Ctrl + b then , |
For this guide, I'm going to be using Amazon Linux as the basis; it's a great default if using AWS. However, most of these steps can easily be adapted for other distros.
It should also be noted, this is by no means the only way to set up remote servers; nor is it a thorough guide. This is just a quick-n-dirty, get off the ground running without a lot of tooling, kind of guide. There are PLENTY of automated deployment managers and documentation out there. This is fairly open-ended; it is assumed you know how to do a portion of basic remote server management.
Spin up a new instance at the smallest size possible (as of this writing, `t4g.nano` is the smallest), SSH into it, and follow along.

While the smallest instance size will certainly not have enough RAM to support an asset build, it is plenty for running our Node server. To make it so the instance CAN handle an asset build (despite its lack of memory), you'll want to create a swapfile. I use 4 GB swapfiles, as that seems to be more than enough head-room for asset building; however, you can most likely get away with just 2 GB.
First, create the file, and allocate the space:
sudo fallocate -l 4G /swapfile
Make it readable / writable only from ROOT:
sudo chmod 600 /swapfile
Make it a proper swapfile:
sudo mkswap /swapfile
Tell the OS to actually use the swapfile:
sudo swapon /swapfile
Edit the `fstab` table to make this swapfile permanent:
sudo nano /etc/fstab
Add this to the bottom of the `fstab` and save:
/swapfile swap swap defaults 0 0
Now that we have the lack-of-memory issue dealt with, let's get the 3 bits of software installed that we for sure need: `git`, `node`, and `tmux`:
sudo yum install git nodejs tmux
This is going to assume you have a repo set up with GitHub, but the keygen is pretty much universal.
Generate SSH key:
ssh-keygen -t ed25519 -C [email protected]
Copy the public key:
cat ~/.ssh/id_ed25519.pub
Save it as a "deploy key". (Or however you need to save it in your repo to allow `git pull` on the remote server.)
Once you have the server's public key saved in your repo manager, you should be able to clone your repo on the remote server:
git clone [email protected]:USERNAME/REPO.git myapp
Next, you'll want to `cd myapp`, and `npm install`.
Before you can actually start the server for a dry-run, you need to decide how you are going to store the server's credentials (user / pass for datastores and the like). It is recommended you use environment variables, but it is also possible to run the interactive setup, and generate a `local.js`.

You should now be able to `sudo npm run lift:prod` (recommended for all remote environments, even DEV). `sudo` is needed on Amazon Linux, because ROOT permissions are required to open ports.
If everything is working as intended... congrats (or so you thought)! Now that you have everything working, it's time to get the server to update / rebuild / start itself.
Next up, you need to decide how you are going to have the `tmux.sh` script run on startup. The easiest way would be to just install `cronie` (for the use of `crontab`):
sudo yum install cronie
Enable the service:
sudo systemctl enable crond.service
Start said service:
sudo systemctl start crond.service
Edit the `crontab` to run the script at `@reboot`:
@reboot cd myapp; ./tmux.sh
Force the instance to restart, and it should automatically lift the server inside of TMUX.
sudo reboot
After reconnecting to the instance, you should be able to `tmux attach` and see the Sails console.
Once you've verified everything works, you can use `./tmux.sh myapp status` / `./tmux.sh myapp start` / `./tmux.sh myapp stop` / `./tmux.sh myapp restart` (but you don't have to).
Now that you have a self-starting/updating server, you should create an AMI from that instance. After it's been created, you should be able to terminate the running instance, and spin up a new one using your new custom AMI, and everything should just work. Now you have the start of a robust remote fleet; because spinning up new servers is just a couple clicks (or commands) away.
I recommend looking at prerender.io. They offer a service (free up to 250 pages) that caches the end result of a JavaScript-rendered view (React, Vue, Angular), allowing search engines to crawl otherwise un-crawlable web views. You can use the service in a number of ways. One way is to use the prerender-node package. To use it with Sails, you'll have to add it to the HTTP middleware. Here's a quick example:
```javascript
middleware: {
    order: [
        'cookieParser',
        'bodyParser',
        'prerender', // reference our custom middleware found below;
                     // we run this before compression and routing,
                     // because it is a proxy, saving time and resources
        'compress',
        'customPoweredBy',
        'router',   // custom Sails middleware handler (config/routes.js)
        'assetLog', // the request wasn't caught by any of the above middleware, must be assets
        'www',      // default hook to serve static files
        'favicon'   // default hook to serve favicon
    ],

    // REMEMBER! Environment variables are your friends!!!
    prerender: require('prerender-node').set('prerenderToken', 'YOUR_TOKEN')
}
```
- Sails Framework Documentation
- Sails Deployment Tips
- Sails Community Support Options
- Sails Professional / Enterprise Options
- react-bootstrap Documentation
- Webpack Documentation
- React Documentation
- Bootstrap Documentation
- Simple data fixtures for testing Sails.js (the npm package `fixted`)
This app was originally generated on Fri Mar 20 2020 17:39:04 GMT-0500 (Central Daylight Time) using Sails v1.2.3.