- Understanding the structure of the project
- Understand your workspace
- Development server
- Build
- Running unit tests
- Running end-to-end tests
- Backend
- Frontend
- Database
- How to mount a bucket? (gcloud storage)
- Upload credentials as metadata
- Deploy
- Contact
This project uses the monorepo pattern to organize the code.
See the NX Quickstart for an introduction to the tooling.
The project is divided into multiple applications and libraries. Right now there are three main applications:
- `api` - the main application of the project. Most of the APIs and libraries correspond to this application.
- `github-interactions-api` - as the name suggests, this application is intended only for interaction with the GitHub API.
- `sample-platform` - the frontend of the application.
...and a number of libraries, which can be found under the folder `/libs/*`.
Libraries which correspond to a specific application are organized as follows:
- Backend application libraries are suffixed with the word `implementation`, e.g. libraries corresponding to the application `github-interactions-api` are located in the folder `libs/github-interactions-api-implementation/*`.
- All frontend libraries are located inside `/libs/frontend/*`.
Run `nx dep-graph` to see a diagram of the dependencies of your projects.
Run `ng serve api` (or `ng serve sample-platform`) for a dev server. Navigate to http://localhost:4200/. The app will automatically reload if you change any of the source files.
Run `ng build api` (or `ng build sample-platform`) to build the project. The build artifacts will be stored in the `dist/` directory. Use the `--prod` flag for a production build.
Run `ng test sample-platform` to execute the unit tests via Jest.
Run `nx affected:test` to execute the unit tests affected by a change.
Run `npm test` to execute ALL tests in the repository.
Run `ng e2e app` to execute the end-to-end tests via Cypress.
Run `nx affected:e2e` to execute the end-to-end tests affected by a change.
The backend uses the Nest.js framework, which is built on top of Express.js. You can learn more about its advantages and find the documentation on the official website (https://docs.nestjs.com).
Run `ng g @nrwl/nest:lib *name_of_application_in_libs*/*name_of_library*` to generate a library, e.g. `ng g @nrwl/nest:lib api-implementation/test-entry` generates the `test-entry` library for `api-implementation`.
Note: Libraries are shareable across libraries and applications. They can be imported from `@new-sample-platform/mylib`.
- Import the generated module into the corresponding application. E.g. if the library is generated for `api-implementation`, you must import it into `app.module.ts` in `apps/api`.
- Create a folder `services` and inside it create the file `my-lib.service.ts`, tagging the service class with the `@Injectable` decorator. Refer to: https://docs.nestjs.com/providers
- Inside the generated library's `*.module.ts` file, don't forget to register the controllers as `controllers` and the services injected into the controllers as `providers` (see the sketch below). Learn more at https://docs.nestjs.com/modules
- (Optional) If you want to use the database, add the `ApiImplementationDatabaseModule` to the imports of the library module.
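For orientation, here is a minimal sketch of the files described above, assuming a hypothetical library `my-lib` (the class and file names are illustrative only, not part of the repository):

```typescript
// services/my-lib.service.ts
import { Injectable } from '@nestjs/common';

@Injectable()
export class MyLibService {
  getGreeting(): string {
    return 'Hello from my-lib';
  }
}

// my-lib.controller.ts
import { Controller, Get } from '@nestjs/common';
import { MyLibService } from './services/my-lib.service';

@Controller('my-lib')
export class MyLibController {
  constructor(private readonly myLibService: MyLibService) {}

  @Get()
  getGreeting(): string {
    return this.myLibService.getGreeting();
  }
}

// my-lib.module.ts
import { Module } from '@nestjs/common';
import { MyLibController } from './my-lib.controller';
import { MyLibService } from './services/my-lib.service';

@Module({
  controllers: [MyLibController], // controllers exposed by this library
  providers: [MyLibService],      // services injected into the controllers
})
export class MyLibModule {}
```

Remember to also add the generated module to the `imports` of `app.module.ts` in `apps/api` as described above.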
The frontend uses Angular and RxJS. Please refer to https://angular.io/docs and https://rxjs-dev.firebaseapp.com/guide/overview
Run `ng g lib frontend/my-lib --prefix frontend --style scss`. With `--prefix`, the library name and module will be prefixed with the word `frontend`, e.g. `FrontendMyLibModule`.
- Generate two folders: `containers` and `components`. Containers correspond to a specific feature (e.g. a profile page) and render the corresponding components (e.g. profile picture, details, etc.).
- To generate a component: `ng g component components/my-component -m frontend-my-lib --project frontend-my-lib`
- To generate a container: `ng g component containers/my-container -m frontend-my-lib --project frontend-my-lib`
- Inside the generated library, next to the `*.module.ts`, create the file `routing.module.ts`. Import the containers corresponding to this module and map them to the sub-routes (a sketch of this file is shown after the route example below). Learn more here: https://angular.io/guide/router. Don't forget to import this `routing.module.ts` inside `my-lib.module.ts`.
- Now map the generated library in `routing.module.ts` (not the one inside the library, but in `apps/sample-platform/src/app/routing.module.ts`). E.g.:
```typescript
{
  path: 'my-lib',
  loadChildren: () =>
    import('@new-sample-platform/frontend/my-lib').then(
      (mod) => mod.FrontendMyLibModule
    ),
},
```
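For reference, a minimal sketch of the library-level `routing.module.ts` mentioned above, assuming a hypothetical container `MyContainerComponent` generated under `containers/`:

```typescript
// libs/frontend/my-lib/src/lib/routing.module.ts
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { MyContainerComponent } from './containers/my-container/my-container.component';

// Sub-routes rendered under the 'my-lib' path configured in
// apps/sample-platform/src/app/routing.module.ts
const routes: Routes = [{ path: '', component: MyContainerComponent }];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule],
})
export class MyLibRoutingModule {}
```

Importing this routing module inside the library's `*.module.ts` (e.g. `FrontendMyLibModule`) is what makes the lazy-loaded route above render the container.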
In order to create a model, go to `libs/models/my-model`:
- Create the file `my-model.schema.ts` and export the schema (see the schema sketch below). Refer to the mongoose documentation on how to create schemas.
- Create the file `my-model.types.ts`. It should have the following structure:
```typescript
import { Document, Model } from 'mongoose';

export interface IMyModel {
  field: string;
  dateOfEntry?: Date;
  lastUpdated?: Date;
}

export interface IMyModelDocument extends IMyModel, Document {}
export interface IMyModelModel extends Model<IMyModelDocument> {}
```
The interfaces should reflect the schema created in `my-model.schema.ts`.
- Create the file `my-model.models.ts`. It should have the following format:
```typescript
import { model } from 'mongoose';
import { IMyModelDocument, IMyModelModel } from './my-model.types';
import MyModelSchema from './my-model.schema';

export const MyModelModel: IMyModelModel = model<IMyModelDocument>(
  'my-model',
  MyModelSchema
);
```
- Now you can import `MyModelModel` and use all the methods that [mongoose](https://mongoosejs.com/docs/api/model.html) provides.
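A minimal sketch of the `my-model.schema.ts` referenced in the first step, assuming the fields declared in the `IMyModel` interface above (the field names are only an example):

```typescript
// libs/models/my-model/my-model.schema.ts
import { Schema } from 'mongoose';

// Mirrors the IMyModel interface declared in my-model.types.ts
const MyModelSchema: Schema = new Schema({
  field: { type: String, required: true },
  dateOfEntry: { type: Date, default: Date.now },
  lastUpdated: { type: Date, default: Date.now },
});

export default MyModelSchema;
```

With this default export, the `import MyModelSchema from './my-model.schema';` line in `my-model.models.ts` resolves as expected.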
Ubuntu and Debian (latest releases):
```bash
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install fuse
```
Add yourself to the fuse group:
```bash
sudo groupadd fuse
sudo adduser $USER fuse
sudo chmod g+rw /dev/fuse
sudo chgrp fuse /dev/fuse
sudo apt-get install gcsfuse
sudo usermod -a -G fuse $USER
```
CentOS and Red Hat (latest releases):
```bash
sudo tee /etc/yum.repos.d/gcsfuse.repo > /dev/null <<EOF
[gcsfuse]
name=gcsfuse (packages.cloud.google.com)
baseurl=https://packages.cloud.google.com/yum/repos/gcsfuse-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install gcsfuse
```
OS X:
```bash
brew install gcsfuse
sudo ln -s /usr/local/sbin/mount_gcsfuse /sbin  # For mount(8) support
```
Windows:
Feel free to contribute!
Ubuntu and Debian (latest releases):
```bash
sudo apt-get install apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
```
CentOS, Red Hat and OS X:
```bash
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
```
Windows:
Feel free to contribute!
Common for all platforms:
Type `gcloud init` to get started and follow the instructions.
Note: After setting up your credentials, the `gcloud` command-line tool prompts you for a default project for this configuration and provides a list of available projects. Select a project ID from the list.
When you set this property, `gsutil` commands that require a project, such as `gsutil mb`, use the default project ID unless you override it with the `-p` flag or set the `CLOUDSDK_CORE_PROJECT` environment variable.
```bash
mkdir bucket
GOOGLE_APPLICATION_CREDENTIALS=credentials.json gcsfuse ccextractor-samples bucket
```
where:
- `credentials.json` is the credentials file of the service account. To obtain it:
  - Go to the create service account key page.
  - From the Service account list, select New service account.
  - From the Role list, select Project → Owner.
  - Click Create. A JSON file that contains your key downloads to your computer.
- `ccextractor-samples` is the name of the bucket to mount.
Now you can access the bucket by heading to the `bucket` folder. Make sure that you have write permissions if you want to put something in the bucket.
Note: if you change the permissions on your service account, you have to download the `credentials.json` file again.
In order to mount the bucket, the created VM instance must have access to the `credentials.json` file. If you are setting up a new project, you have to upload your `credentials.json` as metadata to the project.
In order to upload credentials stored in `g-credentials.json` to the cloud metadata, execute the following command:
```bash
gcloud compute project-info add-metadata --metadata-from-file g-credentials=$HOME/example/g-credentials.json
```
You can view your GCE project's metadata in the cloud console by searching for "metadata", or you can view it using gcloud:
```bash
gcloud compute project-info describe
```
After uploading the credentials as metadata to the project, the startup script will handle everything else for you.
- Upload credentials as metadata.
- Go to the gcloud page, select the image that suits your needs and click the create instance button.
- Then go to the Compute Engine page and SSH into your VM instance.
- Mount the bucket.
- Clone this repository.
- Run the docker container with the database.
- Run `ng build sample-platform --watch` to compile the frontend application. See the deployment of Angular apps for more details.
- Run `npm run build`.
- Run `node dist/apps/api/main.js` to serve the backend in production.
Note: it is much better to properly configure nginx, which makes deployment easier. Please refer to #32
Contact me via email: [email protected]