API3 DAO Tracker provides a web interface to see on-chain details of the API3 DAO, including:
- Members, their stakes, shares, voting power, and voting history
- Proposal details
- All events from the smart contracts of the API3 DAO
- DAO Treasuries status
The app relies on Terraform to configure a generic Linux EC2 instance.
The EC2 instance in turn hosts Docker, and the app's services are orchestrated by Docker directly (e.g. restart=always).
The request flow is:
(end user) -> Cloudflare -> EC2 IP -> traefik (load balancer) -> api3-tracker (container)
Containers:
- api3tracker: The FE and BE service
- postgres: The database the FE and BE rely on
- traefik: A load balancer that serves responses over HTTPS using the Cloudflare origin server key pair
- postgres-exporter: A service that exports the database as a backup on an interval
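Because the containers are managed by Docker directly, their state can be checked on the host; a quick sketch (the container name in the second command is an assumption, so substitute the name shown by docker ps):
docker ps --format 'table {{.Names}}\t{{.Status}}'   # list running containers and their uptime
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' api3tracker   # confirm restart=always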
Host services: the host OS also runs a few cron jobs:
*/10 * * * * root cd /home/ubuntu/src/github.com/api3dao/api3-tracker/terraform/workspaces/api3tracker-prod && ./bin/job_logs_download.sh >> /var/log/api3-logs-download.log 2>&1
15,45 * * * * root cd /home/ubuntu/src/github.com/api3dao/api3-tracker/terraform/workspaces/api3tracker-prod && ./bin/job_supply_download.sh >> /var/log/api3-supply-download.log 2>&1
0 * * * * root cd /home/ubuntu/src/github.com/api3dao/api3-tracker/terraform/workspaces/api3tracker-prod && ./bin/job_treasuries_download.sh >> /var/log/api3-treasuries-download.log 2>&1
2,12,22,32,42,52 * * * * root cd /home/ubuntu/src/github.com/api3dao/api3-tracker/terraform/workspaces/api3tracker-prod && ./bin/job_state_update.sh >> /var/log/api3-state-update.log 2>&1
10 0 * * * root cd /home/ubuntu/src/github.com/api3dao/api3-tracker/terraform/workspaces/api3tracker-prod && ./bin/job_shares_download.sh --tag . > /var/log/api3-shares-download.log 2>&1
24 4 * * */3 root cd /home/ubuntu/src/github.com/api3dao/api3-tracker/terraform/workspaces/api3tracker-prod && bash ./bin/postgres-backup.sh >> /var/log/postgres-backups.log 2>&1
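Output from these jobs lands in the log files referenced in the crontab above, which is the quickest way to check that they are running, e.g.:
tail -f /var/log/api3-state-update.log   # follow the most recent state-update run
ls -l /var/log/api3-*.log /var/log/postgres-backups.log   # see when each job last wrote output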
Developers can run some or all services locally using Docker Swarm, or even bare-bones, without containerisation.
One combination is running just postgres locally using Docker, e.g.:
docker run --rm -ti -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres:15
and then running the FE and BE services directly (refer to the cron jobs below and the yarn next dev script in package.json).
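Putting that together, a minimal bare-bones sketch, assuming the app and Prisma read DATABASE_URL from a .env file in the repository root (which is what make env sets up below):
# point the app at the local postgres container (overwrites any existing .env)
echo 'DATABASE_URL="postgres://postgres:[email protected]:5432/postgres?sslmode=disable"' > .env
# apply the database schema
yarn prisma migrate deploy
# start the FE/BE with hot-reloading
yarn next dev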
Alternatively, one can run services using Docker Swarm, but this lacks hot-reloading.
If you haven't already enabled Swarm mode on your Docker instance, do so now (only has to be done once):
docker swarm init
The result of the above command can be ignored.
Build the FE/BE image:
docker build -t api3dao/api3-tracker:latest .
Run the stack:
docker stack deploy -c dev-tools/docker-compose.yml tracker-stack
If all goes well, the application will be served at http://localhost:3000.
Some commands for visualising the services:
docker ps # all docker containers
docker service ls # all swarm services
docker service ps tracker-stack_postgres --no-trunc # show status of postgres
docker stack rm tracker-stack # tear down the stack
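Application output can be followed with docker service logs; the exact service name comes from dev-tools/docker-compose.yml, so the name below is an assumption:
docker service logs -f tracker-stack_api3tracker # follow FE/BE logs (service name is an assumption)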
Initialise the DB:
DATABASE_URL="postgres://postgres:[email protected]:5432/postgres?sslmode=disable" yarn prisma migrate deploy
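To confirm the migrations were applied, the tables can be listed from inside the postgres container (the name filter assumes only one postgres container is running):
docker exec -it $(docker ps -q -f name=postgres) psql -U postgres -c '\dt' # list tables in the default database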
Cron jobs (unwrapped versions of the crontab entries above):
DATABASE_URL="postgres://postgres:[email protected]:5432/postgres?sslmode=disable" TS_NODE_PROJECT=./tsconfig.cli.json yarn ts-node -T cli.ts logs download
DATABASE_URL="postgres://postgres:[email protected]:5432/postgres?sslmode=disable" TS_NODE_PROJECT=./tsconfig.cli.json yarn ts-node -T cli.ts supply download
DATABASE_URL="postgres://postgres:[email protected]:5432/postgres?sslmode=disable" TS_NODE_PROJECT=./tsconfig.cli.json yarn ts-node -T cli.ts treasuries download
DATABASE_URL="postgres://postgres:[email protected]:5432/postgres?sslmode=disable" TS_NODE_PROJECT=./tsconfig.cli.json yarn ts-node -T cli.ts shares download
DATABASE_URL="postgres://postgres:[email protected]:5432/postgres?sslmode=disable" API3TRACKER_ENDPOINT="ARCHIVE RPC URL" TS_NODE_PROJECT=./tsconfig.cli.json yarn ts-node -T cli.ts state update --rps-limit
Keep in mind that the Postgres DB in the docker-compose file is not configured with a volume by default, so changes will be lost on service restart.
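If you want local data to survive restarts when running postgres via plain docker run as above, a named volume can be mounted at postgres' data directory (the volume name here is arbitrary); a similar volume entry would be needed in dev-tools/docker-compose.yml for the Swarm setup:
docker volume create api3-tracker-pgdata
docker run --rm -ti -p 5432:5432 -e POSTGRES_PASSWORD=postgres -v api3-tracker-pgdata:/var/lib/postgresql/data postgres:15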
The only requirements for installation are Docker and Terraform.
You may also need AWS CLI v2 if you want AWS S3 backups enabled in your environment. Some scripts also rely on cURL and jq.
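A quick sanity check that the tooling is installed:
docker --version
terraform -version
aws --version  # only needed if S3 backups are enabled
curl --version
jq --version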
- Prepare the Docker image of API3 Tracker with make build install
- Go to terraform/workspaces/api3tracker-local and apply the Terraform plan with terraform init && terraform apply. You should see all the resources that will be installed on your system, and you can check the running components with docker ps. The default local environment serves the website at http://localhost:7040.
- Run ./bin/postgres-download.sh to download the latest database backup from AWS S3. Syncing the database from scratch is extremely slow and can take weeks, so you should start from a database that is ready for development (see the consolidated command sketch after this list).
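For reference, the steps above roughly correspond to the following session (run from the repository root; the script location follows the layout used in the prod crontab above):
# build the api3-tracker image and install helper tooling
make build install
# provision the local environment
cd terraform/workspaces/api3tracker-local
terraform init && terraform apply
# check the running components; the site should appear at http://localhost:7040
docker ps
# optionally pull the latest database backup from AWS S3 (requires AWS CLI v2)
./bin/postgres-download.sh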
Once the local Terraform environment is up, you can run:
# download dependencies
yarn
# save database credentials from terraform plan (Linux only)
# if you are not using Linux, put DATABASE_URL in .env manually
make env
# start local development server
yarn dev
Open http://localhost:3000 with your browser to see the result.
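If make env is not available (non-Linux), the .env file in the repository root can be written by hand; the placeholders below should be replaced with the values from the Terraform output for your workspace:
# .env -- replace host, port, and credentials with the values reported by terraform
DATABASE_URL="postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=disable"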
MIT