IntegrationEnvironment


How to use the automatically built integration images

You have to prefix your branch with integration/. For instance, the branch integration/add-ci will result in the images:

  • 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk-analytics:add-ci
  • 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk:add-ci
  • 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk-pump:add-ci

Apart from these on-demand tags, there are standing builds for the master and release-* branches. For example, 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk-analytics:master will fetch the latest image, corresponding to HEAD of master in git. You can also reference a particular sha, like 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk:ff415a6e40e3a31f78d01db9c16df7d537680597.
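
Once you are logged in to ECR (see the next section), these tags pull like any other Docker image. For example, to fetch the gateway image built from the HEAD of master:

% docker pull 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk:master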

The latest tag is largely meaningless as it will point to the last built image (across all branches).

The tags are mutable: a branch's tag is updated each time you push to origin, so re-pull to pick up the latest build. This workflow cannot run on forks, as forks do not have access to the secrets on origin that make this feature possible.

These images are built by workflows in the respective repos; runs of the workflow can be seen in each repo's Actions tab.

How to log in to AWS ECR

To publish, pull, and delete images in AWS ECR, you need an access token and a functional AWS CLI set up for the subaccount. There is a note in OneLogin with AWS credentials that have just enough privileges to push and pull from the registry, as well as access to logs. Once the CLI is working, you can log in with:

% aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 754489498669.dkr.ecr.eu-central-1.amazonaws.com
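
If the login fails, check the credentials first. A quick sanity check that the CLI is picking up the intended identity (this call works with any valid credentials):

% aws sts get-caller-identity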

How to use the integration image locally in your dev setup

This assumes that you are comfortable with docker-compose (dc), which is a quick and surprisingly effective method of collaborating on integration branches.

The compose file is:

version: "3"
services:
    # Test upstream that the gateway can proxy to
    upstream:
        image: citizenstig/httpbin

    dashboard:
        image: 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk-analytics:latest
        ports:
            - "3000:3000"
            - "5000:5000"
        environment:
            - TYK_LOGLEVEL=${TYK_LOGLEVEL:-debug}
        volumes:
            # Config directory shared by all Tyk components
            - ../confs/default:/conf
        depends_on:
            - tyk-mongo

    gateway:
        image: 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk:latest
        ports:
            - "8080:8080"
        volumes:
            - ../confs/default:/conf
        depends_on:
            - tyk-redis

    pump:
        image: 754489498669.dkr.ecr.eu-central-1.amazonaws.com/tyk-pump:latest
        volumes:
            - ../confs/default:/conf
        depends_on:
            - gateway
            - tyk-mongo

    # Backing stores
    tyk-redis:
        image: redis
        ports:
            - "6379:6379"

    tyk-mongo:
        image: mongo

This depends on the /integration/image/Dockerfile in the individual repositories.
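
Assuming the compose file is saved as docker-compose.yml (and the ../confs/default tree exists, per the volume mounts), the usual workflow applies. The pull step matters because the tags are mutable:

% docker-compose pull
% docker-compose up -d
% docker-compose logs -f gateway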

How to access the developer environment

All branches with the integration/ prefix are also automatically deployed to AWS along with the other components, and are accessible over the internet. If a similarly named branch exists in another component, that branch is used; if not, master is used. The URL for any particular component is constructed as http://repo_name.branch_name.dev.tyk.technology. All periods are stripped from the branch name for DNS purposes.

So, if your branch is called integration/coolfeature, the dashboard will be at http://tyk-analytics.coolfeature.dev.tyk.technology:3000. The gateway is at http://tyk.coolfeature.dev.tyk.technology:8181. The pump is internal and not accessible from the internet.

The dashboard for master is available at http://tyk-analytics.master.dev.tyk.technology:3000.
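
As a quick liveness check, the Tyk gateway exposes a /hello health endpoint, so for the integration/coolfeature example above:

% curl http://tyk.coolfeature.dev.tyk.technology:8181/hello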

Config files

Configuration files for the components live on an EFS volume, which is mounted on the bastion host at /config for editing. Each environment has its own directory, and the config files have to follow a strict directory tree and naming convention. Deleting a directory will cause it to be regenerated the next time the environment is processed. How each component deals with having its config file deleted from under it depends on the component.

Once generated, the config trees are stable; they can be edited and the changes will persist. This can cause a problem for licensed components (like tyk-analytics), which have trial (30-day) licenses. A working license key is available in /config/*.trial for licensed components; you will need to edit the config file to replace the license key when it expires.
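
A hypothetical bastion session for renewing a key; the directory and file names below are illustrative, the real ones follow the convention described above:

% ls /config                                       # one directory per environment
% cat /config/tyk-analytics.trial                  # working trial key (name assumed)
% $EDITOR /config/coolfeature/tyk-analytics.conf   # paste in the new key (path assumed)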

If you need temporary access, please tag @alok with your ssh public key. This should give you access until the bastion host is refreshed. If you want permanent access, add your pub key to https://github.com/TykTechnologies/tyk-ci/blob/master/infra/scripts/bastion-setup.sh#L4 and file a PR.

Persistence

There is no persistence. Data in redis and mongo is liable to be dropped without much notice. If you want or need persistence, please post on #devops. We track the latest redis Docker image and mongo version 4.4; both are upgraded automatically.

Logs for your environment

The same AWS creds that can pull images from ECR also have access to the log group that your environment uses. If your branch is called integration/coolfeature, the log group is called coolfeature. You can use the AWS CLI to view logs like this:

% aws logs filter-log-events --log-group-name plugin-system --start-time $(date -d '1 hour ago' '+%s%3N') | jq -r '.events[] | .message'
1:M 20 Nov 2020 05:26:55.036 * 100 changes in 300 seconds. Saving...
1:M 20 Nov 2020 05:26:55.036 * Background saving started by pid 457
457:C 20 Nov 2020 05:26:55.039 * DB saved on disk
457:C 20 Nov 2020 05:26:55.039 * RDB: 0 MB of memory used by copy-on-write
1:M 20 Nov 2020 05:26:55.138 * Background saving terminated with success
1:M 20 Nov 2020 05:30:38.045 * 100 changes in 300 seconds. Saving...
1:M 20 Nov 2020 05:30:38.045 * Background saving started by pid 260
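
To follow logs live, aws logs tail (available in AWS CLI v2) behaves much like tail -f on a local log file. For example, for the coolfeature environment:

% aws logs tail coolfeature --follow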

A little shell snippet that makes it easier to filter logs by environment and component:

dlogs () {
	# Filter CloudWatch logs by environment (log group) and component (stream prefix)
	[[ -z $AWS_PROFILE ]] && {
		print "AWS creds not set"
		return 1
	}
	local log_name="${1:-master}"
	local log_prefix="${2:-tyk}"
	local time_spec="${3:-1 hour ago}"
	# %s%3N needs GNU date; quoting keeps multi-word time specs intact
	aws logs filter-log-events --log-group-name "$log_name" --log-stream-name-prefix "$log_prefix" --start-time $(date -d "$time_spec" '+%s%3N') | jq -r '.events[] | .message'
}

With no arguments, it fetches logs for the past hour for the tyk component in the master environment. You can also use it to peek at the logs of the automation, which largely lives in the internal environment. So, to see what gromit run (the scheduled job that updates environments when needed) has been up to in the past hour:

% dlogs internal grun '1 hour ago'
{"level":"info","name":"gromit","component":"run","version":"1.3.13","time":"2020-12-01T10:23:10Z","message":"starting"}
{"level":"info","env":{"Repos":["tyk","tyk-analytics","tyk-pump"],"TableName":"DeveloperEnvironments","RegistryID":"754489498669","ZoneID":"Z02045551IU0LZIOX4AO0","Domain":"dev.tyk.technology"},"time":"2020-12-01T10:23:10Z","message":"loaded env"}
{"level":"info","name":"gromit","component":"run","version":"1.3.13","time":"2020-12-01T10:59:58Z","message":"starting"}
{"level":"info","env":{"Repos":["tyk","tyk-analytics","tyk-pump"],"TableName":"DeveloperEnvironments","RegistryID":"754489498669","ZoneID":"Z02045551IU0LZIOX4AO0","Domain":"dev.tyk.technology"},"time":"2020-12-01T10:59:58Z","message":"loaded env"}