Glue21 JSON #340

Merged · 14 commits · May 28, 2024
1 change: 1 addition & 0 deletions .adr-dir
@@ -0,0 +1 @@
doc/architecture/decisions
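
Note: the new .adr-dir file above points adr-tools (suggested in the review thread further down) at the directory holding the decision records. A minimal sketch of that workflow, assuming adr-tools is installed; the record title here is illustrative and not taken from this PR:

# initialise the decision-record directory (typically writes .adr-dir and a first record)
adr init doc/architecture/decisions
# add a new numbered record under doc/architecture/decisions
adr new "Use rclone to upload GLUE 2.1 JSON to S3"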
66 changes: 49 additions & 17 deletions cloud-info/Dockerfile
@@ -1,33 +1,65 @@
FROM python:3 as build

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

# hadolint ignore=DL3008
RUN curl -s https://dist.eugridpma.info/distribution/igtf/current/GPG-KEY-EUGridPMA-RPM-3 \
| apt-key add - \
&& echo "deb https://repository.egi.eu/sw/production/cas/1/current egi-igtf core" > /etc/apt/sources.list.d/igtf.list \
&& apt-get update \
&& apt-get install -y ca-policy-egi-core \
&& rm -rf /var/lib/apt/lists/*

WORKDIR /cloud-info

RUN python -m venv /cloud-info/venv
ENV PATH="/cloud-info/venv/bin:$PATH"

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt \
&& cat /etc/grid-security/certificates/*.pem >> "$(python -m requests.certs)"

COPY . .

RUN pip install --no-cache-dir .

# The actual image
FROM python:3

LABEL org.opencontainers.image.source=https://github.com/EGI-Federation/fedcloud-catchall-operations

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

RUN mkdir /cloud-info
COPY requirements.txt /cloud-info/requirements.txt
RUN pip install --no-cache-dir -r /cloud-info/requirements.txt

# CA certificates: install and add to python
# hadolint ignore=DL3015, DL3008
RUN curl -Ls \
https://dist.eugridpma.info/distribution/igtf/current/GPG-KEY-EUGridPMA-RPM-3 \
| apt-key add - \
&& echo 'deb http://repository.egi.eu/sw/production/cas/1/current egi-igtf core' \
> /etc/apt/sources.list.d/cas.list \
&& apt-get update \
&& apt-get install -y jq \
&& apt-get install -y ca-policy-egi-core \
&& rm -rf /var/lib/apt/lists/* \
&& cat /etc/grid-security/certificates/*.pem >> "$(python -m requests.certs)"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
jq rclone \
&& rm -rf /var/lib/apt/lists/*


COPY . /cloud-info/
RUN pip install --no-cache-dir /cloud-info
RUN mkdir /cloud-info \
&& groupadd -g 1999 python \
&& useradd -r -u 1999 -g python python \
&& chown -R python:python /cloud-info

WORKDIR /cloud-info

# All the python code from the build image above
COPY --chown=python:python --from=build /cloud-info/venv ./venv
# Add the scripts that call the cloud-info-provider as needed for the site
# these create the configuration for the site by discovering the available
# projects for the credentials and will send the output to the AMS queue and
# upload to S3
COPY ams-wrapper.sh /usr/local/bin/ams-wrapper.sh
COPY publisher.sh /usr/local/bin/publisher.sh
# These are sample configuration files for cloud-info-provider that can be used
# if the container is used outside of the catchall-operations as described in
# https://docs.egi.eu/providers/cloud-compute/openstack/cloud-info/#local-operations
COPY openstack.rc /etc/cloud-info-provider/openstack.rc
COPY openstack.yaml /etc/cloud-info-provider/openstack.yaml

USER 1999

ENV PATH="/cloud-info/venv/bin:$PATH"
CMD ["publisher.sh"]
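
For reference, a hypothetical standalone run of the resulting image (not part of this diff). The image tag, the host secrets path, and the site name are assumptions; /etc/egi/vos is the directory that deploy.sh below populates with secrets.yaml.

docker run --rm \
  -v /etc/egi/vos:/etc/egi/vos:ro \
  -e CHECKIN_SECRETS_PATH=/etc/egi/vos \
  -e SITE_NAME=EXAMPLE-SITE \
  ghcr.io/egi-federation/fedcloud-cloud-info:latest
# The default CMD runs publisher.sh as user 1999; the sample openstack.rc and
# openstack.yaml shipped in /etc/cloud-info-provider are meant for this kind of
# local, non-catchall use (see the docs link in the Dockerfile comment above).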
53 changes: 34 additions & 19 deletions cloud-info/ams-wrapper.sh
@@ -41,15 +41,13 @@ AUTO_CONFIG_PATH="$(mktemp -d)"
export CHECKIN_SECRETS_FILE="$CHECKIN_SECRETS_PATH/secrets.yaml"
# TODO(enolfc): avoid creating new tokens for every provider
export ACCESS_TOKEN_FILE="$AUTO_CONFIG_PATH/token.yaml"
USE_ACCESS_TOKEN=0
if token-generator; then
# TODO(enolfc): even if this belows fails, we should use access token as it will provide
# access to more projects
if SECRETS_FILE="$ACCESS_TOKEN_FILE" config-generator >"$AUTO_CONFIG_PATH/site.yaml"; then
# this worked, let's update the env
export CHECKIN_SECRETS_PATH="$AUTO_CONFIG_PATH/vos"
export CLOUD_INFO_CONFIG="$AUTO_CONFIG_PATH/site.yaml"
USE_ACCESS_TOKEN=1
fi
fi

@@ -60,28 +58,28 @@ if test "$CHECKIN_SECRETS_PATH" = ""; then
--middleware "$CLOUD_INFO_MIDDLEWARE" \
--ignore-share-errors \
--format glue21 >cloud-info.out
elif test "$USE_ACCESS_TOKEN" -eq 1; then
# Case 2: access token style
cloud-info-provider-service --yaml-file "$CLOUD_INFO_CONFIG" \
--middleware "$CLOUD_INFO_MIDDLEWARE" \
--ignore-share-errors \
--auth-refresher accesstoken \
--format glue21 >cloud-info.out
else
# Let's use the service account directly on the info provider
CHECKIN_DISCOVERY="https://aai.egi.eu/auth/realms/egi/.well-known/openid-configuration"
CLIENT_ID="$(yq -r '.fedcloudops.client_id' <"$CHECKIN_SECRETS_FILE")"
CLIENT_SECRET="$(yq -r '.fedcloudops.client_secret' <"$CHECKIN_SECRETS_FILE")"
# use service account for everyone
export OS_DISCOVERY_ENDPOINT="https://aai.egi.eu/auth/realms/egi/.well-known/openid-configuration"
OS_CLIENT_ID="$(yq -r '.fedcloudops.client_id' <"$CHECKIN_SECRETS_FILE")"
export OS_CLIENT_ID
OS_CLIENT_SECRET="$(yq -r '.fedcloudops.client_secret' <"$CHECKIN_SECRETS_FILE")"
export OS_CLIENT_SECRET
export OS_ACCESS_TOKEN_TYPE="access_token"
export OS_AUTH_TYPE="v3oidcclientcredentials"
export OS_OPENID_SCOPE="openid profile eduperson_entitlement email"
cloud-info-provider-service --yaml-file "$CLOUD_INFO_CONFIG" \
--middleware "$CLOUD_INFO_MIDDLEWARE" \
--ignore-share-errors \
--os-auth-type v3oidcclientcredentials \
--os-discovery-endpoint "$CHECKIN_DISCOVERY" \
--os-client-id "$CLIENT_ID" \
--os-client-secret "$CLIENT_SECRET" \
--os-access-token-type access_token \
--os-openid-scope "openid profile eduperson_entitlement email" \
--format glue21 >cloud-info.out
# Produce the json output also
RCLONE_CONFIG_S3="$(yq -r '.s3' <"$CHECKIN_SECRETS_FILE")"
if test "$RCLONE_CONFIG_S3" != "null"; then
cloud-info-provider-service --yaml-file "$CLOUD_INFO_CONFIG" \
--middleware "$CLOUD_INFO_MIDDLEWARE" \
--ignore-share-errors \
--format glue21json >site.json
fi
fi

# Fail if there are no shares
@@ -100,4 +98,21 @@ printf '"}]}' >>ams-payload

curl -X POST "$ARGO_URL" -H "content-type: application/json" -d @ams-payload

if [ -f site.json ]; then
# Put this info into S3, configure rclone config with
# a provider named "s3" using env variables
export RCLONE_CONFIG_S3_TYPE=s3
RCLONE_CONFIG_S3_ACCESS_KEY_ID="$(yq -r '.s3.access_key_id' <"$CHECKIN_SECRETS_FILE")"
export RCLONE_CONFIG_S3_ACCESS_KEY_ID
RCLONE_CONFIG_S3_SECRET_ACCESS_KEY="$(yq -r '.s3.secret_access_key' <"$CHECKIN_SECRETS_FILE")"
export RCLONE_CONFIG_S3_SECRET_ACCESS_KEY
RCLONE_CONFIG_S3_ENDPOINT="$(yq -r '.s3.endpoint' <"$CHECKIN_SECRETS_FILE")"
export RCLONE_CONFIG_S3_ENDPOINT
S3_BUCKET_NAME="$(yq -r '.s3.bucket' <"$CHECKIN_SECRETS_FILE")"
export S3_BUCKET_NAME
export RCLONE_CONFIG_S3_ACL=private
export RCLONE_CONFIG_S3_NO_CHECK_BUCKET=true
rclone copy site.json "s3:$S3_BUCKET_NAME/$SITE_NAME"
Member: I would add a decision record for using rclone instead of Wrangler https://developers.cloudflare.com/r2/objects/upload-objects/

Contributor Author: I still want to be cloudflare agnostic, rclone was the tool I was aware that could be used without much hassle.
BTW, we don't have decision records (yet). I guess if we want to get started with them, they should come as part of the PR, no?

Member: I'm also in favour of using well known generic and independent tools like rclone instead of cloudflare owned stuff, at least when it can be avoided and does not have a clear/decisive benefit.

Member: that's fine, but write a decision record for it (using e.g. https://github.com/npryce/adr-tools)

Contributor Author: I have made an attempt to use ADRs, somehow considering this as "Accepted" already. On the learning curve for this, so guidance on how to better use it is welcomed.

Member (@gwarf, May 27, 2024): That's interesting. It would be good to document this kind of things/guidelines/suggestions for repo management somewhere. We could start by adding it to some internal documentation page for GitHub (like in our internal wiki).

Member: Thanks! Lately I realised that we could also use the repo template to document this: https://github.com/EGI-Federation/repository-template

fi

rm -rf "$VO_CONFIG_PATH"
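
Note on the upload step in ams-wrapper.sh above: the RCLONE_CONFIG_S3_* variables use rclone's environment-based configuration (RCLONE_CONFIG_<REMOTE>_<OPTION>), so they define a remote named "s3" without writing an rclone.conf. A small verification sketch, assuming the same variables are exported and the bucket and site names come from secrets.yaml as in the script:

# list the bucket and the per-site prefix that
# `rclone copy site.json "s3:$S3_BUCKET_NAME/$SITE_NAME"` writes to
rclone lsd "s3:$S3_BUCKET_NAME"
rclone lsf "s3:$S3_BUCKET_NAME/$SITE_NAME"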
2 changes: 2 additions & 0 deletions cloud-info/cloud_info_catchall/config_generator.py
@@ -49,6 +49,8 @@ def generate_shares(config, secrets):
discoverer = RefresherShareDiscovery(config, secrets[s])
elif "access_token" in secrets[s]:
discoverer = AccessTokenShareDiscovery(config, secrets[s])
else:
continue
token_shares = discoverer.get_token_shares()
shares.update(token_shares)
if not shares:
5 changes: 0 additions & 5 deletions cloud-info/cloud_info_catchall/share_discovery.py
@@ -100,8 +100,3 @@ class AccessTokenShareDiscovery(ShareDiscovery):

def get_token(self):
return self.secret["access_token"]

def build_share(self, project, access_token):
s = super().build_share(project, access_token)
s["auth"].update({"access_token": access_token})
return s
2 changes: 1 addition & 1 deletion cloud-info/cloud_info_catchall/test_share_discovery.py
@@ -162,7 +162,7 @@ def test_build_share(self):
project = {"id": "foobar"}
self.assertEqual(
self.discoverer.build_share(project, "token"),
{"auth": {"project_id": "foobar", "access_token": "token"}},
{"auth": {"project_id": "foobar"}},
)


7 changes: 5 additions & 2 deletions cloud-info/cloud_info_catchall/test_token_generator.py
@@ -100,14 +100,17 @@ def test_valid_token_expired_exception(self, m_calendar, m_decode, m_header, m_a
@patch("cloud_info_catchall.token_generator.get_access_token")
def test_generate_tokens(self, m_get_access, m_valid_token):
tokens = {"foo": {"access_token": "abc"}, "bar": {"access_token": "def"}}
secrets = {"foo": {}, "bar": {}}
secrets = {
"foo": {"client_id": "foo", "client_secret": "secfoo"},
"bar": {"client_id": "bar", "client_secret": "secbar"},
}
m_valid_token.side_effect = [True, False]
m_get_access.return_value = "xyz"
tg.generate_tokens(self.OIDC_CONFIG, "abc", tokens, 8, secrets)
m_valid_token.assert_has_calls(
[call("abc", self.OIDC_CONFIG, 8), call("def", self.OIDC_CONFIG, 8)]
)
m_get_access.assert_called_with("https://example.com", "abc", {})
m_get_access.assert_called_with("https://example.com", "abc", secrets["bar"])


if __name__ == "__main__":
3 changes: 3 additions & 0 deletions cloud-info/cloud_info_catchall/token_generator.py
@@ -71,6 +71,9 @@ def generate_tokens(oidc_config, scopes, tokens, token_ttl, secrets):
# not our thing
if not isinstance(secrets[s], dict):
continue
if "client_id" not in secrets[s] or "client_secret" not in secrets[s]:
# not suitable for us
continue
if "refresh_token" in secrets[s]:
# ignore those that have refresh token
continue
5 changes: 3 additions & 2 deletions cloud-info/requirements.txt
@@ -1,5 +1,6 @@
# Cloud info version is 9d4c4c516b9311c77564444cb9ecbb059b7f2192
git+https://github.com/EGI-Federation/cloud-info-provider.git@9d4c4c516b9311c77564444cb9ecbb059b7f2192
# Cloud info version is 43cefc204b3e07211c6c37df2ee20eab845c3428
# 43cefc204b3e07211c6c37df2ee20eab845c3428 includes json glue support
git+https://github.com/EGI-Federation/cloud-info-provider.git@43cefc204b3e07211c6c37df2ee20eab845c3428
git+https://github.com/ARGOeu/argo-ams-library@devel
python-glanceclient
python-novaclient
4 changes: 3 additions & 1 deletion deploy/deploy.sh
@@ -17,7 +17,6 @@ echo "cloud_info_image: \"ghcr.io/egi-federation/fedcloud-cloud-info:sha-$SHORT_
if ansible-playbook -i inventory.yaml \
--extra-vars @secrets.yaml \
--extra-vars @extra-vars.yaml \
--extra-vars @vos.yaml \
playbook.yaml >ansible.log 2>&1; then
status_summary="success"
color="#6DBF59"
@@ -32,6 +31,9 @@ fi
# copy the secrets to the /etc/egi/vos dir which is readable from the containers
cp secrets.yaml /etc/egi/vos/secrets.yaml

# make sure the container user (999) can access the files
chown -R 999:999 /etc/egi/

GITHUB_COMMIT_URL="https://api.github.com/repos/EGI-Federation/fedcloud-catchall-operations/commits/$COMMIT_SHA/pulls"

# Find out PR we need to update