credential chain interrupted by shared config file load error #2723

Open
sribharghava opened this issue Jul 23, 2024 · 7 comments
Labels
bug: This issue is a bug.
p3: This is a minor priority issue.
queued: This issue is on the AWS team's backlog.
v1-v2-inconsistency: Behavior has changed from v1 to v2, or feature is missing altogether.

Comments

@sribharghava

Pre-Migration Checklist

Go Version Used

1.21

Describe the Migration Issue

Migrating from v1 to v2 re-triggered aws/aws-sdk-go#2455, which had been fixed in v1.

Code Comparison

No response

Observed Differences/Errors

The issue from aws/aws-sdk-go#2455 is happening again on v2. Going through the code flow, it's apparent that this error is not ignored even when credentials are defined in the env vars.

Additional Context

No response

@sribharghava added needs-triage and v1-v2-inconsistency labels Jul 23, 2024
@RanVaknin self-assigned this Jul 23, 2024
@RanVaknin
Contributor

Hi @sribharghava ,

This is peculiar, because the SDK's credential chain is meant to check many credential sources (there is a provider for each source: INI file, environment variables, token file, process, etc.). When one provider fails, the chain moves on to the next, and so on until a provider returns valid credentials.

For example, in an environment like an EC2 instance where there is no INI file on the file system, the SDK will fall back to the IMDS provider to get credentials.
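
For reference, a rough sketch of how you can check which provider in the chain actually supplied the credentials (the Source value noted in the code comment is just what I'd expect when the environment variables win):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}

	// Retrieve resolves credentials through the default chain;
	// Source names the provider that ended up supplying them.
	creds, err := cfg.Credentials.Retrieve(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("credentials resolved from:", creds.Source)
	// expected to print something like "EnvConfigCredentials" when env vars are used
}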

Can you please describe the issue you are having so we can attempt to reproduce it?

Thanks,
Ran~

@RanVaknin added response-requested and p2 labels and removed needs-triage label Jul 24, 2024
@sribharghava
Author

Thanks @RanVaknin for checking this.

This happens when you mount a non-existent credentials file into a Docker container. When the mounted file is not present, Docker by default creates a directory at that path. The SDK's credential chain then fails while reading the INI file, because it tries to read a directory as a file. That error is not handled in the flow, so it short-circuits the other credential sources, which have valid credentials.
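
A rough sketch of how this can be reproduced even without Docker, assuming valid credentials are set in the env vars (the temporary directory stands in for the directory Docker creates at the mount point; the exact error text may differ):

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go-v2/config"
)

func main() {
	// Point the shared credentials file at a directory, the way Docker does
	// when the mounted file is missing on the host.
	dir, err := os.MkdirTemp("", "fake-credentials")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	os.Setenv("AWS_SHARED_CREDENTIALS_FILE", dir)

	_, err = config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-1"))
	// Expected (the bug described here): an error from reading the "credentials"
	// directory, even though valid credentials exist in the environment variables.
	fmt.Println(err)
}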

@RanVaknin
Contributor

Hi @sribharghava ,

I'm not sure what you mean by "Docker by default creates a directory at the path when the mounted file is not present."

Docker does not create anything at the ~/.aws directory path by default. The SDK's INI provider specifically tries to read the INI files from disk at the following paths:
~/.aws/credentials and ~/.aws/config.

Does your image have any Dockerfile instructions that run something like aws configure, which would result in an INI file being created?

Also, I looked at the issue you linked and it seems unrelated; that was some old legacy behavior that does not seem to exist in v2.

For my own check I created the following docker container:

# Dockerfile
FROM golang:1.20-alpine

RUN apk add --no-cache git

RUN git config --global http.lowSpeedLimit 0 \
    && git config --global http.lowSpeedTime 600 \
    && git config --global http.postBuffer 524288000


WORKDIR /app

COPY main.go .

ENV GOPROXY=direct

RUN go mod init tempmod \
    && go get github.com/aws/aws-sdk-go-v2/aws \
    && go get github.com/aws/aws-sdk-go-v2/config \
    && go get github.com/aws/aws-sdk-go-v2/service/s3 \
    && go build -o my-go-app

EXPOSE 8080

CMD ["./my-go-app"]

And then run it while providing the credentials in env variables:

$ docker run -p 8080:8080 \
           -e AWS_ACCESS_KEY_ID=REDACTED \
           -e AWS_SECRET_ACCESS_KEY=REDACTED \
           -e AWS_SESSION_TOKEN=REDACTED \
my-go-app 
there are 56 buckets

With this code:

package main

import (
	"context"
	"fmt"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"log"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)

	out, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
	if err != nil {
		panic(err)
	}
	fmt.Printf(`there are %v buckets`, len(out.Buckets))
}
// prints: there are 56 buckets

Thanks,
Ran~

@RanVaknin added and removed the response-requested label Jul 25, 2024
@sribharghava
Author

@RanVaknin Thanks for sharing your setup. Just update your docker run command to the following and you'll see what I see.

docker run -p 8080:8080 \
           -e AWS_ACCESS_KEY_ID=REDACTED \
           -e AWS_SECRET_ACCESS_KEY=REDACTED \
           -e AWS_SESSION_TOKEN=REDACTED \
           -e AWS_SHARED_CREDENTIALS_FILE=/app/credentials \
           -v nonexisting_file:/app/credentials \
my-go-app

@sribharghava
Author

Also I looked at the aws/aws-sdk-go#2455 you linked and this seems unrelated. This was some old legacy behavior that does not seem to exist in v2.

On this, I primarily looked at the title ("Looking for credentials file when env vars configured causes error") and not the full content, so it might not be fully relevant. But you'll hit the issue when you run the above command.

@RanVaknin
Contributor

RanVaknin commented Jul 26, 2024

Hi @sribharghava ,

Thanks for the latest info. This clarifies the problem. I don't understand the real-life use case where you explicitly provide a non-existent INI file, but I tested this with v1 and it does indeed work there.

I'll add this to our backlog.

Thanks,
Ran~

@RanVaknin added bug and p3 labels and removed p2 label Jul 26, 2024
@lucix-aws changed the title from "MIGRATION ISSUE: (short issue description)" to "credential chain interrupted by shared config file load error" Jul 26, 2024
@RanVaknin added queued label and removed response-requested label Jul 26, 2024
@sribharghava
Author

@RanVaknin, here is our real-life use case, very briefly.

We use Docker Compose for building and testing our service locally and on CI (Jenkins).
For example, a section of our compose file looks roughly as follows:

environment:
  - AWS_PROFILE
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
  - AWS_SESSION_TOKEN
  - ...
  - AWS_SHARED_CREDENTIALS_FILE=/app/.aws/credentials
volumes:
  - ...
  - ${HOME}/.aws/credentials:/app/.aws/credentials

Locally, the shared credentials associated with the developer's role get picked up; on Jenkins (our CI) it's supposed to use the env vars (set up from the Jenkins IAM role during stage setup), as per our Jenkins configuration. Our Jenkins agent of course doesn't have a shared credentials file at ${HOME}/.aws/credentials, so when the app container starts on that agent, the non-existent file gets mounted as a directory. That results in the above error and breaks the credential chain on AWS SDK v2, which was working fine on AWS SDK v1.
