feat: add support for different s3 providers #123

Merged 2 commits on Jul 4, 2024
@@ -176,7 +176,7 @@ spec:
type: array
provider:
description: '(Required) Provider specifies the cloud provider
that will be used. Supported providers: `aws`, `gcp`, and `azure`'
that will be used. Supported providers: `aws`, `s3`, `gcp`, and `azure`'
type: string
region:
description: (Optional) Region Name.
55 changes: 55 additions & 0 deletions docs/user-guide.md
@@ -22,6 +22,7 @@
- [Parallel Number of Gatling Load Testing](#parallel-number-of-gatling-load-testing)
- [Configure Cloud Storage Provider](#configure-cloud-storage-provider)
- [Set Amazon S3 as Cloud Storage](#set-amazon-s3-as-cloud-storage)
- [Set S3 as Cloud Storage](#set-s3-as-cloud-storage)
- [Set Google Cloud Storage as Cloud Storage](#set-google-cloud-storage-as-cloud-storage)
- [Set Azure Blob Storage as Cloud Storage](#set-azure-blob-storage-as-cloud-storage)
- [Configure Notification Service Provider](#configure-notification-service-provider)
@@ -657,6 +658,60 @@ Here is an IAM policy to attach for Gatling Pod to interact with Amazon S3 bucke
- Replace `BUCKET_NAME` above with your bucket name
- To know more about the ways to supply rclone with a set of AWS credentials, please check [this](https://rclone.org/s3/#configuration).

#### Set S3 as Cloud Storage

This section provides guidance on setting up any cloud storage provider that supports the S3 API.
In this example, suppose you want to store Gatling reports in a bucket named `gatling-operator-reports` on OVH's S3-compatible storage, specifically in the `de` region.
You configure the fields in `.spec.cloudStorageSpec` and set the `RCLONE_S3_ENDPOINT` environment variable like this:

```yaml
apiVersion: gatling-operator.tech.zozo.com/v1alpha1
kind: Gatling
metadata:
name: gatling-sample
spec:
cloudStorageSpec:
provider: "s3"
bucket: "gatling-operator-reports"
region: "de"
env:
- name: RCLONE_S3_ENDPOINT
value: https://s3.de.io.cloud.ovh.net
...omit...
```

However, this is not enough. You must supply the Gatling Pods (both the Gatling Runner Pod and the Gatling Reporter Pod) with credentials to access the S3 bucket. Strictly speaking, it is the [rclone](https://rclone.org/) container in the Gatling Pod that interacts with the S3 bucket, so you need to supply rclone with credentials.

The example below shows how to set S3 credentials via environment variables:

```yaml
...omit...
cloudStorageSpec:
provider: "s3"
bucket: "gatling-operator-reports"
region: "de"
env:
- name: RCLONE_S3_PROVIDER
value: Other
- name: RCLONE_S3_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: s3-keys
key: S3_ACCESS_KEY
- name: RCLONE_S3_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: s3-keys
key: S3_SECRET_ACCESS
- name: RCLONE_S3_ENDPOINT
value: https://s3.de.io.cloud.ovh.net
- name: RCLONE_S3_REGION
value: de
...omit...
```

There are multiple ways to authenticate; for more details, please check [the rclone S3 configuration documentation](https://rclone.org/s3/#configuration).
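The `secretKeyRef` entries above assume a Kubernetes Secret named `s3-keys` exists in the namespace where the Gatling Pods run. As a sketch (the Secret name and key names are taken from the example above; the credential values are placeholders you obtain from your S3 provider), it could be created like this:

```shell
# Placeholder values; substitute the credentials issued by your S3 provider.
kubectl create secret generic s3-keys \
  --from-literal=S3_ACCESS_KEY=<access-key-id> \
  --from-literal=S3_SECRET_ACCESS=<secret-access-key>
```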

#### Set Google Cloud Storage as Cloud Storage

Suppose that you want to store Gatling reports in a bucket named `gatling-operator-reports` on Google Cloud Storage; you configure the fields in `.spec.cloudStorageSpec` like this:
65 changes: 0 additions & 65 deletions pkg/cloudstorages/aws.go

This file was deleted.

4 changes: 3 additions & 1 deletion pkg/cloudstorages/cloudstorage.go
@@ -27,7 +27,9 @@ func GetProvider(provider string, args ...EnvVars) *CloudStorageProvider {
var csp CloudStorageProvider
switch provider {
case "aws":
csp = &AWSCloudStorageProvider{providerName: provider}
csp = &S3CloudStorageProvider{providerName: provider}
> **@gold-kou** (Contributor) commented on Jul 3, 2024:
>
> The backward compatibility for aws looks maintained.
> Sounds good ✨ .
case "s3":
csp = &S3CloudStorageProvider{providerName: provider}
case "gcp":
csp = &GCPCloudStorageProvider{providerName: provider}
case "azure":
14 changes: 13 additions & 1 deletion pkg/cloudstorages/cloudstorage_test.go
@@ -15,7 +15,7 @@ var _ = Describe("GetProvider", func() {
provider = "aws"
expectedValue = "aws"
})
It("should get a pointer of AWSCloudStorageProvider that has ProviderName field value = aws", func() {
It("should get a pointer of S3CloudStorageProvider that has ProviderName field value = aws", func() {
cspp := GetProvider(provider)
Expect(cspp).NotTo(BeNil())
Expect((*cspp).GetName()).To(Equal(expectedValue))
@@ -34,6 +34,18 @@
})
})

Context("Provider is s3", func() {
BeforeEach(func() {
provider = "s3"
expectedValue = "s3"
})
It("should get a pointer of S3CloudStorageProvider that has ProviderName field value = s3", func() {
cspp := GetProvider(provider)
Expect(cspp).NotTo(BeNil())
Expect((*cspp).GetName()).To(Equal(expectedValue))
})
})

Context("Provider is non-supported one", func() {
BeforeEach(func() {
provider = "foo"
91 changes: 91 additions & 0 deletions pkg/cloudstorages/s3.go
@@ -0,0 +1,91 @@
package cloudstorages

import (
	"fmt"
	"strings"
)

type S3CloudStorageProvider struct {
	providerName         string
	customS3ProviderHost string
}

func (p *S3CloudStorageProvider) init(args []EnvVars) {
	if len(args) > 0 {
		var envs EnvVars = args[0]
		for _, env := range envs {
			if env.Name == "RCLONE_S3_ENDPOINT" {
				p.customS3ProviderHost = p.checkAndRemoveProtocol(env.Value)
				break
			}
		}
	}
}

func (p *S3CloudStorageProvider) checkAndRemoveProtocol(url string) string {
	idx := strings.Index(url, "://")
	if idx == -1 {
		return url
	}
	return url[idx+3:]
}

func (p *S3CloudStorageProvider) GetName() string {
	return p.providerName
}

func (p *S3CloudStorageProvider) GetCloudStoragePath(bucket string, gatlingName string, subDir string) string {
	// Format s3:<bucket>/<gatling-name>/<sub-dir>
	return fmt.Sprintf("s3:%s/%s/%s", bucket, gatlingName, subDir)
}

func (p *S3CloudStorageProvider) GetCloudStorageReportURL(bucket string, gatlingName string, subDir string) string {
	// Format https://<bucket>.<s3-provider-host>/<gatling-name>/<sub-dir>/index.html
	defaultS3ProviderHost := "s3.amazonaws.com"
	s3ProviderHost := defaultS3ProviderHost
	if p.customS3ProviderHost != "" {
		s3ProviderHost = p.customS3ProviderHost
	}

	return fmt.Sprintf("https://%s.%s/%s/%s/index.html", bucket, s3ProviderHost, gatlingName, subDir)
}

func (p *S3CloudStorageProvider) GetGatlingTransferResultCommand(resultsDirectoryPath string, region string, storagePath string) string {
	template := `
RESULTS_DIR_PATH=%s
rclone config create s3 s3 env_auth=true region %s
while true; do
if [ -f "${RESULTS_DIR_PATH}/FAILED" ]; then
echo "Skip transfering gatling results"
break
fi
if [ -f "${RESULTS_DIR_PATH}/COMPLETED" ]; then
for source in $(find ${RESULTS_DIR_PATH} -type f -name *.log)
do
rclone copyto ${source} --s3-no-check-bucket --s3-env-auth %s/${HOSTNAME}.log
done
break
fi
sleep 1;
done
`
	return fmt.Sprintf(template, resultsDirectoryPath, region, storagePath)
}

func (p *S3CloudStorageProvider) GetGatlingAggregateResultCommand(resultsDirectoryPath string, region string, storagePath string) string {
	template := `
GATLING_AGGREGATE_DIR=%s
rclone config create s3 s3 env_auth=true region %s
rclone copy --s3-no-check-bucket --s3-env-auth %s ${GATLING_AGGREGATE_DIR}
`
	return fmt.Sprintf(template, resultsDirectoryPath, region, storagePath)
}

func (p *S3CloudStorageProvider) GetGatlingTransferReportCommand(resultsDirectoryPath string, region string, storagePath string) string {
	template := `
GATLING_AGGREGATE_DIR=%s
rclone config create s3 s3 env_auth=true region %s
rclone copy ${GATLING_AGGREGATE_DIR} --exclude "*.log" --s3-no-check-bucket --s3-env-auth %s
`
	return fmt.Sprintf(template, resultsDirectoryPath, region, storagePath)
}
44 changes: 28 additions & 16 deletions pkg/cloudstorages/aws_test.go → pkg/cloudstorages/s3_test.go
@@ -16,7 +16,7 @@ var _ = Describe("GetName", func() {
})
Context("Provider is aws", func() {
It("should get provider name = aws", func() {
csp := &AWSCloudStorageProvider{providerName: provider}
csp := &S3CloudStorageProvider{providerName: provider}
Expect(csp.GetName()).To(Equal(expectedValue))
})
})
@@ -39,33 +39,45 @@ var _ = Describe("GetCloudStoragePath", func() {
})
Context("Provider is aws", func() {
It("path is aws s3 bucket", func() {
csp := &AWSCloudStorageProvider{providerName: provider}
csp := &S3CloudStorageProvider{providerName: provider}
Expect(csp.GetCloudStoragePath(bucket, gatlingName, subDir)).To(Equal(expectedValue))
})
})
})

var _ = Describe("GetCloudStorageReportURL", func() {
var (
provider string
bucket string
gatlingName string
subDir string
expectedValue string
provider string
bucket string
gatlingName string
subDir string
)
BeforeEach(func() {
provider = "aws"
provider = "s3"
bucket = "testBucket"
gatlingName = "testGatling"
subDir = "subDir"
expectedValue = "https://testBucket.s3.amazonaws.com/testGatling/subDir/index.html"
})
Context("Provider is aws", func() {
It("path is aws s3 bucket", func() {
csp := &AWSCloudStorageProvider{providerName: provider}
Expect(csp.GetCloudStorageReportURL(bucket, gatlingName, subDir)).To(Equal(expectedValue))
Context("Provider is s3", func() {
It("path is aws s3 bucket if RCLONE_S3_ENDPOINT not defined", func() {
csp := &S3CloudStorageProvider{providerName: provider}
Expect(csp.GetCloudStorageReportURL(bucket, gatlingName, subDir)).To(Equal("https://testBucket.s3.amazonaws.com/testGatling/subDir/index.html"))
})

It("path is S3 bucket with custom provider endpoint", func() {
csp := &S3CloudStorageProvider{providerName: provider}
csp.init([]EnvVars{
{
{
Name: "RCLONE_S3_ENDPOINT",
Value: "https://s3.de.io.cloud.ovh.net",
},
},
})
Expect(csp.GetCloudStorageReportURL(bucket, gatlingName, subDir)).To(Equal("https://testBucket.s3.de.io.cloud.ovh.net/testGatling/subDir/index.html"))
})
})

})

var _ = Describe("GetGatlingTransferResultCommand", func() {
@@ -102,7 +114,7 @@ done
})
Context("Provider is aws", func() {
It("returns commands with s3 rclone config", func() {
csp := &AWSCloudStorageProvider{providerName: provider}
csp := &S3CloudStorageProvider{providerName: provider}
Expect(csp.GetGatlingTransferResultCommand(resultsDirectoryPath, region, storagePath)).To(Equal(expectedValue))
})
})
@@ -129,7 +141,7 @@ rclone copy --s3-no-check-bucket --s3-env-auth testStoragePath ${GATLING_AGGREGA
})
Context("Provider is aws", func() {
It("returns commands with s3 rclone config", func() {
csp := &AWSCloudStorageProvider{providerName: provider}
csp := &S3CloudStorageProvider{providerName: provider}
Expect(csp.GetGatlingAggregateResultCommand(resultsDirectoryPath, region, storagePath)).To(Equal(expectedValue))
})
})
@@ -156,7 +168,7 @@ rclone copy ${GATLING_AGGREGATE_DIR} --exclude "*.log" --s3-no-check-bucket --s3
})
Context("Provider is aws", func() {
It("returns commands with s3 rclone config", func() {
csp := &AWSCloudStorageProvider{providerName: provider}
csp := &S3CloudStorageProvider{providerName: provider}
Expect(csp.GetGatlingTransferReportCommand(resultsDirectoryPath, region, storagePath)).To(Equal(expectedValue))
})
})