add storage access rules to outputs #1927

Merged
merged 15 commits on Sep 26, 2024

Changes from 3 commits
10 changes: 10 additions & 0 deletions .changeset/two-sloths-clap.md
@@ -0,0 +1,10 @@
---
'@aws-amplify/client-config': minor
'@aws-amplify/backend-output-schemas': patch
'@aws-amplify/integration-tests': patch
'@aws-amplify/backend-storage': patch
'@aws-amplify/backend': patch
'@aws-amplify/backend-cli': patch
---

add storage access rules to outputs
20 changes: 20 additions & 0 deletions packages/backend-output-schemas/src/storage/v1.ts
@@ -1,9 +1,29 @@
import { z } from 'zod';

const storageAccessActionEnum = z.enum([
'read',
'get',
'list',
'write',
'delete',
]);

const pathSchema = z.record(
z.string(),
z.object({
guest: z.array(storageAccessActionEnum).optional(),
authenticated: z.array(storageAccessActionEnum).optional(),
groups: z.array(storageAccessActionEnum).optional(),
entity: z.array(storageAccessActionEnum).optional(),
resource: z.array(storageAccessActionEnum).optional(),
})
);

const bucketSchema = z.object({
name: z.string(),
bucketName: z.string(),
storageRegion: z.string(),
paths: pathSchema.optional(),
});

export const storageOutputSchema = z.object({
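For reference, a minimal sketch of data the new bucketSchema accepts (the schema is re-declared here so the snippet stands alone; the bucket names, path prefixes, and action lists are illustrative assumptions, not values from this PR):

import { z } from 'zod';

const storageAccessActionEnum = z.enum(['read', 'get', 'list', 'write', 'delete']);

const pathSchema = z.record(
  z.string(),
  z.object({
    guest: z.array(storageAccessActionEnum).optional(),
    authenticated: z.array(storageAccessActionEnum).optional(),
    groups: z.array(storageAccessActionEnum).optional(),
    entity: z.array(storageAccessActionEnum).optional(),
    resource: z.array(storageAccessActionEnum).optional(),
  })
);

const bucketSchema = z.object({
  name: z.string(),
  bucketName: z.string(),
  storageRegion: z.string(),
  paths: pathSchema.optional(),
});

// A bucket with access rules: each path prefix maps to per-principal action lists.
bucketSchema.parse({
  name: 'myProjectFiles',
  bucketName: 'amplify-myprojectfiles-123',
  storageRegion: 'us-east-1',
  paths: {
    'public/*': {
      guest: ['get', 'list'],
      authenticated: ['get', 'list', 'write', 'delete'],
    },
  },
});

// A bucket without access rules still parses because `paths` is optional.
bucketSchema.parse({
  name: 'plainBucket',
  bucketName: 'amplify-plainbucket-456',
  storageRegion: 'us-west-2',
});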
9 changes: 9 additions & 0 deletions packages/backend-storage/src/construct.ts
@@ -19,6 +19,7 @@ import { AttributionMetadataStorage } from '@aws-amplify/backend-output-storage'
import { fileURLToPath } from 'node:url';
import { IFunction } from 'aws-cdk-lib/aws-lambda';
import { S3EventSourceV2 } from 'aws-cdk-lib/aws-lambda-event-sources';
import { StorageAccessDefinitionOutput } from './private_types.js';

// Be very careful editing this value. It is the string that is used to attribute stacks to Amplify Storage in BI metrics
const storageStackType = 'storage-S3';
@@ -82,6 +83,7 @@ export class AmplifyStorage
readonly resources: StorageResources;
readonly isDefault: boolean;
readonly name: string;
accessDefinition: StorageAccessDefinitionOutput;
/**
* Create a new AmplifyStorage instance
*/
@@ -143,4 +145,11 @@ export class AmplifyStorage
new S3EventSourceV2(this.resources.bucket, { events })
);
};

/**
* Add access definitions to storage
*/
addAccessDefinition = (accessOutput: StorageAccessDefinitionOutput) => {
this.accessDefinition = accessOutput;
};
}
6 changes: 6 additions & 0 deletions packages/backend-storage/src/private_types.ts
@@ -15,3 +15,9 @@ export type StorageError =
* StorageAction type intended to be used after mapping "read" to "get" and "list"
*/
export type InternalStorageAction = Exclude<StorageAction, 'read'>;

/**
* Storage access types intended to be used to map storage access to storage outputs
*/
export type StorageAccessConfig = Record<string, StorageAction[]>;
export type StorageAccessDefinitionOutput = Record<string, StorageAccessConfig>;
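Roughly, a StorageAccessDefinitionOutput maps each S3 path prefix to the resource access acceptors granted on it and their actions; this is the shape the orchestrator tests below assert on. A hypothetical value (the acceptor names and prefixes here are invented for illustration, assuming the type is imported from the module above):

import { StorageAccessDefinitionOutput } from './private_types.js';

// Hypothetical: 'authenticatedUserIamRole' and 'orderProcessorLambda' stand in
// for whatever resource access acceptors the backend defines.
const exampleAccessOutput: StorageAccessDefinitionOutput = {
  'media/*': {
    authenticatedUserIamRole: ['get', 'list', 'write'],
    orderProcessorLambda: ['get'],
  },
  'reports/*': {
    authenticatedUserIamRole: ['get'],
  },
};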
packages/backend-storage/src/storage_access_orchestrator.test.ts
@@ -78,7 +78,8 @@ void describe('StorageAccessOrchestrator', () => {
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock.mock.calls[0].arguments[0].document.toJSON(),
@@ -102,6 +103,11 @@
acceptResourceAccessMock.mock.calls[0].arguments[1],
ssmEnvironmentEntriesStub
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'test/prefix/*': {
acceptor: ['get', 'write'],
},
});
});

void it('handles multiple permissions for the same resource access acceptor', () => {
@@ -132,7 +138,8 @@
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock.mock.calls[0].arguments[0].document.toJSON(),
@@ -164,6 +171,14 @@
acceptResourceAccessMock.mock.calls[0].arguments[1],
ssmEnvironmentEntriesStub
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'test/prefix/*': {
acceptor: ['get', 'write', 'delete'],
},
'another/prefix/*': {
acceptor: ['get'],
},
});
});

void it('handles multiple resource access acceptors', () => {
@@ -204,7 +219,8 @@
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock1.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock1.mock.calls[0].arguments[0].document.toJSON(),
@@ -259,6 +275,15 @@
acceptResourceAccessMock2.mock.calls[0].arguments[1],
ssmEnvironmentEntriesStub
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'test/prefix/*': {
acceptor1: ['get', 'write', 'delete'],
acceptor2: ['get'],
},
'another/prefix/*': {
acceptor2: ['get', 'delete'],
},
});
});

void it('replaces owner placeholder in s3 prefix', () => {
@@ -286,7 +311,8 @@
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock.mock.calls[0].arguments[0].document.toJSON(),
@@ -310,6 +336,11 @@
acceptResourceAccessMock.mock.calls[0].arguments[1],
ssmEnvironmentEntriesStub
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'test/{testOwnerSub}/*': {
acceptor: ['get', 'write'],
},
});
});

void it('denies parent actions on a subpath by default', () => {
@@ -347,7 +378,8 @@
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock1.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock1.mock.calls[0].arguments[0].document.toJSON(),
@@ -396,6 +428,14 @@
Version: '2012-10-17',
}
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'foo/*': {
acceptor1: ['get', 'write'],
},
'foo/bar/*': {
acceptor2: ['get'],
},
});
});

void it('combines owner rules for same resource access acceptor', () => {
@@ -428,7 +468,8 @@
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock.mock.calls[0].arguments[0].document.toJSON(),
@@ -457,6 +498,15 @@
acceptResourceAccessMock.mock.calls[0].arguments[1],
ssmEnvironmentEntriesStub
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'foo/*/*': {
'auth-with-id': ['write', 'delete'],
auth: ['get'],
},
'foo/{idSub}/*': {
'auth-with-id': ['write', 'delete'],
},
});
});

void it('handles multiple resource access acceptors on multiple prefixes', () => {
@@ -526,7 +576,8 @@
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock1.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock1.mock.calls[0].arguments[0].document.toJSON(),
@@ -594,6 +645,24 @@
Version: '2012-10-17',
}
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'foo/*': {
stub1: ['get', 'write'],
},
'foo/bar/*': {
stub2: ['get'],
},
'foo/baz/*': {
stub1: ['get'],
},
'other/*/*': {
stub1: ['get'],
stub2: ['get', 'write', 'delete'],
},
'other/{idSub}/*': {
stub2: ['get', 'write', 'delete'],
},
});
});

void it('throws validation error for multiple rules on the same resource access acceptor', () => {
@@ -658,7 +727,8 @@
storageAccessPolicyFactory
);

storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessDefinitionOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
assert.equal(acceptResourceAccessMock.mock.callCount(), 1);
assert.deepStrictEqual(
acceptResourceAccessMock.mock.calls[0].arguments[0].document.toJSON(),
@@ -695,6 +765,14 @@
acceptResourceAccessMock.mock.calls[0].arguments[1],
ssmEnvironmentEntriesStub
);
assert.deepStrictEqual(storageAccessDefinitionOutput, {
'foo/bar/*': {
auth: ['read', 'get', 'list'],
},
'other/baz/*': {
auth: ['read'],
},
});
});
});
});
19 changes: 18 additions & 1 deletion packages/backend-storage/src/storage_access_orchestrator.ts
@@ -12,7 +12,11 @@ import { entityIdPathToken } from './constants.js';
import { StorageAccessPolicyFactory } from './storage_access_policy_factory.js';
import { validateStorageAccessPaths as _validateStorageAccessPaths } from './validate_storage_access_paths.js';
import { roleAccessBuilder as _roleAccessBuilder } from './access_builder.js';
import { InternalStorageAction, StorageError } from './private_types.js';
import {
InternalStorageAction,
StorageAccessConfig,
StorageError,
} from './private_types.js';
import { AmplifyUserError } from '@aws-amplify/platform-core';

/* some types internal to this file to improve readability */
@@ -88,11 +92,15 @@ export class StorageAccessOrchestrator {
// verify that the paths in the access definition are valid
this.validateStorageAccessPaths(Object.keys(storageAccessDefinition));

const storageOutputAccessDefinition: Record<string, StorageAccessConfig> =
{};

// iterate over the access definition and group permissions by ResourceAccessAcceptor
Object.entries(storageAccessDefinition).forEach(
([s3Prefix, accessPermissions]) => {
const uniqueDefinitionIdSet = new Set<string>();
// iterate over all of the access definitions for a given prefix
const accessConfig: StorageAccessConfig = {};
accessPermissions.forEach((permission) => {
// iterate over all uniqueDefinitionIdValidations and ensure uniqueness within this path prefix
permission.uniqueDefinitionIdValidations.forEach(
@@ -105,6 +113,8 @@
} else {
uniqueDefinitionIdSet.add(uniqueDefinitionId);
}

accessConfig[uniqueDefinitionId] = permission.actions;
}
);
// make the owner placeholder substitution in the s3 prefix
@@ -113,6 +123,11 @@
permission.idSubstitution
) as StoragePath;

storageOutputAccessDefinition[prefix] = {
...storageOutputAccessDefinition[prefix],
...accessConfig,
};

// replace "read" with "get" and "list" in actions
const replaceReadWithGetAndList = permission.actions.flatMap(
(action) => (action === 'read' ? ['get', 'list'] : [action])
@@ -139,6 +154,8 @@

// iterate over the access map entries and invoke each ResourceAccessAcceptor to accept the permissions
this.attachPolicies(this.ssmEnvironmentEntries);

return storageOutputAccessDefinition;
};

/**
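To make the new return value concrete, here is a simplified, self-contained sketch of the grouping that orchestrateStorageAccess now performs before returning. It omits the policy generation, path validation, and uniqueness checks, and flattens uniqueDefinitionIdValidations down to a single id per permission, so treat it as an approximation of the diff above rather than the real method:

type StorageAction = 'read' | 'get' | 'list' | 'write' | 'delete';
type StorageAccessConfig = Record<string, StorageAction[]>;
type StorageAccessDefinitionOutput = Record<string, StorageAccessConfig>;

type PermissionSketch = {
  uniqueDefinitionId: string; // e.g. a role or function acceptor name (hypothetical)
  actions: StorageAction[];
  idSubstitution: string; // value substituted for the entity id token in the prefix
};

const entityIdToken = '{entity_id}'; // assumed token; the real value comes from constants.js

const collectAccessOutput = (
  accessDefinition: Record<string, PermissionSketch[]>
): StorageAccessDefinitionOutput => {
  const output: StorageAccessDefinitionOutput = {};
  Object.entries(accessDefinition).forEach(([s3Prefix, permissions]) => {
    permissions.forEach((permission) => {
      // substitute the entity id token, then merge this acceptor's actions
      // into the entry for the resulting prefix
      const prefix = s3Prefix.replaceAll(entityIdToken, permission.idSubstitution);
      output[prefix] = {
        ...output[prefix],
        [permission.uniqueDefinitionId]: permission.actions,
      };
    });
  });
  return output;
};

// Two acceptors on the same prefix end up merged under one key:
collectAccessOutput({
  'test/prefix/*': [
    { uniqueDefinitionId: 'acceptor1', actions: ['get', 'write'], idSubstitution: '*' },
    { uniqueDefinitionId: 'acceptor2', actions: ['get'], idSubstitution: '*' },
  ],
});
// => { 'test/prefix/*': { acceptor1: ['get', 'write'], acceptor2: ['get'] } }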
packages/backend-storage/src/storage_container_entry_generator.ts
@@ -78,7 +78,9 @@ export class StorageContainerEntryGenerator
);

// the orchestrator generates policies according to the accessDefinition and attaches the policies to appropriate roles
storageAccessOrchestrator.orchestrateStorageAccess();
const storageAccessOutput =
storageAccessOrchestrator.orchestrateStorageAccess();
amplifyStorage.addAccessDefinition(storageAccessOutput);

return amplifyStorage;
};
2 changes: 1 addition & 1 deletion packages/backend-storage/src/storage_outputs_aspect.ts
@@ -95,7 +95,6 @@ export class StorageOutputsAspect implements IAspect {
},
});
}

// both default and non-default buckets should have the name, bucket name, and storage region stored in `buckets` field
outputStorageStrategy.appendToBackendOutputList(storageOutputKey, {
version: '1',
@@ -104,6 +103,7 @@
name: node.name,
bucketName: node.resources.bucket.bucketName,
storageRegion: Stack.of(node).region,
paths: node.accessDefinition,
Contributor:
We didn't need to make changes here?

const bucketSchema = z.object({
  name: z.string(),
  bucketName: z.string(),
  storageRegion: z.string(),
});

We might also be missing some automated testing if we didn't catch this failure, or is Zod not strict here?

Contributor Author:
This is needed to add paths to the amplify_outputs.json file. paths is optional, so storage with no access rules defined will not fail and will not have paths in its amplify_outputs.json.

Contributor:
Shouldn't paths be added as optional, since we are adding it to the bucketSchema object before stringifying?

Contributor Author (rtpascual, Sep 23, 2024):
Got it, and yup, I believe Zod is not strict enough here. I'll add a test to make sure we cover both cases of storage with access and without.
}),
},
});
2 changes: 1 addition & 1 deletion packages/backend/src/backend_factory.test.ts
@@ -191,7 +191,7 @@ void describe('Backend', () => {
const backend = new BackendFactory({}, rootStack);
const clientConfigPartial: DeepPartialAmplifyGeneratedConfigs<ClientConfig> =
{
version: '1.1',
version: '1.2',
custom: {
someCustomOutput: 'someCustomOutputValue',
},
2 changes: 1 addition & 1 deletion packages/backend/src/backend_factory.ts
@@ -33,7 +33,7 @@ const rootStackTypeIdentifier = 'root';

// Client config version that is used by `backend.addOutput()`
const DEFAULT_CLIENT_CONFIG_VERSION_FOR_BACKEND_ADD_OUTPUT =
ClientConfigVersionOption.V1_1;
ClientConfigVersionOption.V1_2;

/**
* Factory that collects and instantiates all the Amplify backend constructs