Merge pull request #1182 from wireapp/release_2020_07_29
Release 2020-07-29
fisx authored Jul 30, 2020
2 parents 0269dfa + b30d08a commit 4cb3364
Showing 143 changed files with 3,107 additions and 1,844 deletions.
33 changes: 33 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,36 @@
# 2020-07-29

## Release Notes

* This release makes a couple of changes to the elasticsearch mapping and requires a data migration. The correct order of upgrade is:
1. [Update mapping](./docs/reference/elastic-search.md#update-mapping)
1. Upgrade brig as usual
1. [Run data migration](./docs/reference/elastic-search.md#migrate-data)
Search should continue to work normally during this upgrade.
* Since cargohold now uses AWS V4 signatures, the region is part of the `Authorization` header, so please make sure it is configured correctly. It can be provided the same way as the AWS credentials, e.g. via the `AWS_REGION` environment variable.
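To see why the region matters for V4 signatures, here is a small illustration (all values are hypothetical, not taken from a real deployment): the credential scope embedded in the `Authorization` header contains the region, so a wrong `AWS_REGION` produces signatures the backend will reject.

```shell
# Illustration with hypothetical values: SigV4's credential scope embeds the
# region, which is why cargohold must know it. AWS_REGION is read the same
# way as the AWS credential environment variables.
export AWS_REGION="eu-west-1"
DATE="20200729"
SCOPE="${DATE}/${AWS_REGION}/s3/aws4_request"
echo "Authorization: AWS4-HMAC-SHA256 Credential=AKIAEXAMPLE/${SCOPE}, ..."
```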

## Bug Fixes

* Fix member count of suspended teams in journal events (#1171)
* Disallow team creation when setRestrictUserCreation is true (#1174)

## New Features

* Pending invitations by email lookup (#1168)
* Support s3 v4 signatures (and use package amazonka instead of aws in cargohold) (#1157)
* Federation: Implement ID mapping (brig) (#1162)

## Internal changes

* SCIM cleanup; drop table `spar.scim_user` (#1169, #1172)
* ormolu script: use ++FAILURES as it will not evaluate to 0 (#1178)
* Refactor: Simplify SRV lookup logic in federation-util (#1175)
* handy cqlsh make target to manually poke at the database (#1170)
* hscim: add license headers (#1165)
* Upgrade stack to 2.3.1 (#1166)
* gundeck: drop deprecated tables (#1163)


# 2020-07-13

## Release Notes
5 changes: 5 additions & 0 deletions Makefile
@@ -179,6 +179,11 @@ git-add-cassandra-schema: db-reset
( echo '-- automatically generated with `make git-add-cassandra-schema`' ; docker exec -i $(CASSANDRA_CONTAINER) /usr/bin/cqlsh -e "DESCRIBE schema;" ) > ./docs/reference/cassandra-schema.cql
git add ./docs/reference/cassandra-schema.cql

.PHONY: cqlsh
cqlsh:
@echo "make sure you have ./deploy/dockerephemeral/run.sh running in another window!"
docker exec -it $(CASSANDRA_CONTAINER) /usr/bin/cqlsh
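The target above boils down to a single `docker exec`; as a sketch, this is the manual equivalent, written as a dry run that prints the command instead of executing it (the container name here is an assumption — the Makefile resolves the real one via `$(CASSANDRA_CONTAINER)`):

```shell
# Dry run: print the equivalent of `make cqlsh` without requiring the
# dockerephemeral environment to be up.
CASSANDRA_CONTAINER="dockerephemeral_cassandra_1"  # hypothetical container name
CMD="docker exec -it ${CASSANDRA_CONTAINER} /usr/bin/cqlsh"
echo "${CMD}"
```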

.PHONY: db-reset
db-reset:
@echo "make sure you have ./deploy/dockerephemeral/run.sh running in another window!"
2 changes: 1 addition & 1 deletion build/alpine/Dockerfile.prebuilder
@@ -40,7 +40,7 @@ RUN apk add --no-cache \
sed

# get static version of Haskell Stack and use system ghc by default
ARG STACK_ALPINE_VERSION=2.1.3
ARG STACK_ALPINE_VERSION=2.3.1
RUN curl -sSfL https://github.com/commercialhaskell/stack/releases/download/v${STACK_ALPINE_VERSION}/stack-${STACK_ALPINE_VERSION}-linux-x86_64-static.tar.gz \
| tar --wildcards -C /usr/local/bin --strip-components=1 -xzvf - '*/stack' && chmod 755 /usr/local/bin/stack && \
stack config set system-ghc --global true
9 changes: 9 additions & 0 deletions deploy/services-demo/conf/nginz/nginx-docker.conf
@@ -136,6 +136,8 @@ http {
#

# Brig Endpoints
#
## brig unauthenticated endpoints

rewrite ^/api-docs/users /users/api-docs?base_url=http://127.0.0.1:8080/ break;

@@ -164,6 +166,13 @@ http {
proxy_pass http://brig;
}

location ~* ^/teams/invitations/([^/]*)$ {
include common_response_no_zauth.conf;
proxy_pass http://brig;
}
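This new route makes, among others, the pending-invitation-by-email lookup from #1168 reachable without a zauth token. A sketch of the expected request, printed rather than sent (host and email address are placeholders):

```shell
# Dry run: build the HEAD request line for the unauthenticated lookup.
HOST="https://nginz.example.com"   # placeholder host
EMAIL="dev%40example.com"          # URL-encoded placeholder address
REQ="curl -I ${HOST}/teams/invitations/by-email?email=${EMAIL}"
echo "${REQ}"
```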

## brig authenticated endpoints

location /self {
include common_response_with_zauth.conf;
proxy_pass http://brig;
9 changes: 9 additions & 0 deletions deploy/services-demo/conf/nginz/nginx.conf
@@ -133,6 +133,8 @@ http {
#

# Brig Endpoints
#
## brig unauthenticated endpoints

rewrite ^/api-docs/users /users/api-docs?base_url=http://127.0.0.1:8080/ break;

@@ -161,6 +163,13 @@ http {
proxy_pass http://brig;
}

location ~* ^/teams/invitations/([^/]*)$ {
include common_response_no_zauth.conf;
proxy_pass http://brig;
}

## brig authenticated endpoints

location /self {
include common_response_with_zauth.conf;
proxy_pass http://brig;
14 changes: 5 additions & 9 deletions docs/developer/dependencies.md
@@ -17,7 +17,7 @@ sudo dnf install -y pkgconfig haskell-platform libstdc++-devel libstdc++-static

### Ubuntu / Debian:

_Note_: Debian is not recommended due to this issue when running local integration tests: [#327](https://github.com/wireapp/wire-server/issues/327)*. This issue does not occur with Ubuntu.
_Note_: Debian is not recommended due to this issue when running local integration tests: [#327](https://github.com/wireapp/wire-server/issues/327). This issue does not occur with Ubuntu.

```bash
sudo apt install pkg-config libsodium-dev openssl-dev libtool automake build-essential libicu-dev libsnappy-dev libgeoip-dev protobuf-compiler libxml2-dev zlib1g-dev -y
@@ -59,16 +59,12 @@ sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_heade

## Haskell Stack

When you're done, ensure `stack --version` is >= 1.6.5
Please refer to [Stack's installation instructions](https://docs.haskellstack.org/en/stable/README/#how-to-install).

You may wish to make executables installed by stack available, by e.g. adding the following to your shell profile:
When you're done, ensure `stack --version` is recent, ideally the same as `STACK_ALPINE_VERSION` in [`build/alpine/Dockerfile.prebuilder`](../../build/alpine/Dockerfile.prebuilder).

```bash
export PATH=~/.local/bin:$PATH
```

### Ubuntu / Debian Unstable
_Note_: Debian stretch packages too old of a version of haskell-stack. It is recommended to retrieve the version available from testing, or unstable, or to use stack to update stack.(https://github.com/commercialhaskell/stack/issues/3686)*
### Ubuntu / Debian
_Note_: The packaged versions of `haskell-stack` are too old. It is recommended to follow the generic instructions or to use stack to update stack (`stack upgrade`).

```bash
sudo apt install haskell-stack -y
48 changes: 7 additions & 41 deletions docs/developer/scim/storage.md
@@ -1,51 +1,17 @@
# Storing SCIM-related data {#DevScimStorage}

_Author: Artyom Kazak_
_Author: Artyom Kazak, Matthias Fischmann_

---

## Storing user data {#DevScimStorageUsers}

SCIM user data is stored as JSON blobs in the `scim_user` table in Spar, one blob per SCIM-managed user. Those blobs conform to the SCIM standard and are returned by `GET /scim/v2/Users`.

Note that when a user is created via SCIM, the received blob is not written verbatim to the database – it is first parsed by the [hscim](https://github.com/wireapp/hscim) library, and all unknown fields are removed.

Sample blob:

```json
{
"schemas": [
"urn:ietf:params:scim:schemas:core:2.0:User",
"urn:wire:scim:schemas:profile:1.0"
],
"id": "ef4bafda-5be8-46e3-bed2-5bcce55cff01",
"externalId": "[email protected]",
"userName": "lana_d",
"displayName": "Lana Donohue",
"urn:wire:scim:schemas:profile:1.0": {
"richInfo": {
"version": 0,
"fields": [
{ "type": "Title", "value": "Chief Backup Officer" },
{ "type": "Favorite quote", "value": "Monads are just giant burritos" }
]
}
},
"meta": {
"resourceType": "User",
"location": "https://staging-nginz-https.zinfra.io/scim/v2/Users/ef4bafda-5be8-46e3-bed2-5bcce55cff01",
"created": "2019-04-21T04:15:12.535509602Z",
"lastModified": "2019-04-21T04:15:18.185055531Z",
"version": "W/\"e051bc17f7e07dec815f4b9314f76f88e2949a62b6aad8c816086cff85de4783\""
}
}
```

### One-way sync from Spar to Brig {#DevScimOneWaySync}

A user is considered SCIM-managed if they were provisioned with SCIM (when it's the case, `userManagedBy` will be set to `ManagedByScim`). Data about SCIM-managed users is stored both in Brig and Spar, and should always be in sync.

Currently (2019-04-29) we only implement one-way sync – when a user is modified via SCIM, Spar takes care to update data in Brig. However, user data is _not_ updated on the Spar side when it is changed in Brig, and Brig does not yet prohibit changing user data via its API – it relies on clients to be well-behaved and respect `userManagedBy`.
SCIM user data is validated by the spar service and stored as brig users. All fields that wire doesn't care about are silently dropped. `GET /scim/v2/Users` will trigger a lookup in brig, and the data thus obtained is synthesized back into a SCIM record.

Time stamps `created_at` and `last_updated_at` for the SCIM metadata are stored in `spar.scim_user_times`. They are kept in sync with the users that are otherwise stored in brig. (Rationale: we briefly considered using `select writetime(*) from brig.user` for last update and `select writetime(activated) from brig.user` for creation, but this has a drawback: we don't have the time stamps when storing the record, so the `POST` handler would need to do a database write and a consecutive lookup, or an `insert if not exists`.)

Users created by SCIM set the `ManagedBy` field in brig to `ManagedByScim`. This *should* lead to brig disallowing certain update operations (since the single source of truth should be the SCIM peer that has created and is updating the user), but we never got around to implementing that (as of Wed 15 Jul 2020 10:59:11 AM CEST). See also {@SparBrainDump} (grep for `ManagedBy`).
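For illustration, the SCIM metadata timestamps can be inspected with a one-off `cqlsh` query. The sketch below is a dry run — it prints the command rather than executing it, the uuid is a made-up sample, and the key column name `uid` is an assumption (check the actual `spar.scim_user_times` schema):

```shell
# Dry run: print a cqlsh invocation to read the SCIM metadata timestamps.
USER_ID="ef4bafda-5be8-46e3-bed2-5bcce55cff01"   # sample uuid, not a real user
QUERY="SELECT created_at, last_updated_at FROM spar.scim_user_times WHERE uid = ${USER_ID};"
echo "cqlsh -e \"${QUERY}\""
```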


## Storing SCIM tokens {#DevScimStorageTokens}

100 changes: 49 additions & 51 deletions docs/reference/cassandra-schema.cql
@@ -437,13 +437,15 @@ CREATE TABLE galley_test.team_member (

CREATE KEYSPACE gundeck_test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;

CREATE TABLE gundeck_test.clients (
user uuid,
CREATE TABLE gundeck_test.push (
ptoken text,
app text,
transport int,
client text,
enckey blob,
mackey blob,
PRIMARY KEY (user, client)
) WITH CLUSTERING ORDER BY (client ASC)
connection blob,
usr uuid,
PRIMARY KEY (ptoken, app, transport)
) WITH CLUSTERING ORDER BY (app ASC, transport ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
@@ -459,26 +461,6 @@ CREATE TABLE gundeck_test.clients (
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE gundeck_test.fallback_cancel (
user uuid,
id timeuuid,
PRIMARY KEY (user, id)
) WITH CLUSTERING ORDER BY (id ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 0
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE gundeck_test.notifications (
user uuid,
id timeuuid,
@@ -523,30 +505,6 @@ CREATE TABLE gundeck_test.meta (
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE gundeck_test.push (
ptoken text,
app text,
transport int,
client text,
connection blob,
usr uuid,
PRIMARY KEY (ptoken, app, transport)
) WITH CLUSTERING ORDER BY (app ASC, transport ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE gundeck_test.user_push (
usr uuid,
ptoken text,
@@ -555,7 +513,6 @@ CREATE TABLE gundeck_test.user_push (
arn text,
client text,
connection blob,
fallback int,
PRIMARY KEY (usr, ptoken, app, transport)
) WITH CLUSTERING ORDER BY (ptoken ASC, app ASC, transport ASC)
AND bloom_filter_fp_chance = 0.1
@@ -838,6 +795,28 @@ CREATE TABLE brig_test.service (
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE brig_test.team_invitation_email (
email text,
team uuid,
code ascii,
invitation uuid,
PRIMARY KEY (email, team)
) WITH CLUSTERING ORDER BY (team ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE brig_test.invitation_info (
code ascii PRIMARY KEY,
id uuid,
@@ -995,6 +974,25 @@ CREATE TABLE brig_test.service_whitelist_rev (
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE brig_test.id_mapping (
mapped_id uuid PRIMARY KEY,
remote_domain text,
remote_id uuid
) WITH bloom_filter_fp_chance = 0.1
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

CREATE TABLE brig_test.team_invitation (
team uuid,
id uuid,
9 changes: 7 additions & 2 deletions docs/reference/spar-braindump.md
@@ -1,3 +1,9 @@
# Spar braindump {#SparBrainDump}

_Author: Matthias Fischmann_

---

# the spar service for user provisioning (scim) and authentication (saml) - a brain dump

this is a mix of information on implementation details, architecture,
@@ -252,8 +258,7 @@ If you can't find what you're looking for there, please add at least a
pending test case explaining what's missing.

Side note: Users in brig carry an enum type
[`ManagedBy`](https://github.com/wireapp/wire-server/blob/010ca7e460d13160b465de24dd3982a397f94c16/libs/brig-types/src/Brig/Types/Common.hs#L393-L413);
see also {#DevScimOneWaySync}. This is a half-implemented feature for
[`ManagedBy`](https://github.com/wireapp/wire-server/blob/010ca7e460d13160b465de24dd3982a397f94c16/libs/brig-types/src/Brig/Types/Common.hs#L393-L413). This is a half-implemented feature for
managing conflicts between changes via scim vs. changes from wire
clients; and does currently not affect deletability of users.

21 changes: 12 additions & 9 deletions docs/reference/user/registration.md
@@ -172,22 +172,24 @@ These end-points support 5 flows:
1. new team account
2. new personal (teamless) account
3. invitation code from team, new member
4. ephemeral/guest user
4. ephemeral user
5. [not supported by clients] new *inactive* user account

We need an option to block 1, 2, 5 on-prem; 3, 4 should remain available (no block option). There are also provisioning flows via SAML or SCIM, which are not critical (see below).
We need an option to block 1, 2, 5 on-prem; 3, 4 should remain available (no block option). There are also provisioning flows via SAML or SCIM, which are not critical. In short, this could be refactored into:

How to decide whether to block:
* Allow team members to register (via email/phone or SSO)
* Allow ephemeral users

During registration, we can take advantage of [NewUserOrigin](https://github.com/wireapp/wire-server/blob/a89b9cd818997e7837e5d0938ecfd90cf8dd9e52/libs/wire-api/src/Wire/API/User.hs#L625); we're particularly interested in `NewUserOriginTeamUser` --> only `NewTeamMember` or `NewTeamMemberSSO` should be accepted. In case this is a `Nothing`, we need to check if the user expires, i.e., `newUserExpiresIn` must be a `Just`.

So `/register` should only succeed iff at least one of these conditions is true:

```
Body has `team_code` => case 3.
Body has `sso_id` => provisioned by SAML or SCIM.
Body has `email` or `phone` => case 1, 2, or 5.
Otherwise => case 4
```

```
newUserTeam == (Just (NewTeamMember _)) OR
newUserTeam == (Just (NewTeamMemberSSO _)) OR
newUserExpiresIn == (Just _)
```

So `/register` blocks iff `email` or `phone` exist and neither `sso_id` nor `team_code` exist.

The rest of the unauthorized end-points is safe:

- `/password-reset`
@@ -200,6 +202,7 @@ The rest of the unauthorized end-points is safe:
- `/sso`: authenticated via IdP or ok to expose to world (`/metadata`)
- `/scim/v2`: authenticated via HTTP simple auth.
- `~* ^/teams/invitations/info$`: only `GET`; requires invitation code.
- `~* ^/teams/invitations/by-email$`: only `HEAD`.
- `/invitations/info`: discontinued feature, can be removed from nginz config.
- `/conversations/code-check`: link validation for ephemeral/guest users.
- `/provider/*`: bots need to be registered to a team before becoming active. so if an attacker does not get access to a team, they cannot deploy a bot.